
(The New Yorker)   Thanks to Google we now need to implement the Three Laws Of Robotics   (newyorker.com)
    More: Scary, Laws of Robotics, robot soldiers, Pol Pot, driverless cars

6177 clicks; posted to Geek » on 28 Nov 2012 at 10:43 AM



Voting Results (Smartest)


Archived thread
2012-11-28 05:16:04 PM
1 vote:
Mark W. Tilden's three laws of robotics are far more likely to be the end result of true artificial intelligence.

1. A robot must protect its existence at all costs.

2. A robot must obtain and maintain access to its own power source.

3. A robot must continually search for better power sources. 

These are basically the laws of life.
2012-11-28 02:27:06 PM
1 vote:

ProfessorOhki: Better question: who has the insurance liability? If your robot car doesn't register a person and hits them, is it yours? The manufacturers? Will auto insurance still even be a thing?


Once they're really well established, I can see auto insurance coming under heavy scrutiny. I think you'll eventually (many decades from now) end up with a situation where anyone can operate a personal car on automatic, but manually driving will require insurance. Accidents will require investigations to determine whether they were caused by a manufacturer fault or by external factors. Maybe people will keep some insurance for accidents caused by poor maintenance.

I think one of the biggest problems with self driving cars will be the insurance companies who will insist on being involved and paid even after cars stop having accidents.
2012-11-28 01:55:40 PM
1 vote:
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

This is philosophical masturbation. In this future world, school buses are autonomous as well and would have communicated with the speeding car far before any possible collision.

If we're talking early adopter vs. traditional human driver and a crash is imminent (not safely avoidable), the car will prepare for a crash in the best way to protect the driver (pretension the seatbelts, prepare the airbags for deployment, adjust the seat position to the most survivable angle), and will never "swerve" but will attempt to vector the vehicle in a way that maximizes stopping distance, and then hit the damn bus with as little forward momentum as possible.

But the whole discussion is pretty moot because, well, have you ever seen a car hit a school bus?

www.toledoblade.com
2012-11-28 01:38:55 PM
1 vote:

Oysterman: The whole "it should be illegal for humans to drive in the future" has my inner libertarian foaming at the mouth (granted, the phrase 'inner libertarian' already implies a minor case of rabies)


Don't worry, there will probably be amusement parks where you'll be able to go and drive cars "like they did in the olden days."
2012-11-28 10:35:39 AM
1 vote:
I don't mind the zeroth law, but since it was part of the whole Foundation and Earth thing, I'm afraid I have to consider it to be in the same category as a second Highlander movie, sequels to The Matrix, and anything that allegedly happened after Aliens.
 
Displayed 5 of 5 comments



This thread is archived, and closed to new comments.
