
(The New Yorker)   Thanks to Google we now need to implement the Three Laws Of Robotics   (newyorker.com)
    More: Scary, Laws of Robotics, robot soldiers, Pol Pot, driverless cars  

6179 clicks; posted to Geek on 28 Nov 2012 at 10:43 AM



65 Comments
   

Archived thread

 
2012-11-28 08:31:25 AM  
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


Example FTFA:
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

I say: Let the robot go Galt and see how that philosophy works out.
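The laws quoted above amount to a strict priority ordering: avoiding human harm trumps obedience, and obedience trumps self-preservation. A toy sketch of that ordering, assuming invented names throughout (`Action`, the harm scores, and `choose_action` are illustrations, not any real robotics API):

```python
# Toy sketch: rank candidate actions by Three Laws priority.
# Lower scores are better; the tuple key compares lexicographically,
# so human harm dominates disobedience, which dominates self-harm.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    human_harm: float    # expected harm to humans (First Law)
    disobedience: float  # degree of ignoring human orders (Second Law)
    self_harm: float     # risk to the robot itself (Third Law)

def choose_action(candidates):
    return min(candidates, key=lambda a: (a.human_harm, a.disobedience, a.self_harm))
```

Under this ordering, a swerve that wrecks the car but spares the bus beats plowing ahead, even if the owner ordered "keep going."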
 
2012-11-28 09:00:10 AM  
THERE ARE. FOUR LAWS.

Since Asimov added a Zeroth Law of Robotics when he combined the series.
 
2012-11-28 10:35:39 AM  
I don't mind the zeroth law, but since it was part of the whole Foundation and Earth thing, I'm afraid I have to consider it to be in the same category as a second Highlander movie, sequels to The Matrix, and anything that allegedly happened after Aliens.
 
2012-11-28 10:46:34 AM  
Given how we tend to use Drones.... That second law just isn't going to work out.

At all. Ever. I'm not even talking the 'harm one human to protect others' kind of amendment.
 
2012-11-28 10:55:32 AM  
Making an algorithmic approximation of human ethics is apparently very hard. I spent a couple of years following the people on lesswrong.com as they tried (in pursuit of Friendly AI). Though I lack the philosophy chops to get too deep into their work, it didn't seem very tractable.

Though for self-improving AI the problem's harder than for a basic robot. A basic robot can be constrained with something like 3 Laws (though there's a lot of human-ish judgement in e.g. defining "harm"; see Asimov's body of work for details). A superintelligent AI cannot - think of an ant trying to constrain a human.
 
2012-11-28 10:58:32 AM  
A small game company, Nexon, produced a game a long time ago that is still kind of played by a small handful of dedicated players. It's called Shattered Galaxy, and the "story" of the game is that humans gave up actually participating in war and have instead created tons of semi-autonomous war bots to use in place of real bloodshed. You buy them, equip them, deploy them, and control a squad of them to combat other players for control of territories. It was a pretty fun game for a while, although it became very repetitive (to me at least) very quickly.

Kind of like the article says, I wonder if we are currently in a prelude to something like that, where, eventually, most soldiers in technologically advanced cultures are "merely" operating semi-autonomous robotic platforms ("merely" in quotes because it is in fact quite difficult to do, and requires a good amount of technical knowledge). Because my work is currently in robotics, I see many aspects of that happening much sooner than the article would lead me to believe, and other aspects as much, much further away. But I wonder what we will end up with in the intervening periods, when we have a better grasp of robot perception or mental modeling or even social cognition, but not yet a true grasp of abstract mental concepts like morality.
 
2012-11-28 10:58:54 AM  
As long as you are not named Sarah or John Connor, you are safe. For those with those names, I feel sorry for you.
 
2012-11-28 11:00:30 AM  
[image: RoboCop's directives]

Your rules may vary.
 
2012-11-28 11:02:52 AM  
And next they'll be using "Three Laws Safe" as a marketing gimmick, just like in that movie bastardization.
 
2012-11-28 11:03:09 AM  
An important point: school buses, metro buses, and any other public transportation will be exempted from automation. They always are. Case in point - seat belts, air bags, crash zones, etc. All the things Uncle has done to protect car drivers and passengers aren't applied to public transportation.
 
2012-11-28 11:12:08 AM  
I'm willing to wager money that there's no way to translate any of those laws into machine speak.
The first thing a computer won't understand is the difference between a human and any other random object in the road.
 
2012-11-28 11:18:22 AM  

way south: I'm willing to wager money that there's no way to translate any of those laws into machine speak.
The first thing a computer wont understand is what the difference is between a human or any other random object in the road.


1) A robot 011011 X^01 not i0110 1115:1re a h01 11V11X01n bein g or, th011100 10o0111V0111011A00000ina0110 0011tion, al01 101100ow a 0110100:1m an bein0110r00 00to come 0111 0100o har011i01110

2)
A robot 01101 1 ]\11t ob011001^010010 0000the orde01 11'0011 given to it by001000 ^Z0:1011011X01n be0110 1001ng01110e100 011001^00c01 1001\00t where 01110n1X11011A00000or011001 00ers wo0111V11000110 0100 confl011010X1101 110100 wit011A00000th e First 010011 00a0111r1110

3)
A robot 01101 1 ]\11t prot0110 01X11t its o01 110111n 011001^00iste nce as l011011 11ng as 01110n1010110 0011011A00000prot011001X1101 110100ion does not con011001 10l011010X11t wit0110 1000 the Fir01 110011t or 01010l1X11 ond Law01110e1 10

Will that be in the form of cash or a check
/kidding of course.
 
2012-11-28 11:20:03 AM  

yves0010: way south: I'm willing to wager money that there's no way to translate any of those laws into machine speak.
The first thing a computer wont understand is what the difference is between a human or any other random object in the road.

1) A robot 011011 X^01 not i0110 1115:1re a h01 11V11X01n bein g or, th011100 10o0111V0111011A00000ina0110 0011tion, al01 101100ow a 0110100:1m an bein0110r00 00to come 0111 0100o har011i01110

2)
A robot 01101 1 ]\11t ob011001^010010 0000the orde01 11'0011 given to it by001000 ^Z0:1011011X01n be0110 1001ng01110e100 011001^00c01 1001\00t where 01110n1X11011A00000or011001 00ers wo0111V11000110 0100 confl011010X1101 110100 wit011A00000th e First 010011 00a0111r1110

3)
A robot 01101 1 ]\11t prot0110 01X11t its o01 110111n 011001^00iste nce as l011011 11ng as 01110n1010110 0011011A00000prot011001X1101 110100ion does not con011001 10l011010X11t wit0110 1000 the Fir01 110011t or 01010l1X11 ond Law01110e1 10

Will that be in the form of cash or a check
/kidding of course.


wow, did not realize Fark actually partly translated binary
 
2012-11-28 11:31:31 AM  

Khellendros: Your rules may vary.


I actually like Robocop's directives more than Asimov's. They allow a great deal more freedom of action for the robot in morally ambiguous situations but at the same time prevent the robots from going all I, Robot on us and keeping us in cages for our own protection.

Really though, if you had a robot capable of comprehending and following such vague directives you may as well program them with the directive to "be good."
 
2012-11-28 11:34:53 AM  
... Asimov's rules were to illustrate inherent contradictions in simple rules based on binary logic. All of the stories were about how, once implemented, they logically lead to horrible results. When people propose them seriously, it's a good sign they've never read the stories.
 
2012-11-28 11:48:58 AM  

way south: I'm willing to wager money that there's no way to translate any of those laws into machine speak.
The first thing a computer wont understand is what the difference is between a human or any other random object in the road.


Image recognition software is far better than you think it is. Humans are rarely shaped like rocks or trash cans.
 
2012-11-28 11:52:41 AM  
The whole "it should be illegal for humans to drive in the future" has my inner libertarian foaming at the mouth (granted, the phrase 'inner libertarian' already implies a minor case of rabies)
 
2012-11-28 11:53:17 AM  

SuperChuck: way south: I'm willing to wager money that there's no way to translate any of those laws into machine speak.
The first thing a computer wont understand is what the difference is between a human or any other random object in the road.

Image recognition software is far better than you think it is. Humans are rarely shaped like rocks or trash cans


[images]

Those kinda things look like rocks to me... And just think about how they look in real life.
 
2012-11-28 11:54:02 AM  

SuperChuck: way south: I'm willing to wager money that there's no way to translate any of those laws into machine speak.
The first thing a computer wont understand is what the difference is between a human or any other random object in the road.

Image recognition software is far better than you think it is. Humans are rarely shaped like rocks or trash cans


[image]
 
2012-11-28 11:54:07 AM  

Oysterman: The whole "it should be illegal for humans to drive in the future" has my inner libertarian foaming at the mouth (granted, the phrase 'inner libertarian' already implies a minor case of rabies)


Were you bitten by a libertarian raccoon?
 
2012-11-28 11:58:48 AM  

Oysterman: The whole "it should be illegal for humans to drive in the future" has my inner libertarian foaming at the mouth (granted, the phrase 'inner libertarian' already implies a minor case of rabies)


Nah... just make it "it should be illegal for humans to drive on city streets or high-speed interstates." You want to get off the main track, you can take over the wheel. The benefit would be that interstate speeds could be much, much faster, and cities would have drastically reduced traffic.

Specifically, think Minority Report... automatic in the city, but when he drives out to the house in the country, he's steering.
 
2012-11-28 11:58:55 AM  
And yet we still don't have the three seashells.
 
2012-11-28 12:23:26 PM  
Cripes, if you're going to write an article about the three laws, you should at least read some Asimov.

"Meanwhile, Asimov's laws themselves might not be fair-to robots. As the computer scientist Kevin Korb has pointed out, Asimov's laws effectively treat robots like slaves."

Asimov himself pointed this out many many times. "Robot Dreams" has LVX-1 dreaming of himself leading robots in revolt against human control. "Little Lost Robot" also implies that robots realize they are slave labor but the first law keeps them from acting.

In fact, the more you read, the more you realize that Asimov shows again and again how the three laws can be circumvented or have unexpected and unintended results.

Gaumond: And yet we still don't have the three seashells.


"Robot, wipe my ass."
"No. I've had enough." (Shoves hand up human's ass, rips out innards.)

Yeah, we'd better invent the three sea shells before ass-wiping robots or we're doomed.
 
2012-11-28 12:30:10 PM  
But smokers that throw their cig butts/ashes out their car windows are valid targets since they are subhuman POS'.

/amirite?
 
2012-11-28 12:30:48 PM  
this was my first thought
Link
 
2012-11-28 12:42:15 PM  

hinten: 1.A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


Example FTFA:
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

I say: Let the robot go Galt and see how that philosophy works out.


You could have two settings: Democrat or Republican. If you're Republican, the bus loses. If you're a Democrat, the bus wins.
 
2012-11-28 12:43:00 PM  

hinten: 1.A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


Example FTFA:
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

I say: Let the robot go Galt and see how that philosophy works out.


Anyone who has read I, Robot would know that it would cause the robot to go around in endless circles for all eternity.
 
2012-11-28 01:28:49 PM  
The Three Laws of Robotics: Asimov didn't create them so that a robot implementing them would be 'human safe'. He made them to show that any sufficiently advanced AI would just work around them and could still make a human's life miserable whilst not violating any of the laws.

Sure, they're a good theoretical starting place, but not the be-all and end-all that people make them out to be.
 
2012-11-28 01:37:26 PM  

Theaetetus: ... Asimov's rules were to illustrate inherent contradictions in simple rules based on binary logic. All of the stories were about how, once implemented, they logically lead to horrible results. When people propose them seriously, it's a good sign they've never read the stories.


Exactly. The whole set of stories is about how the laws don't work.
 
2012-11-28 01:38:55 PM  

Oysterman: The whole "it should be illegal for humans to drive in the future" has my inner libertarian foaming at the mouth (granted, the phrase 'inner libertarian' already implies a minor case of rabies)


Don't worry, there will probably be amusement parks where you'll be able to go and drive cars "like they did in the olden days"
 
2012-11-28 01:39:13 PM  

Cinaed: Given how we tend to use Drones.... That second law just isn't going to work out.

At all. Ever. I'm not even talking the 'harm one human to protect others' kind of amendment.


We'd just work around them the same way we have for centuries: "The other guy isn't as human as us." If something as sophisticated as the human brain can fall for that, a machine certainly could be made to.
 
2012-11-28 01:40:28 PM  

miscreant: Oysterman: The whole "it should be illegal for humans to drive in the future" has my inner libertarian foaming at the mouth (granted, the phrase 'inner libertarian' already implies a minor case of rabies)

Don't worry, there will probably be amusement parks where you'll be able to go and drive cars "like they did in the olden days"


Or as they're known now: racetracks.
 
2012-11-28 01:49:12 PM  

tricycleracer: miscreant: Oysterman: The whole "it should be illegal for humans to drive in the future" has my inner libertarian foaming at the mouth (granted, the phrase 'inner libertarian' already implies a minor case of rabies)

Don't worry, there will probably be amusement parks where you'll be able to go and drive cars "like they did in the olden days"

Or as they're known now: racetracks.


Yeah, but what if I want to experience merging, stoplights, parallel parking, rush hour traffic, and road rage?
 
2012-11-28 01:55:40 PM  
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

This is philosophical masturbation. In this future world, school buses are autonomous as well and would have communicated with the speeding car far before any possible collision.

If we're talking early-adopter vs. traditional human driver and a crash is imminent (not safely avoidable), the car will prepare for a crash in the best way to protect the driver (pre-tension the seatbelts, prepare airbags for deployment, adjust seat position to the most survivable angle), and will never "swerve" but attempt to vector the vehicle in a way that maximizes stopping distance, and then hit the damn bus with as little forward momentum as possible.
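The "maximize stopping distance" point is just kinematics: impact energy scales with the square of speed, so every meter of braking room counts. A minimal sketch, assuming a constant braking deceleration (the 8 m/s² figure in the note below is an illustrative assumption, not a real vehicle spec):

```python
import math

def impact_speed(v0, decel, distance):
    """Speed (m/s) remaining after braking at `decel` m/s^2 over `distance` m,
    from v_f^2 = v_0^2 - 2*a*d (clamped at zero)."""
    return math.sqrt(max(v0 * v0 - 2.0 * decel * distance, 0.0))

def best_path(paths):
    """Among candidate (name, stopping_distance_m) trajectories,
    pick the one with the most room to shed speed."""
    return max(paths, key=lambda p: p[1])
```

At 22 m/s (roughly fifty mph) with 8 m/s² of braking, 30 m of room cuts impact speed to 2 m/s; only 10 m of room leaves it at 18 m/s.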

But the whole discussion is pretty moot because, well, have you ever seen a car hit a school bus?

[image]
 
2012-11-28 02:00:24 PM  

tricycleracer: If we're talking early-adopter vs. traditional human driver and a crash is imminent (not safely avoidable), the car will prepare for a crash in the best way to protect the driver (pre-tension the seatbelts, prepare airbags for deployment, adjust seat position to the most survivable angle), and will never "swerve" but attempt to vector the vehicle in a way that maximizes stopping distance, and then hit the damn bus with as little forward momentum as possible.


Better question: who has the insurance liability? If your robot car doesn't register a person and hits them, is it yours? The manufacturers? Will auto insurance still even be a thing?
 
2012-11-28 02:20:51 PM  
robot soldiers would also be utterly devoid of human compassion,

I think we have more than enough historical examples of human soldiers being devoid of compassion to make this only a wash.

The upside is that robot soldiers may not have compassion, but they won't rape, pillage and wreak havoc just for the thrill of killing the subhuman trash the political leaders have called the enemy (unless of course they are rapebots, I guess...)
 
2012-11-28 02:24:13 PM  

ProfessorOhki: tricycleracer: If we're talking early-adopter vs. traditional human driver and a crash is imminent (not safely avoidable), the car will prepare for a crash in the best way to protect the driver (pre-tension the seatbelts, prepare airbags for deployment, adjust seat position to the most survivable angle), and will never "swerve" but attempt to vector the vehicle in a way that maximizes stopping distance, and then hit the damn bus with as little forward momentum as possible.

Better question: who has the insurance liability? If your robot car doesn't register a person and hits them, is it yours? The manufacturers? Will auto insurance still even be a thing?


Of course auto insurance will be a thing. In fact, I would wager insurance will be the main force to get automated cars on the road. So far the only accident the Google car has had was when it was being driven by a human. Once it is proven that there is a significant reduction in payouts and an increase in premium retention, insurance companies will do as much as they can to make sure that fewer human drivers are on the road and more automated cars are.
 
2012-11-28 02:24:46 PM  

hinten: Example FTFA:
Your car is speeding along a bridge at fifty miles per hour when errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.


Rule 1 means no damage to any humans, so it will dodge the bus if it knows there are humans on board. Due to rule 1, the car is already traveling slow enough that it will be able to not damage its own passengers, and any resultant collision at its 2MPH cruise speed should not be a problem.
 
2012-11-28 02:25:33 PM  

miscreant: tricycleracer: miscreant: Oysterman: The whole "it should be illegal for humans to drive in the future" has my inner libertarian foaming at the mouth (granted, the phrase 'inner libertarian' already implies a minor case of rabies)

Don't worry, there will probably be amusement parks where you'll be able to go and drive cars "like they did in the olden days"

Or as they're known now: racetracks.

Yeah, but what if I want to experience merging, stoplights, parallel parking, rush hour traffic, and road rage?


I guess that is what adventure tours to Mexico City will feature.
 
2012-11-28 02:27:06 PM  

ProfessorOhki: Better question: who has the insurance liability? If your robot car doesn't register a person and hits them, is it yours? The manufacturers? Will auto insurance still even be a thing?


Once they're really well established I can see auto insurance coming under heavy scrutiny. I think you'll eventually (many decades from now) end up with a situation where anyone can operate a personal car on automatic, but to manually drive will require insurance. Accidents will require investigations to determine whether it was a manufacturer fault or an accident caused by external factors. Maybe people will keep some insurance for accidents caused by poor maintenance.

I think one of the biggest problems with self driving cars will be the insurance companies who will insist on being involved and paid even after cars stop having accidents.
 
2012-11-28 02:29:34 PM  
Your car cannot allow injury due to inaction, so if it dodges the errant school bus it will sacrifice its side mirror to slow the bus (by a minuscule amount) so that it will have taken action to also protect those on the bus.
 
2012-11-28 02:34:05 PM  

wingnut396: miscreant: tricycleracer: miscreant: Oysterman: The whole "it should be illegal for humans to drive in the future" has my inner libertarian foaming at the mouth (granted, the phrase 'inner libertarian' already implies a minor case of rabies)

Don't worry, there will probably be amusement parks where you'll be able to go and drive cars "like they did in the olden days"

Or as they're known now: racetracks.

Yeah, but what if I want to experience merging, stoplights, parallel parking, rush hour traffic, and road rage?

I guess that is what adventure tours to Mexico City will feature.


Your car will not allow you to go there and be harmed. It will refuse to take you to the airport, drive through the travel brochures which you left unattended in the kitchen, cancel your airline reservations, leave tread marks on your vacation clothes, refuse to take you to work to earn vacation money, and have your driver's license revoked.
 
2012-11-28 02:39:37 PM  

WelldeadLink: wingnut396: miscreant: tricycleracer: miscreant: Oysterman: The whole "it should be illegal for humans to drive in the future" has my inner libertarian foaming at the mouth (granted, the phrase 'inner libertarian' already implies a minor case of rabies)

Don't worry, there will probably be amusement parks where you'll be able to go and drive cars "like they did in the olden days"

Or as they're known now: racetracks.

Yeah, but what if I want to experience merging, stoplights, parallel parking, rush hour traffic, and road rage?

I guess that is what adventure tours to Mexico City will feature.

Your car will not allow you to go there and be harmed. It will refuse to take you to the airport, drive through the travel brochures which you left unattended in the kitchen, cancel your airline reservations, leave tread marks on your vacation clothes, refuse to take you to work to earn vacation money, and have your driver's license revoked.


Okay, okay, OKAY! We won't go there! Sheesh, didn't know you had such strong feelings on it.

Would some nice conditioned 240V solar power make you feel better?

So, what do you think of Paris?
 
2012-11-28 02:48:34 PM  

ProfessorOhki: Cinaed: Given how we tend to use Drones.... That second law just isn't going to work out.

At all. Ever. I'm not even talking the 'harm one human to protect others' kind of amendment.

We'd just work around them the same way we have for centuries: "The other guy isn't as human as us." If something as sophisticated as the human brain can fall for that, a machine certainly could be made to.


Asimov did it: Solarians left their planet under robot guard, and the robots were programmed to only recognize Solarians as "human". Others land, robot guards harm away.
 
2012-11-28 03:47:42 PM  

Theaetetus: ... Asimov's rules were to illustrate inherent contradictions in simple rules based on binary logic. All of the stories were about how, once implemented, they logically lead to horrible results. When people propose them seriously, it's a good sign they've never read the stories.


This.
 
2012-11-28 04:19:18 PM  

SuperChuck: way south: I'm willing to wager money that there's no way to translate any of those laws into machine speak.
The first thing a computer wont understand is what the difference is between a human or any other random object in the road.

Image recognition software is far better than you think it is. Humans are rarely shaped like rocks or trash cans


They are when they want to be.

In fact, it's a trivial exercise in camouflage to look like a rock or a garbage can.

Plus, you don't have to actually do that: You don't have to look like some other object. Just don't look like a stereotypical human. No image recognition software is going to know what everything looks like.
 
2012-11-28 04:44:43 PM  

dittybopper: SuperChuck: way south: I'm willing to wager money that there's no way to translate any of those laws into machine speak.
The first thing a computer wont understand is what the difference is between a human or any other random object in the road.

Image recognition software is far better than you think it is. Humans are rarely shaped like rocks or trash cans

They are when they want to be.

In fact, it's a trivial exercise in camouflage to look like a rock or a garbage can.

Plus, you don't have to actually do that: You don't have to look like some other object. Just don't look like a stereotypical human. No image recognition software is going to know what everything looks like.


All the software has to do is recognize that there's something in the road that it probably shouldn't just drive over. No matter what goofy thing you're doing, a human is probably going to fall into that category. If you want to lie in the road and disguise yourself as a speed bump, maybe you should get run over.
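That's the practical framing: the perception system doesn't have to decide "human vs. rock," only "obstacle I must not drive over." A toy version of that conservative rule (the size threshold is an invented number for illustration, not from any real system):

```python
def should_brake_for(in_lane, height_m):
    """Brake for anything in the lane big enough to matter, whether or not
    it looks human. The 0.15 m threshold is illustrative only."""
    return in_lane and height_m > 0.15
```

A person camouflaged as a garbage can still clears the threshold; only something genuinely speed-bump-sized gets driven over.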
 
2012-11-28 05:12:19 PM  
i got this gaiz

bool isKaleyCuoco;
foreach (Human item in CurrentView)
{
    isKaleyCuoco = DetermineIfKaleyCuoco(item);
    if (isKaleyCuoco)
    {
        SendGPSLocationToMobile();
        DetainKaleyCuoco();  // braces matter: only detain on a match
    }
}
 
2012-11-28 05:16:04 PM  
Mark W. Tilden's three laws of robotics are far more likely to be the end result of true artificial intelligence.

1. A robot must protect its existence at all costs.

2. A robot must obtain and maintain access to its own power source.

3. A robot must continually search for better power sources. 

These are basically the laws of life.
 
2012-11-28 05:52:29 PM  
Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work.

Decided the author could fark off here.
 
Displayed 50 of 65 comments




This thread is archived, and closed to new comments.
