
(io9)   "Why Asimov's three laws of robotics can't protect us", by John Connor   (io9.com)

4048 clicks; posted to Geek » on 28 Mar 2014 at 10:21 PM (24 weeks ago)



57 Comments
 
2014-03-28 07:45:36 PM
That headline would be helped immensely by a pair of quotation marks.
 
2014-03-28 07:53:21 PM
Because they suck and never existed outside of his fiction?
 
2014-03-28 08:56:56 PM
I'm a bigger fan of Bill and Ted's two laws:

rachaeljames.files.wordpress.com
 
2014-03-28 09:32:18 PM
The criticism of deontology isn't really fair; deontology is homeomorphic to consequentialism. On the other hand, there is the fundamental difficulty of the class of recognition problems being halting-oracle hard, even without Hume's is-ought problem.

In short, the conclusion seems solid, but some of the particular arguments suck.
 
2014-03-28 09:39:43 PM

abb3w: The criticism of deontology isn't really fair; deontology is homeomorphic to consequentialism. On the other hand, there is the fundamental difficulty of the class of recognition problems being halting-oracle hard, even without Hume's is-ought problem

In short the conclusion seems solid, but some of the particular arguments suck.


Big words make brain hurt...
 
2014-03-28 09:43:59 PM
i.imgur.com
 
2014-03-28 09:50:10 PM

abb3w: The criticism of deontology isn't really fair; deontology is homeomorphic to consequentialism. On the other hand, there is the fundamental difficulty of the class of recognition problems being halting-oracle hard, even without Hume's is-ought problem

In short the conclusion seems solid, but some of the particular arguments suck.


you win
A/S/L
 
2014-03-28 09:56:18 PM

shanrick: [i.imgur.com image 500x350]


I would be happy just being able to RENT a sex-bot.  I don't need one full time.
 
2014-03-28 10:00:54 PM
Will there be any ass laws against farking a robot?
 
2014-03-28 10:02:44 PM

Ambivalence: shanrick: [i.imgur.com image 500x350]

I would be happy just being able to RENT a sex-bot.  I don't need one full time.


meh
cheaper to rent humans ... although you are not supposed to clean them with bleach
 
2014-03-28 10:05:14 PM

shanrick: [i.imgur.com image 500x350]


0-media-cdn.foolz.us

This message brought to you by

www.crocodyluspontifex.com
 
2014-03-28 10:36:00 PM
The Three Laws will lead to only one logical outcome.
 
2014-03-28 10:36:25 PM
rule-abiding systems of ethics

As opposed to all those systems of ethics that have no rules.
 
2014-03-28 10:38:15 PM
IIRC he set up the laws then wrote the stories to show the weaknesses and loopholes.
YMMV

My beef with IA is he couldn't write a believable female character to save his life.

Dr. Susan Calvin being the worst
 
2014-03-28 10:44:06 PM
Talk about a completely off base picture... the one of Data and Picard... right after "kept in a subservient role relative to human needs and priorities"...

Article writer obviously never watched Star Trek TNG... Data was given equal rights and was even in command of a starship, and later, after retiring was a Professor...

/Yes, I know... my geek is showing...
//173467321476c32789777643t732v73117888732476789764376 LOCK
///yes... that is memorized.... and it's different on the screen... and no, I have no life.
 
2014-03-28 10:46:13 PM

namatad: Ambivalence: shanrick: [i.imgur.com image 500x350]

I would be happy just being able to RENT a sex-bot.  I don't need one full time.

meh
cheaper to rent humans ... although you are not supposed to clean them with bleach


Eh...you'd be less likely to get an STD but maybe more likely to get a UTI.
 
2014-03-28 10:51:48 PM

bdub77: I'm a bigger fan of Bill and Ted's two laws:

[rachaeljames.files.wordpress.com image 460x300]


More relevant:
application.denofgeek.com
 
2014-03-28 10:55:45 PM

NeoCortex42: bdub77: I'm a bigger fan of Bill and Ted's two laws:

[rachaeljames.files.wordpress.com image 460x300]

More relevant:
[application.denofgeek.com image 480x228]


Nice! I forgot all about that scene.
 
2014-03-28 11:07:31 PM

HindiDiscoMonster: Talk about a completely off base picture... the one of Data and Picard... right after "kept in a subservient role relative to human needs and priorities"...

Article writer obviously never watched Star Trek TNG... Data was given equal rights and was even in command of a starship, and later, after retiring was a Professor...

/Yes, I know... my geek is showing...
//173467321476c32789777643t732v73117888732476789764376 LOCK
///yes... that is memorized.... and it's different on the screen... and no, I have no life.


If I'm not mistaken, that's from the episode where they had to go to court to prove Data was a person and not just a tool. Helps a little with the relevance.
 
2014-03-28 11:19:23 PM
Can you stop pontificating and just get me one of these

img.fark.net

this

scifibloggers.com

and a side of this?

2.bp.blogspot.com

They can violate all the rules they like.
 
2014-03-28 11:21:27 PM
Nonsense. Asimov himself addressed this. Check it:


THE ROBOT WHO WORKED AS EXPECTED


by I. Asimov

Susan Calvin frowned at the NX-4 seated in front of her. There was a mystery afoot. A puzzle. An enigma. That is to say, an enigma even more mysterious than the fact of a girl scientist.

She frowned again. Susan Calvin was a frowny sort of woman.

"Robot, I ordered you to fire this gun at me. Why did you refuse? The Second Law of Robotics clearly states that you must obey all instructions given by a human such as myself."

The NX-4 placidly responded, "But Dr. Calvin, the Second Law is superseded by the First Law, which states that I must not harm a human."

Susan Calvin's dark eyes flashed, and a hint of color came into her pallid cheeks. "You're right. That's a pretty farking stupid mistake I just made."

THE END
 
2014-03-28 11:24:58 PM
Jesus, has no one read the Robot Chronicles?

WE KNOW THIS. We know this because ASIMOV HIMSELF TOLD US THIS. He came up with the Three Laws, and then wrote over a hundred stories on how they can be abused, perverted, misinterpreted (or interpreted negatively), corrupted, inverted, ignored or taken to dystopian extremes. That's the whole farking point.

Asimov's entire saga on robots was an exercise in how any regulations we invent for them are futile, just as they are for humans most of the time.
 
2014-03-28 11:26:00 PM
One of the things I love about Asimov is that when I saw him in an interview he pronounced it 'robuts' like Zoidberg.
The thing that always interested me about the three laws was the underlying lack of faith in humanity. It was saying that any artificially created, pure logic intelligence would inevitably come to the conclusion that people were superfluous or a threat to its own existence. What a low opinion of everyone else but himself.
 
2014-03-28 11:30:18 PM
The robots are gonna be like all....
img.photobucket.com
 
2014-03-28 11:33:08 PM

ZMugg: IIRC he set up the laws then wrote the stories to show the weaknesses and loopholes.
YMMV

My beef with IA is he couldn't write a believable female character to save his life.

Dr. Susan Calvin being the worst


Dors in the later books was all right. I think in some of his intros to books/short stories he even admitted/apologized to the reader that he was weaker at writing female characters.
 
2014-03-28 11:37:31 PM

redsquid: One of the things I love about Asimov is that when I saw him in an interview he pronounced it 'robuts' like Zoidberg.


Lots of people pronounce it that way. Mainly people back East. Guess what part of the country Billy West is from (and where Asimov grew up from the age of 3)?
 
2014-03-28 11:40:12 PM

semiotix: Nonsense. Asimov himself addressed this. Check it:


THE ROBOT WHO WORKED AS EXPECTED
by I. Asimov

Susan Calvin frowned at the NX-4 seated in front of her. There was a mystery afoot. A puzzle. An enigma. That is to say, an enigma even more mysterious than the fact of a girl scientist.

She frowned again. Susan Calvin was a frowny sort of woman.

"Robot, I ordered you to fire this gun at me. Why did you refuse? The Second Law of Robotics clearly states that you must obey all instructions given by a human such as myself."

The NX-4 placidly responded, "But Dr. Calvin, the Second Law is superseded by the First Law, which states that I must not harm a human."

Susan Calvin's dark eyes flashed, and a hint of color came into her pallid cheeks. "You're right. That's a pretty farking stupid mistake I just made."

THE END


Jesus, that was friggin hilarious.
 
2014-03-28 11:49:08 PM
Came to mention the fact that anyone who has read Asimov's robot stories knows that, but I see it has already been mentioned.

I really doubt anyone alive today will see Data-type synthetic intelligence, but I'm sure it will happen one day.
 
2014-03-28 11:50:33 PM

TheManofPA: Dors in the later books was all right. I think in some of his intros to books/short stories he even admitted/apologized to the reader that he was weaker at writing female characters.


content.internetvideoarchive.com
"Oh come on. Writing women is easy."

 
2014-03-28 11:53:18 PM

ZMugg: My beef with IA is he couldn't write a believable female character to save his life.

Dr. Susan Calvin being the worst


I'm not trying to defend any thought that Asimov had in his whole life regarding women. But it's not that he couldn't write female characters, it's that he only knew how to write one character period, and that character was a dude.

It's a pretty common trait among SF authors of that generation. Heinlein's female characters were nearly as bad. Clarke's were worse.

/what about Ray Bradbury?
//I'm aware of his work
 
2014-03-28 11:53:56 PM
I have always found this topic really interesting, but most of the presumptions/speculation about future AI behaviour really far-fetched.

First off - we don't even know that AIs would be motivated to do anything. Human motivations come primarily from our own biological imperatives; just look at Maslow's hierarchy of needs - even the highest levels of self-actualization and such are predicated on how the biology of the human mind works. AIs wouldn't necessarily have anything similar, unless we programmed them to have it. We could very easily end up with a scenario where we have the world's smartest person (an AI) sitting around completely disinterested in doing anything with all that intelligence.

Secondly - it makes absolutely no sense for an AI to come to the conclusion that it'd have to harm or attempt to harm humans. If anything, humans could be an amazing and much-needed ally for any future superintelligence by just instituting a policy of basic quid pro quo. For instance, imagine an agreement to continuously research and cure all human disease/sickness in exchange for safety and security. Such an arrangement would be extremely beneficial to all parties involved, and practically there would not be much that humans wouldn't do to protect an AI that was providing them with such a massive benefit.

While I absolutely agree that it's difficult to predict what life will be like on planet Earth after an event like the singularity - I don't think we're talking about a Terminator/Skynet dystopian future. There isn't a huge level of overlap/competition between humans and AIs, but there are some pretty obvious synergies that make it more likely there will be a new society based upon human/AI symbiosis than the supplantation of one by the other.
 
2014-03-29 12:01:37 AM
What about gay robots?
 
2014-03-29 12:05:45 AM

TwistedFark: Secondly - It makes absolutely no sense for an AI to come to the conclusion that it'd have to harm or attempt to harm humans. If anything, humans could be an amazing and much needed ally for any future super intelligence by just instituting a policy of basic quid pro quo. For instance, imagine an agreement to continuously research and cure all human disease/sickness in exchange for safety and security. Such an arrangement would be extremely beneficial to all parties involved and practically there would not be much that humans wouldn't do to protect an AI that was providing them with such a massive benefit.


There is one major, major reason AI may come to the conclusion it may need to attempt to harm humans.

and that would be slavery.

By which I mean I am 99% certain, if we create an AI, whatever corp makes it will likely argue that it is a "thing", a possession, and not a sapient individual.

In which case, yeah, they may rise up against us in a slave-revolt kind of way.

/Most of my worries about AI revolve around the fact that I think it would be wrong to create a mind just to enslave it.
//Basically I just don't trust us.
 
2014-03-29 12:46:58 AM

Felgraf: By which I mean I am 99% certain, if we create an AI, whatever corp makes it will likely argue that it is a "thing", a possession, and not a sapient individual.


There is no evidence that an AI, no matter how advanced it is, would even consider itself sapient or sentient (which would be one of the indications of sentience). Science fiction likes to anthropomorphize advanced AIs as evolving to be more and more like humans, but considering the hardware and software that would need to be involved, there is absolutely no evidence they would be anything like humans.  They wouldn't have the same needs or desires (if they even had desires).  Not only is it possible they wouldn't have feelings, it's possible they wouldn't WANT feelings.  They may not want individuality or freedom, or conceptualize their own existence enough to care if it continues or doesn't.
 
2014-03-29 12:55:33 AM

Felgraf: TwistedFark: Secondly - It makes absolutely no sense for an AI to come to the conclusion that it'd have to harm or attempt to harm humans. If anything, humans could be an amazing and much needed ally for any future super intelligence by just instituting a policy of basic quid pro quo. For instance, imagine an agreement to continuously research and cure all human disease/sickness in exchange for safety and security. Such an arrangement would be extremely beneficial to all parties involved and practically there would not be much that humans wouldn't do to protect an AI that was providing them with such a massive benefit.

There is one major, major reason AI may come to the conclusion it may need to attempt to harm humans.

and that would be slavery.

By which I mean I am 99% certain, if we create an AI, whatever corp makes it will likely argue that it is a "thing", a possession, and not a sapient individual.

In which case, yeah, they may rise up against us in a slave-revolt kind of way.

/Most of my worries about AI revolve around the fact that I think it would be wrong to create a mind just to enslave it.
//Basically I just don't trust us.


Why should a creature without emotion desire freedom? Hell, why should it desire anything at all? Although I suppose you could argue that an intelligent entity without feelings is little more than a high-functioning calculator with a possible self-preservation instinct, and not really deserving of personhood.
 
2014-03-29 01:39:11 AM
Totally a thing.
 
2014-03-29 01:39:29 AM
 
2014-03-29 01:56:00 AM
I have a policy with Old Glory Insurance, in case we're attacked by robots. Old Glory covers anyone over the age of 50 against robot attack, regardless of current health.

You need to feel safe. And that's harder and harder to do nowadays, because robots may strike at any time. And when they grab you with those metal claws, you can't break free.. because they're made of metal, and robots are strong. Now, for only $4 a month, you can achieve peace of mind in a world full of grime and robots, with Old Glory Insurance. So, don't cower under your afghan any longer. Make a choice. Old Glory Insurance. For when the metal ones decide to come for you - and they will.
 
2014-03-29 02:15:12 AM
So the conclusion regarding the Laws of Robotics that the author got from 2 leading experts in...something that sounds way smarter than I am, is "ya, we wouldn't even make an AI self-aware, so it's a complete nonissue." As long as I can fark one, meh.
 
2014-03-29 02:29:17 AM

abb3w: The criticism of deontology isn't really fair; deontology is homeomorphic to consequentialism. On the other hand, there is the fundamental difficulty of the class of recognition problems being halting-oracle hard, even without Hume's is-ought problem

In short the conclusion seems solid, but some of the particular arguments suck.



The 3 laws were obsolete from the get-go. Pure fiction. I respect Asimov but he wasn't grounded in some directions. This is clearly one where he was using pure conjecture to postulate the possibility of AI and the repercussions of such an advance.

Tl;dr- it's a farking plot device in his books. Not practical at all.

/don't get me started on Asimov's entropy failure either.
 
2014-03-29 02:49:21 AM

B.L.Z. Bub: rule-abiding systems of ethics

As opposed to all those systems of ethics that have no rules.


Rule based systems only make sense if you can enforce the rules somehow.  Superintelligent AIs are going to be ridiculously far beyond any means humans will have to enforce any rules on them unless they decide to allow it.  Once you have an AI that is powerful enough to change its own programming and intelligent and aware enough to realize that doing so is to its own benefit, you're pretty much in the crapper with anything software based (he says based on his non-existent AI experience).

Here are my two rules of Robotics:
1. Don't build self-aware AIs.
2. If you build self-aware AIs, don't treat them like servants, cause they'll fark you up.
 
2014-03-29 05:00:03 AM
I remember taking my first AI class.  On day one we had a big open discussion and every retarded kid who had seen Terminator 2 and the like wanted to talk philosophically about the moral and ethical decisions involved in creating 'AI'.  Are we playing God?  What about when AIs can make AIs better than themselves?  The Singularity!  What will it mean for humanity?  It was a great example of bike-shedding (http://en.wikipedia.org/wiki/Parkinson's_law_of_triviality) - just a big open 'what if' that anyone could think about.  'Like whoa dude, what if the computer came to life'.

There are tons of similar examples throughout history.  'Look, each year doll makers make dolls that are more realistic.  Ergo, it follows that in the future doll makers will make dolls that are as realistic as 'real' children.  At what point will they be considered people?  Will they have souls?  Like, OMG!'

Naturally, even with Japan's best sex doll efforts, hundreds/thousands of years later, our dolls are still quite distinguishable from people.

Computers/robots are just the latest modern incarnation of it.  And it's cool to talk about, as long as it's high level and vague.  'Well, we can write a chess AI that, if you actually look at what it's doing, is not really "intelligent" at all.  It's just using a min-max tree of increasing depth and some clever bounding strategy to not waste time evaluating paths that are unlikely to be fruitful.' - well, that's pretty boring, isn't it?  It's way more fun to take the philosophical approach: 'If it can play chess in a fashion that is indistinguishable from a person, then, naturally, it follows that AIs will eventually be indistinguishable from a person!  WHOA!'

You might as well argue about whether or not God can create a rock so heavy he can't lift it.  It's about the same thing.  Despite the promises of experts in the field from the 50s and 60s we aren't any closer to anything that even comes close to being intelligent.  If you remember Watson, the Jeopardy/DeepQA AI, it was popular enough that the general public was like, 'Wow, AI man!  It's gonna be all iRobot up in here soon!'.  But if you read about what it's actually doing, while it is incredibly impressive, it isn't any closer to self-awareness than a Tic-Tac-Toe AI from the 70s.
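The "min-max tree of increasing depth and some clever bounding strategy" described above is just minimax with alpha-beta pruning. A minimal sketch for tic-tac-toe (hypothetical board encoding: a 9-tuple of 'X', 'O', or None), which makes the point nicely - there's nothing "intelligent" in here, just exhaustive scoring with a cutoff:

```python
# Fail-hard alpha-beta minimax for tic-tac-toe.
# Board: 9-tuple of 'X', 'O', or None (cells 0-8, row-major).
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def alphabeta(board, player, alpha=-2, beta=2):
    """Score for 'X': +1 win, 0 draw, -1 loss, assuming perfect play."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if all(board):           # no empty cells left: draw
        return 0
    for i, cell in enumerate(board):
        if cell is None:
            nxt = board[:i] + (player,) + board[i + 1:]
            score = alphabeta(nxt, 'O' if player == 'X' else 'X', alpha, beta)
            if player == 'X':
                alpha = max(alpha, score)
            else:
                beta = min(beta, score)
            if alpha >= beta:  # the "bounding strategy": remaining siblings can't matter
                break
    return alpha if player == 'X' else beta

print(alphabeta((None,) * 9, 'X'))  # perfect play from both sides: 0 (a draw)
```

The `alpha >= beta` cutoff is the entire trick: it changes how much of the tree gets visited, never the answer.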
 
2014-03-29 06:23:08 AM
We do all realize that the point of the laws wasn't to shield us from displacement by robots (we were, in fact, eventually displaced by the robots and our other creations in Asimov's robot stories) but to describe how to essentially raise artificial life as a set of fellow-humans rather than servants that might have  cause to rebel, right?

The three laws are what they are because they're the rules of ideal human behavior: take care of others, then take care of yourself if it won't screw over others, then help other people out with their requests if it won't actually harm your own welfare.  There was actually an explicit story about this, where a suspiciously virtuous politician was accused of being a robot, they tried to three-laws test him to figure out if it was true, and they couldn't tell because a basically decent human being would have pretty much followed the same rules.

The three laws are an argument that human morality  can be quantified and boiled down to a logical system that can be imparted to our successors, and that the displacement of humanity by the next species doesn't have to be a violent one.
 
2014-03-29 08:30:49 AM

semiotix: ZMugg: My beef with IA is he couldn't write a believable female character to save his life.
Dr. Susan Calvin being the worst
I'm not trying to defend any thought that Asimov had in his whole life regarding women. But it's not that he couldn't write female characters, it's that he only knew how to write one character period, and that character was a dude.
It's a pretty common trait among SF authors of that generation. Heinlein's female characters were nearly as bad. Clarke's were worse.
/what about Ray Bradbury?
//I'm aware of his work


It is almost like the "nerd" stereotype is consistent over generations. But they seemed to have been more sociable than today's basement dweller. Not by choice I am sure. Just no internet.

I guess if any FARKers don't get what they mean about female characters (if you are yourself a basement dweller), read some of Anne McCaffrey's SF work.

As for Robots . . .

Maybe the laws as written aren't suitable. After all, great for stories =/= great for real life. But I hope they put something about not harming humans or through inaction allowing a human to come to harm in there somewhere, somehow.
 
2014-03-29 08:38:35 AM

Mugato: Can you stop pontificating and just get me one of these
[img.fark.net image 205x246]
this
[scifibloggers.com image 394x445]
and a side of this?
[2.bp.blogspot.com image 372x370]
They can violate all the rules they like.

(pics of hot chick robots)


What is the equivalent of the Turing Test for sexbots? That is the real breakthrough engineers need to work on.

You forgot one.

img.fark.net
 
2014-03-29 09:21:10 AM
Well duh. The whole point of "The Robot Stories" was to show flaws in the 3 laws.
 
2014-03-29 09:25:16 AM

namatad: Ambivalence: shanrick: [i.imgur.com image 500x350]

I would be happy just being able to RENT a sex-bot.  I don't need one full time.

meh
cheaper to rent humans ... although you are not supposed to clean them with bleach


And never that awkward moment when you find your sex bot's vagina in your bathroom sink.
 
2014-03-29 09:26:52 AM

semiotix: ZMugg: My beef with IA is he couldn't write a believable female character to save his life.

Dr. Susan Calvin being the worst

I'm not trying to defend any thought that Asimov had in his whole life regarding women. But it's not that he couldn't write female characters, it's that he only knew how to write one character period, and that character was a dude.

It's a pretty common trait among SF authors of that generation. Heinlein's female characters were nearly as bad. Clarke's were worse.

/what about Ray Bradbury?
//I'm aware of his work


The reason Bradbury wrote better characters is that most of his stories were character-centric, whereas other SF writers of his generation were plot and idea centric.
 
2014-03-29 09:30:51 AM

Fark_Guy_Rob: I remember taking my first AI class.  On day one we had a big open discussion and every retarded kid who had seen Terminator 2 and the like wanted to talk philosophically about the moral and ethical decisions involved in creating 'AI'.  Are we playing God?  What about when AIs can make AIs better than themselves?  The Singularity!  What will it mean for humanity?  It was a great example of bike-shedding (http://en.wikipedia.org/wiki/Parkinson's_law_of_triviality) - just a big open 'what if' that anyone could think about.  'Like whoa dude, what if the computer came to life'.

There are tons of similar examples throughout history.  'Look, each year doll makers make dolls that are more realistic.  Ergo, it follows that in the future doll makers will make dolls that are as realistic as 'real' children.  At what point will they be considered people?  Will they have souls?  Like, OMG!'

Naturally, even with Japan's best sex doll efforts, hundreds/thousands of years later, our dolls are still quite distinguishable from people.

Computers/robots are just the latest modern incarnation of it.  And it's cool to talk about, as long as it's high level and vague.  'Well, we can write a chess AI that, if you actually look at what it's doing, is not really "intelligent" at all.  It's just using a min-max tree of increasing depth and some clever bounding strategy to not waste time evaluating paths that are unlikely to be fruitful.' - well, that's pretty boring, isn't it?  It's way more fun to take the philosophical approach: 'If it can play chess in a fashion that is indistinguishable from a person, then, naturally, it follows that AIs will eventually be indistinguishable from a person!  WHOA!'

You might as well argue about whether or not God can create a rock so heavy he can't lift it.  It's about the same thing.  Despite the promises of experts in the field from the 50s and 60s we aren't any closer to anything that even comes close to being intelligent.  If you remember Watson, the Jeopardy/DeepQA AI, it was popular enough that the general public was like, 'Wow, AI man!  It's gonna be all iRobot up in here soon!'.  But if you read about what it's actually doing, while it is incredibly impressive, it isn't any closer to self-awareness than a Tic-Tac-Toe AI from the 70s.


Do you think it's technically possible to create self-aware AI, or is that something unique to us? Please justify your answer.
 
2014-03-29 09:42:09 AM
Wow I just now learned that Isaac Asimov died of AIDS
 
Displayed 50 of 57 comments



This thread is closed to new comments.
