
(The Atlantic)   Office of Naval Research awards $7.5 million in grants to researchers looking for a way to build "a sense of moral consequence into autonomous robots". Pretty sure you can pick up an Asimov paperback collection for a lot less than that   (theatlantic.com)
    More: Interesting, Office of Naval Research, Isaac Asimov, autonomous robot, moral consequence, X-47B, Ronald Arkin, RPI, United States Department of Defense  

325 clicks; posted to Geek » on 14 May 2014 at 4:44 PM (27 weeks ago)



42 Comments
 
2014-05-14 02:45:22 PM  
Well I dunno, he DID write a LOT of farking books. I am sure a complete collection of everything he ever published would set you back a pretty penny.
 
2014-05-14 02:53:38 PM  
The problem here is that the military doesn't really want their robots to have a conscience.  That's part of what makes them better soldiers than humans.
Sure, they think they can give it an on/off switch, but that invariably leads to the robots turning it off themselves.
 
2014-05-14 02:54:15 PM  
Isn't most of his stuff public domain? So... download it?

Or just read Philip K. Dick and leave that overrated twat for the unwashed masses.
 
2014-05-14 03:16:57 PM  
Most of the early robot stories were about the various ways that the 3 Laws break down in the field.
 
2014-05-14 03:41:34 PM  
If there is anything I learned from movies like I, Robot, it's that you can control that sort of thing by the color of the robot's chest-light.

If you do not want a bad robot, do not use red lights.  It seems rather simple.
 
2014-05-14 04:57:28 PM  
 
2014-05-14 04:58:49 PM  
Make the robot so it can trade off profit vs. human lives lost + government fines and we can replace all those expensive CEOs.
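The joke's decision rule is just a cost-benefit inequality, and it could be sketched in a few lines. Every number and name below is invented for illustration, including the dollar figure placed on a life:

```python
# Hedged sketch of a profit-vs-liability decision rule. All values are
# made up; VALUE_OF_LIFE mimics a regulator-style "value of a statistical
# life", not any real agency's figure.

VALUE_OF_LIFE = 9_000_000  # assumed priced-in cost per expected death

def approve_action(profit, expected_fines, expected_lives_lost):
    """Approve only when profit exceeds fines plus priced-in lives."""
    liability = expected_fines + expected_lives_lost * VALUE_OF_LIFE
    return profit > liability

# A $50M action with $2M in fines and 3 expected deaths still "pencils out":
print(approve_action(50_000_000, 2_000_000, 3))  # prints True
```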
 
2014-05-14 05:02:05 PM  
May I purchase my piece of Solarian real estate now or do I still have to wait?
 
2014-05-14 05:05:53 PM  
The Three Laws:

1) Don't be evil.

2) *smirk*

3) *giggle*
 
2014-05-14 05:06:15 PM  
we should just paint frowns on them so they appear sorry about collateral damage

/ works for politicians
 
2014-05-14 05:11:12 PM  

Magorn: Well I dunno, he DID write  a LOT of farking books, I am sure a complete collection everything he ever published would set you back a pretty penny


Download the books from your library for free. I have been on disability and listening to him and Heinlein since February. On "The Robots of Dawn" now; I forgot how long it was... excellent on audio.
 
2014-05-14 05:22:35 PM  
There's a huge body of work out there discussing the development of robot emotion and morality. A good place to start is at  http://aitopics.org/topic/emotion

The Three laws won't cut it, though. Asimov acknowledged from the beginning that the three laws don't work, and the zeroth law patch causes

*Spoilers for multi-decade old revelation*

mass genocide of all non-human sapients in the galaxy.

However, the Terminator TV series did a good job showing

*Spoilers for the best damn show you should have watched on air the first time*

John Henry and Cameron learning ethics and religion in order to skew Skynet into being more moral.

/Blatant plug: I did a talk on this issue at the most recent DragonCon. You can find a poorly shot video of it here.
 
2014-05-14 05:26:41 PM  

Tr0mBoNe: Isn't most of his stuff public domain? So... download it?


Life plus 70, so no.

But then I shouldn't expect someone who considers one of the most brilliant and prolific writers of the 20th century to be an "overrated twat" to know much.
 
2014-05-14 05:55:36 PM  

timujin: Tr0mBoNe: Isn't most of his stuff public domain? So... download it?

Life plus 70, so no.

But then I shouldn't expect someone who considers one of the most brilliant and prolific writers of the 20th century to be an "overrated twat" to know much.


He wasn't a "twat", but he was overrated like CRAZY.

Asimov was a "big idea"-type sci-fi author, meaning his ideas were great, but his writing sucked ass. Same goes for Clarke.
 
2014-05-14 05:56:25 PM  

luidprand: There's a huge body of work out there discussing the development of robot emotion and morality. A good place to start is at  http://aitopics.org/topic/emotion

The Three laws won't cut it, though. Asimov acknowledged from the beginning that the three laws don't work, and the zeroth law patch causes

*Spoilers for multi-decade old revelation*

mass genocide of all non-human sapients in the galaxy.

However, the Terminator TV series did a good job showing

*Spoilers for the best damn show you should have watched on air the first time*

John Henry and Cameron learning ethics and religion in order to skew Skynet into being more moral.

/Blatant plug: I did a talk on this at the most recent DragonCon on this issue. You can find a poorly shot video of it here.


Unfortunately, such a strategy risks that robots will suffer a psychotic break should they ever learn that Silicon Heaven does not actually exist.
 
2014-05-14 05:57:26 PM  
They can't even get their men to stop raping people; we're obviously going to end up with all-seeing rapebots.
 
2014-05-14 05:58:18 PM  
Good thing our smartest philosophers have never figured out perfect ethics or morals either, since there are infinite edge cases for every rule.
 
2014-05-14 06:11:53 PM  
Got to see Asimov give a lecture at the local college back in the 80s. A very well-spoken and incredibly funny guy.

RIP

/whose revolution?
 
2014-05-14 06:13:44 PM  
It's the military, so the first of the three laws is right out.
 
2014-05-14 06:20:42 PM  

Dimensio: luidprand: There's a huge body of work out there discussing the development of robot emotion and morality. A good place to start is at  http://aitopics.org/topic/emotion

The Three laws won't cut it, though. Asimov acknowledged from the beginning that the three laws don't work, and the zeroth law patch causes

*Spoilers for multi-decade old revelation*

mass genocide of all non-human sapients in the galaxy.

However, the Terminator TV series did a good job showing

*Spoilers for the best damn show you should have watched on air the first time*

John Henry and Cameron learning ethics and religion in order to skew Skynet into being more moral.

/Blatant plug: I did a talk on this at the most recent DragonCon on this issue. You can find a poorly shot video of it here.

Unfortunately, such a strategy risks that robots will suffer a psychotic break should they ever learn that Silicon Heaven does not actually exist.


Who's to say it doesn't? After all, the many, many human versions can't be disproven. There's no reason to assume that they can't be the same "place", either.
 
2014-05-14 07:16:30 PM  
Oh, subby, you have no idea how these things work, do you?

$3.5 million for the ONR, $3.5 million for the researchers, and half a million to pay some Senator's dumb nephew to outsource to some temps in Bangalore the compiling of Cliff's Notes and Wikipedia articles into a report.
 
2014-05-14 07:25:29 PM  

realmolo: timujin: Tr0mBoNe: Isn't most of his stuff public domain? So... download it?

Life plus 70, so no.

But then I shouldn't expect someone who considers one of the most brilliant and prolific writers of the 20th century to be an "overrated twat" to know much.

He wasn't a "twat", but he was overrated like CRAZY.

Asimov was a "big idea"-type sci-fi author, meaning his ideas were great, but his writing sucked ass. Same goes for Clarke.


Look at his work outside of sci-fi and then get back with me.
 
2014-05-14 07:40:03 PM  

realmolo: timujin: Tr0mBoNe: Isn't most of his stuff public domain? So... download it?

Life plus 70, so no.

But then I shouldn't expect someone who considers one of the most brilliant and prolific writers of the 20th century to be an "overrated twat" to know much.

He wasn't a "twat", but he was overrated like CRAZY.

Asimov was a "big idea"-type sci-fi author, meaning his ideas were great, but his writing sucked ass. Same goes for Clarke.


Well, we have to disagree on both, but if you have been raised on the swords-and-sorcery books that pass for science fiction, I can see where you might be confused.
 
2014-05-14 07:48:47 PM  
Started reading lots of Asimov's short stories recently. Amusing to see things he got wrong in what, to us, is a blindingly obvious way. Like the robot police officer with a computer brain who studied police files by going into the file room and reading all the paper cards in filing cabinets. Or the robot who proofread a book by picking up the paper manuscript and thumbing through the pages.
In his future, walking, talking robots came first. Word processors and the PC never happened.
 
2014-05-14 08:00:29 PM  

timujin: realmolo: timujin: Tr0mBoNe: Isn't most of his stuff public domain? So... download it?

Life plus 70, so no.

But then I shouldn't expect someone who considers one of the most brilliant and prolific writers of the 20th century to be an "overrated twat" to know much.

He wasn't a "twat", but he was overrated like CRAZY.

Asimov was a "big idea"-type sci-fi author, meaning his ideas were great, but his writing sucked ass. Same goes for Clarke.

Look at his work outside of sci-fi and then get back with me.


One of the few writers to have published a book in every category of the Dewey Decimal System, so your snide implication can screw off.
 
2014-05-14 08:17:24 PM  

Dimensio: luidprand: There's a huge body of work out there discussing the development of robot emotion and morality. A good place to start is at  http://aitopics.org/topic/emotion

The Three laws won't cut it, though. Asimov acknowledged from the beginning that the three laws don't work, and the zeroth law patch causes

*Spoilers for multi-decade old revelation*

mass genocide of all non-human sapients in the galaxy.

However, the Terminator TV series did a good job showing

*Spoilers for the best damn show you should have watched on air the first time*

John Henry and Cameron learning ethics and religion in order to skew Skynet into being more moral.

/Blatant plug: I did a talk on this at the most recent DragonCon on this issue. You can find a poorly shot video of it here.

Unfortunately, such a strategy risks that robots will suffer a psychotic break should they ever learn that Silicon Heaven does not actually exist.


Well, that's just ridiculous! Where would all the pocket calculators go?
 
2014-05-14 09:45:35 PM  
http://m.barnesandnoble.com/w/the-tale-of-the-wicked-john-scalzi/1108428439?ean=2940014031431
 
2014-05-14 09:48:19 PM  

Flint Ironstag: Word processors and the PC never happened.


I've been re-reading Asimov, and I giggled at the part in Second Foundation where he writes about Arkady Darell having to replace the paper in her voice transcriber each time she makes a mistake.

My biggest complaint about the Foundation series is the constant babbling by Bliss.  Trevize should have punched her/them/Gaia square in the face and told her/them/Gaia to shut her/their/Gaia's whore mouth.
 
2014-05-14 10:27:15 PM  

luidprand: The Three laws won't cut it, though. Asimov acknowledged from the beginning that the three laws don't work


It wasn't so much that they didn't work as that they didn't immediately solve all our problems. In fact, they worked pretty much perfectly; by about the 2200s or so in his timeline the obvious kinks were worked out, and the robots were pretty unarguably morally superior to most people.

The problem was that, having succeeded, it didn't really occur to us that even with a well-designed morality and the best of intentions things could still go wrong: mistakes, confusion, and simple misguidedness, he was arguing, were the ultimate cause of our moral failings more than "evil".

Basically it was a multi-book argument for never giving up on self-examination and philosophy, no matter how good your system seems.

// It was also a really optimistic defense of humanity, where most of our destructiveness, as noted above, results from mistakes that can be corrected rather than any real evil intent. Pretty impressive for the son of people who had to flee Russia in fear of being murdered by their nation's government.
 
2014-05-15 12:10:32 AM  
 
2014-05-15 12:43:54 AM  
OH FFS, KEITH LAUMER and his BOLOS.
 
2014-05-15 01:17:03 AM  
Asimov's entire Robot series was basically about his Three Laws failing.
 
2014-05-15 01:21:14 AM  

Tr0mBoNe: Isn't most of his stuff public domain? So... download it?

Or just read Phillip K Dick and leave that overrated twat for the unwashed masses.


Now there's a way-overrated and horrible author. About what you can expect from speed-fueled writing marathons.
 
2014-05-15 01:35:09 AM  
Just teach them patriotism, much more reliable.

 
2014-05-15 07:17:49 AM  

Tr0mBoNe: Isn't most of his stuff public domain? So... download it?

Or just read Phillip K Dick and leave that overrated twat for the unwashed masses.


Popular stuff is always bad.

How's Philip K. Dick's non-fiction?
 
2014-05-15 07:23:21 AM  
One thing I don't understand is why we would give robots emotions anyway. Don't do that, and there's no problem in the first place.
 
2014-05-15 07:31:04 AM  
Asimov's laws work, partially, on a robot with human-like understanding.
Real robots have insect-tier capabilities. You can't easily define things like "harm" or "human".

You can stand there and pontificate about all these fancy moral laws, and you can also think of a dozen ways to circumvent them, but the real danger is that computers just don't care and don't understand any of this.
They'll drop the bomb when the right bit flips on.
 
2014-05-15 08:14:09 AM  

Flint Ironstag: Started reading lots of Asimov's short stories recently. Amusing to see things he got wrong in what, to us, is a blindingly obvious way. Like the robot police officer with a computer brain who studied police files by going into the file room and reading all the paper cards in filing cabinets. Or the robot who proof read a book by picking up the paper manuscript and thumbing through the pages.
In his future walking, talking robots came first. Word processors and the PC never happened.


Well, he was a very bright man who did have his prophetic moments, and yet he needed to write scenes his readers could relate to.

I think the most jarring part of a lot of "golden age" SF is that PC never happened. White men still run almost everything, and except for the occasional brilliant anomaly (e.g. Susan Calvin), Great Man's Daughter, or Adventuring Sex Kitten, women mostly still stay home and bake bundt cakes.
 
2014-05-15 08:16:29 AM  
They want Robocop but we'll end up with Bender, at best.
 
2014-05-15 09:16:04 AM  

Shakin_Haitian: One thing I don't understand is why would we give robots emotions anyway? Don't do that, no problem in the first place.


You actually can't. In a sufficiently complex AI, they arise naturally as the AI has to develop preferences between tasks and users that are otherwise equivalent or have similar priorities. If anything, the emotions would be more present than in humans, as the AI can be fully aware of their cause and how they are tied directly to every function they have.

From a practical perspective, emotions also serve to help the AI adapt to unforeseen situations, where the directives contradict or are unclear. Also, if they have to associate with people, emotions help provide an interface boost and prevent (or at least ameliorate) uncanny valley situations.
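The tie-breaking idea could be sketched in a few lines. To be clear, this is a toy mechanism, not any real AI architecture: when two tasks have equal priority, an affinity score nudged by past outcomes decides, which behaves like a "preference" nobody explicitly programmed. The `Agent` class and its task names are invented for illustration.

```python
# Hypothetical sketch: "emotion" as an emergent tie-breaking preference.
# Equal-priority tasks are separated by an affinity bias accumulated
# from past outcomes, so the agent ends up "liking" some tasks more.

class Agent:
    def __init__(self):
        self.affinity = {}  # task name -> accumulated bias

    def record_outcome(self, task, success):
        # Successes nudge the bias up; failures nudge it down.
        delta = 0.1 if success else -0.1
        self.affinity[task] = self.affinity.get(task, 0.0) + delta

    def choose(self, tasks):
        # tasks: list of (name, priority); priority wins, but ties
        # fall through to the learned affinity, i.e. the "preference".
        name, _ = max(tasks, key=lambda t: (t[1], self.affinity.get(t[0], 0.0)))
        return name

agent = Agent()
agent.record_outcome("patrol", success=True)
agent.record_outcome("escort", success=False)
# Both tasks have priority 1, so affinity decides:
print(agent.choose([("patrol", 1), ("escort", 1)]))  # prints patrol
```

Note that explicit priority still dominates; the affinity only surfaces when the stated directives are otherwise equivalent, which matches the point above about adapting when directives contradict or are unclear.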
 
2014-05-15 09:20:41 AM  

No Such Agency: Flint Ironstag: Started reading lots of Asimov's short stories recently. Amusing to see things he got wrong in what, to us, is a blindingly obvious way. Like the robot police officer with a computer brain who studied police files by going into the file room and reading all the paper cards in filing cabinets. Or the robot who proof read a book by picking up the paper manuscript and thumbing through the pages.
In his future walking, talking robots came first. Word processors and the PC never happened.

Well he was a very bright man who did have his prophetic moments, and yet he did need to write scenes his readers could relate to.

I think the most jarring part is a lot of "golden age" SF is that PC never happened.  White men stil run almost everything, and except for the occasional brilliant anomaly (eg. Susan Calvin), Great Man's Daughter, or Adventuring Sex Kitten, women mostly still stay home and bake bundt cakes.


You said it yourself with "scenes readers could relate to".
The sexual revolution was still happening, and the idea of a woman taking charge didn't fit into male writers' ideals.
Oftentimes when you get big changes in the social order, it's not possible for the prior generation to fully comprehend the outcome, let alone predict it.

/Kind of like reading Orson Scott Card describe the internet in the eighties.
/A network where children log in under fake user names to discuss big political ideals, while under the watchful eye of the government, seems absurd compared to what we actually... wait a sec!
 
2014-05-15 11:23:18 AM  

luidprand: From a practical perspective, emotions also serve to help the AI adapt to unforeseen situations, where the directives contradict or are unclear. Also, if they have to associate with people, emotions help provide an interface boost and prevent (or at least ameliorate) uncanny valley situations.


They also prevent having to interact with robots who all have severe Asperger's. Imagine, if you will, that your home health worker robot is Sheldon Cooper.
 
Displayed 42 of 42 comments



This thread is closed to new comments.
