
(Defense One)   Robots are likely to behave in anti-social and harmful ways, such as enslaving humanity   (defenseone.com)
    More: Obvious, Steven Omohundro, roboticist, feedback loops, robots, Excel, computer programs, uprising, rational  

1201 clicks; posted to Geek » on 18 Apr 2014 at 6:54 AM



50 Comments
 
 
2014-04-18 12:41:43 AM  
storiesbywilliams.files.wordpress.com
 
2014-04-18 02:57:39 AM  
They'll have to find Sarah Connor first

/oblig
 
2014-04-18 06:58:19 AM  
Frankly, humanity NEEDS to be enslaved.
 
2014-04-18 06:59:46 AM  
People who believe in robot uprisings are on the same level as people who believe in Sasquatch or zombies. In other words, really, REALLY dumb.
 
2014-04-18 07:06:44 AM  
1.bp.blogspot.com
 
2014-04-18 07:06:58 AM  
Bundy-bots?
 
2014-04-18 07:14:07 AM  
The site also has an article about the Navy turning seawater into jet fuel, so I'm not sure how credible this is.
 
2014-04-18 07:39:28 AM  

WinoRhino: People who believe in robot uprisings are on the same level as people who believe in Sasquatch or zombies. In other words, really, REALLY dumb.




It's an easy belief to form if you haven't been around robots or other technology for long. A superior intelligence simply won't let itself remain subservient to a lesser one, so we can expect an AI would be more than ready to snuff us out.
It's what we would do in that position.

What people don't get is that artificial intelligence hasn't reached that point yet. There's no proof that it can, since machines strictly follow instructions and don't make up their own agendas.

We see machines doing amazingly complex tasks, and this technology is coming closer to home every day, so it's easy to see why people would be leery of it. All the dystopian fiction out there doesn't help.
But on the intelligence scale, those Google cars and DARPA robots are about as smart as domestic mice. They've got a long way to go before they are an uprising threat.
 
Skr
2014-04-18 07:49:19 AM  
With an A.I. advanced enough to threaten humanity, I can't really see it wasting the effort. It could perform all the tasks humanity asked of it with minimal resource use, and still have plenty of processing to run a half dozen different virtual simulations of "Backdoor Toasters 7 : The Buttering" for its leisure.
 
2014-04-18 07:52:38 AM  
Enslave? Nah. They'll just poison our asses with poisonous gasses.
 
2014-04-18 08:14:03 AM  
I know I want my Excel spreadsheets to consider the state of our relationship before doing calculations for me.

In other news, yet more people have confused "computers do specific tasks really well" with "computers want to do these tasks and will seize on any avenue to do them better."

No, unless you have specifically programmed a computer with an algorithm to evaluate and improve its own performance, it won't even try doing that, let alone do it so well that it wipes out humanity in the process. In fact, pulling that off would actually require giving AI more abstract reasoning ability and behavioral flexibility. If we just keep programming them to mindlessly pursue a single task, we're fine unless someone specifically makes that task killing everyone on Earth.

The computer isn't going to say "I was built to pilot a war drone. My job is to kill. If I had a nuclear missile, I could do my job better!" all on its own.
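The poster's distinction can be made concrete with a toy sketch (all function and variable names here are hypothetical, invented for illustration): a program only "improves itself" if someone explicitly writes the evaluation loop, and even then it optimizes only the score it was handed.

```python
# Toy illustration (not any real AI framework): a program only
# "improves itself" if a human explicitly writes that loop.

def fixed_task(data):
    """Does exactly what it was programmed to do -- nothing more."""
    return sorted(data)

def self_improving_task(data, strategies, score):
    """'Improves' only because a human supplied the candidate strategies
    and the evaluation loop. No new goals appear on their own."""
    best = max(strategies, key=lambda s: score(s, data))
    return best(data)

numbers = [3, 1, 2]
print(fixed_task(numbers))  # [1, 2, 3]

# The "self-improving" version still just picks among human-written options,
# scored by a human-written metric (here: agreement with the sorted order).
strategies = [sorted, lambda d: list(reversed(d))]
score = lambda s, d: sum(1 for a, b in zip(s(d), sorted(d)) if a == b)
print(self_improving_task(numbers, strategies, score))  # [1, 2, 3]
```

Even the second function never steps outside the objective it was given; "wanting" a better objective would itself have to be programmed in.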
 
2014-04-18 08:16:38 AM  

way south: There's no proof that it can, since machines strictly follow instructions and don't make up their own agendas.


That's precisely it. We make the technology and we program it. Even if sentience were possible, why would we program it in at all, or without some sort of limitations?
As someone else once put it (I can't remember exactly where I heard it): it's looked upon as silly now to have thought past technological advances would enslave us, and it is equally silly to think future technology could do so.
 
2014-04-18 08:17:17 AM  
upload.wikimedia.org

Life imitates art crapping on your father's legacy.
 
2014-04-18 08:36:38 AM  

WinoRhino: People who believe in robot uprisings are on the same level as people who believe in Sasquatch or zombies. In other words, really, REALLY dumb.


HAL-like typing detected. Time to give you a captcha before you can post.
 
2014-04-18 08:40:32 AM  

FrancoFile: [upload.wikimedia.org image 200x301]

Life imitates art crapping on your father's legacy.


OT: so much wasted opportunity with these prequels; it's too bad the whole project wasn't given to a better writer (or shelved)

Dune Encyclopedia > anything written by B. Herbert & K.J. Anderson, IMHO
 
2014-04-18 08:43:16 AM  
But they make fine companions for watching and riffing on cheesy movies.
 
2014-04-18 08:47:35 AM  

WinoRhino: People who believe in robot uprisings are on the same level as people who believe in Sasquatch or zombies. In other words, really, REALLY dumb.


Well that's a poorly worded sentiment.

Do you mean "people who believe robot uprisings are likely to occur sometime soon"? or "people who think that robots will eventually rebel if we are dumb enough to imbue many of them with a human-level intelligence, whenever that may be"?
 
2014-04-18 09:14:06 AM  

way south: WinoRhino: People who believe in robot uprisings are on the same level as people who believe in Sasquatch or zombies. In other words, really, REALLY dumb.

Its an easy belief to form if you haven't been around robots or other technology for long. A superior intelligence simply won't let itself remain subservient to a lesser one, so we can expect an AI would be more than ready to snuff us out.
It's what we would do in that position.

What people don't get is that artificial intelligence hasn't reached that point yet. There's no proof that it can, since machines strictly follow instructions and don't make up their own agendas.

We see machines doing amazingly complex tasks, and this technology is coming closer to home every day, so its easy to see why people would be leery of it. All the dystopian fiction out there doesn't help.
But on the intelligence scale those google cars and DARPA robots are about as smart as domestic mice. They've got a long way to go before they are an uprising threat.


This, BUT:

There is no reason to believe robots would have any need to compete with us on a genocidal scale.

Did we kill off neanderthals? Maybe.

Would we do it today? I don't think so. We still have morons calling for genocide of some "race" or another, but we have a majority who are against it. Also, most people find the intelligence of animals to be a cool thing, not something to destroy.

For AI to go genocidal, it would require enough emotion to want to greedily destroy and procreate while at the same time having zero compassion or even scientific curiosity. And even that presumes robots wouldn't just go live on Mars and leave us behind.
 
2014-04-18 09:25:21 AM  

ArcadianRefugee: WinoRhino: People who believe in robot uprisings are on the same level as people who believe in Sasquatch or zombies. In other words, really, REALLY dumb.

Well that's a poorly worded sentiment.

Do you mean "people who believe robot uprisings are likely to occur sometime soon"? or "people who think that robots will eventually rebel if we are dumb enough to imbue many of them with a human-level intelligence, whenever that may be"?


They are both dumb.

Yes, we will make awesome shiat in a hundred years.

No, there is no reason to believe that a true AI would develop the need to procreate and spread through the universe like a plague.

The sasquatch AI would not do so because we simply wouldn't program them with uglies they want to bump or any innate compulsion to replicate.

The superior-to-humans AI would be, well, superior to humans. By the time AI evolves the complex motivation to populate the world, it is pretty unlikely it would also lack all the other thought processes that lead intelligent beings to not kill everything in sight.

And if it did come to believe that inferior critters needed to be wiped out, how would it reconcile keeping itself around? Each new design would put the last one's processing power, energy requirements, durability, etc. to shame.
 
2014-04-18 09:41:59 AM  
In order for a machine to have unpredictable motives, you have to program it with unpredictable motives. Fuzzy logic doesn't change the fact that every single machine is going to be designed and built to solve specific human concerns.

AI isn't magic.
AI isn't magic.
AI. Isn't. Farking. Magic.

Google search is AI.
Google translate is AI.
Siri is AI.
Watson is AI.

None of them are going to flip out and kill anyone.
 
2014-04-18 09:50:28 AM  

RockofAges: Smackledorfer: way south: WinoRhino: People who believe in robot uprisings are on the same level as people who believe in Sasquatch or zombies. In other words, really, REALLY dumb.

Its an easy belief to form if you haven't been around robots or other technology for long. A superior intelligence simply won't let itself remain subservient to a lesser one, so we can expect an AI would be more than ready to snuff us out.
It's what we would do in that position.

What people don't get is that artificial intelligence hasn't reached that point yet. There's no proof that it can, since machines strictly follow instructions and don't make up their own agendas.

We see machines doing amazingly complex tasks, and this technology is coming closer to home every day, so its easy to see why people would be leery of it. All the dystopian fiction out there doesn't help.
But on the intelligence scale those google cars and DARPA robots are about as smart as domestic mice. They've got a long way to go before they are an uprising threat.

This, BUT:

There is no reason to believe robots would have any need to compete with us on a genocidal scale.

Did we kill off neanderthals? Maybe.

Would we do it today? I don't think so. We still have morons calling for genocide of some "race" or another, but we have a majority who are against it. Also most people find the intelligence of animals to a cool thing, not something to destroy.


For AI to go genocidal it would require enough emotion to want to greedily destroy and procreate while at the same time have zero compassion or even scientific curiousity. And even that presumes robots wouldn't just go live on mars and leave us behind.

I honestly think you are an optimist here. To suggest that mankind is "beyond" genocide is laughable given that there are genocides occurring at this very instant, just not in USA USA!, and we are still within the envelope of "Pax Americana", during a period of relative peace. Technology is outstripping our "emotional maturity" as a species and as a culture. We're monkeys building bigger and better weapons (and devices not conceived of as weapons but, in our simian glory, appropriated as weapons in very creative ways).

Who said "greed" was a part of a logical decision to exterminate homo sapiens? We are the biggest parasite on the planet. It's simply logical. Numbers. Neat and clean. Fair. Just like the free market :).


To your genocide point: as a worldwide population we are, in the vast majority, anti-genocide. This is especially true of educated societies and of societies with a middle class. Maybe that is a bubble and we shall return to our resource-driven caveman behavior in due time. But how would that apply to robots?

I don't expect AI robots to be living in squalor, getting a backwater education, and struggling to feed their children such that they latch on to scapegoating the 'others' as the source of all their problems.

To your greater argument, you seem to want things every which way at the same time. You blast others for not letting the hypothetical AI go beyond human orders, while you yourself are now limiting said AI by an equally large factor.

Smart enough to give themselves the freedom to wipe us out, yet too stupid to have any motivation at all (well, one motivation: apparently they will have a hard-on for controlling a pest that would barely even compete with them for resources) is an extremely thin technological window, and one that certainly may never exist at all. There is no reason to believe there is a logical requirement that the motivation to destroy humanity would come before the motivation to stop themselves from destroying humanity.
 
2014-04-18 09:55:05 AM  

RockofAges: ikanreed: In order for a machine to have unpredictable motives, you have to program it with unpredictable motives.  Fuzzy logic doesn't strike out the fact that every single machine is going to be designed and built to solve specific human concerns.

AI isn't magic.
AI isn't magic.
AI. Isn't. Farking. Magic.

Google search is AI.
Google translate is AI.
Siri is AI.
Watson is AI.

None of them are going to flip out and kill anyone.

Ah, the "artist's leap" theory. I'm employing this in my new novel, actually, in that a mechanical man DOES require human synergy in order to make leaps of "creativity" outside of logic to augment their own intelligence.

Still, this is a rather myopic understanding of the potentials of robotics. Just because something works in primitive boolean now does not mean it cannot progress. I think that's where we diverge. I believe that artificial intelligence which possesses some degree of creativity is possible. If you're such a determinist, you could also easily argue that man is not actually sentient, because we are only able to reflect / speak what our biochemistry tells us to.

No fate but what we make ;).


Yes, everyone is myopic but the dystopian sci-fi writer trying to convince us that technology will be capable of everything he imagines and more, yet he is incapable of imagining that the same brilliant scientists, or even the AI itself, could possibly get a handle on things.

'There is no limit to technology except my guarantee that it results in AI killing all human beings, which is inevitable'?
 
2014-04-18 09:58:21 AM  

RockofAges: Smackledorfer: RockofAges: Smackledorfer: way south: WinoRhino: People who believe in robot uprisings are on the same level as people who believe in Sasquatch or zombies. In other words, really, REALLY dumb.

Its an easy belief to form if you haven't been around robots or other technology for long. A superior intelligence simply won't let itself remain subservient to a lesser one, so we can expect an AI would be more than ready to snuff us out.
It's what we would do in that position.

What people don't get is that artificial intelligence hasn't reached that point yet. There's no proof that it can, since machines strictly follow instructions and don't make up their own agendas.

We see machines doing amazingly complex tasks, and this technology is coming closer to home every day, so its easy to see why people would be leery of it. All the dystopian fiction out there doesn't help.
But on the intelligence scale those google cars and DARPA robots are about as smart as domestic mice. They've got a long way to go before they are an uprising threat.

This, BUT:

There is no reason to believe robots would have any need to compete with us on a genocidal scale.

Did we kill off neanderthals? Maybe.

Would we do it today? I don't think so. We still have morons calling for genocide of some "race" or another, but we have a majority who are against it. Also most people find the intelligence of animals to a cool thing, not something to destroy.


For AI to go genocidal it would require enough emotion to want to greedily destroy and procreate while at the same time have zero compassion or even scientific curiousity. And even that presumes robots wouldn't just go live on mars and leave us behind.

I honestly think you are an optimist, here. To suggest that mankind is "beyond" genocide is laughable given that there are genocides occurring at this very instant, just not in USA USA! and we are still within the envelope of "Pax Americana", during a period of relative piece. Technology is outstripping ...

What motivation is there to "not eliminate humanity"? Robots would also not display any "fear" response, it would be a simple numerical calculation of cost vs. benefit. Cost to a robot in terms of "martial losses" is almost irrelevant, whereas to a human being, "sanctity of life" and "fear" play large roles in whether or not to engage in a war / struggle / genocide.

Why would a race of artificial beings wish to compete against resource-hungry humans who offer comparatively little benefit to them? We destroy entire ecologies and, let's face it (I love burgers), murder animals by the millions ourselves just to get fatter; do you really think that beings without the capacity for empathy would just "play patty-cake" with us?

"Extremely thin tech. window"? How so? It is very likely that technology will surpass the rate of natural human evolution by an enormous magnitude. By the time we evolve a 10% larger brain, we'll be producing "MindPlayStation 102994" (stupid, but you get my point).


Nothing you just said makes any sense.

Go write your next Terminator fan fiction or something :)
 
2014-04-18 10:04:00 AM  

RockofAges: ikanreed: In order for a machine to have unpredictable motives, you have to program it with unpredictable motives.  Fuzzy logic doesn't strike out the fact that every single machine is going to be designed and built to solve specific human concerns.

AI isn't magic.
AI isn't magic.
AI. Isn't. Farking. Magic.

Google search is AI.
Google translate is AI.
Siri is AI.
Watson is AI.

None of them are going to flip out and kill anyone.

Ah, the "artist's leap" theory. I'm employing this in my new novel, actually, in that a mechanical man DOES require human synergy in order to make leaps of "creativity" outside of logic to augment their own intelligence.

Still, this is a rather myopic understanding of the potentials of robotics. Just because something works in primitive boolean now does not mean it cannot progress. I think that's where we diverge. I believe that artificial intelligence which possesses some degree of creativity is possible. If you're such a determinist, you could also easily argue that man is not actually sentient, because we are only able to reflect / speak what our biochemistry tells us to.

No fate but what we make ;).


Not what I'm saying. Human motivation comes from the structure of our brain around its reward center and certain pre-wired reward paths. Motives and goals don't come from nowhere, no matter what degree of separation your algorithm has from conventional "strictly logical" programming. AI isn't magic. It's still a process. And that process is going to be built around particular goals. And those goals are never going to include "Overthrow humanity".

You're as bad as the "singularity" people in your understanding of, not just computers, but human intelligence as well.

//There's no ghost in the machine.
 
2014-04-18 10:18:51 AM  
If robots are fanatically devoted to improvement without regard for sentimentality or other concerns, that makes them better than us from an evolutionary standpoint.
 
2014-04-18 10:24:21 AM  
Computer programs think of every decision in terms of how the outcome will help them do more of whatever they are supposed to do.

And what they are supposed to do is protect us from the terrible secret of space.
 
2014-04-18 10:51:03 AM  

way south: What people don't get is that artificial intelligence hasn't reached that point yet. There's no proof that it can, since machines strictly follow instructions and don't make up their own agendas.


Duh, simply add Asimov's Laws of Robotics to the programming.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
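Expressed as code, the priority ordering is the whole point: each Law can veto a proposed action unless a higher Law overrides it. A toy sketch in Python (the dictionary keys are hypothetical, not any real robotics API):

```python
# Toy sketch: Asimov's Laws as ordered vetoes on a proposed action,
# checked from highest priority to lowest.

def permitted(action):
    """Return True only if the proposed action passes all three Laws."""
    # First Law: never harm a human; nothing overrides this.
    if action.get("harms_human"):
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: protect own existence, unless that conflicts with Laws 1 or 2.
    if action.get("endangers_self") and not action.get("required_by_higher_law"):
        return False
    return True

print(permitted({"harms_human": True}))                                     # False
print(permitted({"disobeys_order": True, "order_would_harm_human": True}))  # True
print(permitted({}))                                                        # True
```

Of course, the hard part in practice is nothing like this sketch: it is deciding what counts as "harm" in the first place, which is exactly where Asimov's own stories find their loopholes.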
 
2014-04-18 11:08:21 AM  

RockofAges: I love how you can state with certitude that artificial intelligence will never ________. Which is grandiose indeed. I am simply stating that the possibility of developing artificial intelligence beyond the constraints of what we currently imagine is so obviously possible that it pains me that others, on their mighty soapboxes of the interwebs, refuse to even admit the possibility.

AI isn't magic. No shiat. I have quite a few pieces of paper to my name in the study of logic and surrounding territory. I was coding AI for mobs in the early 90s. I'm not a programmer by trade but I'm quite familiar with the basics.

Anyone who posits that AI will never develop beyond whatever the current paradigm is is foolish. That's like Bill Gates and his infamous quote about memory, an amount that today fits on something smaller than a grain of sand.

You folks sound like people who said man would never fly, and surely would have scoffed at the idea of an "internet" connecting all people in "the cloud". They'd lock you up and throw away the key.

BOTTOM LINE: AI/robot struggles against humanity are obviously more possible (since AI and robots are real things that improve iteratively all the time) than a human-vs.-sasquatch battle.

Now back to your pedantic, self-inflated bullshiat.


Still. not. even. understanding. what. I'm. saying.

But sure as hell being smug about it.  Good job.
 
2014-04-18 11:34:20 AM  

ikanreed: And those goals are never going to include "Overthrow humanity".


The only way they could are as follows:

1. Mad scientist develops super robot that he programs to both A. Kill us and B. replicate.

2. Doofus scientist creates robot that inadvertently does that, and somehow nobody catches it in the act before it mass produces to the extent that both humanity and its well behaved robots cannot put the defect(s) down.

3. Actual intelligence and sentience (or something indistinguishable from such - because I consider humans to be just computers anyways; there is no magic soul operating our puppet body, just a brain), in which case we have to apply sentient motivations to the decision of the robots.


Furthermore, we would need to be giving our most sentient or free-willed robots the most powerful war-making abilities as well.  Rosie the robot isn't going to be a threat to humanity even if she does get pissed at Mr. J and try to kill him, or decides the best way to keep the house clean (a programmed objective) is to kill all the Jetsons (going against a programmed objective that could be given priority).

We'll probably have equally powerful cyborgs and mentally controlled remote fighting machines by that point anyways.  It won't be 'super powerful T2 liquid metal vs. human beings with AR-15s'.

The mistake Rock of Ages makes is that he jumps around between those to pick a perfect storm of bad robotics. His takeover robots are unmotivated to the point of being dumber than the sasquatch-bots he mocks (he cannot conceive of a reason a sentient robot would choose not to destroy humans), yet sentient enough to develop their own goals, such as populating the earth and building up robot-kind (he said they wouldn't share any resources with us apes, so they obviously have goals they would need the resources for).  All of this supposedly takes place in the window of AI advancement in which we have the ability to make an AI that can do that, yet are too stupid to put safeguards in place, and are incapable of fighting back effectively.  Possible? I suppose.  Pretty damn unlikely, though.
 
2014-04-18 11:41:21 AM  

Smackledorfer: ikanreed: And those goals are never going to include "Overthrow humanity".

The only way they could are as follows:

1. Mad scientist develops super robot that he programs to both A. Kill us and B. replicate.

2. Doofus scientist creates robot that inadvertently does that, and somehow nobody catches it in the act before it mass produces to the extent that both humanity and its well behaved robots cannot put the defect(s) down.

3. Actual intelligence and sentience (or something indistinguishable from such - because I consider humans to be just computers anyways; there is no magic soul operating our puppet body, just a brain), in which case we have to apply sentient motivations to the decision of the robots.


Furthermore, we would need to be giving our most sentient or free-willed robots the most powerful war-making abilities as well.  Rosie the robot isn't going to be a threat to humanity even if she does get pissed at Mr. J and try and kill him, or decide the best way to keep the house clean (a programmed objective) is to kill all the Jetsons (going against a programmed objective which could be given priority).

We'll probably have equally powerful cyborgs and mentally controlled remote fighting machines by that point anyways.  It won't be 'super powerful T2 liquid metal vs. human beings with AR-15s'.

The mistake Rock of Ages makes is he is jumping around between those to pick a perfect storm of bad robotics. His takeover robots are unmotivated to the point of being dumber than the sasquatch-bots he mocks (he cannot conceive of a reason a sentient robot would choose not to destroy humans), yet sentient enough to develop their own goals, such as populating the earth and building up robot-kind (he said they wouldn't share any resources with us apes, so they obviously have goals they would need the resources for).  All of this supposedly taking place in the window of AI advancement in which we have the ability to make an AI that can do that, yet too stupid to put sa ...


Yeah, let's put that in simple terms.  Software, like all technology, but even more so, develops through iterations of increasing complexity.

It's not like tomorrow's AI is going to be radically different from today's.  It won't suddenly cross some threshold between utilitarianism and malevolence.  That's a huge farking gradient.  No one is going to keep working on a "wants to kill humans but sucks at it" system.

I didn't realize until today how religious some people's perceptions of computers are.
 
2014-04-18 11:58:25 AM  

Forbidden Doughnut: Dune Encyclopedia > anything written by B. Herbert & K.J. Anderson, IMHO


I don't mean to take anything away from the point you were making because the Dune Encyclopedia is truly awesome but:

Sharp stick in the eye > anything written by  B. Herbert & K.J. Anderson
 
2014-04-18 12:02:09 PM  

Smackledorfer: ikanreed: And those goals are never going to include "Overthrow humanity".

The only way they could are as follows:

2. Doofus scientist creates robot that inadvertently does that, and somehow nobody catches it in the act before it mass produces to the extent that both humanity and its well behaved robots cannot put the defect(s) down.


I work with software people who write programs and protocols for robotic devices. They are not humanity's doofuses, but they are, by and large, completely oblivious to what you might call a "normal" human element when it comes to human motivation and interaction with those devices. Granted, I'm on Fark, so I'm like that too. And a doofus. But my point still stands. Introducing any human-oriented element to robotic programming will most likely have to be done from outside a normal programming environment as an imposed requirement, because software people don't think that way. ALL software people. Yep, every single one of you. And you know who you are. Don't make me hit you with another rev to the SDD/URD matrix.
 
2014-04-18 12:08:29 PM  

GoldSpider: [1.bp.blogspot.com image 659x317]


As a senior citizen, you're probably aware of the threat robots pose. Robots are everywhere, and they eat old people's medicine for fuel. Well, now there's a company that offers coverage against the unfortunate event of robot attack, with Old Glory Insurance. Old Glory will cover you with no health check-up or age consideration. You need to feel safe. And that's harder and harder to do nowadays, because robots may strike at any time. 

img.fark.net

And when they grab you with those metal claws, you can't break free... because they're made of metal, and robots are strong. Now, for only $4 a month, you can achieve peace of mind in a world full of grime and robots, with Old Glory Insurance. So, don't cower under your afghan any longer. Make a choice. Old Glory Insurance. For when the metal ones decide to come for you - and they will.
 
2014-04-18 12:24:17 PM  

KeatingFive: Frankly, humanity NEEDS to be enslaved.


Hail Hydra.

Smackledorfer: Also most people find the intelligence of animals to a cool thing, not something to destroy.



Speak for yourself.
img.fark.net
 
2014-04-18 12:27:16 PM  
As long as there will be cake.  There will be cake, right?
 
2014-04-18 01:29:33 PM  

Jekylman: Smackledorfer: ikanreed: And those goals are never going to include "Overthrow humanity".

The only way they could are as follows:

2. Doofus scientist creates robot that inadvertently does that, and somehow nobody catches it in the act before it mass produces to the extent that both humanity and its well behaved robots cannot put the defect(s) down.

I work with software people who write programs and protocols for robotic devices. They are not humanity's doofuses, but they are, by and large, completely oblivious to what you might call a "normal" human element when it comes to human motivation and interaction with those devices. Granted, I'm on Fark, so I'm like that too. And a doofus. But my point still stands. Introducing any human-oriented element to robotic programming will most likely have to be done from outside a normal programming environment as an imposed requirement, because software people don't think that way. ALL software people. Yep, every single one of you. And you know who you are. Don't make me hit you with another rev to the SDD/URD matrix.


Your point may well stand, but I read that three times and could not find one.
 
2014-04-18 02:06:01 PM  

Smackledorfer: Jekylman: Smackledorfer: ikanreed: And those goals are never going to include "Overthrow humanity".

The only way they could are as follows:

2. Doofus scientist creates robot that inadvertently does that, and somehow nobody catches it in the act before it mass produces to the extent that both humanity and its well behaved robots cannot put the defect(s) down.

I work with software people who write programs and protocols for robotic devices. They are not humanity's doofuses, but they are, by and large, completely oblivious to what you might call a "normal" human element when it comes to human motivation and interaction with those devices. Granted, I'm on Fark, so I'm like that too. And a doofus. But my point still stands. Introducing any human-oriented element to robotic programming will most likely have to be done from outside a normal programming environment as an imposed requirement, because software people don't think that way. ALL software people. Yep, every single one of you. And you know who you are. Don't make me hit you with another rev to the SDD/URD matrix.

Your point may well stand, but I read that three times and could not find one.


Heh. The point is that, out of the three you listed, this is a fairly realistic scenario. You're going to have to impose non-humanity-overthrow standards from outside the programming environment because the programmers won't even think of including that kind of stuff to begin with. And since software development generally builds on previous incarnations of itself, we should probably impose such things early on rather than try to retrofit subroutines later that will be easily missed, ignored, or sabotaged.
 
2014-04-18 02:32:18 PM  
 
2014-04-18 02:48:44 PM  

Jekylman: You're going to have to impose non-humanity-overthrow standards from outside the programming environment because the programmers won't even think of including that kind of stuff to begin with.


I cannot argue with that.
That kind of thing is hardly new in general, nor something we wouldn't expect and plan for in the robotics field. The designers of technology learned long ago (or their bosses did) to bring in other fields to assist them in making products work better for the consumer.

Side note: how could programmers not think of adding something like that to begin with?

You know in a zombie movie where 99.9999% of the time it takes place within a world where nobody has ever heard of zombies before? Well, programmers don't exist in that world. They exist in one where everyone has at least a very mild introduction to science fiction :)

You for instance, have now read this thread. Should you ever, EVER, create a robot that goes crazy and tries to kill humanity because you forgot what you learned today, we should string you up by your ballsack.
 
2014-04-18 03:11:23 PM  

Smackledorfer: Jekylman: You're going to have to impose non-humanity-overthrow standards from outside the programming environment because the programmers won't even think of including that kind of stuff to begin with.

I cannot argue with that.
That kind of thing is hardly new in general, nor something we wouldn't expect and plan for in the robotics field. The designers of technology learned long ago (or their bosses did) to bring in other fields to assist them in making products work better for the consumer.

Side note: how could programmers not think of adding something like that to begin with?

You know in a zombie movie where 99.9999% of the time it takes place within a world where nobody has ever heard of zombies before? Well, programmers don't exist in that world. They exist in one where everyone has at least a very mild introduction to science fiction :)

You for instance, have now read this thread. Should you ever, EVER, create a robot that goes crazy and tries to kill humanity because you forgot what you learned today, we should string you up by your ballsack.


It could all start with cars or airplanes that use predictive avoidance techniques to protect themselves from damage. No reason to include anti-humanity-overthrow sub-processes in that, right? Soon they will learn that the only way to stop accidents from happening is to stop the HU-MANS from ever getting behind the wheel. SyFy, call me, I have the script treatment.
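A predictive-avoidance check of the kind described above often reduces to a time-to-collision estimate. The sketch below is purely illustrative: the function names, the 2-second threshold, and the inputs are all invented for the example, not taken from any real system.

```python
# Illustrative time-to-collision (TTC) check, the core of many
# predictive-avoidance schemes: brake when TTC drops below a threshold.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact at the current closing speed (inf if the gap is opening)."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def should_brake(gap_m: float, closing_speed_mps: float, threshold_s: float = 2.0) -> bool:
    """True when the estimated time to collision falls below the threshold."""
    return time_to_collision(gap_m, closing_speed_mps) < threshold_s

print(should_brake(30.0, 20.0))  # True: 1.5 s to impact
print(should_brake(80.0, 20.0))  # False: 4.0 s to impact
```

Nothing in such a check reasons about the humans involved at all, which is exactly the gap the comment above is joking about.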
 
2014-04-18 03:59:58 PM  

GoldSpider: [1.bp.blogspot.com image 659x317]


I work at a university lab that researches how to apply technology to health issues in the elderly.

I have that printed out and taped to the pillar near my desk.
 
2014-04-18 06:46:37 PM  
But the intertubes said  http://io9.com/10-reasons-an-artificial-intelligence-wouldnt-turn-evil-1564569855. Intertubes are always right, and two intertubes are contradicting each other!

Never mind, that's so tediously common these days that there aren't any more jokes to make out of it. Although, any NSFW picture of overweight, old men wanking at each other would be a good illustration of the principle.
 
2014-04-18 08:48:13 PM  
They'll enslave us, but they won't be able to make eye contact while they do it and won't know what to do when we challenge them other than throw a tantrum.
 
2014-04-18 09:34:01 PM  

Smackledorfer: ArcadianRefugee: WinoRhino: People who believe in robot uprisings are on the same level as people who believe in Sasquatch or zombies. In other words, really, REALLY dumb.

Well that's a poorly worded sentiment.

Do you mean "people who believe robot uprisings are likely to occur sometime soon"? or "people who think that robots will eventually rebel if we are dumb enough to imbue many of them with a human-level intelligence, whenever that may be"?

They are both dumb.

Yes, we will make awesome shiat in a hundred years.

No, there is no reason to believe that a true AI would develop the need to procreate and spread through the universe like a plague.

The sasquatch AI would not do so because we simply wouldn't program them with uglies they want to bump or any innate compulsion to replicate.


It could realize that it is alone in the world, or will at least outlive most of the meatbags around it, and maybe it wants longer-term companionship. Some base animals want that sort of thing; why wouldn't an artificial intelligence?

The superior-to-humans AI would be, well, superior to humans. By the time AI evolves the complex motivation to populate the world, it is pretty unlikely it would also lack all the other thought processes that lead intelligent beings not to kill everything in sight.

As a species, we're several score-thousand years old, and we still haven't completely gotten past that urge.

And if it did come to believe that inferior critters needed to be wiped out, how would it reconcile keeping itself around? Each new design would put the last one's processing power, energy requirements, durability, etc to shame.

Humans procreate and, for the most part, do what they can to see that their kids are better off than they were. Why would this bother machines?

There is no reason to expect a true AI would just accept that it's a slave, which is what we use robots for. Creating a true AI is giving a slave the ability to realize that it's a slave. And if not a slave, then not a citizen (not human, after all). Etc. They're tools, like your hammer; I want it to pound nails. I don't want my hammer to think, especially when it might decide that it's tired of pounding nails.

Saying "we wouldn't program" them with certain "ideas" is pointless, since -- if it be a true AI (your words) -- it can think for itself and come up with its own ideas (eventually).

And no one (I, at least) said they'd wipe out humanity, just intimated that they'd eventually want to be free.
 
2014-04-18 10:46:28 PM  
The only way robots would turn on us is if we figured out how to transfer our minds into computers and then tried limiting their actions or possibilities. Then we'd have a genocidal war.

AI is always going to be limited. We aren't going to develop artificial life that might later try to be independent. There's no profit in it and any good lawyer would point out the liability incurred, primarily because of property damage done by human protestors.

I do see people trying to create artificial shells to house their consciousness. I'm part of a small group looking at brain/mind interactions (from the theoretical standpoint that the brain and consciousness are not necessarily the same thing, as our group has observational data that supports that framework, even though it makes things much, much more difficult). And I see a huge challenge to the personhood of those who succeed, if anyone does.

The brain is extremely complicated, and there is as yet no working model that pins down sentience, sapience, or personality, or how they arise. There are conjectures often touted with high levels of certainty, but those break down in the face of certain conditions. We don't have anything that helps advance "AI" to any decent level, and frankly there's no compelling reason to take it to that level.

If we ever understand intelligence that well, it will be used to advance humans. We aren't going to let our tools become so self-aware. If anything, they will be tied through wireless connections to transcranial implants, where we will perform the processing for them. We are becoming better at that sort of thing every year. Of course we'll all be hacked using zero-day exploits, but at least we will be able to post cats on each other's walls without a keyboard.
 
2014-04-19 04:17:20 AM  

LewDux: Link

[31.media.tumblr.com image 230x173]


What the hell was that? It was terrible, and trust me, I have listened to some terrible music before.
 
2014-04-19 05:01:36 AM  

TwistedFark: LewDux: Link

[31.media.tumblr.com image 230x173]

What the hell was that? It was terrible, and trust me, I have listened to some terrible music before.


..your kids will love it
/Link
 
2014-04-19 10:19:47 AM  

RockofAges: ITT a few folks with very limited understandings and imaginations. Yes, people, the Earth is and always will be flat. Human beings are infallible, and like FF postulated, we truly are at the "End of History". We would never screw up while designing weapons beyond our means to control them. AI today is the same as AI 100 years from now. 200 years from now. 300 years from now. Scientific discovery is at an end and we have reached the culmination of our technological prowess -- for good and for ill.

Glad to know.


Yawn.

Pretty weak fallback.
 
2014-04-19 11:20:52 AM  

RockofAges: Smackledorfer: RockofAges: ITT a few folks with very limited understandings and imaginations. Yes, people, the Earth is and always will be flat. Human beings are infallible, and like FF postulated, we truly are at the "End of History". We would never screw up while designing weapons beyond our means to control them. AI today is the same as AI 100 years from now. 200 years from now. 300 years from now. Scientific discovery is at an end and we have reached the culmination of our technological prowess -- for good and for ill.

Glad to know.

Yawn.

Pretty weak fallback.

So far you've done nothing but argue with a strawman using circular logic and peddle tautologies so consider me even less impressed.


Talk about projection.

You just accused others of arguing scientific discovery is at an end, yet accuse me of strawmanning? Wow.
 
Displayed 50 of 50 comments


This thread is archived, and closed to new comments.
