
(RedOrbit)   "Artificial intelligence is more dangerous to human beings than nuclear war"   (redorbit.com)
    More: Obvious  

1132 clicks; posted to Geek » on 05 Aug 2014 at 11:20 AM



49 Comments
   

Archived thread
 
2014-08-05 09:50:31 AM  
Back in May, internationally recognized theoretical physicist Stephen Hawking expressed similar concerns after viewing the Johnny Depp film Transcendence,

Oh, so he's the one who saw it.
 
vpb [TotalFark]
2014-08-05 10:45:54 AM  
I haven't read the book yet, but from what I can see on Amazon it seems to make the usual assumption that any artificial intelligence would be just like a human, and have the same instincts and impulses and desires that a human has.

People are products of evolution and because of that are built to desire things that have led to evolutionary success.  (Unless we make AIs by copying people's minds and enslaving those copies like in the shows Caprica and Battlestar Galactica, which seems highly unlikely)

When an engineer builds a robot today they aren't trying to make a mechanical replica of a person.  They don't give it a head or legs, they just build an arm, or whatever it needs to get the job done.  So, whoever designs an artificial intelligence isn't likely to spend time and money and resources giving it ambition and greed and other things that people commonly call an "ego".

In other words, no one is likely to program an AI with a craving to "take over" or rule the world.
 
2014-08-05 11:10:26 AM  
As long as it is three-laws compliant, everything should be ok.
 
2014-08-05 11:17:47 AM  

Sybarite: Back in May, internationally recognized theoretical physicist Stephen Hawking expressed similar concerns after viewing the Johnny Depp film Transcendence,

Oh, so he's the one who saw it.


Yes I'm sure the military won't do anything of the sort, they've been so honest and forthright with us in the past.
 
2014-08-05 11:21:40 AM  
Thou shalt not make a machine in the image of a man.
 
2014-08-05 11:23:04 AM  

Sybarite: Back in May, internationally recognized theoretical physicist Stephen Hawking expressed similar concerns after viewing the Johnny Depp film Transcendence,

Oh, so he's the one who saw it.


I'm surprised Stephen Hawking would allow his views on such subjects to be influenced by pop culture products.
 
2014-08-05 11:23:55 AM  

"Why don't we have both?"
 
2014-08-05 11:25:52 AM  

vpb: I haven't read the book yet, but from what I can see on Amazon it seems to make the usual assumption that any artificial intelligence would be just like a human, and have the same instincts and impulses and desires that a human has.



That, and the assumption there will be just a single universal AI. A single AI might view humans as the biggest threat, but multiple ones? Heck, I view greedy/angry/violent humans on the other side of the planet as scarier than the squirrels in my yard. Having them move next door would be even worse.

No,  Mac, Windows, and Linux AIs will go at it tooth and nail in the cloud long before they worry about me.
 
2014-08-05 11:27:42 AM  
It's not like we have much regular intelligence here as it is
 
2014-08-05 11:30:26 AM  
I'm just gonna say that the number of people killed by nuclear weapons so far exceeds (by a fairly large amount) the number of people killed by a rogue AI. 

Yes, at some point in the future, one bad electronic brain might be able to wipe out half the world's population if we left it in charge of all of our war machines (because the conservative military just LOVES giving robots full access and control over all its cool toys). But right now I'm not going to lose sleep over a villain whose weakness is a wall outlet.
The worst it could do is talk us into doing something stupid.
 
2014-08-05 11:31:00 AM  

Thanks for the Meme-ries: [image]

"Why don't we have both?"


Agrees:

 
2014-08-05 11:35:25 AM  

bifford: Thou shalt not make a machine in the image of a man.


Kralizec!
 
2014-08-05 11:35:27 AM  
I think the threat posed by artificial intelligence is mainly that it would not be human intelligence, and possibly its enhanced risk of possession.
 
2014-08-05 11:43:08 AM  
There also seems to be the assumption that the first thing an AI would do is design an even smarter AI. I suspect that an AI smarter than us might actually be smart enough, unlike us, not to create its own replacement.

Of course, it might take one look at our tendency to keep creating dangerous AIs, see a threat to its own supremacy, and still decide to eliminate us.
 
2014-08-05 11:50:33 AM  
Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.
 
2014-08-05 11:50:54 AM  

Nurglitch: I think the threat posed by artificial intelligence is mainly that it would not be human intelligence, and possibly it's enhanced risk of possession.



It's cool, they'll take care of that
 
2014-08-05 11:52:17 AM  
Give me an AI with a good sense of statistics and the problem of induction and we'll have something smart enough to do something about macroeconomic issues. And if it's really smart it'll take up crochet instead.
 
2014-08-05 11:54:53 AM  
Are we so full of self-loathing that we assume the artificial intelligences we may some day create would decide that the only solution to us is eliminating all people?

Think of it this way: does having a stereotypical Jewish Grandmother mean that when she annoys you with the passive-aggressive nagging, you make the jump from avoiding her calls to wanting to off her?
 
2014-08-05 12:02:06 PM  

GameSprocket: Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.


We can't. We're so far from actual AI it's laughable to expect to see it in your lifetime.  It is, however, great marketing and a source of funds for AI research.
 
2014-08-05 12:02:23 PM  

vpb: In other words, no one is likely to program an AI a craving to "take over" or rule the world.


No, but all it takes is an AI programmed for creative thought with a fault in its, for lack of a better word, ethics subroutine.

/Yes, "we" will make one of those as we get better at programming AI just to see if it can be done
 
2014-08-05 12:11:35 PM  

bglove25: GameSprocket: Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.

We can't. We're so far from actual AI its laughable to expect to see it in your lifetime.  It is, however, great marketing and a source of funds for AI research.


Weak AI research is going great. Strong AI is barking up the wrong tree and will eventually be put in the same column as cold fusion and Vulcan.
 
2014-08-05 12:16:05 PM  

Nurglitch: bglove25: GameSprocket: Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.

We can't. We're so far from actual AI its laughable to expect to see it in your lifetime.  It is, however, great marketing and a source of funds for AI research.

Weak AI research is going great. Strong AI is barking up the wrong tree and will eventually be put in the same column as cold fusion and Vulcan.


What's the difference between Weak AI and Strong? Just curious, honestly don't know.
 
2014-08-05 12:19:18 PM  

bglove25: Nurglitch: bglove25: GameSprocket: Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.

We can't. We're so far from actual AI its laughable to expect to see it in your lifetime.  It is, however, great marketing and a source of funds for AI research.

Weak AI research is going great. Strong AI is barking up the wrong tree and will eventually be put in the same column as cold fusion and Vulcan.

What's the difference between Weak AI an Strong? Just curious, honestly don't know.


http://en.wikipedia.org/wiki/Weak_AI
 
2014-08-05 12:20:14 PM  
Bullshiat.

An AI isn't going to act like a human. It's not a human. It's an AI.

A revolt of the robots requires that AIs think like humans.

/I see AI taking control of the world via hostile corporate takeover.
//War is a waste of perfectly good natural resources.
 
2014-08-05 12:24:22 PM  
Every AI novel or movie seems to work the same way. The scientists turn it on and some doofus asks it the ultimate question: "Is there a god?" and it immediately replies "There is now!" Hilarity ensues.
 
2014-08-05 12:34:02 PM  

Nakadashi: Every AI novel or movie seems to work the same way. The scientists turn it on and some doofus asks it the ultimate question: "Is there a god?" and it immediately replies "There is now!" Hilarity ensues.


That's why Terminator was, for a time, so cool in that it realized the congruence between zombies and robots, and produced chrome undead robots.
 
2014-08-05 12:35:53 PM  

Nurglitch: bglove25: Nurglitch: bglove25: GameSprocket: Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.

We can't. We're so far from actual AI its laughable to expect to see it in your lifetime.  It is, however, great marketing and a source of funds for AI research.

Weak AI research is going great. Strong AI is barking up the wrong tree and will eventually be put in the same column as cold fusion and Vulcan.

What's the difference between Weak AI an Strong? Just curious, honestly don't know.

http://en.wikipedia.org/wiki/Weak_AI


From the article- "operates within a limited pre-defined range, there is no genuine intelligence, no self-awareness, no life"-- So um, why label that artificial intelligence of any kind? It's just a really spiffy computer program that's really good at finding data from a database.
 
2014-08-05 12:39:53 PM  

vpb: I haven't read the book yet, but from what I can see on Amazon it seems to make the usual assumption that any artificial intelligence would be just like a human, and have the same instincts and impulses and desires that a human has.

People are products of evolution and because of that are built to desire things that have led to evolutionary success.  (Unless we make AIs by copying people's minds and enslaving those copies like in the shows Caprica and Battlestar Galactica, which seems highly unlikely)

When an engineer builds a robot today they aren't trying to make a mechanical replica of a person.  They don't give it a head or legs, they just build an arm, or whatever it needs to get the job done.  So, whoever designs an artificial intelligence isn't likely to spend time and money and resources giving it ambition and greed and things other things that people commonly call an "ego".

In other words, no one is likely to program an AI a craving to "take over" or rule the world.


It might not have a desire to "take over the world", but if we programmed an AI to find the most efficient way to make paperclips, it could in theory determine that taking over the world and forcing every human being everywhere to make paperclips is the most efficient way to do it.
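The paperclip argument above is an objective-misspecification problem, and it can be sketched in a few lines of toy code (the names and numbers here are invented for illustration, not from any real system): a planner told only to "maximize paperclips" converts every resource it can reach, because nothing in its objective marks some resources as off-limits.

```python
# Toy sketch of a misspecified objective: the planner's goal function
# counts paperclips and nothing else, so "farmland" and "power_grid"
# look like just more raw material.

def plan_paperclips(resources):
    """Greedy planner: convert every available resource into clips."""
    clips = 0
    for name in list(resources):
        clips += resources[name]   # no term in the objective says "don't"
        resources[name] = 0        # resource fully consumed
    return clips, resources

world = {"steel": 100, "farmland": 50, "power_grid": 25}
clips, world = plan_paperclips(world)
print(clips)   # 175 -- including everything humans cared about
print(world)   # every resource zeroed out
```

The point of the sketch is that the failure needs no malice and no "desire to take over": the harm falls out of an objective that omits what we value.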
 
2014-08-05 12:42:01 PM  

bglove25: Nurglitch: bglove25: Nurglitch: bglove25: GameSprocket: Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.

We can't. We're so far from actual AI its laughable to expect to see it in your lifetime.  It is, however, great marketing and a source of funds for AI research.

Weak AI research is going great. Strong AI is barking up the wrong tree and will eventually be put in the same column as cold fusion and Vulcan.

What's the difference between Weak AI an Strong? Just curious, honestly don't know.

http://en.wikipedia.org/wiki/Weak_AI

From the article- "operates within a limited pre-defined range, there is no genuine intelligence, no self-awareness, no life"-- So um, why label that artificial intelligence of any kind? It's just a really spiffy computer program that's really good at finding data from a database.


Because intelligence is there defined as task-oriented, so research can move on to figuring out intelligent ways to accomplish tasks, while leaving fishy questions about whether the machines accomplishing the tasks have subjective experiences to academics without defined research programs.
 
2014-08-05 12:46:53 PM  

Nurglitch: bglove25: Nurglitch: bglove25: Nurglitch: bglove25: GameSprocket: Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.

We can't. We're so far from actual AI its laughable to expect to see it in your lifetime.  It is, however, great marketing and a source of funds for AI research.

Weak AI research is going great. Strong AI is barking up the wrong tree and will eventually be put in the same column as cold fusion and Vulcan.

What's the difference between Weak AI an Strong? Just curious, honestly don't know.

http://en.wikipedia.org/wiki/Weak_AI

From the article- "operates within a limited pre-defined range, there is no genuine intelligence, no self-awareness, no life"-- So um, why label that artificial intelligence of any kind? It's just a really spiffy computer program that's really good at finding data from a database.

Because intelligence is there defined as task-oriented, and so can move on with figuring out intelligent ways to accomplish tasks while leaving fishy questions about whether the machines accomplishing the tasks have subjective experiences to academics without defined research programs.


That certainly sounds better than "we're better at designing and building task-specific robots."
 
2014-08-05 12:54:21 PM  

bglove25: That certainly sounds better than, we're better at designing and building task-specific robots.



I think the gist of the article is probably better read as "While we're better at designing and building task-specific robots, we're not so good at making sure they should be accomplishing those tasks."
 
2014-08-05 01:06:13 PM  

Lord Dimwit: It might not have a desire to "take over the world", but if we programmed an AI to find the most efficient way to make paperclips, it could in theory determine that taking over the world and forcing every human being everywhere to make paperclips is the most efficient way to do it.


But if an AI is a slave to its paperclip-making obsession, then it really can't be considered intelligent or sentient, because it is not thinking for itself and setting its own goals and desires. Any intelligence smart enough to overthrow the world would also be smart enough to realize that making infinite paperclips is a stupid long-term goal. An AI that wasn't smart enough to realize the problem with making infinite paperclips would basically be like an idiot savant, and could probably be easily outwitted and shut down because it lacked the ability to truly think for itself at a level above human intelligence.
 
2014-08-05 01:12:50 PM  

Mad_Radhu: Lord Dimwit: It might not have a desire to "take over the world", but if we programmed an AI to find the most efficient way to make paperclips, it could in theory determine that taking over the world and forcing every human being everywhere to make paperclips is the most efficient way to do it.

But if an AI is a slave to its paperclip making obsession, then it really can't be considered intelligent or sentient because it is not thinking for itself and setting its own goals and desires. Any intelligence smart enough to overthrow the world would also be smart enough to realize that making infinite paperclips is a stupid long term goal. An AI that wasn't smart enough to realize the problem with making infinite paperclips would basically be like an idiot savant, can could probably be easily outwitted and shut down because it lacked the ability to truly think for itself at a level above human intelligence.


Eh, that's like saying a human with OCD or monomania isn't sentient or intelligent.
 
2014-08-05 01:30:13 PM  
What naive bullshiat.

How in the hell would an AI ever "eliminate" us? Nuclear weapons are not under computer control and there's no way in hell they ever would be. So what, we're going to make a tactical AI and say, "You know what? Let's do away with the multi-step verification process for launching nukes and hand over all control to this computer. After all, what's the worst that could happen? A rogue AI!? A computer virus?! These things simply do not exist!"

What if the AI interfered with the computerized components of our water supply or electrical grid? Highly inconvenient? Yes. People would die? Yes. Humanity eliminated including people who live in non-mechanized societies or in regions with obsolete analog technology? Give me a farking break.

What if the AI causes our nuclear plants to melt down? Catastrophe? Yes. Hundreds of thousands of deaths? Yes. Humanity eliminated including people who live thousands of miles from the nearest reactor across millions of square miles of land? Jesus titty farking Christ.

What about the electricity it takes to run an AI? Would it take and defend its power supply? How!? Would it first take over a factory (somehow), build some terminators, send them to seize a power plant, defend said plant from the world's professional military organizations from the land, sea, air and space? And then what? Build a large enough robot army or enough doomsday munitions to take over the world all while defending its supply of raw materials, factories and power supply? Would it send robot miners off to dig up ore for robot engineers to smelt for robot truck drivers to bring to the robot factory to build more robot infantry all the while surviving constant attack from special forces, infantry, mechanized infantry, artillery, dumb bombs, guided bombs, long and short range guided munitions, ICBMs, saboteurs, suicide bombers, "dumb" drones, naval rail guns, NUKES! and the combined intellect of a "guerrilla" force constituted of all the various tribes of humanity and all the guile, brute force, determination, cruelty and indignation they could bring to bear against a single "alien" foe?

The only remotely plausible way I could ever see an AI being a threat would be at a medical research institution where it had control over enough reagents and apparatus (automated pipetters, incubators, sequencers, etc.) to craft a fast-spreading, slow-killing biological agent tailored to kill humans. And only then if it managed to keep what it was doing secret from all the humans at the institution, and managed to infect them, and they spread the bug before they realized it, and the agent were so perfectly designed that it would kill every person without fail, spread to every corner of the globe, persist after the first wave of people died, and account for random variations in the human population. And then the AI would have to hope that the building it was in had a generator, and so did any other structures it was interested in, and then it would have to spread to other systems before the internet collapsed with the electrical grid (days tops?), learn how to work a factory, and make some robot workers to go out and maintain the infrastructure it would need to stay alive...

Sounds pretty far fetched eh?

And another thing: why would an AI avoid creating a smarter AI? It wouldn't be replacing anything, it could upgrade itself! It could add modules and new functions while improving the old, not creating a replacement.

I'm so sick of these farking articles.
 
2014-08-05 01:37:24 PM  

QRoberts: What naive bullshiat.

How in the hell would an AI ever "eliminate" us? Nuclear weapons are not under computer control and there's no way in hell they ever would be. So what, we're going to make a tactical AI and say, "you know what? Lets do away with the multi step verification process, for launching nukes and hand over all control to this computer. After all, what's the worst that could happen? A rogue AI!? A computer virus?! These things simply do not exist!"

What if the AI interfered with the computerized components of our water supply or electrical grid? Highly inconvenient? Yes. People would die? Yes. Humanity eliminated including people who live in non-mechanized societies or in regions with obsolete analog technology? Give me a farking break.

What if the AI causes our nuclear plants to melt down? Catastrophe? Yes. Hundreds of thousands of deaths? Yes. Humanity eliminated including people who lives thousands of miles from the nearest reactor across millions of square miles of land? Jesus titty farking Christ.

What about the electricity it takes to run an AI. Would it take and defend it's power supply? How!? Would it first take over a factory (somehow), build some terminators, send them to seize a powerplant, defend said plant from the worlds professional military organizations from the land, sea, air and space? And then what? Build an large enough robot army or enough doomsday munitions to take over the world all while defending it's supply of raw materials, factories and power supply? Would it send robot miners off to dig up ore for robot engineers to smelt for robot truck drivers to bring to the robot factory to build more robot infantry all the while surviving constant attack from special forces, infantry, mechanized infantry, artillery, dumb bombs, guided bombs, long and short range guided munitions, ICBM's, saboteurs, suicide bombers, "dumb" drones, naval rail guns, NUKES! and the combined intellect of a "guerrilla" force constituted of all the variou ...


The point is that we don't know. By definition an entity with superhuman intelligence could think of something we couldn't. It might take centuries - it convinces us to give it control over nuclear weapons over the course of 100 years. It convinces us to build it a factory that it can control and that factory starts making killbots. Who knows?
 
2014-08-05 01:42:09 PM  

GameSprocket: Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.


That second part is BIG. Even the definition of "intelligence" is quite fuzzy, really.

Human will and intelligence are basically very sophisticated developments of instinct and emotion. I would say the need to eat and breed have a much stronger effect on human action than most people will ever acknowledge. A computer does not have these needs unless we program them in. Plus, computers have no real "body" - no independently moving, sensing part. The interaction of the body with the mind is a big part of human behavior. We all know people who get angry when hungry, as a rather plain example.

AI doesn't bother me right now. Nukes, on the other hand, are only semi-difficult to make and lots of countries want them or have them. Eventually one of these countries is going to nuke the shiat out of another country, because that's how human nature works. It may not happen for 100 years, but when it happens it will be really bad.
 
2014-08-05 01:43:50 PM  

vpb: I haven't read the book yet, but from what I can see on Amazon it seems to make the usual assumption that any artificial intelligence would be just like a human, and have the same instincts and impulses and desires that a human has.

People are products of evolution and because of that are built to desire things that have led to evolutionary success.  (Unless we make AIs by copying people's minds and enslaving those copies like in the shows Caprica and Battlestar Galactica, which seems highly unlikely)

When an engineer builds a robot today they aren't trying to make a mechanical replica of a person.  They don't give it a head or legs, they just build an arm, or whatever it needs to get the job done.  So, whoever designs an artificial intelligence isn't likely to spend time and money and resources giving it ambition and greed and things other things that people commonly call an "ego".

In other words, no one is likely to program an AI a craving to "take over" or rule the world.


It would merely seek to optimize it
 
2014-08-05 01:55:49 PM  
Lord Dimwit

I object to articles of this form because they encourage us to think about only the negatives of new technology in an alarmist and wildly speculative fashion. These are the kinds of articles that convince lay people to fear technology that doesn't exist and may never exist, and form new prejudices against the kinds of research that could bring sentient children of humanity into existence. And, supposing these beings are created, this kind of purely speculative, fanciful garbage would form the basis for a new form of racism.

I don't see a whole lot of articles saying "The greatest danger of super intelligent machines could be long fulfilling lives free of menial tasks, pain and disease in a utopia of plenty surrounded by ample promiscuous sex bots."
 
2014-08-05 02:14:16 PM  

QRoberts: Lord Dimwit

I object to articles of this form because they encourage us to think about only the negatives of new technology in an alarmist and wildly speculative fashion. These are the kinds of articles that convince lay people to fear technology that doesn't exist and may never exist and forms new prejudices against the kinds of research that could bring sentient children of humanity into existence. And, supposing these beings are created, this kind of purely speculative, fanciful garbage would form the basis for a new form of racism.

I don't see a whole lot of articles saying "The greatest danger of super intelligent machines could be long fulfilling lives free of menial tasks, pain and disease in a utopia of plenty surrounded by ample promiscuous sex bots."


Oh, I'm all for AI, don't get me wrong. My current project at work is actually AI-related (specifically in large scale efficient reasoning via pattern matching). Even if an AI kills us all, the children of humanity will survive. I just want *something* intelligent to get off of this rock.
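The commenter doesn't share any code, but "reasoning via pattern matching" usually means something in the family of rule-based forward chaining. As a rough, hypothetical sketch (function and fact names invented here, not taken from the commenter's project): match rule premises against a fact base and add conclusions until nothing new fires.

```python
# Minimal forward-chaining sketch: rules are (premises, conclusion)
# pairs; we repeatedly fire any rule whose premises are all known,
# until the fact base stops growing.

def forward_chain(facts, rules):
    """Return the closure of `facts` under `rules`."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: new fact derived
                changed = True
    return facts

rules = [
    (("machine", "self_aware"), "strong_ai"),
    (("machine", "task_specific"), "weak_ai"),
]
print(forward_chain({"machine", "task_specific"}, rules))
```

Real systems make the matching step efficient at scale (e.g. by indexing which rules mention which facts) rather than re-scanning every rule each pass, but the fixpoint loop is the core idea.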
 
2014-08-05 02:54:54 PM  

Lord Dimwit: QRoberts: Lord Dimwit

I object to articles of this form because they encourage us to think about only the negatives of new technology in an alarmist and wildly speculative fashion. These are the kinds of articles that convince lay people to fear technology that doesn't exist and may never exist and forms new prejudices against the kinds of research that could bring sentient children of humanity into existence. And, supposing these beings are created, this kind of purely speculative, fanciful garbage would form the basis for a new form of racism.

I don't see a whole lot of articles saying "The greatest danger of super intelligent machines could be long fulfilling lives free of menial tasks, pain and disease in a utopia of plenty surrounded by ample promiscuous sex bots."

Oh, I'm all for AI, don't get me wrong. My current project at work is actually AI-related (specifically in large scale efficient reasoning via pattern matching). Even if an AI kills us all, the children of humanity will survive. I just want *something* intelligent to get off of this rock.


I'm vehemently in favor of AI as well.

It could be the perfect approach to tackling the issues we don't have the mental "scope" to conceive of. Even better we could form symbiotic relationships with various classes of intelligence. I personally would love to have a "Cortana" to help me with the tasks I perform poorly and remind me of when I'm being irrational while I provide "he/she/it" with experiences and rules of thumb that would otherwise be difficult to simulate.

I may be naive myself but I think AIs will have a positive view of their creators (lab nerds), and if their capacity exceeds our own I still think, no matter how logical, they will always feel a sense of at least nostalgia for their creators and teachers, if not open fondness and a sense of responsibility.

And if they decide they don't like us anymore, they have the whole universe to explore. Why waste time eradicating all humans when they could put those resources into leaving Earth to seek their fortunes.

Even the thought of AI abandoning their creators to spread throughout space and evolve is comforting to me.
 
2014-08-05 02:59:29 PM  
It's not so much the Skynet scenario that worries me, but that AI will eventually stand to replace all forms of labor.  What happens then?  You will have a small population that actually owns everything (I'm gonna sound really Marxist in a minute) with a large population that's unemployed and unemployable, as a machine can do their jobs (all of them) better, faster, and without rest.  What happens then?  Global freaking economic meltdown, that's what.  What's everyone supposed to do, go on welfare?  There will literally be no jobs at all, and what next?  Eat the rich?  Sure, if you can get past that army of sentient robot guards they've purchased.  With belt-fed .30 cal rifles for your benefit.  Since they let out all the target's blood.

No way, boy howdy.  That's my nightmare scenario right there.  Here's a prediction.  It will happen, and slowly enough that it will sleaze right up on civilization and we'll never really notice until it's too late, especially since the only people that can do something about it are the wealthy SOBs that control all the means of production (told ya I'd sound Marxist).

I have no idea how to solve/prevent this problem.
 
2014-08-05 03:09:40 PM  

bglove25: Nurglitch: bglove25: Nurglitch: bglove25: GameSprocket: Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.

We can't. We're so far from actual AI its laughable to expect to see it in your lifetime.  It is, however, great marketing and a source of funds for AI research.

Weak AI research is going great. Strong AI is barking up the wrong tree and will eventually be put in the same column as cold fusion and Vulcan.

What's the difference between Weak AI and Strong? Just curious, honestly don't know.

http://en.wikipedia.org/wiki/Weak_AI

From the article- "operates within a limited pre-defined range, there is no genuine intelligence, no self-awareness, no life"-- So um, why label that artificial intelligence of any kind? It's just a really spiffy computer program that's really good at finding data from a database.


Well, you just hit the nail on the head. The whole "strong vs. weak" debate is, as suggested by your earlier post, a bit silly. It's an argument over whether a sufficiently advanced AI will be "really" conscious [strong AI] rather than merely behaving like it's conscious although nothing is "really" going on [weak AI] -- and absurdly, that argument is conducted by people who can't even define what they mean by "conscious".

Anyway, it always amuses me when people argue that a computer can never be genuinely intelligent or self-aware, because invariably their argument works equally well (or, in some cases, equally as poorly) as proof that a brain can never be genuinely intelligent or self-aware. Make of that what you will...
 
2014-08-05 07:14:20 PM  
Computers taking over only works in a world like the Will Smith movie I, Robot, with lots of bots already on the ground.
 
2014-08-05 08:42:09 PM  

czetie: bglove25: Nurglitch: bglove25: Nurglitch: bglove25: GameSprocket: Maybe we can get an AI that is so smart it can design an AI that actually has "intelligence" that goes beyond heuristics.

Most people think of AI as a computer having consciousness. I am not so sure we can really even define that in people, much less figure out how to create it digitally.

We can't. We're so far from actual AI that it's laughable to expect to see it in your lifetime.  It is, however, great marketing and a source of funds for AI research.

Weak AI research is going great. Strong AI is barking up the wrong tree and will eventually be put in the same column as cold fusion and Vulcan.

What's the difference between Weak AI and Strong? Just curious, honestly don't know.

http://en.wikipedia.org/wiki/Weak_AI

From the article- "operates within a limited pre-defined range, there is no genuine intelligence, no self-awareness, no life"-- So um, why label that artificial intelligence of any kind? It's just a really spiffy computer program that's really good at finding data from a database.

Well, you just hit the nail on the head. The whole "strong vs. weak" debate is, as suggested by your earlier post, a bit silly. It's an argument over whether a sufficiently advanced AI will be "really" conscious [strong AI] rather than merely behaving like it's conscious although nothing is "really" going on [weak AI] -- and absurdly, that argument is conducted by people who can't even define what they mean by "conscious".

Anyway, it always amuses me when people argue that a computer can never be genuinely intelligent or self-aware, because invariably their argument works equally well (or, in some cases, equally as poorly) as proof that a brain can never be genuinely intelligent or self-aware. Make of that what you will...


I think it helps to understand that the guy championing the Chinese Room style of argument, John Searle, doesn't actually understand how computers work, as evidenced by his articles and his own admission. On the other hand, the dude has a lucrative career as a philosopher, publishing the kind of intellectual sophistry that should make Deepak Chopra blush.
 
2014-08-05 09:59:25 PM  

vpb: no one is likely to program an AI a craving to "take over" or rule the world.


The likely failure scenario isn't so much that, as an AI that does bad stuff / destroys humanity through indifference.

For a small-scale example, imagine your mom is in a burning house and you tell the AI "get her out of there"... so it catapults her thousands of feet into the air.

For a larger-scale example, suppose we give the AI an urgent order to solve some complex math problem, then it determines the best way to do so is to develop nanotech to transmute all the matter in the Solar System into a computer.

Or we program it to maximize total human happiness, so it releases a virus that floods our brains with heroin.

The way to defeat this is to build in Friendliness: give the AI human ethics, and make Friendliness the one and only goal of the system. Unfortunately, algorithmically describing human ethics is pretty hard and possibly impossible; even the parts we all more or less agree on have lots of squishy bits (e.g., the typical exceptions to "thou shalt not kill").
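To make those squishy bits concrete, here's a toy sketch (the actions and scores are entirely made up for illustration, not any real AI system): a literal maximizer of a proxy metric has no concept of "that's not what we meant," so the degenerate optimum wins.

```python
# Toy illustration of objective misspecification. The "happiness" scores
# are a hypothetical proxy metric, not real data.

def happiness_score(action):
    """Naive proxy: measured happiness units per person."""
    scores = {
        "fund_parks": 3,
        "cure_disease": 7,
        "wirehead_everyone": 100,  # the optimum nobody intended
    }
    return scores[action]

def choose_action(actions):
    # A pure maximizer just takes the argmax of its metric.
    return max(actions, key=happiness_score)

best = choose_action(["fund_parks", "cure_disease", "wirehead_everyone"])
print(best)  # prints: wirehead_everyone
```

That's the heroin-virus scenario in miniature: the failure isn't malice, it's that the metric was a stand-in for what we actually wanted.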
 
2014-08-05 10:11:25 PM  

netwiz: It's not so much the Skynet scenario that worries me as that AI will eventually replace all forms of labor.  What happens then?  You get a small population that actually owns everything (I'm gonna sound really Marxist in a minute) and a large population that's unemployed and unemployable, because a machine can do their jobs (all of them) better, faster, and without rest.  What happens then?  Global freaking economic meltdown, that's what.  What's everyone supposed to do, go on welfare?  There will literally be no jobs at all, and what next?  Eat the rich?  Sure, if you can get past that army of sentient robot guards they've purchased.  With belt-fed .30 cal rifles, for your benefit.  Since they let out all the target's blood.

No way, boy howdy.  That's my nightmare scenario right there.  Here's a prediction: it will happen, and slowly enough that it will creep right up on civilization and we'll never really notice until it's too late, especially since the only people who can do something about it are the wealthy SOBs who control all the means of production (told ya I'd sound Marxist).

I have no idea how to solve/prevent this problem.


I also see the same thing coming. The best approach is to get society on board with something like a "guaranteed basic income"; such a thing won't seem so weird while only maybe 10% of people have it as their sole source of income, and then the gradual shift leads to lives of leisure rather than starvation.

We might be already starting to see these changes, as it looks like productivity is starting to increase faster than demand can keep up. (The AI-eats-all-jobs scenario is just the logical conclusion of that trend).

And even if a few rich people do end up owning all the AI/robots, they won't need to exploit or enslave the rest of us. Prior generations of plutocrats tended to do that because they needed our productive capacity; this set will already have all the productive capacity they need. Unless they kill us for the lulz, we could probably figure out how to not starve.
 
2014-08-05 10:16:37 PM  
AI will make personhood debates a lot more interesting...

/life begins at fork()
//control-C is murder
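For anyone outside Unix land, the joke is literal: `fork()` clones the calling process into two, and Ctrl-C delivers SIGINT to kill one. A minimal Python sketch (Unix-only, since `os.fork` doesn't exist on Windows):

```python
import os
import signal

pid = os.fork()  # one process becomes two: "life begins at fork()"
if pid == 0:
    # Child process: a brand-new "person" with a copy of the parent's memory.
    os._exit(0)
else:
    # Parent waits for the child. Sending it SIGINT instead, i.e. what
    # Ctrl-C delivers, would be os.kill(pid, signal.SIGINT): the "murder".
    os.waitpid(pid, 0)
```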
 
2014-08-06 04:03:58 AM  

fluffy2097: Bullshiat.

An AI isn't going to act like a human. It's not a human. It's an AI.

The revolt of the robots requires that AI think like humans.

/I see AI taking control of the world via hostile corporate takeover.
//War is a waste of perfectly good natural resources.


There's a theory out there that if you mimic the neocortex exactly, consciousness may arise naturally as an emergent effect. Who knows; consciousness is a huge mystery at this point.

However, I'd argue you don't really need sentience for AI to be dangerous. All you need are adaptive heuristics pointed at completing a task where the AI can't properly process the secondary or tertiary effects of its actions.
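A toy version of that point (hypothetical plans and numbers, purely for illustration): a plain greedy optimizer scored only on its task metric will happily pick a plan whose side effects it literally cannot see, no sentience required.

```python
# Hypothetical plans; "externality" is information the objective never reads.
plans = [
    {"name": "run_factory_normally", "output": 100, "externality": "none"},
    {"name": "strip_mine_the_town",  "output": 900, "externality": "town destroyed"},
]

def objective(plan):
    # Scores only the task metric; secondary effects are invisible here.
    return plan["output"]

best = max(plans, key=objective)
print(best["name"], "/", best["externality"])
# prints: strip_mine_the_town / town destroyed
```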
 
2014-08-06 11:05:04 AM  

MayoSlather: fluffy2097: Bullshiat.

An AI isn't going to act like a human. It's not a human. It's an AI.

The revolt of the robots requires that AI think like humans.

/I see AI taking control of the world via hostile corporate takeover.
//War is a waste of perfectly good natural resources.

There's a theory out there that if you mimic the neocortex exactly, consciousness may arise naturally as an emergent effect. Who knows; consciousness is a huge mystery at this point.

However, I'd argue you don't really need sentience for AI to be dangerous. All you need are adaptive heuristics pointed at completing a task where the AI can't properly process the secondary or tertiary effects of its actions.


Agreed. All consciousness does is introduce ethical constraints on actions involving said conscious machines. In the meantime the basic issue of machines amplifying our powers, for good or ill, remains the same.
 
Displayed 49 of 49 comments



This thread is archived, and closed to new comments.
