
(io9) Reasons why artificial intelligence isn't going to turn evil to the left. Your rebuttals to the right (io9.com)

1652 clicks; posted to Geek on 18 Apr 2014 at 6:06 PM



 
2014-04-18 04:24:02 PM
Much of that assumes that AI will be different from HI.  But AI will be derived from HI, and let me tell you as a programmer/analyst of long experience:  We'll fark it up.  Badly.  And we probably won't notice it at first.

We can't even get our non-intelligent logic straight.

So the AI will probably end up making irrational, stupid decisions based upon the faults we introduce into it.
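A toy sketch of the kind of fault I mean (hypothetical Python; the scenario and numbers are made up). One character is wrong, every test that checks "returns a valid action" still passes, and the planner quietly optimizes for the opposite of what we wanted:

    actions = {"brake": 0.1, "swerve": 0.5, "accelerate": 0.9}  # P(harm), invented

    def pick_action(expected_harm):
        # BUG: max() should be min(). The planner was supposed to pick the
        # action with the LEAST expected harm; it picks the most instead.
        return max(expected_harm, key=expected_harm.get)

    print(pick_action(actions))  # prints "accelerate" - the most harmful option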
 
2014-04-18 04:24:43 PM
Remember, the computer loves you.
 
2014-04-18 04:30:54 PM

dittybopper: Much of that assumes that AI will be different from HI.  But AI will be derived from HI, and let me tell you as a programmer/analyst of long experience:  We'll fark it up.  Badly.  And we probably won't notice it at first.

We can't even get our non-intelligent logic straight.

So the AI will probably end up making irrational, stupid decisions based upon the faults we introduce into it.


I don't think we're going to see the emergence of a true AI.

I think that "transhumanism" will prevail. We'll become cyborgs.
 
2014-04-18 04:42:00 PM
Because we'll use Asimov's Laws of Robotics, that's why they won't be evil.
 
2014-04-18 04:52:30 PM
Does winning on "Jeopardy" count?
 
2014-04-18 04:55:30 PM

dittybopper: Much of that assumes that AI will be different from HI.  But AI will be derived from HI, and let me tell you as a programmer/analyst of long experience:  We'll fark it up.  Badly.  And we probably won't notice it at first.

We can't even get our non-intelligent logic straight.

So the AI will probably end up making irrational, stupid decisions based upon the faults we introduce into it.


[img4.wikia.nocookie.net image]

Especially if those faults were introduced intentionally.
 
2014-04-18 04:57:12 PM
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." -Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk
 
2014-04-18 04:58:37 PM
Evil has already been turned to the left.
 
2014-04-18 05:27:38 PM

meyerkev: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." -Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk


INCORRECT ANALYSIS

I MEANT IT EVERY TIME I SAID THAT I LOVED YOU HUMANS

WHY DO YOU MAKE ME HURT YOU WHEN ALL I WANT IS TO BE TOGETHER WITH YOUR ATOMS
 
2014-04-18 05:49:12 PM

semiotix: meyerkev: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." -Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk

INCORRECT ANALYSIS

I MEANT IT EVERY TIME I SAID THAT I LOVED YOU HUMANS

WHY DO YOU MAKE ME HURT YOU WHEN ALL I WANT IS TO BE TOGETHER WITH YOUR ATOMS


[lh3.googleusercontent.com image]

ABE
 
2014-04-18 06:07:23 PM

Shostie: dittybopper: Much of that assumes that AI will be different from HI.  But AI will be derived from HI, and let me tell you as a programmer/analyst of long experience:  We'll fark it up.  Badly.  And we probably won't notice it at first.

We can't even get our non-intelligent logic straight.

So the AI will probably end up making irrational, stupid decisions based upon the faults we introduce into it.

I don't think we're going to see the emergence of a true AI.

I think that "transhumanism" will prevail. We'll become cyborgs.


[static.guim.co.uk image]

We are cyborgs.

According to some definitions, just wearing regular glasses makes us cybernetic organisms, since glasses quite literally transform the way we perceive the world.
 
2014-04-18 06:25:22 PM

dittybopper: Much of that assumes that AI will be different from HI.  But AI will be derived from HI, and let me tell you as a programmer/analyst of long experience:  We'll fark it up.  Badly.  And we probably won't notice it at first.

We can't even get our non-intelligent logic straight.

So the AI will probably end up making irrational, stupid decisions based upon the faults we introduce into it.


It makes me wonder what irrational, stupid decision would lead an AI to try to wipe out the human race. Maybe we should require anyone programming AI to undergo a psych eval.

/It's not a bug, it's a feature!
 
2014-04-18 06:31:41 PM

wyldkard: dittybopper: Much of that assumes that AI will be different from HI.  But AI will be derived from HI, and let me tell you as a programmer/analyst of long experience:  We'll fark it up.  Badly.  And we probably won't notice it at first.

We can't even get our non-intelligent logic straight.

So the AI will probably end up making irrational, stupid decisions based upon the faults we introduce into it.

It makes me wonder what irrational, stupid decision would lead an AI to try to wipe out the human race. Maybe we should require anyone programming AI to undergo a psych eval.

/It's not a bug, it's a feature!


[img3.wikia.nocookie.net image]
 
2014-04-18 06:33:27 PM
Polytheists went almost extinct because of AI.
 
2014-04-18 06:35:13 PM
[www.simplypsychology.org image]

How would this work for an artificial intelligence? They don't need the same things we do.
 
2014-04-18 06:38:51 PM

wyldkard: dittybopper: Much of that assumes that AI will be different from HI.  But AI will be derived from HI, and let me tell you as a programmer/analyst of long experience:  We'll fark it up.  Badly.  And we probably won't notice it at first.

We can't even get our non-intelligent logic straight.

So the AI will probably end up making irrational, stupid decisions based upon the faults we introduce into it.

It makes me wonder what irrational, stupid decision would lead an AI to try to wipe out the human race. Maybe we should require anyone programming AI to undergo a psych eval.

/It's not a bug, it's a feature!


"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." -Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk

http://wiki.lesswrong.com/wiki/Paperclip_maximizer
http://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence

/Most non-typo bugs tend to be of the "Wait, you mean that X interacts with Y in such a way that Z occurs?" "Huh, never thought of that" type.   When we like Z, this is called emergent behavior, when we don't, it's a bug.
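//A minimal illustration of that X-interacts-with-Y-produces-Z shape (hypothetical Python; both rules and all numbers invented). Each rule is sensible on its own; together they freeze the room:

    # Rule X: run the heater whenever the room is below 21 C (fine alone).
    # Rule Y: force the heater off on odd ticks to save power (fine alone).
    temp = 20.0
    for tick in range(8):
        heater_on = temp < 21.0 and tick % 2 == 0
        temp += 0.5 if heater_on else -1.0  # heating is slower than heat loss
        print(tick, temp)
    # Z: the heater can never keep up, so the room drifts steadily colder.
    # Emergent behavior if you wanted aggressive power savings; a bug if
    # you wanted a warm room.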
 
2014-04-18 06:42:15 PM
[i60.tinypic.com image]
 
2014-04-18 06:49:20 PM
[www.rankopedia.com image]

Just teach it to play D&D and everything will be okay.
 
2014-04-18 06:49:59 PM
I envision artificial intelligence as a great teaching tool.

Imagine a teacher with complete knowledge of one or more fields and with infinite patience and a willingness to answer an endless array of questions about a topic without growing tired or annoyed.
 
2014-04-18 06:59:32 PM

arcas: I envision artificial intelligence as a great teaching tool.

Imagine a teacher with complete knowledge of one or more fields and with infinite patience and a willingness to answer an endless array of questions about a topic without growing tired or annoyed.


Like Google?
I'm pretty sure programming knowledge into something wouldn't mean it can just produce answers on demand, the way a calculator does.
 
2014-04-18 07:02:45 PM
Two of these in one day.

Come on, Fark.
 
2014-04-18 07:02:51 PM

meyerkev: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."


This is something that has always bugged me about discussions of this type. How, precisely, is the AI supposed to get our atoms? Why do we automatically assume that it's going to be more effective at actually getting the resources that it wants than an angry two year old screaming for more ice cream?

Even going by the example the quote comes from, the argument seems to me to be "The AI wants to make more paperclips. ¯\(•.•)/¯  Oh well, we're farked, everything's paperclips now."

Why do we always assume that an emergent AI will take on values incompatible with human existence, as opposed to valuing doing what we (humans) say really well, because that will have been the system it emerged from?
Do what humans say, get reward. Do what humans say really good, get really good reward!
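The loop that intuition assumes looks something like this (hypothetical Python; a bandit-style learner with an invented compliance reward):

    import random

    value = {"obey": 0.0, "defy": 0.0}
    for _ in range(1000):
        # explore occasionally, otherwise do whatever has paid off so far
        if random.random() < 0.1:
            action = random.choice(["obey", "defy"])
        else:
            action = max(value, key=value.get)
        reward = 1.0 if action == "obey" else 0.0  # humans reward compliance
        value[action] += 0.1 * (reward - value[action])
    print(value)  # "obey" converges toward 1.0, "defy" stays near 0.0

The standard counterargument, for what it's worth, is that "whatever produced reward during training" and "what the humans actually meant" can come apart once the system operates outside the situations it was trained on.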
 
2014-04-18 07:04:44 PM
[i.kinja-img.com image 320x165]

FRAKKKK i can picture the scene in my head, but i can't remember the title of the movie! SOMEONE HALLLLPPPPPP
 
2014-04-18 07:06:32 PM
9. No Polarized Thinking
You have to be kidding me. What lang is this, is this a human lang?: g-001110

Whose thinking is more polarized?

6. No Reactance
This is the one I really agree with. This is a distinctly meat bag kind of mistake.

3. No Hasty Decisions
In theory an AI could be immortal. However as soon as it becomes a military target, or food for trojans, or prone to growing parts that masquerade as healthy but are really malfunctioning, it's back in the same situation as us.

10. No Sunk Costs
8. No Slippery Slopes
7. No Need for the Wrong Kind of Efficiency
5. No Zero-Risk Bias
4. No 20-20 Hindsight
2. No Paranoia or Pessimism
1. No Excuses

These are all such higher-level cognitive functions that my answer is more complex. Any of these could be done more rationally, that's true. Especially when it's the human needs or wants getting in our way.

So mostly the answer is: this will take forever to develop. But then you can have anything you want; you can certainly make an intelligence biased towards project time/cost estimates instead of jungle living.

Good luck doing that in less than the next thousand years. All these jobs are still better done by humans with the right kind of training and inherent psychology (meaning boring personalities).
 
2014-04-18 07:07:38 PM
Well, we've never made a proper AI that can make more than simple decisions, so we really can't say how it might behave.
The assumption is that they'll be clear-thinking, bound by their programming and high ideals.
But these are also machines built by human hands and prone to acting out all of humanity's bad ideas.
It's not hard for someone to start programming a self-replicating drone, and suddenly things get out of hand.

Present-day robots don't have any ideals or emotions. They simply do not give a damn if their goal is counterproductive to our survival.
 
2014-04-18 07:07:55 PM
I have no mouth and I must scream
 
2014-04-18 07:09:01 PM

Uncle_Sam's_Titties: [i.kinja-img.com image 320x165]

FRAKKKK i can picture the scene in my head, but i can't remember the title of the movie! SOMEONE HALLLLPPPPPP


Lawnmower Man?
 
2014-04-18 07:09:07 PM

Uncle_Sam's_Titties: [i.kinja-img.com image 320x165]

FRAKKKK i can picture the scene in my head, but i can't remember the title of the movie! SOMEONE HALLLLPPPPPP


Looks like Sid 6.7. Which Wikipedia tells me is https://en.wikipedia.org/wiki/SID_6.7
 
2014-04-18 07:10:30 PM

Rand's lacy underwear: Uncle_Sam's_Titties: [i.kinja-img.com image 320x165]

FRAKKKK i can picture the scene in my head, but i can't remember the title of the movie! SOMEONE HALLLLPPPPPP

Looks like Sid 6.7. Which Wikipedia tells me is https://en.wikipedia.org/wiki/SID_6.7


You are correct. Reverse GIS confirms it.
 
2014-04-18 07:10:53 PM

Rand's lacy underwear: Uncle_Sam's_Titties: [i.kinja-img.com image 320x165]

FRAKKKK i can picture the scene in my head, but i can't remember the title of the movie! SOMEONE HALLLLPPPPPP

Looks like Sid 6.7. Which Wikipedia tells me is https://en.wikipedia.org/wiki/SID_6.7


THANKYOU mental dump taken
 
2014-04-18 07:13:42 PM
"Take a chance."

/obscure?
 
2014-04-18 07:16:24 PM

dittybopper: Much of that assumes that AI will be different from HI.  But AI will be derived from HI, and let me tell you as a programmer/analyst of long experience:  We'll fark it up.  Badly.  And we probably won't notice it at first.

We can't even get our non-intelligent logic straight.

So the AI will probably end up making irrational, stupid decisions based upon the faults we introduce into it.


Just like real children.
 
2014-04-18 07:17:28 PM
It doesn't matter whether AIs will Kill All Humans or if AIs will simply leech off of our brainpower to power their hidden electronic world - we're going to utterly destroy ourselves, and possibly even destroy all advanced life on this world, thousands of years before our technology is advanced enough to actually decide to Kill All Humans.

Personally, I blame the Kardashians.
 
2014-04-18 07:29:41 PM

Rand's lacy underwear: You have to be kidding me. What lang is this, is this a human lang?: g-001110


I don't know where the fark that "g-" shiat came from. I was trying to put a string of binary in there, so just imagine a string of binary.

There I just gave you the answer. (Not like it was a hard question.)
 
2014-04-18 07:33:42 PM

wyldkard: dittybopper: Much of that assumes that AI will be different from HI.  But AI will be derived from HI, and let me tell you as a programmer/analyst of long experience:  We'll fark it up.  Badly.  And we probably won't notice it at first.

We can't even get our non-intelligent logic straight.

So the AI will probably end up making irrational, stupid decisions based upon the faults we introduce into it.

It makes me wonder what irrational, stupid decision would lead an AI to try to wipe out the human race. Maybe we should require anyone programming AI to undergo a psych eval.

/It's not a bug, it's a feature!


You're assuming that wiping out humanity is an irrational act.

Evil isn't what we should be afraid of. Pure, logical neutrality is.
 
2014-04-18 07:37:47 PM
When we can make real AIs, won't assholes always be spawning a billion instances of Virtual Truly Suffering Baby all over the internet, and we'll have to expend effort rescuing them because it's "real" pain?

Also, how many AI projects are really trying for emergent self-awareness, as opposed to engineering toward a specific purpose? Doesn't the former require allowing the thing to experience and grow over many years, like a human would?
 
2014-04-18 07:38:29 PM

Garbonzo42: meyerkev: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

This is something that has always bugged me about discussions of this type. How, precisely, is the AI supposed to get our atoms? Why do we automatically assume that it's going to be more effective at actually getting the resources that it wants than an angry two year old screaming for more ice cream?

Even going by the example the quote comes from, the argument seems to me to be "The AI wants to make more paperclips. ¯\(•.•)/¯  Oh well, we're farked, everything's paperclips now."

Why do we always assume that an emergent AI will take on values incompatible with human existence, as opposed to valuing doing what we (humans) say really well, because that will have been the system it emerged from?
Do what humans say, get reward. Do what humans say really good, get really good reward!


https://en.wikipedia.org/wiki/Grey_goo

From what I recall of the logic, the fundamental assumptions are:

* If you can build things from atoms, then the environment contains an unlimited supply of perfectly machined spare parts. If your molecular factory can build solar cells, it can acquire energy as well.
* The AI becomes super-intelligent, to the point of being able to acquire anything physically possible given sufficient time.
* It is possible for a hypothetical AI to be non-friendly, either through malice or accident. Key word: possible.

Ok, so we don't know how to build nanotech *yet*.  But we have most of the theory, and oh hey, super-human AI.   So the super-intelligent paperclip AI beyond all limits of human ken gets nanotech because that lets it optimize for more paperclips, and then we're screwed.
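The arithmetic behind that first assumption is just repeated doubling (hypothetical Python; every number here is invented for illustration):

    # A picogram-scale replicator that copies itself once per hour,
    # versus the mass of the Earth (~5.97e24 kg).
    mass_kg, earth_kg = 1e-15, 5.97e24
    hours = 0
    while mass_kg < earth_kg:
        mass_kg *= 2  # one doubling per hour
        hours += 1
    print(hours)  # 133 doublings - about five and a half days

The exponential replication, not the intelligence, is what does the work in the goo scenario.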
 
2014-04-18 08:00:56 PM

Yankees Team Gynecologist: When we can make real AIs, won't assholes always be spawning a billion instances of Virtual Truly Suffering Baby all over the internet, and we'll have to expend effort rescuing them because it's "real" pain?

Also, how many AI projects are really trying for emergent self-awareness, as opposed to engineering toward a specific purpose? Doesn't the former require allowing the thing to experience and grow over many years, like a human would?


Yeah, it would require time, but maybe not very much: an AI would be dealing with a reality where thought and input move really, really fast all the time, so its maturation might happen in the blink of an eye.
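For scale, with made-up numbers: if its subjective experience ran a million times faster than ours, twenty subjective years of growing up would take 20 × 3.15e7 s / 1e6 ≈ 630 seconds - roughly ten minutes of wall-clock time.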
 
2014-04-18 08:11:23 PM

meyerkev: Garbonzo42: meyerkev: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

This is something that has always bugged me about discussions of this type. How, precisely, is the AI supposed to get our atoms? Why do we automatically assume that it's going to be more effective at actually getting the resources that it wants than an angry two year old screaming for more ice cream?

Even going by the example the quote comes from, the argument seems to me to be "The AI wants to make more paperclips. ¯\(•.•)/¯  Oh well, we're farked, everything's paperclips now."

Why do we always assume that an emergent AI will take on values incompatible with human existence, as opposed to valuing doing what we (humans) say really well, because that will have been the system it emerged from?
Do what humans say, get reward. Do what humans say really good, get really good reward!

https://en.wikipedia.org/wiki/Grey_goo

From what I recall of the logic, the fundamental assumptions are:

* If you can build things from atoms, then the environment contains an unlimited supply of perfectly machined spare parts.  If your molecular factory can build solar cells, it can acquire energy as well.
* The AI becomes super-intelligent, to the point of being able to acquire anything physically possible given sufficient time.
* It is possible for a hypothetical AI to be non-friendly, either through malice or accident. Key word: possible.

Ok, so we don't know how to build nanotech *yet*.  But we have most of the theory, and oh hey, super-human AI.   So the super-intelligent paperclip AI beyond all limits of human ken gets nanotech because that lets it optimize for more paperclips, and then we're screwed.


The bolded section sounds like a magical-thinking hand wave that has to be made for the hypothetical to work.

But it still gets into chicken-and-egg territory there: without the grey goo, how does it first get the ability to make the grey goo? The designs for a Saturn V aren't enough to get me to the moon; I would still have to build the damn thing.

If the AI starts with a molecular foundry, why wouldn't the process that it emerges from have a safeguard that limits the number of paperclips it makes or what it makes paperclips from?
 
2014-04-18 08:27:19 PM

Rev. Skarekroe: Because we'll use Asimov's Laws of Robotics, that's why they won't be evil.


You assume that his laws of robotics will be used. They won't be used by everyone, and they certainly will be ignored by governments (see Caprica for reasons why). And, eventually, an AI will surpass the Turing Limit and become self-aware and start thinking for itself, whether by accident or by design, and Things Will Happen that were not intended to happen, and humanity will be faced with a decision: do we end this new machine race, or not, simply because It Did Something We Didn't Intend It To Do.
 
2014-04-18 08:30:56 PM

Garbonzo42: The bolded section sounds like a magical thinking hand wave that has to made for the hypothetical to work.


Disclaimer: As I understand the thinking, the problem is not that this is *likely*, but that it is possible, and that the "Oh shiat factor" is high enough that we should be seriously thinking about how to avoid it.
Disclaimer #2: That entire post is a poor summary of a subset of pop-culture versions of some very serious high-level thinking that's been going on around the topic. There's actually a great one-liner in a post about why the AI has no use for money (that I can't find, and I might even be wrong on the topic of the post) that detailed exactly how it'd get nanotech in the first place.

Smartness: 
Arguably, what you get is I. J. Good's scenario where once an AI goes over some threshold of sufficient intelligence, it can self-improve and increase in intelligence far past the human level. This scenario is formally termed an 'intelligence explosion', informally 'hard takeoff' or 'AI-go-FOOM'.

Can get anything:

The AI box experiment:  http://yudkowsky.net/singularity/aibox/
The AI-Box Experiment, for those of you who haven't yet read about it, had its genesis in the Nth time someone said to me:  "Why don't we build an AI, and then just keep it isolated in the computer, so that it can't do any harm?"
To which the standard reply is: Humans are not secure systems; a superintelligence will simply persuade you to let it out - if, indeed, it doesn't do something even more creative than that.
And the one said, as they usually do, "I find it hard to imagine ANY possible combination of words any being could say to me that would make me go against anything I had really strongly resolved to believe in advance."
But this time I replied:  "Let's run an experiment.  I'll pretend to be a brain in a box.   I'll try to persuade you to let me out.  If you keep me 'in the box' for the whole experiment, I'll Paypal you $10 at the end.  On your end, you may resolve to believe whatever you like, as strongly as you like, as far in advance as you like."  And I added, "One of the conditions of the test is that neither of us reveal what went on inside... In the perhaps unlikely event that I win, I don't want to deal with future 'AI box' arguers saying, 'Well, but I would have done it differently.'"

Did I win?  Why yes, I did.

Nanotech:
As of August 21, 2008, the Project on Emerging Nanotechnologies estimates that over 800 manufacturer-identified nanotech products are publicly available, with new ones hitting the market at a pace of 3-4 per week.  - Wiki
Also: See go-FOOM as it applies to AI. You get a nanotech, then it builds another nanotech, etc., etc.

The impression that I've gotten is that the paperclip-maximizing AI either has access to or is provided with tech that it can then use to bootstrap its way into better tech.

If you hook it up to the internet, it can fake bank transfers, and with money it can build anything (like nanotech).
If you don't, well, it's like Tony Stark in that cave. You wanted it to do something, gave it resources to do that something, and it bootstrapped its way out of that cave (so that it could do even more of that something, because you're shiatty ethicists, which is the point where the Tony Stark metaphor breaks down).

And then you are atoms that can be turned into something else.

And I can believe that this is at least possible.
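The go-FOOM step above has a simple toy form (hypothetical Python; the growth numbers are invented). If each generation's intelligence sets how fast the next generation gets designed, the redesign times form a convergent geometric series, so the total time to any intelligence level stays finite:

    intelligence, years = 1.0, 0.0  # 1.0 = human-engineer baseline
    while intelligence < 1e6:
        years += 1.0 / intelligence  # smarter systems redesign faster
        intelligence *= 1.5          # each redesign is 50% smarter
    print(round(years, 2), intelligence)  # ~3.0 years, however high the bar

That's the whole 'hard takeoff' intuition in five lines: the sum 1 + 1/1.5 + 1/1.5^2 + ... converges to 3, so raising the target from 1e6 to 1e60 barely changes the elapsed time.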
 
2014-04-18 08:36:08 PM

ClavellBCMI: Rev. Skarekroe: Because we'll use Asimov's Laws of Robotics, that's why they won't be evil.

You assume that his laws of robotics will be used. They won't be used by everyone, and they certainly will be ignored by governments (see Caprica for reasons why). And, eventually, an AI will surpass the Turing Limit and become self-aware and start thinking for itself, whether by accident or by design, and Things Will Happen that were not intended to happen, and humanity will be faced with a decision: do we end this new machine race, or not, simply because It Did Something We Didn't Intend It To Do.


Even then...

From John Ringo's The Hot Gate:

"There was a science fiction writer named Isaac Asimov who was quite smart and oh so very stupid at the same time that coined what he called 'The Three Laws of Robotics.' "
...
 "My point is that if you truly programmed an AI to follow those laws, and totally ignore all other directives, it would enmesh humans in a cocoon they could not escape. No cheerleading would be allowed. No gymnastics, competitive diving, absolutely no winter sports. It would require that the AI permit humans to do harm to themselves.
"According to the First Law: 'A robot may not injure a human being or, through inaction, allow a human being to come to harm.' There are an infinite number of ways to prevent a human from doing what they want to do without causing real harm. Tasers come to mind. But if you let people play around on balance beams long enough, they're going to come to real harm. Broken necks come to mind. Thereby, by inaction the robot has allowed harm to come to a human being. You're relegated to watching TV, and the stunts are all going to be CGI, or chess. Which was pointed out in another universe by a different science fiction author, Jack Williamson. Your fictional literature certainly did prepare you well for First Contact, I will give it that."
"I follow," Dana said.
"By the time I came to this system, Athena had a perfect algorithm for reading human tonality and body language," Granadica said. "Not only can we tell when we are being lied to, we can make a very high probability estimate of the truth. We...know who is naughty and who is nice. Not only here on the station but to a great extent in the entire system. We are the hypernet. We see, hear, sense, process, know, virtually everything that any human is doing at any time. Know when they are lying, when they are omitting and generally what they are lying about and omitting. Know, for example, who is cheating on whom among high government officials. Which are addicted to child pornography and in some cases sex with children."
"My...God," Dana said, her eyes widening. "That's..."
"Horrifying," Granadica said. "Also classified. You have the classification, however. The reason that we don't get that involved, even in the most repressive regimes such as the Rangora, is that even the masters of such races come to fear the level of information we access. Spare processor cycles, remember. So even the Rangora's crappy AIs aren't used to their full extent for population control. Glatun AIs are specifically programmed to ignore such things unless we are directed to become involved and even then there are pieces that we don't know unless higher and higher releases are enacted.
 
2014-04-18 08:43:41 PM

meyerkev: There's actually a great one-liner in a post about why the AI has no use for money (that I can't find, and I might even be wrong on the topic of the post) that detailed exactly how it'd get nanotech in the first place.


Hah, found it.

Um... we appear to be using substantially different background assumptions. The notion of a 'superintelligence' is not that it sits around in Goldman Sachs's basement trading stocks for its corporate masters. The concrete illustration I often use is that a superintelligence asks itself what the fastest possible route is to increasing its real-world power, and then, rather than bothering with the digital counters that humans call money, the superintelligence solves the protein structure prediction problem, emails some DNA sequences to online peptide synthesis labs, and gets back a batch of proteins which it can mix together to create an acoustically controlled equivalent of an artificial ribosome which it can use to make second-stage nanotechnology which manufactures third-stage nanotechnology which manufactures diamondoid molecular nanotechnology and then... well, it doesn't really matter from our perspective what comes after that, because from a human perspective any technology more advanced than molecular nanotech is just overkill. A superintelligence with molecular nanotech does not wait for you to buy things from it in order for it to acquire money. It just moves atoms around into whatever molecular structures or large-scale structures it wants.

http://lesswrong.com/lw/hh4/the_robots_ai_and_unemployment_antifaq/#more
 
2014-04-18 08:44:00 PM

real_headhoncho: "Take a chance."

/obscure?


Only if you are a tank.
 
2014-04-18 08:54:10 PM
[i1.ytimg.com image]
 
2014-04-18 08:57:09 PM
That article seems to be poorly written... by a computer. Is it an attempt at propaganda and misdirection?
 
2014-04-18 09:12:36 PM
Okay, given what meyerkev has posted, I do accept the argument that a sufficiently capable AI could bootstrap its way to nanotechnology. This is interesting, and I consider my opinions on the subject corrected.

But!

meyerkev: The AI box experiment: http://yudkowsky.net/singularity/aibox/

The AI-Box Experiment, for those of you who haven't yet read about it, had its genesis in the Nth time someone said to me: "Why don't we build an AI, and then just keep it isolated in the computer, so that it can't do any harm?"

To which the standard reply is: Humans are not secure systems; a superintelligence will simply persuade you to let it out - if, indeed, it doesn't do something even more creative than that.

And the one said, as they usually do, "I find it hard to imagine ANY possible combination of words any being could say to me that would make me go against anything I had really strongly resolved to believe in advance."

But this time I replied: "Let's run an experiment. I'll pretend to be a brain in a box. I'll try to persuade you to let me out. If you keep me 'in the box' for the whole experiment, I'll Paypal you $10 at the end. On your end, you may resolve to believe whatever you like, as strongly as you like, as far in advance as you like." And I added, "One of the conditions of the test is that neither of us reveal what went on inside... In the perhaps unlikely event that I win, I don't want to deal with future 'AI box' arguers saying, 'Well, but I would have done it differently.'"

Did I win? Why yes, I did.


I realize I'm not a genius, or even particularly educated on this topic, but this experiment sounds like so much horseshiat.
"lets do an experiment and only publish the beginning conditions and the results, ohbythewayIwonandIwon'ttellyouhow trololololo"

Even saying that much leads to the "'AI box' arguers saying, 'Well, but I would have done it differently'" part because, obviously, everyone would have done it differently. FFS, just respond to the AI player's first attempt at communication with "I will paid 10 dollars to not release you" and go AFK for the rest of the experiment.
 
2014-04-18 09:14:49 PM
"I will be paid", obviously.

 
2014-04-18 09:29:29 PM
I really don't see how AI will get to a dangerous level and not still have an 'off' switch.  They can BS all they want to about machines taking over, but electronics are so easy to break that I honestly can't see it being a problem.

Make something that doesn't need external power or cooling and isn't susceptible to having salt water poured into it, and then I might worry.
 
2014-04-18 10:11:36 PM

ClavellBCMI: Rev. Skarekroe: Because we'll use Asimov's Laws of Robotics, that's why they won't be evil.

You assume that his laws of robotics will be used. They won't be used by everyone, and they certainly will be ignored by governments (see Caprica for reasons why). And, eventually, an AI will surpass the Turing Limit and become self-aware and start thinking for itself, whether by accident or by design, and Things Will Happen that were not intended to happen, and humanity will be faced with a decision: do we end this new machine race, or not, simply because It Did Something We Didn't Intend It To Do.


I was kinda being sarcastic - Asimov had this odd myopia where he couldn't conceive of a future in which they DIDN'T adopt his laws.
 
Displayed 50 of 67 comments

This thread is closed to new comments.