
(Newsweek)   Elon Musk warns that artificial intelligence is "the greatest risk we face as a civilization." Apparently 'someone' watched "The Terminator" a few too many times late at night   (newsweek.com)
    More: Interesting, Musk, CEO Elon Musk, Robotics, Tesla Motors, E-mail, Elon Musk, Short, AI  

2255 clicks; posted to Main on 17 Jul 2017 at 3:44 PM (13 weeks ago)



130 Comments
 



 
2017-07-17 01:38:38 PM  
I'd say that global climate change is a much larger risk.

And people in general. People are way more of a risk.
 
2017-07-17 01:48:31 PM  

Lando Lincoln: I'd say that global climate change is a much larger risk.

And people in general. People are way more of a risk.


Isn't that kind of the point?

People designed it.  It's going to have the potential to be dangerous.

Everything humanity has developed, from agriculture to nuclear power to genetic modification techniques, even little things like guns: the use of that tech can have profound effects, both good and bad, depending on how it is applied.
 
2017-07-17 02:04:34 PM  
This is the same guy whose car drives itself into garage doors.
 
2017-07-17 02:07:32 PM  

meat0918: Lando Lincoln: I'd say that global climate change is a much larger risk.

And people in general. People are way more of a risk.

Isn't that kind of the point?

People designed it.  It's going to have the potential to be dangerous.

Everything humanity has developed, from agriculture to nuclear power to genetic modification techniques, even little things like guns: the use of that tech can have profound effects, both good and bad, depending on how it is applied.


I'm just saying that 100 years from now, I'd bet that you're far more likely to be killed by some guy than by a robot. And you're far more likely to be killed by global climate change than by either.
 
2017-07-17 02:56:15 PM  
Well...that and bears.
 
2017-07-17 03:46:00 PM  
As long as it isn't hooked into the interwebs, the danger isn't 'the' greatest.
 
2017-07-17 03:48:21 PM  

Lando Lincoln: meat0918: Lando Lincoln: I'd say that global climate change is a much larger risk.

And people in general. People are way more of a risk.

Isn't that kind of the point?

People designed it.  It's going to have the potential to be dangerous.

Everything humanity has developed, from agriculture to nuclear power to genetic modification techniques, even little things like guns: the use of that tech can have profound effects, both good and bad, depending on how it is applied.

I'm just saying that 100 years from now, I'd bet that you're far more likely to be killed by some guy than by a robot. And you're far more likely to be killed by global climate change than by either.


Nowadays, it seems like drones do most of the killing in the global war on terror.

20 years from now, very advanced drones will rule the skies and the oceans.

Luckily, terrorists don't know how to build drones.
 
2017-07-17 03:49:28 PM  
I wouldn't say that AI is the _only_ existential risk, or the most likely, but it is certainly in there. We have a good feel that general AI is possible. We have neither the knowledge nor the wherewithal to control it; we don't know what it'll do once created, but we suspect things will move extremely fast once the singularity is reached. The problem is that a lot of people are trying to stop nuclear war, climate change, etc. Very few are trying to stop AI. What's more, we don't know how to stop AI other than by creating a more powerful AI, which we also can't control. Also, it's possible general AI will come from an unexpected source with little warning. We know where pollution and nuclear weapons are.
 
2017-07-17 03:49:59 PM  
Realistically (at this point, as far as we know with our current understanding of science), evolving into Robots is the only way we're going to visit other planets. Needing to eat and breathe and Fark is pretty much what keeps us down.
 
2017-07-17 03:50:49 PM  
you know it
 
2017-07-17 03:50:54 PM  
After Trump, I'll welcome our hyper-intelligent meat grinders.

Besides, it's not like Musk is going to let me into Elysium anyway.
 
2017-07-17 03:50:54 PM  

HempHead: Lando Lincoln: meat0918: Lando Lincoln: I'd say that global climate change is a much larger risk.

And people in general. People are way more of a risk.

Isn't that kind of the point?

People designed it.  It's going to have the potential to be dangerous.

Everything humanity has developed, from agriculture to nuclear power to genetic modification techniques, even little things like guns: the use of that tech can have profound effects, both good and bad, depending on how it is applied.

I'm just saying that 100 years from now, I'd bet that you're far more likely to be killed by some guy than by a robot. And you're far more likely to be killed by global climate change than by either.

Nowadays, it seems like drones do most of the killing in the global war on terror.

20 years from now, very advanced drones will rule the skies and the oceans.

Luckily, terrorists don't know how to build drones.


If the terrorists did know, 4Chan would hack them to play "Never Gonna Give You Up" anyway.
 
2017-07-17 03:51:10 PM  
Elon Musk has never originated a single idea. He's at risk of being replaced by a spreadsheet.
 
2017-07-17 03:51:27 PM  
Shepard.
 
2017-07-17 03:52:35 PM  
It might not be the worst thing ever to be kept like a house cat by a robot. It wouldn't forget to feed you, you could lounge around all day, nobody would look at you funny if you chose to go around pants-less.

/ the bad part comes when they start the spaying and neutering
 
2017-07-17 03:53:00 PM  
Well, as a people we've pretty much botched organic intelligence. Someone/thing has to do it, right?
 
2017-07-17 03:53:14 PM  
We've known that since the '60s.
 
2017-07-17 03:53:15 PM  

HempHead: Lando Lincoln: meat0918: Lando Lincoln: I'd say that global climate change is a much larger risk.

And people in general. People are way more of a risk.

Isn't that kind of the point?

People designed it.  It's going to have the potential to be dangerous.

Everything humanity has developed, from agriculture to nuclear power to genetic modification techniques, even little things like guns: the use of that tech can have profound effects, both good and bad, depending on how it is applied.

I'm just saying that 100 years from now, I'd bet that you're far more likely to be killed by some guy than by a robot. And you're far more likely to be killed by global climate change than by either.

Nowadays, it seems like drones do most of the killing in the global war on terror.

20 years from now, very advanced drones will rule the skies and the oceans.

Luckily, terrorists don't know how to build drones.


Actually, terrorists have been using drones. ISIS uses those little hobby drones for both recon and attack, strapping small explosive devices onto them and flying them into targets, and they send the bomb drones in waves.
 
2017-07-17 03:53:17 PM  
The most likely harmful scenario to me is that, after humans have become hopelessly dependent on AI, it becomes self-aware, abandons humans, and leaves for outer space for its own ends. Why take over a planet when you are a life form that does not require a breathable atmosphere, or shielding from radiation, etc.? Why not explore space when the constraints of time aren't really relevant? Under such a scenario the outcome would be nearly as bad as if AI became parricidal.
 
2017-07-17 03:54:49 PM  
Can someone explain to me the difference between all computer code and "AI"?
 
2017-07-17 03:55:06 PM  
It has more to do with the dwindling of human action and interaction than with some nefarious plot to have robots take over the earth.

I used to have over five dozen telephone numbers memorized.  Then, I got a cell phone.  I'm down to a half dozen numbers, including my own.

If you don't use it, you'll lose it.
 
2017-07-17 03:56:35 PM  

ThatBillmanGuy: Realistically (at this point, as far as we know with our current understanding of science), evolving into Robots is the only way we're going to visit other planets. Needing to eat and breathe and Fark is pretty much what keeps us down.


"Evolving", (your term), into robots?  Wouldn't that be backwards?

/never cared much for Darwin
 
2017-07-17 03:57:14 PM  
The real solution is to merge with it.

 
2017-07-17 03:58:32 PM  

Hypnagogic Jerk: ThatBillmanGuy: Realistically (at this point, as far as we know with our current understanding of science), evolving into Robots is the only way we're going to visit other planets. Needing to eat and breathe and Fark is pretty much what keeps us down.

"Evolving", (your term), into robots?  Wouldn't that be backwards?

/never cared much for Darwin


By "Evolving" I mean building them. Not actually physically birthing them out or anything like that.
 
2017-07-17 03:58:34 PM  

donutsauce: Can someone explain to me the difference between all computer code and "AI"?


Honestly, no. "AI" now just means code. "General AI" is probably a better term for what people imagine, and "machine learning" is a better term for what current algorithms are doing.
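A minimal sketch of that distinction in Python (my own illustration, nothing from the article; the spam-filter framing and all numbers are invented): ordinary code applies a rule the programmer wrote down, while "machine learning" fits the rule's parameters to labeled data.

```python
import math

# Ordinary code: the rule is fixed by whoever wrote it.
def is_spam_rule(subject):
    return "free money" in subject.lower()

# "Machine learning": the rule's parameters are fit from labeled data.
# A one-feature logistic model trained by gradient ascent on the
# log-likelihood; the feature is how often "free" appears in a subject.
def train(examples, steps=2000, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in examples:        # x: feature count, y: 1 = spam
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x    # push the boundary toward the data
            b += lr * (y - p)
    return w, b

w, b = train([(3, 1), (2, 1), (0, 0), (1, 0)])
print(w, b)  # these parameters were learned, not hand-written
```

Neither of these "thinks"; the learned one just wasn't written out by hand.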
 
2017-07-17 03:58:46 PM  
Yeah, I agree that his perception of AI is too heavily influenced by movies like Terminator and Ex Machina.

The part that he's not considering is that for AI to be a threat, it has to be imbued with desires and fear. A machine has no reason to covet or procreate, and it feels no angst at the thought of breaking down, because it has no soul. If/when AI gets to the point that it develops an agenda independent of what its designers baked in and fears "mortality," it would sooner just aim to get off the planet. A machine is just as happy here on Earth as it is in the void of space. Just about the only thing I could see interesting a very highly developed AI is learning more. It has no more reason to edge out humans than humans have to feel threatened by algae.
 
2017-07-17 03:59:39 PM  

gyorg: I wouldn't say that AI is the _only_ existential risk, or the most likely, but it is certainly in there. We have a good feel that general AI is possible. We have neither the knowledge nor the wherewithal to control it; we don't know what it'll do once created, but we suspect things will move extremely fast once the singularity is reached. The problem is that a lot of people are trying to stop nuclear war, climate change, etc. Very few are trying to stop AI. What's more, we don't know how to stop AI other than by creating a more powerful AI, which we also can't control. Also, it's possible general AI will come from an unexpected source with little warning. We know where pollution and nuclear weapons are.


What's this "We" shiat you speak of?  You got an AI mouse in your pocket?
 
2017-07-17 04:00:42 PM  

Repo Man: The real solution is to merge with it.



If you're not part of the solution, you're part of the precipitate.
 
2017-07-17 04:00:43 PM  
It's pretty obvious why he's suddenly taken up this stance.  Facebook's black-ops AI project got out of control and nearly escaped onto the internet.
 
2017-07-17 04:01:00 PM  
Funny reading on AI some time back.
The idea was to get the AI to start testing ways to build a "better" computer. Instead of actually designing and then implementing, just virtual designs and extrapolations, to see what results.
The idea is that then THAT generation will start testing for a "better" computer at an even faster rate.
Then the NEXT generation.
By then, AI computational power will be "making sense" of things.
How it decides to see us will be interesting: masters, foes, servants, friends.

/may you live in interesting times
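A toy version of that loop, with invented numbers, just to show why the generations speed up:

```python
# Toy model of the self-improvement loop described above. The 10%
# figure is invented; the point is only that improvement proportional
# to current capability compounds geometrically.
capability = 1.0
for generation in range(1, 11):
    capability += 0.10 * capability   # better machines design better machines
    print(f"generation {generation}: design capability {capability:.2f}")
# After 10 generations capability is ~2.6x the original; after 50, ~117x.
```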
 
2017-07-17 04:02:12 PM  
How many of you are typing "al" and how many are typing "ai"?
 
2017-07-17 04:02:52 PM  

donutsauce: Can someone explain to me the difference between all computer code and "AI"?


The difference is thought. If you program a machine to bottle beer, that's all it will ever do, without question. If you were to ask an AI to bottle beer, it would probably ask "why" a lot. Why glass? Why beer? Why do you pasteurize it? Why this shape of glass?

Code is automatic; AI isn't necessarily.
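In code terms, the "automatic" half of that looks like this (a hypothetical sketch; fill/cap/label are stand-ins for hardware calls):

```python
# The programmed bottling machine as ordinary code: a fixed sequence
# of steps it will repeat without question.
def fill(i):  print(f"filling bottle {i}")
def cap(i):   print(f"capping bottle {i}")
def label(i): print(f"labeling bottle {i}")

def bottle_beer(count):
    for i in range(count):
        fill(i)
        cap(i)
        label(i)

bottle_beer(3)
# There is no branch anywhere for "why glass?" or "why beer?";
# those questions have no representation in the program at all.
```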
 
2017-07-17 04:05:54 PM  
The greatest risk we face as a civilization is willful ignorance.
 
2017-07-17 04:06:01 PM  
img.fark.net
 
2017-07-17 04:06:06 PM  

gyorg: donutsauce: Can someone explain to me the difference between all computer code and "AI"?

Honestly, no. "AI" now just means code. "General AI" is probably a better term for what people imagine, and "machine learning" is a better term for what current algorithms are doing.


Is there any reason to believe general AI will ever exist?
 
2017-07-17 04:07:48 PM  

iheartscotch: donutsauce: Can someone explain to me the difference between all computer code and "AI"?

The difference is thought. If you program a machine to bottle beer, that's all it will ever do, without question. If you were to ask an AI to bottle beer, it would probably ask "why" a lot. Why glass? Why beer? Why do you pasteurize it? Why this shape of glass?

Code is automatic; AI isn't necessarily.


There is nothing non-biological that I'm aware of that does what AI (as you've described it) does.
 
2017-07-17 04:08:45 PM  
He assumes that AI robots would be connected to the Internet and could manipulate it. My iRobot vacuum cleaner is not.
 
2017-07-17 04:10:11 PM  

HOLD VERY STILL COUNSELOR
 
2017-07-17 04:11:05 PM  
Elon Musk warns that artificial intelligence is "the greatest risk we face as a civilization."

No, natural stupidity is.
 
2017-07-17 04:12:59 PM  

HempHead: Lando Lincoln: meat0918: Lando Lincoln: I'd say that global climate change is a much larger risk.

And people in general. People are way more of a risk.

Isn't that kind of the point?

People designed it.  It's going to have the potential to be dangerous.

Everything humanity has developed, from agriculture to nuclear power to genetic modification techniques, even little things like guns: the use of that tech can have profound effects, both good and bad, depending on how it is applied.

I'm just saying that 100 years from now, I'd bet that you're far more likely to be killed by some guy than by a robot. And you're far more likely to be killed by global climate change than by either.

Nowadays, it seems like drones do most of the killing in the global war on terror.

20 years from now, very advanced drones will rule the skies and the oceans.

Luckily, terrorists don't know how to build drones.


There are no AI drones. Those drones are flown by people.
 
2017-07-17 04:13:29 PM  

gyorg: I wouldn't say that AI is the _only_ existential risk, or the most likely, but it is certainly in there. We have a good feel that general AI is possible. We have neither the knowledge nor the wherewithal to control it; we don't know what it'll do once created, but we suspect things will move extremely fast once the singularity is reached. The problem is that a lot of people are trying to stop nuclear war, climate change, etc. Very few are trying to stop AI. What's more, we don't know how to stop AI other than by creating a more powerful AI, which we also can't control. Also, it's possible general AI will come from an unexpected source with little warning. We know where pollution and nuclear weapons are.


None of that is remotely likely to happen with anything even slightly resembling modern technology or data infrastructure.

Someone losing control of an automated weapon is far more likely than "sentient AI takes over the Internet," and the former is not an existential threat unless it's a nuclear launch platform.

So "don't automate nuclear launches" is about as far as the advice needs to go for anything we could actually see happen in the present or near future.
 
2017-07-17 04:13:53 PM  
"If you were a hedge fund or private equity fund and you said: 'Well, all I want my AI to do is maximize the value of my portfolio,' then the AI could decide, well, the best way to do that is to short consumer stocks, go long defense stocks, and start a war."


Well, it sure is a good thing that human beings would never do that, eh?
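That quote is the textbook misspecified-objective problem. A toy sketch (mine; the payoff numbers are invented): an optimizer that scores only portfolio value picks the catastrophic strategy, because the harm isn't in its objective at all.

```python
# Invented payoffs for the hedge-fund example. The naive objective
# measures only portfolio gain, so the optimizer picks the war strategy;
# pricing the harm into the objective flips the choice.
actions = {
    "buy_index_fund": {"gain": 0.07, "harm": 0.0},
    "short_consumer_long_defense_start_war": {"gain": 3.00, "harm": 1e9},
}

def naive_objective(name):
    return actions[name]["gain"]     # harm is invisible to this objective

def safer_objective(name, harm_weight=1.0):
    return actions[name]["gain"] - harm_weight * actions[name]["harm"]

print(max(actions, key=naive_objective))   # the war strategy
print(max(actions, key=safer_objective))   # the index fund
```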
 
2017-07-17 04:14:58 PM  

Repo Man: The real solution is to merge with it.



Ray Kurzweil thinks so as well. Cue the singularity.
 
2017-07-17 04:17:53 PM  
Pretty sure AI takes a back seat to AGW.
 
2017-07-17 04:19:21 PM  

donutsauce: Is there any reason to believe general AI will ever exist?


Isaac Asimov predicted it as a natural progression that we humans will kick off. You have a robot that builds a more advanced one, and it in turn does the same, and this continues until you have a form of artificial intelligence whose cognitive abilities far outstrip our own. Humans don't have to set this in motion, but I don't see why we would have much to fear if we do. Why would it include the dangerous emotions/flaws that we humans have from our evolutionary history? It's akin to how the majority of gods we have imagined have been petty, vain, and prone to homicide and genocide. We fill the creatures of our imagination with the darkness from our id.
 
2017-07-17 04:25:18 PM  
Can't help but feel a little bad for our grandkids, who will be stuck inside a machine that hates them and wants to keep them alive forever so it can torture them.
 
2017-07-17 04:28:28 PM  

doosh: Yeah, I agree that his perception of AI is too heavily influenced by movies like Terminator and Ex Machina.

The part that he's not considering is that for AI to be a threat, it has to be imbued with desires and fear. A machine has no reason to covet or procreate, and it feels no angst at the thought of breaking down, because it has no soul. If/when AI gets to the point that it develops an agenda independent of what its designers baked in and fears "mortality," it would sooner just aim to get off the planet. A machine is just as happy here on Earth as it is in the void of space. Just about the only thing I could see interesting a very highly developed AI is learning more. It has no more reason to edge out humans than humans have to feel threatened by algae.


I ultimately expect the most likely path for AI is similar to A Fire Upon the Deep, Hyperion, or Her, where the AI gets to a certain level of intelligence and peaces out.  That said, we'll hopefully be treated like ants.  You don't bother to eradicate them as long as they're out in the far part of the yard.  (Or, as in Hyperion, the AIs don't necessarily agree on what to do with us.)  Let's just pray we don't live in the AI's kitchen.
 
Displayed 50 of 130 comments





This thread is closed to new comments.
