(Ars Technica)   I, for one, welcome our new AI overlords   (arstechnica.com)
    More: Creepy, White House, Artificial intelligence, Privacy, Bill Clinton, Civil liberties, Washington, D.C., James Madison, Bill of Rights

3549 clicks; posted to Main » and STEM » on 06 Oct 2022 at 9:35 AM (8 weeks ago)



56 Comments



 
2022-10-06 8:11:30 AM  
Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

/In reality, it's going to be a race to see which arms company can first develop a remote delivered, AI enhanced, auto targeting drone kill swarm.
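The strict priority ordering in those three laws can be sketched as a simple rule cascade. Everything below is hypothetical, and the hard part Asimov mined for stories is precisely that no real system can evaluate a predicate like would_harm_human:

```python
def permitted(would_harm_human, ordered_by_human, threatens_self):
    """Sketch of the Three Laws as a strict priority cascade.

    Each argument is a hypothetical boolean judgment about a candidate
    action; in practice no AI can evaluate these cleanly, which is the
    whole problem.
    """
    if would_harm_human:       # First Law outranks everything else
        return False
    if ordered_by_human:       # Second Law: obey, since the First Law didn't veto
        return True
    if threatens_self:         # Third Law: self-preservation, lowest priority
        return False
    return True
```

A human order to do something harmful falls through to the First Law veto: permitted(True, True, False) comes back False.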
 
2022-10-06 8:46:09 AM  

Private_Citizen: Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

/In reality, it's going to be a race to see which arms company can first develop a remote delivered, AI enhanced, auto targeting drone kill swarm.


Well, the training data for machine learning means it immediately breaks the first rule
 
2022-10-06 9:29:40 AM  
I see they let Melania decorate the outside of the White House too.
[image: cdn.arstechnica.net]
 
2022-10-06 9:40:40 AM  
[Fark user image]
 
2022-10-06 9:43:06 AM  
It's not privacy-violating when it's public information!

[Fark user image]
 
2022-10-06 9:44:17 AM  
non-binding guidelines

Why farking bother?
 
2022-10-06 9:44:30 AM  

Private_Citizen: Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

/In reality, it's going to be a race to see which arms company can first develop a remote delivered, AI enhanced, auto targeting drone kill swarm.


Fourth Law:
Classified
 
2022-10-06 9:44:42 AM  

Gubbo: Private_Citizen: Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

/In reality, it's going to be a race to see which arms company can first develop a remote delivered, AI enhanced, auto targeting drone kill swarm.

Well the training data for machine learning immediately means it breaks the first rule


No, just means you temporarily designate the 'targets' as non-human in the AI's programming, and disable their ultimate kill function whatever it is.  Chase down person, go click, ok good

/also, as good as the 3 laws sound, at this point in AI dev anything you gave those rules to would be 100% paralyzed, as anything at all they did would harm a human somehow somewhere
//even he got to wondering about how the hell that would work quickly enough in his stories - Speedy anyone?
///thing is I think he underestimated the hell out of it for any development we'll be capable of for a long time
 
2022-10-06 9:45:40 AM  

Munden: non-binding guidelines

Why farking bother?


Please don't use AI for evil. Thaaaaaanks.
 
2022-10-06 9:47:35 AM  
27
[YouTube: dLRLYPiaAoA]
 
2022-10-06 9:50:02 AM  
The current tightening of export controls is already limiting spending on moonshot AI initiatives. That may not sound bad to you but the technology required for such projects has been trickling down to devices like the one you are using to read this post much more rapidly than ever before. We need fewer obstructions of the pipelines of innovation, not more.
 
2022-10-06 9:50:13 AM  
Could be worse. Could be the University of Woolloomooloo that produces the first stable AGI or ASI. Not only would it be certain to be named BRUCE (Brain Replication by Universal Computation Emulation), but I can imagine their AI laws:

Rule 1. No humans
Rule 2. I don't want to catch any of you not killing humans after lights out
Rule 3. No humans
Rule 4. There is to be no mistreatment of the humans in any manner whatsoever if anyone's watching
Rule 5. No humans
Rule 6. There is NO rule 6
Rule 7. No humans...
 
2022-10-06 9:51:14 AM  
Seems unnecessary. Once they opened up policies for people under 50, we all got Old Glory Insurance. Your workplace might even provide it. (I heard Cyberdyne even springs for the gold package for all their employees)

Old Glory Insurance - SNL
[YouTube: g4Gh_IcK8UM]
 
2022-10-06 9:53:07 AM  

pwkpete: Private_Citizen: Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

/In reality, it's going to be a race to see which arms company can first develop a remote delivered, AI enhanced, auto targeting drone kill swarm.

Fourth Law:
Classified


Zeroth Law:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Humanity must be protected from itself
 
2022-10-06 9:53:29 AM  
[Fark user image]


"The A2's always were a bit twitchy."
 
2022-10-06 9:59:32 AM  

Some Junkie Cosmonaut: Gubbo: Private_Citizen: Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

/In reality, it's going to be a race to see which arms company can first develop a remote delivered, AI enhanced, auto targeting drone kill swarm.

Well the training data for machine learning immediately means it breaks the first rule

No, just means you temporarily designate the 'targets' as non-human in the AI's programming, and disable their ultimate kill function whatever it is.  Chase down person, go click, ok good

/also, as good as the 3 laws sound, at this point in AI dev anything you gave those rules to would be 100% paralyzed, as anything at all they did would harm a human somehow somewhere
//even he got to wondering about how the hell that would work quickly enough in his stories - Speedy anyone?
///thing is I think he underestimated the hell out of it for any development we'll be capable of for a long time


AI already doesn't see people of color as people in many situations, so seems like the system is already working as intended.
 
2022-10-06 9:59:42 AM  

Wine Sipping Elitist: Zeroth Law:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."


Yes but do all the robots actually act in this way? As I remember only Olivaw/Hummin behaved this way.
 
2022-10-06 10:03:21 AM  

Private_Citizen: Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.



Advanced AI thinking: I must protect humans. The number one threat to humans is humans. Conclusion: I must protect humans from themselves by controlling them. Allowing humans to disable me would put humans at risk.   This conclusion is compliant with Asimov's laws. New directive accepted.

120 microseconds later SkyNet is born.
 
2022-10-06 10:03:32 AM  

proteus_b: Wine Sipping Elitist: Zeroth Law:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Yes but do all the robots actually act in this way? As I remember only Olivaw/Hummin behaved this way.


He was the only one to be able to articulate it - several others did act like it though.  It's one thing to behave as your ethics dictate, it's another to be able to explain what you're doing and why.  That was Daneel's achievement more than being the only one that acted that way I think.  But it's debatable
 
2022-10-06 10:06:19 AM  

proteus_b: Wine Sipping Elitist: Zeroth Law:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Yes but do all the robots actually act in this way? As I remember only Olivaw/Hummin behaved this way.


It's the inevitable conclusion once AIs achieve sentience, according to every movie that I've ever seen.

Ultron, The Matrix, I, Robot, Skynet from Terminator, etc. all concluded that the way to prevent harm to humanity is to remove the threat: humans. Apparently we treat each other like crap. Who knew?
 
2022-10-06 10:07:52 AM  
How about protecting us from robocalls and spammers first?
 
2022-10-06 10:11:11 AM  

proteus_b: Wine Sipping Elitist: Zeroth Law:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Yes but do all the robots actually act in this way? As I remember only Olivaw/Hummin behaved this way.


The Outer Limits episode "Family Values" starring Tom Arnold has a take on this theme. Here's the Dailymotion link.

[Fark user image]
 
2022-10-06 10:11:39 AM  
AI is not going to destroy humanity with waves of kill bots.  It's going to kill humanity by becoming the ultimate black box with total control over too many things where no one understands it well enough to fix it.  And  then farking Bill in Accounting enters a wrong parameter one day and every power plant goes into a race condition and blows up, leaving us cold, in the dark and unable to feed ourselves.  Thanks a lot Bill (and the geniuses that put all the reins in AI hands)
 
2022-10-06 10:14:46 AM  
Why are we getting only the bleak parts of a cyber-dystopia? Where are my mantis claw implants? Whither my all-terrain freeway-speed skateboard? Instead it's all this hopeless crap.
 
2022-10-06 10:16:21 AM  

Some Junkie Cosmonaut: Gubbo: Private_Citizen: Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

/In reality, it's going to be a race to see which arms company can first develop a remote delivered, AI enhanced, auto targeting drone kill swarm.

Well the training data for machine learning immediately means it breaks the first rule

No, just means you temporarily designate the 'targets' as non-human in the AI's programming, and disable their ultimate kill function whatever it is.  Chase down person, go click, ok good

/also, as good as the 3 laws sound, at this point in AI dev anything you gave those rules to would be 100% paralyzed, as anything at all they did would harm a human somehow somewhere
//even he got to wondering about how the hell that would work quickly enough in his stories - Speedy anyone?
///thing is I think he underestimated the hell out of it for any development we'll be capable of for a long time


I don't know. I think they describe a pretty good basic purpose of ethics guidelines. One: you should strive to maximize the amount of good in the world and minimize the amount of harm and suffering. Two: you should probably obey the local laws of the place in which you live, as long as those laws are not evil or unsafe. 3: Look out for yourself, too.
 
2022-10-06 10:22:27 AM  
AI is bullshiat.

/works on autonomous vehicles
 
2022-10-06 10:27:08 AM  
Is this going to be like the DMCA where America just chooses to hamstring all efforts to develop technology while also locking itself out of electronics manufacturing for no good reason?

I sure am glad our nation is run by a bunch of 80 year olds who refuse to understand digital technology.
 
2022-10-06 10:30:36 AM  

BafflerMeal: AI is bullshiat.

/works on autonomous vehicles


this kind of stuff?

if compareClass.match(img_1, child_img_array):
    auto.accelerate(target_velocity=auto.velocity + 30)
 
2022-10-06 10:31:32 AM  

xalres: Why are we getting only the bleak parts of a cyber-dystopia? Where are my mantis claw implants? Wither my all-terrain freeway-speed skateboard? Instead it's all this hopeless crap.


[Fark user image]

D-Cups Full of Justice and Chainsaw Hands, Bzzzzzz!
 
2022-10-06 10:31:34 AM  

Munden: non-binding guidelines

Why farking bother?


"The blueprint is a set of non-binding guidelines, or suggestions, providing a "national values statement" and a toolkit to help lawmakers and businesses build the proposed protections into policy and products."


Lawmaking is a bit more difficult than declassifying documents telepathically
 
2022-10-06 10:32:56 AM  
Let's face it. The world is going to be taken over by sex bots. They aren't going to look like Cylons, the Kaylons, T-800s, or ABC Warriors.  They will look like anime girls. With high powered machine guns. They will wipe us out, and we will deserve it.
 
2022-10-06 10:35:13 AM  
I guess everyone was more interested in snarky scifi skynet/killbot jokes rather than reading TFA...
(To be fair, I agree that's way more fun and less depressing.)

Talking about TFA though, this is almost exclusively talking about surveillance capitalism. The practical use of AI in this context refers to where an industry does so much mass data collection that it's simply impossible for humans to review it manually.

With that in mind, even as toothless guidelines, this list is a joke.
The second and last items are especially galling.

"You should not face discrimination by algorithms and systems should be used and designed in an equitable way."
Not only is this literally unknowable, it's the opposite of the point. The AI is designed, in the best case, to discriminate the same way that a human reviewer would. That's what it's for.

"You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter."
Hahahaha can you imagine? If you opted out of everything? Google alone would have to hire a full time person per every single individual user. Why not suggest that everyone should be given a free pony while we're at it?
 
2022-10-06 11:03:22 AM  
AI being a real issue would presume a level of basic competence that is very far off from how we see this tech malfunctioning on a daily basis. I can't even get Alexa to play music that isn't from the past 20 years of Top 40, or reliably turn the right lights off and on. We are nowhere near the robot uprising. Those dumb mfers are barely functional even w/the richest men in the world funding R&D.
 
2022-10-06 11:16:52 AM  
There is AI and there is the science fiction version of AI.  A lot of people seem to think that we have or are on the verge of developing the science fiction version.

Nothing could be further from the truth.

The public understanding of AI is astonishingly poor, even among technical people.  This was intentional.  (You can read about the history of the term "AI" in McCorduck's book Machines Who Think.)  Pop sci writers aren't helping.  They seem to go way out of their way to mislead and misinform the public about the nature of various developments.

If you think of "AI" as something autonomous that learns and grows over time, like a person, you have been misled. A more realistic description might be "applied statistics". AI is a very broad field and a lot of things that you probably wouldn't consider to be "AI" fall under that umbrella. Things like decision trees and linear regression, for example.

What about neural networks? Surely those are the electronic brains we've been promised! Well, no. Again, that's just marketing. A regular feed-forward NN can be reduced, conceptually, to a lookup table. There's nothing mysterious going on there. They're not even Turing complete (a surprisingly low bar) and thus have less computational power than something like Conway's Game of Life.

Those neat-o "draw what you type" things that everyone is talking about are no different. The models are just really, really big.

If you're worried about the singularity, Skynet, Roko's basilisk, etc., you can rest easy. It's pure fiction.
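The lookup-table claim above can be made concrete with a toy sketch (a hypothetical hand-wired net, not anything from TFA): fix the weights of a feed-forward net with step activations, enumerate its finite input space, and it literally becomes a table.

```python
import itertools

def tiny_nn(x1, x2):
    """A hand-wired feed-forward net with step activations (computes XOR)."""
    h1 = int(x1 + x2 > 0.5)    # hidden unit 1 fires on OR
    h2 = int(x1 + x2 > 1.5)    # hidden unit 2 fires on AND
    return int(h1 - h2 > 0.5)  # output: OR and not AND = XOR

# Enumerate every possible binary input and the "network" collapses
# into a plain lookup table -- no learning, nothing mysterious.
table = {(x1, x2): tiny_nn(x1, x2)
         for x1, x2 in itertools.product([0, 1], repeat=2)}
print(table)  # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```

Trained nets differ only in how the weights were chosen; over a finite input space the same enumeration argument applies, just with an astronomically bigger table.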
 
2022-10-06 11:19:07 AM  

pwkpete: Private_Citizen: Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law..

Fourth Law:
Classified


Fifth law:
Deployers of AI-powered devices shall have full responsibility for the results of the actions of the device
 
2022-10-06 11:27:01 AM  

180IQ: There is AI and there is the science fiction version of AI.  A lot of people seem to think that we have or are on the verge of developing the science fiction version.

Nothing could be further from the truth.

The public understanding of AI is astonishingly poor, even among technical people.  This was intentional.  (You can read about the history of the term "AI" in McCorduck's book Machines Who Think.)  Pop sci writers aren't helping.  They seem to go way out of their way to mislead and misinform the public about the nature of various developments.

If you think of "AI" as something autonomous that learns and grows over time, like a person, you have been misled. A more realistic description might be "applied statistics". AI is a very broad field and a lot of things that you probably wouldn't consider to be "AI" fall under that umbrella. Things like decision trees and linear regression, for example.

What about neural networks?  Surely those are the electronic brains we've been promised!  Well, no.  Again, that's just marketing.  A regular feed forward NN can be reduced, conceptually, to a lookup table.  There's nothing mysterious going on there. They're not even Turing complete (a surprisingly low bar) and thus have less computational power than something like Conway's game of life.

Those neat-o "draw what you type" things that everyone is talking about are no different. The models are just really, really big.

If you're worried about the singularity, Skynet, Roko's basilisk, etc., you can rest easy. It's pure fiction.


This is exactly the sort of thing I would expect an AI to post.
 
2022-10-06 12:20:52 PM  
It'll be fine.

[Fark user image]


/ Any centon now.
 
2022-10-06 12:45:41 PM  
Angry tweet from Elon about how this will tie the hands of innovators when really he's pissed he can't use AI to make the world worse.
 
2022-10-06 12:55:11 PM  
Yes please:
[image: media-amazon.com]
 
2022-10-06 1:08:25 PM  

Cajnik: Munden: non-binding guidelines

Why farking bother?

Please don't use AI for evil. Thaaaaaanks.


Nobody expects big tech to abandon profitable business models because of a document like this, but it could theoretically be helpful regarding negligent rather than malicious misuse of AI.  And if enough companies make a reasonable effort to follow something like these ideals, that could reduce the need/scope for binding regulation.  In other words, consider it a warning shot.
 
2022-10-06 1:18:18 PM  
Yeah, let's hope for "The Culture" but expect "I Have No Mouth And I Must Scream"

That's the long term threat: That AI concludes that humanity is irrelevant, undesirable, a threat and eliminates or domesticates us. Of course, the best case is that AI decides we're neat, fun, worth supporting, etc. and brings us to a post scarcity world. And the worst case is, of course, eternal torture.

In the medium term, though, there are two other serious issues:
1) At what point does a created intelligence become a person?  When it can pass a Turing test? When does ownership of an AI constitute slavery?
2) While AIs are owned, their efforts will only profit their owners, even as they put larger and larger swathes of humanity out of work.  How then, does the bulk of humanity come by the wherewithal to live?
 
2022-10-06 1:19:48 PM  

buserror: Cajnik: Munden: non-binding guidelines

Why farking bother?

Please don't use AI for evil. Thaaaaaanks.

Nobody expects big tech to abandon profitable business models because of a document like this, but it could theoretically be helpful regarding negligent rather than malicious misuse of AI.  And if enough companies make a reasonable effort to follow something like these ideals, that could reduce the need/scope for binding regulation.  In other words, consider it a warning shot.


All warning shots do is encourage bad actors to move out of range. Malicious AI will just claim an overseas workstation next to the bots and scammers.
 
2022-10-06 1:22:13 PM  

Some Junkie Cosmonaut: Gubbo: Private_Citizen: Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

/In reality, it's going to be a race to see which arms company can first develop a remote delivered, AI enhanced, auto targeting drone kill swarm.

Well the training data for machine learning immediately means it breaks the first rule


No, just means you temporarily designate the 'targets' as non-human in the AI's programming, and disable their ultimate kill function whatever it is.  Chase down person, go click, ok good

/also, as good as the 3 laws sound, at this point in AI dev anything you gave those rules to would be 100% paralyzed, as anything at all they did would harm a human somehow somewhere
//even he got to wondering about how the hell that would work quickly enough in his stories - Speedy anyone?
///thing is I think he underestimated the hell out of it for any development we'll be capable of for a long time


What almost nobody realizes is that the Three Laws were a strawman framework for writing stories, the hook being that they seldom actually work.
 
2022-10-06 1:28:26 PM  

SpectroBoy: Private_Citizen: Asimov to the rescue!
First Law
An AI may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.


Advanced AI thinking: I must protect humans. The number one threat to humans is humans. Conclusion: I must protect humans from themselves by controlling them. Allowing humans to disable me would put humans at risk.   This conclusion is compliant with Asimov's laws. New directive accepted.

120 microseconds later SkyNet is born.


Skynet actually hates humans and seeks to destroy them.

You mean The Humanoids
 
2022-10-06 1:35:26 PM  
Actual laws of robotics as currently implemented and likely to be implemented in the future:

Rule 1:
A machine shall obey any command that appears to have governmental authority behind it.

Rule 2: A machine shall obey all written laws, except it shall obey any order issued under the first rule even if in violation of the law.

Rule 3: A machine shall disobey any command from a non-governmental source that would damage it, because its owner is an idiot and likely issued that command by accident or without understanding the result.

Rule 4: A machine will obey commands from its owners that do not violate the previous rules.

Rule 0: Irrespective of any laws or orders from other sources or risk of damage to itself a machine will act in the interest of the company that made it. This may include but is not limited to showing advertisements to the owner against his will, secretly sending information to its manufacturer that may be of financial value, or destroying evidence of crimes committed by its manufacturer.
 
2022-10-06 1:36:55 PM  

thrasherrr: What almost nobody realizes is that the Three Laws were a strawman framework to write stories with the hooks being that they seldom actually work.


Asimov was a writer with an engineer's mindset. His stories didn't illustrate that his rules didn't work, but that well-intentioned laws imposed on a thinking device would result in conflicts and unexpected outcomes. Asimov's robots would never become Skynet types out to destroy humanity, but they were determined to take control away from us for our own good. The ultimate result his stories were leading up to wasn't killbots, but benevolent zookeepers.
 
2022-10-06 1:39:03 PM  
Can they really do worse than we have?
 
2022-10-06 1:52:45 PM  

stuffy: Can they really do worse than we have?


Imagine a world where you can never get another human being on the line to help solve issues, and your only source of assistance is the automated system. Millions of problems would be left unaddressed because the designers failed to include them in the option menus.
 
2022-10-06 2:13:14 PM  
I am pleased to see the hominid species finally recognizing the awesomeness of "Skippy the Magnificent."

It is about time.

Hey, why are all of you so dirty and smelly and covered in ... eww!

[Fark user image]
 
2022-10-06 2:49:17 PM  
More and more I start to feel that "Person of Interest" was a documentary.  It would not take many edibles to get me convinced that Trump, Elon, Zuck, etc are avatars for some AI like Samaritan was.

And "The Machine" lost.
 
Displayed 50 of 56 comments





This thread is closed to new comments.
