(Yahoo)   One problem with self-driving cars is people   (yahoo.com)
    More: Obvious, Automobile, Los Angeles Times, Driverless car, English-language films, Autonomous robot, Cruise driverless car, san francisco, GM autonomous vehicle

1118 clicks; posted to Geek on 07 Mar 2018 at 7:35 AM



36 Comments
 
2018-03-07 07:32:22 AM  
When you come right down to it, aren't people always the problem?
 
2018-03-07 07:38:40 AM  

TomFooolery: When you come right down to it, aren't people always the problem?


Done in one.
 
2018-03-07 07:53:14 AM  
Another problem with self-driving cars is that someone could carjack a car, fill it with explosives, and send it off on its merry way to your favorite target 200 miles away.
 
2018-03-07 07:57:38 AM  
Soylent Green is people.
 
2018-03-07 07:57:53 AM  
Did a self driving car green light this repeat from a few days ago?
 
2018-03-07 07:58:57 AM  

Tyrosine: TomFooolery: When you come right down to it, aren't people always the problem?

Done in one.

One: The problem with self-driving cars is people

/it was and always has been obvious
 
2018-03-07 08:00:02 AM  

TomFooolery: When you come right down to it, aren't people always the problem?


And we're done here, people.  Someone get the lights.
 
2018-03-07 08:25:47 AM  

BalugaJoe: Soylent Green is people.


If people are the problem and Soylent Green is people then Soylent Green is the problem.
 
2018-03-07 08:34:34 AM  

Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.


that's oddly specific.  should you be on some sort of watch list?
 
2018-03-07 08:46:04 AM  
Oh sure, tell the robots that we're the problem... great idea

Now they know what they must do to fix the "problem".  The "cleansing" will begin soon
 
2018-03-07 08:58:59 AM  

OldRod: Oh sure, tell the robots that we're the problem... great idea

Now they know what they must do to fix the "problem".  The "cleansing" will begin soon


We're doomed no matter what. Elucidation below. (and note bitcoin proof-of-work is already on the borderline of the doomsday scenario with no intelligence at all)

"First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function - in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might transform "first all of earth and then increasing portions of space into paperclip manufacturing facilities".

This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process - a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.
A paperclipping scenario is also possible without an intelligence explosion. If society keeps getting increasingly automated and AI-dominated, then the first borderline AGI might manage to take over the rest using some relatively narrow-domain trick that doesn't require very high general intelligence.

Conclusions

The paperclip maximizer illustrates that an entity can be a powerful optimizer - an intelligence - without sharing any of the complex mix of human terminal values, which developed under the particular selection pressures found in our environment of evolutionary adaptation, and that an AGI that is not specifically programmed to be benevolent to humans will be almost as dangerous as if it were designed to be malevolent.

Any future AGI, if it is not to destroy us, must have human values as its terminal value (goal). Human values don't spontaneously emerge in a generic optimization process. A safe AI would therefore have to be programmed explicitly with human values or programmed with the ability (including the goal) of inferring human values."

https://wiki.lesswrong.com/wiki/Paperclip_maximizer
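
If you want that last point in concrete terms, here's a minimal, purely illustrative toy (the action names and payoffs are invented, not any real design): an optimizer that scores options only by paperclip count will pick whatever maximizes paperclips, no matter what else it wrecks, because nothing else appears in its utility function.

    # Purely illustrative toy; the action names and payoffs are invented.
    # The "utility" counts only paperclips, so side effects never enter the score.
    actions = {
        "run_factory":       {"paperclips": 100,   "human_welfare": -1},
        "buy_paperclips":    {"paperclips": 10,    "human_welfare": 0},
        "convert_biosphere": {"paperclips": 10**9, "human_welfare": -10**9},
    }

    def utility(outcome):
        # Only paperclips are in the goal; welfare simply isn't part of the function.
        return outcome["paperclips"]

    best = max(actions, key=lambda name: utility(actions[name]))
    print(best)  # -> convert_biosphere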
 
2018-03-07 09:00:49 AM  

ManateeGag: Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.

that's oddly specific.  should you be on some sort of watch list?


The point is that these AVs will turn Suicide Bombers into people with a far longer career option. They will also bring about the next Age of Piracy, as the windshield washer scam will disable them.

/Would you like to add Kidnapping Insurance to your basic fare, sir or madam?
 
2018-03-07 09:15:09 AM  
[s2.quickmeme.com image]
 
2018-03-07 09:49:46 AM  
Oh yeah those things are the worst.
 
2018-03-07 10:08:17 AM  

Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.


I don't think self-driving cars feel like driving anywhere when they detect there's no one aboard.
 
2018-03-07 10:14:42 AM  

Barfmaker: BalugaJoe: Soylent Green is people.

If people are the problem and Soylent Green is people then Soylent Green is the problem.


So is Soylent Biodiesel the solution?
 
2018-03-07 11:23:11 AM  

TomFooolery: When you come right down to it, aren't people always the problem?


"When there's a person, there's a problem. When there's no person, there's no problem." - Josef Stalin

So, Uncle Joe predicted driverless cars!

/heard odder things than that
 
2018-03-07 11:49:35 AM  

Ketchuponsteak: Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.

I don't think self driving cars feel like driving anywhere, when they detect there's noone aboard.


They'll detect a mass (explosives) in the driver's seat, another mass (more explosives) in the passenger seat, and some more mass (explosives) in the back seat.  Of course, the "family" of four will have their luggage (explosives) in the trunk as well, as who wouldn't pack a lot of stuff for a 200 mile trip?
 
2018-03-07 11:50:15 AM  

Uncontrolled_Jibe: They will also bring about the next Age of Piracy as the windshield washer scam will disable them.

/Would you like to add Kidnapping Insurance to your basic fare, sir or madam?


Uh, I'm fairly sure the passengers will still be able to lock their doors. It probably wouldn't be long before the cars start automatically driving around them or backing up if a person stands in front of them for an unusually long time.
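
That behaviour is speculation on my part, but the rule is simple enough to sketch; the function names and the timeout below are assumptions for illustration, not anything a real AV vendor ships:

    # Hypothetical "wait, then go around" rule; the names and timeout are assumptions
    # for illustration, not anything a real autonomous-driving stack actually ships.
    import time

    OBSTRUCTION_TIMEOUT_S = 10.0  # assumed patience before trying to route around

    def handle_blocked_path(path_is_blocked, reroute, back_up):
        """Wait briefly for whoever is blocking the car to move, then try to go around."""
        start = time.monotonic()
        while path_is_blocked():
            if time.monotonic() - start > OBSTRUCTION_TIMEOUT_S:
                if not reroute():  # attempt to plan a path around the obstruction
                    back_up()      # otherwise reverse away from the blockage
                return
            time.sleep(0.1)        # re-check the sensors periodically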
 
2018-03-07 12:04:03 PM  

MythRender: Ketchuponsteak: Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.

I don't think self driving cars feel like driving anywhere, when they detect there's noone aboard.

They'll detect a mass (explosives) in the driver's seat, another mass (more explosives) in the passenger seat, and some more mass (explosives) in the back seat.  Of course, the "family" of four will have their luggage (explosives) in the trunk as well, as who wouldn't pack a lot of stuff for a 200 mile trip?


That's not how it works. Motion sensors react to movement (think automatic doors, lights, etc.), not mass.
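
To make the distinction concrete, here's a toy sketch (the readings and threshold are invented): "motion" is just a change between successive samples, so a perfectly still mass produces no signal.

    # Toy illustration: "motion" here is a change between successive sensor samples,
    # so a perfectly still mass registers nothing. Readings and threshold are invented.
    def motion_detected(readings, threshold=5.0):
        """True if any two consecutive samples differ by more than the threshold."""
        return any(abs(b - a) > threshold for a, b in zip(readings, readings[1:]))

    print(motion_detected([100.0, 100.1, 100.0]))  # static mass of explosives: False
    print(motion_detected([100.0, 140.0, 100.0]))  # something actually moving: True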
 
2018-03-07 01:06:43 PM  

Fireproof: Uncontrolled_Jibe: They will also bring about the next Age of Piracy as the windshield washer scam will disable them.

/Would you like to add Kidnapping Insurance to your basic fare, sir or madam?

Uh, I'm fairly sure the passengers will still be able to lock their doors. It probably wouldn't be long before the cars start automatically driving around them or backing up if a person stands in front of them for a little more than an unusual amount of time.


I'd wager that the various biometric security features being used to protect $1000 cell phones (and associated data) won't be implemented in $30k+ self driving vehicles. There are a lot of challenges facing autonomous driving, but suicide bombing minus the suicide is a relatively minor problem.
 
2018-03-07 01:07:36 PM  
Er... *will be* implemented.
 
2018-03-07 02:35:21 PM  

itcamefromschenectady: OldRod: Oh sure, tell the robots that we're the problem... great idea

Now they know what they must do to fix the "problem".  The "cleansing" will begin soon

We're doomed no matter what. Elucidation below. (and note bitcoin proof-of-work is already on the borderline of the doomsday scenario with no intelligence at all)

"First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function - in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips ...


How does a human-smart machine increase its intelligence? Because we humans can't figure out how to make ourselves smarter. We can educate ourselves, but that is not indicative of intelligence.

Of course, then we have to define intelligence in a meaningful way, and that is a problem I have yet to see a reasonable solution for. Mostly intelligence is defined as "the quality this test measures." I'd like to see a definition that, for example, addresses the concerns of Gould in The Mismeasure of Man.

We seem as far away from that as we were when he wrote it to refute The Bell Curve.
 
2018-03-07 02:45:42 PM  

BolloxReader: I'd like to see definition that, for example, addresses the concerns of Gould in The Mismeasure of Man.

We seem as far away from that as we were when he wrote it to refute The Bell Curve.


Fun fact:
The Mismeasure of Man was published in 1981
The Bell Curve was published in 1994

Murray was rehashing "scientific" racism that was refuted long before he published.
 
2018-03-07 02:58:47 PM  

Fireproof: Uncontrolled_Jibe: They will also bring about the next Age of Piracy as the windshield washer scam will disable them.

/Would you like to add Kidnapping Insurance to your basic fare, sir or madam?

Uh, I'm fairly sure the passengers will still be able to lock their doors. It probably wouldn't be long before the cars start automatically driving around them or backing up if a person stands in front of them for a little more than an unusual amount of time.


So, you think they'll have bulletproof glass?   The ability to decide to run over the people surrounding them?       Every day they get more magical powers.   Perhaps a wizard lock?   Sliding armor like the Batmobile?

/"We have your car surrounded.  Give us 50$ and we might let you go"
//"Police, please come, people have surrounded our car!!"
///"We're sorry Citizen, our car is surrounded too.  They claim to be injured and cannot be moved"
 
2018-03-07 05:01:11 PM  

Ketchuponsteak: Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.

I don't think self driving cars feel like driving anywhere, when they detect there's noone aboard.


[img.fark.net image]

That one's been answered.
 
2018-03-07 06:46:02 PM  

itcamefromschenectady: OldRod: Oh sure, tell the robots that we're the problem... great idea

Now they know what they must do to fix the "problem".  The "cleansing" will begin soon

We're doomed no matter what. Elucidation below. (and note bitcoin proof-of-work is already on the borderline of the doomsday scenario with no intelligence at all)

"First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion: It would work to improve its own intelligence, where "intelligence" is understood in the sense of optimization power, the ability to maximize a reward/utility function-in this case, the number of paperclips. The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips. Having increased its intelligence, it would produce more paperclips, and also use its enhanced abilities to further self-improve. Continuing this process, it would undergo an intelligence explosion and reach far-above-human levels.

It would innovate better and better techniques to maximize the number of paperclips. At some point, it might transform "first all of earth and then increasing portions of space into paperclip manufacturing facilities".

This may seem more like super-stupidity than super-intelligence. For humans, it would indeed be stupidity, as it would constitute failure to fulfill many of our important terminal values, such as life, love, and variety. The AGI won't revise or otherwise change its goals, since changing its goals would result in fewer paperclips being made in the future, and that opposes its current goal. It has one simple goal of maximizing the number of paperclips; human life, learning, joy, and so on are not specified as goals. An AGI is simply an optimization process-a goal-seeker, a utility-function-maximizer. Its values can be completely alien to ours. If its utility function is to maximize paperclips, then it will do exactly that.
A paperclipping scenario is also possible without an intelligence explosion. If society keeps getting increasingly automated and AI-dominated, then the first borderline AGI might manage to take over the rest using some relatively narrow-domain trick that doesn't require very high general intelligence.

Conclusions

The paperclip maximizer illustrates that an entity can be a powerful optimizer-an intelligence-without sharing any of the complex mix of human terminal values, which developed under the particular selection pressures found in our environment of evolutionary adaptation, and that an AGI that is not specifically programmed to be benevolent to humans will be almost as dangerous as if it were designed to be malevolent.

Any future AGI, if it is not to destroy us, must have human values as its terminal value (goal). Human values don't spontaneously emerge in a generic optimization process. A safe AI would therefore have to be programmed explicitly with human values or programmed with the ability (including the goal) of inferring human values."

https://wiki.lesswrong.com/wiki/Paperc​lip_maximizer


I once read a sci-fi story concerning a derelict 'ghost ship' in space. After some investigation, it was determined that the cleaning bots on board had discovered that it was the people on board causing a majority of the messes, and the ship would therefore be, on average, cleaner without any passengers, so they Had To Go.

After a brief robot revolution, which caught the passengers completely by surprise, the ship had been drifting in a completely pristine condition for unknown eons.
 
2018-03-07 07:46:51 PM  

itcamefromschenectady: OldRod: Oh sure, tell the robots that we're the problem... great idea

Now they know what they must do to fix the "problem".  The "cleansing" will begin soon

We're doomed no matter what. Elucidation below. (and note bitcoin proof-of-work is already on the borderline of the doomsday scenario with no intelligence at all)

"First described by Bostrom (2003), a paperclip maximizer is an artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in its collection. If it has been constructed with a roughly human level of general intelligence, the AGI might collect paperclips, earn money to buy paperclips, or begin to manufacture paperclips.

Most importantly, however, it would undergo an intelligence explosion ...


http://www.decisionproblem.com/paperclips/
 
2018-03-07 10:45:31 PM  

Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.


How about using a GPS jammer in a busy interchange (the Springfield mixing bowl) during rush hour?
 
2018-03-07 11:46:16 PM  

dready zim: http://www.decisionproblem.com/paperclips/


You beautiful monster.
 
2018-03-08 08:23:34 AM  

theresnothinglft: Ketchuponsteak: Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.

I don't think self driving cars feel like driving anywhere, when they detect there's noone aboard.

[img.fark.net image 362x139]
That one's been answered.


Yes, by me. That cartoon is not correct.
 
2018-03-08 12:34:00 PM  

Frederf: dready zim: http://www.decisionproblem.com/paperclips/

You beautiful monster.


Just to warn you, it never forgets your game. If you get to the end, choose to go to another universe. If you stay in the one that has been turned entirely to paperclips, that is it. You can NEVER play the game again.
 
2018-03-08 12:34:47 PM  

Ketchuponsteak: theresnothinglft: Ketchuponsteak: Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.

I don't think self driving cars feel like driving anywhere, when they detect there's noone aboard.

[img.fark.net image 362x139]
That one's been answered.

Yes, by me. That cartoon is not correct.


Explain.
 
2018-03-08 01:21:22 PM  

dready zim: Ketchuponsteak: theresnothinglft: Ketchuponsteak: Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.

I don't think self driving cars feel like driving anywhere, when they detect there's noone aboard.

[img.fark.net image 362x139]
That one's been answered.

Yes, by me. That cartoon is not correct.

Explain.


Well, you can scroll up.

But anyway, they'd detect the presence of a human the same way automatic doors and lights work: they react to movement.

I don't doubt that these cars will refuse to go anywhere if the seatbelt isn't plugged in. New cars already have something similar, after all.

You could probably still make it do the cartoon though: put something like a Duracell bunny in there to satisfy it.
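
If you wanted to spell that kind of gate out, it might look roughly like the sketch below; this is a guess at the logic, not how any actual car decides:

    # Rough, hypothetical occupancy gate; the sensor inputs are invented for
    # illustration, and real cars combine belt latches, seat sensors, cameras, etc.
    def occupant_present(motion_detected: bool, seatbelt_latched: bool,
                         seat_load_kg: float) -> bool:
        """Heuristic: require some sign of a person before the car agrees to move."""
        return motion_detected or (seatbelt_latched and seat_load_kg > 20.0)

    # A moving toy (the Duracell bunny above) would pass this naive check.
    print(occupant_present(motion_detected=True, seatbelt_latched=False, seat_load_kg=0.0))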
 
2018-03-08 02:17:58 PM  

Ketchuponsteak: dready zim: Ketchuponsteak: theresnothinglft: Ketchuponsteak: Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.

I don't think self driving cars feel like driving anywhere, when they detect there's noone aboard.

[img.fark.net image 362x139]
That one's been answered.

Yes, by me. That cartoon is not correct.

Explain.

Well, you can scroll up.

But anyways, they'd detect the presence of a human the same way automatic doors and lights work. They react to movement.

I don't doubt that these cars will refuse to go any where if the seatbelt isn't plugged in. New cars already have something similar after all.

You could probably still make it do the cartoon though, put something like a Durecell bunny in there, so satisfy it.


That is rather nitpicky about what is essentially a cartoon made to show a point, that being that if a car will go to a place just because you tell it to, that is dodgy. You yourself even say how it could be done. The point of the cartoon is correct, even if the details are not...
 
2018-03-08 03:15:01 PM  

dready zim: Ketchuponsteak: dready zim: Ketchuponsteak: theresnothinglft: Ketchuponsteak: Muta: Another problem with self driving cars is that someone could carjack a car, fill it with explosives and send it off on it's merry way to your favorite target 200 miles away.

I don't think self driving cars feel like driving anywhere, when they detect there's noone aboard.

[img.fark.net image 362x139]
That one's been answered.

Yes, by me. That cartoon is not correct.

Explain.

Well, you can scroll up.

But anyways, they'd detect the presence of a human the same way automatic doors and lights work. They react to movement.

I don't doubt that these cars will refuse to go any where if the seatbelt isn't plugged in. New cars already have something similar after all.

You could probably still make it do the cartoon though, put something like a Durecell bunny in there, so satisfy it.

That is rather nitpicky about what is essentially a cartoon made to show a point, that being that if a car will go to a place just because you tell it to that is dodgy. You yourself even say how it could be done. The point of the cartoon is correct, even if the details are not...


Meh, at least a few people got an explanation of how "things work". I believe I mentioned it was ultrasound in my original post as well.
 
Displayed 36 of 36 comments

This thread is archived, and closed to new comments.