
(Tech Xplore)   Autonomous vehicles can be tricked into dangerous driving behavior. Humans, on the other hand, engage in dangerous driving because they can   (techxplore.com)
    More: Obvious, Driverless car, Automobile, Traffic, Computer science, Autonomous vehicles, UCI professor of computer science, Autonomous robot, Electrical engineering  

439 clicks; posted to STEM on 26 May 2022 at 11:30 PM (4 weeks ago)



19 Comments
 
2022-05-26 11:45:02 PM  
Overly conservative driving is bad, human or autonomous. For instance, while you should slow down and leave more space in bad conditions, if you slow down too much, almost beyond reason, in conditions where it's hard for the driver behind to stop, you're just as likely to cause a collision from behind. There's a balance in every situation.

For the autonomous cars and the scenario described in TFA, there's going to have to be some AI object recognition developed: what's the object, what's its weight/density (which radar might be able to pick up on), can it move or be moved in a way that could damage the vehicle, could the weather conditions move it, and so on.
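To make that concrete, here's a rough sketch of that triage step in Python. Every name, field, and threshold is invented for illustration; nothing here comes from the article:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str            # classifier's best guess, e.g. "cardboard box"
    est_mass_kg: float    # rough estimate, perhaps fused from radar returns
    is_mobile: bool       # could it move into the lane on its own?
    wind_sensitive: bool  # could weather push it into the path?

def threat_level(obj: DetectedObject) -> str:
    """Crude triage: ignore, slow down, or stop/avoid."""
    if obj.is_mobile or obj.wind_sensitive:
        return "slow"           # could enter the path; track it, be ready to brake
    if obj.est_mass_kg < 1.0:
        return "ignore"         # light, fixed debris poses little danger
    return "stop_or_avoid"      # heavy, fixed obstacle

print(threat_level(DetectedObject("cardboard box", 0.4, False, True)))  # -> slow
```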

The thing about the development of the autonomous car is that people get hung up on potential pitfalls. But ultimately it's just like a human driving: there are pitfalls there, too, and if anything the autonomous car should be able to work around more of them than a human can, such as by being on a network where cars actually communicate to each other what they're doing. If a human driving another vehicle decides to run a stop sign, you might not know until they're doing it, whereas a network of autonomous cars might eventually eliminate the need for traffic lights, since the cars can negotiate among themselves who goes through the intersection and when, so they don't hit each other.
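For the traffic-light-free idea, a toy first-come-first-served "reservation" scheme might look like the sketch below. It's purely illustrative; real vehicle-to-vehicle intersection management is far more involved, and every name and number here is made up:

```python
class IntersectionManager:
    SLOT = 2.0  # seconds a car occupies the conflict zone (assumed)

    def __init__(self):
        self.next_free = 0.0  # earliest unreserved entry time

    def request(self, car_id: str, eta: float) -> float:
        """Car broadcasts its ETA; manager replies with a granted entry time."""
        granted = max(eta, self.next_free)
        self.next_free = granted + self.SLOT
        print(f"{car_id}: enter at t={granted:.1f}s")
        return granted

mgr = IntersectionManager()
mgr.request("car_A", eta=3.0)   # car_A: enter at t=3.0s
mgr.request("car_B", eta=3.5)   # car_B: enter at t=5.0s (waits for A to clear)
```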

Just as with human drivers, for every problem that arises, someone will think about it and find a solution. It'll just take time to get things to a low enough incident rate to make (the smart) people believe in it. There will always be doofuses who don't get how programming works and will insist the autonomous car can never work, or folks who don't want it to work for their own reasons (technological unemployment, leaving certain segments of industry high and dry, etc.), but just like everything else, they'll be proven wrong in the end.
 
2022-05-27 1:23:45 AM  
Imagine a bunch of monkeys encased in massive vehicles built with technology they couldn't possibly understand barreling down the road at speeds that are orders of magnitude faster than anything they were genetically prepared for and also half of them are kind of bored while the other half are angry... plus some are drunk.

Then realize we are those monkeys.
 
2022-05-27 1:26:13 AM  
That's a problem suggested in some near-term semi-dystopian fiction, where the poor form a barrier you can't drive through, so you have to pay bribes and give rides to the poors.
 
2022-05-27 1:28:21 AM  
I only drive dangerously when a UPS, FedEx, or dump truck parks its dumb ass on one lane of a two-lane road.
Dunno wtf would happen if a robot car encountered that.
 
2022-05-27 1:44:11 AM  

Pfighting Polish: Overly conservative driving is bad, human or autonomous. For instance, while you should slow down and leave more space in bad conditions, if you slow down too much, almost beyond reason, in conditions where it's hard for the driver behind to stop, you're just as likely to cause a collision from behind. There's a balance in every situation.

For the autonomous cars and the scenario described in TFA, there's going to have to be some AI object recognition developed: what's the object, what's its weight/density (which radar might be able to pick up on), can it move or be moved in a way that could damage the vehicle, could the weather conditions move it, and so on.

The thing about the development of the autonomous car is that people get hung up on potential pitfalls. But ultimately it's just like a human driving: there are pitfalls there, too, and if anything the autonomous car should be able to work around more of them than a human can, such as by being on a network where cars actually communicate to each other what they're doing. If a human driving another vehicle decides to run a stop sign, you might not know until they're doing it, whereas a network of autonomous cars might eventually eliminate the need for traffic lights, since the cars can negotiate among themselves who goes through the intersection and when, so they don't hit each other.

Just as with human drivers, for every problem that arises, someone will think about it and find a solution. It'll just take time to get things to a low enough incident rate to make (the smart) people believe in it. There will always be doofuses who don't get how programming works and will insist the autonomous car can never work, or folks who don't want it to work for their own reasons (technological unemployment, leaving certain segments of industry high and dry, etc.), but just like everything else, they'll be proven wrong in the end.


I don't know. I saw one of the Musk cars pick up and take off into orbit.
 
2022-05-27 1:45:39 AM  

SomeAmerican: Imagine a bunch of monkeys encased in massive vehicles built with technology they couldn't possibly understand barreling down the road at speeds that are orders of magnitude faster than anything they were genetically prepared for and also half of them are kind of bored while the other half are angry... plus some are drunk.

Then realize we are those monkeys.


But we should be good barreling through the galaxy at a reasonable percentage of C, right?
 
2022-05-27 2:04:53 AM  

Pfighting Polish: Overly conservative driving is bad, human or autonomous. For instance, while you should slow down and leave more space in bad conditions, if you slow down too much, almost beyond reason, in conditions where it's hard for the driver behind to stop, you're just as likely to cause a collision from behind. There's a balance in every situation.

For the autonomous cars and the scenario described in TFA, there's going to have to be some AI object recognition developed: what's the object, what's its weight/density (which radar might be able to pick up on), can it move or be moved in a way that could damage the vehicle, could the weather conditions move it, and so on.

The thing about the development of the autonomous car is that people get hung up on potential pitfalls. But ultimately it's just like a human driving: there are pitfalls there, too, and if anything the autonomous car should be able to work around more of them than a human can, such as by being on a network where cars actually communicate to each other what they're doing. If a human driving another vehicle decides to run a stop sign, you might not know until they're doing it, whereas a network of autonomous cars might eventually eliminate the need for traffic lights, since the cars can negotiate among themselves who goes through the intersection and when, so they don't hit each other.

Just as with human drivers, for every problem that arises, someone will think about it and find a solution. It'll just take time to get things to a low enough incident rate to make (the smart) people believe in it. There will always be doofuses who don't get how programming works and will insist the autonomous car can never work, or folks who don't want it to work for their own reasons (technological unemployment, leaving certain segments of industry high and dry, etc.), but just like everything else, they'll be proven wrong in the end.


Go walk on Legos.
May ACs die on the vine.
 
2022-05-27 3:23:22 AM  
[image: i.kym-cdn.com]

[image: techcrunch.com]
 
2022-05-27 4:18:18 AM  
Hold my beer and watch this?
 
2022-05-27 9:27:06 AM  

SomeAmerican: Imagine a bunch of monkeys encased in massive vehicles built with technology they couldn't possibly understand barreling down the road at speeds that are orders of magnitude faster than anything they were genetically prepared for and also half of them are kind of bored while the other half are angry... plus some are drunk.

Then realize we are those monkeys.


Just because you don't understand the technology behind cars doesn't mean nobody does.
 
2022-05-27 11:10:11 AM  

Pfighting Polish: Overly conservative driving is bad, human or autonomous. For instance, while you should slow down and leave more space in bad conditions, if you slow down too much, almost beyond reason, in conditions where it's hard for the driver behind to stop, you're just as likely to cause a collision from behind. There's a balance in every situation.

For the autonomous cars and the scenario described in TFA, there's going to have to be some AI object recognition developed: what's the object, what's its weight/density (which radar might be able to pick up on), can it move or be moved in a way that could damage the vehicle, could the weather conditions move it, and so on.

The thing about the development of the autonomous car is that people get hung up on potential pitfalls. But ultimately it's just like a human driving: there are pitfalls there, too, and if anything the autonomous car should be able to work around more of them than a human can, such as by being on a network where cars actually communicate to each other what they're doing. If a human driving another vehicle decides to run a stop sign, you might not know until they're doing it, whereas a network of autonomous cars might eventually eliminate the need for traffic lights, since the cars can negotiate among themselves who goes through the intersection and when, so they don't hit each other.

Just as with human drivers, for every problem that arises, someone will think about it and find a solution. It'll just take time to get things to a low enough incident rate to make (the smart) people believe in it. There will always be doofuses who don't get how programming works and will insist the autonomous car can never work, or folks who don't want it to work for their own reasons (technological unemployment, leaving certain segments of industry high and dry, etc.), but just like everything else, they'll be proven wrong in the end.


One example I like to bring up is hitting animals.

In Colorado, the law is such that you do not stop or swerve to avoid hitting animals. So in neighborhoods you should be mowing down Rover; on mountain roads, running into deer and moose; etc.

Most humans do not obey this law, which is in place to prevent far worse outcomes, like swerving into the pet owner chasing their dog, or slamming on the brakes and causing rear-end collisions. In the mountains, swerving can mean you go over a cliff.

Truly, it's situational. You shouldn't plow full speed into a moose if you can stop safely, you probably shouldn't run over pets if you can stop safely, etc. But that doesn't mean the balance slider is going to be trivial to nail down, and humans overriding the vehicle are definitely more apt to slam on the brakes because they are emotional, not using cold logic.
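One way to picture that balance slider is as an expected-cost comparison. The sketch below is hypothetical; the penalty scores and conditions are invented placeholders, not calibrated values:

```python
def choose_action(can_stop_safely: bool, rear_traffic: bool, cliff_on_shoulder: bool) -> str:
    """Pick the least-bad maneuver by comparing rough penalty scores."""
    costs = {
        "brake":    (1 if can_stop_safely else 5) + (50 if rear_traffic else 0),
        "swerve":   10 + (1000 if cliff_on_shoulder else 0),
        "continue": 30,   # hit the animal
    }
    return min(costs, key=costs.get)

# Traffic behind and a cliff beside: hitting the animal is the least-bad
# option, roughly what the Colorado rule encodes as the default.
print(choose_action(can_stop_safely=False, rear_traffic=True, cliff_on_shoulder=True))  # continue
```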
 
2022-05-27 11:24:11 AM  
Chav cunts cause CRASH with invisible rope prank (YouTube: Q9MM5o8RbZA)
 
2022-05-27 12:59:34 PM  

Quantumbunny: One example I like to bring up is hitting animals.

In Colorado, the law is such that you do not stop or swerve to avoid hitting animals. So in neighborhoods you should be mowing down Rover; on mountain roads, running into deer and moose; etc.

Most humans do not obey this law, which is in place to prevent far worse outcomes, like swerving into the pet owner chasing their dog, or slamming on the brakes and causing rear-end collisions. In the mountains, swerving can mean you go over a cliff.

Truly, it's situational. You shouldn't plow full speed into a moose if you can stop safely, you probably shouldn't run over pets if you can stop safely, etc. But that doesn't mean the balance slider is going to be trivial to nail down, and humans overriding the vehicle are definitely more apt to slam on the brakes because they are emotional, not using cold logic.


Even there, though, there's logic happening on the part of the human driver. It's just happening really fast. The human driver weighs the risks and makes a decision in the moment.

So, theoretically, the law could be altered to say, "You don't swerve or stop to avoid hitting animals, unless ...," and just continually amended to handle all the iterations of different situations.

We don't do this because then the law becomes unwieldy and too much for a human brain to remember in the moment. A computer, however ...
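A minimal sketch of how that might look: the law plus its exceptions as an ordered rule table a machine can evaluate. The rule conditions and names here are hypothetical:

```python
RULES = [
    # (condition, action) -- evaluated in order, first match wins
    (lambda s: s["can_stop_safely"] and not s["rear_traffic"], "brake"),
    (lambda s: s["safe_shoulder"] and not s["pedestrian_nearby"], "swerve"),
    (lambda s: True, "continue"),  # default: do not stop or swerve for animals
]

def decide(situation: dict) -> str:
    for condition, action in RULES:
        if condition(situation):
            return action

print(decide({"can_stop_safely": False, "rear_traffic": True,
              "safe_shoulder": False, "pedestrian_nearby": False}))  # continue
```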
 
2022-05-27 1:47:18 PM  
Just a few more years...
 
2022-05-27 2:24:55 PM  
A variant of this problem is that most autonomous vehicles are programmed to obey the speed limit.

Including in areas where no human does so and no one expects to encounter a vehicle doing so (around a blind curve, say).

There have been a lot of cases where a human driving normal human speeds rear-ends an autonomous vehicle doing the actual posted speed limit that no actual human obeys. These are always legally the fault of the human for speeding, of course.
 
2022-05-27 4:02:26 PM  

BlazeTrailer: SomeAmerican: Imagine a bunch of monkeys encased in massive vehicles built with technology they couldn't possibly understand barreling down the road at speeds that are orders of magnitude faster than anything they were genetically prepared for and also half of them are kind of bored while the other half are angry... plus some are drunk.

Then realize we are those monkeys.

But we should be good barreling through the galaxy at a reasonable percentage of C, right?


statistically space contains no matter, therefore there's nothing to hit.

/thank you Douglas Adams
 
2022-05-27 6:25:25 PM  

Geotpf: A variant of this problem is that most autonomous vehicles are programmed to obey the speed limit.

Including in areas where no human does so and no one expects to encounter a vehicle doing so (around a blind curve, say).

There have been a lot of cases where a human driving normal human speeds rear-ends an autonomous vehicle doing the actual posted speed limit that no actual human obeys. These are always legally the fault of the human for speeding, of course.


In a way, I kinda side with the autonomous vehicles on this one.

I mean, yeah, I go 5-10 over, too, but it's because we, societally, set speed limits too low. A limit should be an actual limit, or there should be a "suggested speed" plus a hard-cap limit, but there isn't, because a lot of political stuff goes into setting speed limits -- not left/right politics necessarily, but neighborhood safety-type stuff. "I want my kids to be able to play in the street, so the speed limit should only be 25" -- even though they play in the street, what, maybe a fraction of a percent of the time people are driving on the roads? Or speed-trap towns that set the limit low for ticket revenue. Or driving-safety advocacy groups, which are well-meaning but just kind of slow everyone down, too. Or whatever.

So we have this weirdly defined "limit" -- not really a limit in practice but codified as one -- that we have to program the machines to follow, not because the machines are dumb but because a lot of humans are. This then prompts humans to cite their own stupidity as the reason we shouldn't have such a technological advance, which just seems to perpetuate, if not multiply, the stupid factor.

I sometimes say I prefer dealing with computers over humans. I get crazy looks from folks who get confused by technology when I say that (it helps that I have a computer science degree), but it's true. Unless you're watching some sort of futuristic sci-fi where computers have become sentient and developed human-like emotions and grudges, computers in the present do what they're programmed to do. They don't have an agenda. They're not stubborn. Even the things they do that emulate human behavior are just them following well-written code. And when they do something faulty, you can pinpoint the cause and fix it, whether it's buggy code or a fried circuit, rather than hearing "Well, I just didn't feel like it" from a human. They're far more predictable than people, and accordingly I trust them more. Yes, stuff fails, but we've advanced far enough that it fails far less often than mere humans do.

Theoretically, we should allow autonomous cars to move at the average speed of human cars on the same road, regardless of speed limit, at least until they're the majority on the road and they can communicate their speed to each other. That would be the safest choice, and it would be easy to program the cars to do that. But it's human politics that get in the way. I find that unfortunate and I think it speaks worse of the humans than it does the autonomous cars.
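A minimal sketch of that "drive at prevailing speed" idea, assuming hypothetical sensor readings: track recently observed traffic speeds and clamp the target between the posted limit and a hard cap. All numbers are invented for illustration:

```python
from collections import deque

class PrevailingSpeed:
    def __init__(self, posted_limit: float, hard_cap: float, window: int = 50):
        self.posted = posted_limit
        self.cap = hard_cap
        self.samples = deque(maxlen=window)  # recent observed vehicle speeds, mph

    def observe(self, speed: float) -> None:
        self.samples.append(speed)

    def target(self) -> float:
        if not self.samples:
            return self.posted  # no traffic to match; fall back to the limit
        avg = sum(self.samples) / len(self.samples)
        return min(max(avg, self.posted), self.cap)

ps = PrevailingSpeed(posted_limit=55, hard_cap=70)
for s in (63, 66, 64, 67):
    ps.observe(s)
print(ps.target())  # 65.0 -- ride with traffic, never above the hard cap
```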
 
2022-05-27 8:19:35 PM  

moothemagiccow: I only drive dangerously when a UPS, Fedex, or dump truck parks its dumb ass on one lane of a two lane road.
Dunno wtf would happen if a robot car encountered that.


As a brown driver, I can attest to the fact that drivers absolutely lose their shiat and forget how to drive altogether when coming upon a truck temporarily parked in the road.

I marvel daily that I am required to get tested by the DOT every 6 months while the overwhelming majority are never required to get tested after the first time.
 
2022-05-28 1:15:54 AM  
In cities and regions with auto autos, everything and everyone will be networked. Anyone who tries to "trick" a car will be detected and incorporated into the model.
 
Displayed 19 of 19 comments


This thread is closed to new comments.
