
(Gizmodo)   Who could have ever seen Anti-Bias Crime Prediction software going poorly?   (gizmodo.com)
    More: Facepalm, Crime, crime-prediction software, police patrol decisions, Police, crime predictions, law enforcement agencies, Company CEO Brian MacDonald, single crime prediction  

669 clicks; posted to STEM » on 02 Dec 2021 at 10:38 AM



18 Comments
 
2021-12-02 10:08:54 AM  
If #FFFFFF go to sleep.
If #000000 plant gun.
 
2021-12-02 10:13:46 AM  
My *guess* (for conversation's sake) is that someone is farking lazy with the metric and the "awesome" programmers don't give a fark about anything except for how clever they were for coming up with some "googlesque" recursive nonsense.
 
2021-12-02 10:41:16 AM  
Um, poorer areas do have more violent crime than richer ones.  (And poor neighborhoods are more likely to be heavily minority than rich ones.)  Sounds like the software is working.  Not sure what the issue here is.
 
2021-12-02 10:46:52 AM  
[cilisos.my image 850x624]
 
2021-12-02 10:47:25 AM  
Financial crimes, illegal drug use, and drunk driving are higher in rich communities. Maybe they should focus more enforcement there.
 
2021-12-02 10:55:47 AM  
Repeat after me "There are no such things as arrest quotas."
 
2021-12-02 10:57:44 AM  

Bukharin: [cilisos.my image 850x624]


Bonus points here because Nikon is a Japanese company.
 
2021-12-02 11:20:38 AM  

UberDave: My *guess* (for conversation's sake) is that someone is farking lazy with the metric and the "awesome" programmers don't give a fark about anything except for how clever they were for coming up with some "googlesque" recursive nonsense.


It is more that AIs are really good at sussing out patterns that humans don't see.  You may think "I remove the Race column and all is good", but the AI just merrily recreates racism without ever knowing what race is.  They don't know what anything means.  They are just comparing sets of numbers and then making predictions based on the patterns in those numbers.  But since the numbers have no meaning to them, they have to look at previous decisions made by humans to learn how to sort the numbers.  If those previous decisions were made by utterly racist shiatweasels - and they almost certainly were - the AI just learns to send all the African-Americans to jail, despite not even knowing what African-Americans are.  Because it sees all the interrelated factors that define race in the previous decisions.

It isn't even new.  AIs meant to de-racist court decisions routinely set Whites free for murder while sending African Americans to the chair for jaywalking.  Or deny African-Americans life-saving medical help for a gaping chest wound (2 placebo aspirin and .05cc water) while expending All The Resources for a White guy with a hangnail.  The techbros conducting experiments went to sociologists and asked "Why Bull Connor?"  The response was basically "Did you just delete the Race column and think it was all good?" **nods**  "Jesus Christ, are you really that simple?" **nods**  The solution was to be very deliberate with what information to input and to include biases in certain columns to balance out the biases of the 350 years of Happy Funtime Racism baked into the previous decisions.  Most of the blatant racial farkwittery just went away then.  But if you aren't hiring a couple of sociologists to curate your data sets, you can produce Simon Legree effortlessly with any AI that uses previous data to learn how to sort its decisions (which is to say all of them).
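
To make the "deleted the Race column" failure concrete, here is a minimal sketch with entirely made-up synthetic data (not the article's product or any real dataset): the sensitive attribute is never shown to the model, but a correlated feature like neighborhood lets it reproduce the bias baked into the historical labels anyway.

import numpy as np

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                       # latent attribute, never shown to the model
neighborhood = np.where(rng.random(n) < 0.85, group, 1 - group)   # strongly correlated with group
behavior = rng.normal(0, 1, n)                      # identical across groups on average

# Historical labels carry bias: same behavior, higher arrest rate for group 1.
p_arrest = 1 / (1 + np.exp(-(behavior + 1.5 * group - 1.0)))
arrested = (rng.random(n) < p_arrest).astype(float)

# Train a logistic regression on features WITHOUT the group column.
X = np.column_stack([np.ones(n), neighborhood, behavior])
w = np.zeros(X.shape[1])
for _ in range(2000):                               # plain gradient descent on logistic loss
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - arrested) / n

pred = 1 / (1 + np.exp(-X @ w))
print("mean predicted risk, group 0:", round(pred[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(pred[group == 1].mean(), 3))
# The gap persists: neighborhood acts as a stand-in for the column that was "deleted".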
 
2021-12-02 11:37:16 AM  

Geotpf: Um, poorer areas do have more violent crime than richer ones.  (And poor neighborhoods are more likely to be heavily minority than rich ones.)  Sounds like the software is working.  Not sure what the issue here is.


Poorer areas have more crimes that local police deal with.  Wealthy areas have more crimes that the SEC or FBI should deal with.  In fact, I would like to see the SEC use this software and see what the results are.
 
2021-12-02 11:38:14 AM  
Who could have seen it?

[Fark user image]
 
2021-12-02 12:02:30 PM  
Again, this is what I believe is the main influence on this sort of thing.


When you make a thing, whatever you make that thing out of, the thing you make will exhibit some of the properties of whatever it is made out of.

And an algorithm is basically a choice-making machine.
So yeah, whoever the humans were that gave that algorithm its logic are likely to have been a factor in what its logic is.

Show the bot unedited comment sections, and it should in short order learn to be a racist, hateful piece of shiat with nothing worth hearing to say.

What it is made out of will be some of what it is.
 
2021-12-02 12:34:15 PM  

PvtStash: And an algorithm is basically a choice-making machine.
So yeah, whoever the humans were that gave that algorithm its logic are likely to have been a factor in what its logic is.


That isn't how it works.  An AI is given a bunch of previous data.  Using the inputs, it picks an output.  It is then told what the correct answers are.  Or, more precisely, it is told how closely all of its decisions together match all the previous decisions together.  So, something like "You got 2,946 of 12,000 right."  Not which 2,946, mind you, just how many.  The AI then uses complicated math to modify its internal states and tries again.  And repeats this until it is getting 11,998 right.  So, it makes its own logic.  That's why people say you can't predict something like the YouTube algorithm - to us, the internal states are gibberish, so trying to figure out what to modify to make it do what you want is impossible.  You can change its inputs and how you score the outputs, but between the two is a damned black box.  But the result is eerily good at tossing up videos you might want to watch.  As an example, an AI sorting handwritten numbers isn't refining images by type and placement of line until it makes them look like a predefined numeral.  Looking at its internal states, they look like 8-bit Jackson Pollock paintings.  Seemingly random noise.  But out will pop the correct number 99.9998% of the time.  No human told it "stare at white noise until you see the number 6" - but it does, and it works.
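
A toy version of that loop, with invented data and a deliberately crude "nudge and keep whatever scores better" update (real systems use gradient descent, but the black-box point is the same): the model is only ever told how many previous decisions it matched, and its internal numbers end up meaningless to us while its outputs track the old decisions.

import numpy as np

rng = np.random.default_rng(1)

# 12,000 "previous decisions" made from two features by some unknown human rule.
X = rng.normal(size=(12_000, 2))
past_decisions = (0.8 * X[:, 0] - 1.3 * X[:, 1] > 0).astype(int)

weights = rng.normal(size=2)                   # the model's opaque internal state

def score(w):
    """How many past decisions the model currently reproduces."""
    return int(np.sum((X @ w > 0).astype(int) == past_decisions))

best = score(weights)
for step in range(5_000):
    candidate = weights + rng.normal(scale=0.1, size=2)   # nudge the internals
    s = score(candidate)
    if s >= best:                              # keep nudges that match more decisions
        weights, best = candidate, s

print(f"matches {best} of {len(past_decisions)} previous decisions")
# The learned weights mean nothing to us, but the outputs track the old decisions -
# including whatever biases those decisions contained.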
 
2021-12-02 12:44:12 PM  

Bukharin: [cilisos.my image 850x624]


They all did!
 
2021-12-02 12:46:49 PM  
so the software said that crime is more likely to take place in densely packed urban environments where folks tend to be poor?

wow. was it two lines of code
 
2021-12-02 2:04:17 PM  

phalamir: That isn't how it works.  An AI is given a bunch of previous data.  Using the inputs, it picks an output.  It is then told what the correct answers are.  Or, more precisely, it is told how closely all of its decisions together match all the previous decisions together.


Yup - and when the training dataset has harsher sentences for poorer (usually, less white) defendants, and "sees" more crime in poorer areas, especially in urban areas (so, generally less white areas), and the "right" answer is "what sentence was handed down" (generally more favorable to richer and whiter defendants)...

It's almost as though the system itself is a little (or more) racist, don't you think? Like there's some element of racism that isn't outright quantified in our current metrics, but which rears its ugly head basically everywhere?

Nahhhh, that's crazy talk.

// Garbage In, Garbage Out
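
A quick illustration of that "Garbage In, Garbage Out" point, with made-up numbers: if the training target is "what sentence was handed down" and those sentences already carry a penalty for poorer defendants, an ordinary regression fitted to them hands the same penalty back out for identical case facts.

import numpy as np

rng = np.random.default_rng(2)
n = 10_000

severity = rng.uniform(0, 10, n)               # identical "case facts" distribution
poor = rng.integers(0, 2, n)                   # correlates with race in the real-world data
# Historical sentences: months driven by severity, plus a biased bump for poor defendants.
sentence = 6 * severity + 8 * poor + rng.normal(0, 3, n)

# Ordinary least squares on (severity, poor) - the model never questions the labels.
X = np.column_stack([np.ones(n), severity, poor])
coef, *_ = np.linalg.lstsq(X, sentence, rcond=None)

print("learned extra months for a poor defendant, same severity:", round(coef[2], 1))
# The model faithfully reproduces the bias baked into the "right answers".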
 
2021-12-02 2:28:36 PM  

Dr Dreidel: phalamir: That isn't how it works.  An AI is given a bunch of previous data.  Using the inputs, it picks an output.  It is then told what the correct answers are.  Or, more precisely, it is told how closely all of its decisions together match all the previous decisions together.

Yup - and when the training dataset has harsher sentences for poorer (usually, less white) defendants, and "sees" more crime in poorer areas, especially in urban areas (so, generally less white areas), and the "right" answer is "what sentence was handed down" (generally more favorable to richer and whiter defendants)...

It's almost as though the system itself is a little (or more) racist, don't you think? Like there's some element of racism that isn't outright quantified in our current metrics, but which rears its ugly head basically everywhere?

Nahhhh, that's crazy talk.

// Garbage In, Garbage Out


[i.imgur.com image]
 
2021-12-02 4:42:33 PM  

Geotpf: Um, poorer areas do have more violent crime than richer ones.  (And poor neighborhoods are more likely to be heavily minority than rich ones.)  Sounds like the software is working.  Not sure what the issue here is.


The AI "predicts" more crime in areas of higher population density and lower incomes which, surprise, are frequently also predominantly minority areas. Has anyone looked at the the crime rates in those areas before the AI was implemented? I'd bet that those were areas with higher crime rates. The AI might suggest that there will be more crimes in those areas but it doesn't send out the cops.

Before the computers were involved people could look at crime rates and decide, "Gee, there are an awful lot of violent crimes occurring in this neighborhood. Maybe we should consider a higher police presence?" They didn't consider how the police would treat the residents, just that maybe having more cops on patrol might be a deterrent.

Using actuarial tables you can find patterns in the data to help in solving a problem. This was done during the cholera outbreak in 19th century London to track down the source of the pathogen. We still do it today in our pandemic contact tracing. Why is doing something similar for policing considered so offensive? Just as you don't fish where the fish aren't biting, you don't send cops to places where there is no or little crime. It is a waste of resources.

Please keep in mind, I am in no way saying that police presence is always a good thing. It can too often exacerbate a tenuous situation and can lead to horrendous outcomes from trigger happy, nervous, paranoid, racist, or poorly trained (frequently all five at once) cops. And I am not equating correlation with causation. But you can't dismiss what the AI is recommending unless you look at the historical crime statistics and determine if they have any validity.
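
For what it's worth, the check being suggested is easy to sketch (hypothetical per-neighborhood numbers below, not real data): rank the tool's predictions against the prior years' recorded rates and see how much the "hotspots" simply restate the historical pattern.

import numpy as np

def rank(values):
    """Simple rank transform (1 = lowest)."""
    order = np.argsort(values)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(values) + 1)
    return ranks

# Made-up numbers standing in for real per-neighborhood data.
predicted_risk  = np.array([0.9, 0.7, 0.2, 0.4, 0.8, 0.1])   # tool's output
historical_rate = np.array([48,  35,  9,   17,  41,  6])     # crimes per 10k, prior years

# Spearman-style correlation of the two rankings.
rho = np.corrcoef(rank(predicted_risk), rank(historical_rate))[0, 1]
print("rank correlation between predictions and history:", round(rho, 2))
# A value near 1 means the tool is largely restating the historical pattern -
# which is exactly why the historical numbers themselves need scrutiny.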
 
2021-12-02 8:24:57 PM  
This software predicted more people drown around lakes than in deserts.  It's prejudiced against lakes.
 
Displayed 18 of 18 comments


This thread is closed to new comments.
