
(Gizmodo)   Who could have ever seen Anti-Bias Crime Prediction software going poorly?   (gizmodo.com)
    More: Facepalm, Crime, crime-prediction software, police patrol decisions, Police, crime predictions, law enforcement agencies, Company CEO Brian MacDonald, single crime prediction  

680 clicks; posted to STEM » on 02 Dec 2021 at 10:38 AM (29 weeks ago)



Voting Results (Smartest)

 
2021-12-02 10:41:16 AM  
6 votes:
Um, poorer areas do have more violent crime than richer ones.  (And poor neighborhoods are more likely to be heavily minority than rich ones.)  Sounds like the software is working.  Not sure what the issue here is.
 
2021-12-02 10:08:54 AM  
3 votes:
If #FFFFFF go to sleep.
If #000000 plant gun.
 
2021-12-02 11:37:16 AM  
2 votes:

Geotpf: Um, poorer areas do have more violent crime than richer ones.  (And poor neighborhoods are more likely to be heavily minority than rich ones.)  Sounds like the software is working.  Not sure what the issue here is.


Poorer areas have more crimes that local police deal with.  Wealthy areas have more crimes that the SEC or FBI should deal with.  In fact, I would like to see the SEC use this software and see what the results are.
 
2021-12-02 12:46:49 PM  
2 votes:
so the software said that crime is more likely to take place in densely packed urban environments where folks tend to be poor?

wow. was it two lines of code?
 
2021-12-02 11:20:38 AM  
1 vote:

UberDave: My *guess* (for conversation's sake) is that someone is farking lazy with the metric and the "awesome" programmers don't give a fark about anything except how clever they were for coming up with some "googlesque" recursive nonsense.


It is more that AIs are really good at sussing out patterns that humans don't see.  You may think "I remove the Race column and all is good", but the AI just merrily recreates racism without ever knowing what race is.  They don't know what anything means.  They are just comparing sets of numbers and then making predictions based on the patterns in those numbers.  But since the numbers have no meaning to them, they have to look at previous decisions made by humans to sort the numbers.  If those previous decisions were made by utterly racist shiatweasels - and they almost certainly were - the AI just learns to send all the African-Americans to jail, despite not even knowing what African-Americans are.  Because it sees all the interrelated factors that define race in the previous decisions.

It isn't even new.  AIs meant to de-racist court decisions routinely set Whites free for murder while sending African Americans to the chair for jaywalking.  Or deny African-Americans life-saving medical help for a gaping chest wound (2 placebo aspirin and .05cc water) while expending All The Resources for a White guy with a hangnail.  The techbros conducting experiments went to sociologists and asked "Why Bull Connor?"  The response was basically "Did you just delete the Race column and think it was all good?" **nods**  "Jesus Christ, are you really that simple?" **nods**  The solution was to be very deliberate with what information to input and to include biases in certain columns to balance out the biases of the 350 years of Happy Funtime Racism baked into the previous decisions.  Most of the blatant racial farkwittery just went away then.  But if you aren't hiring a couple of sociologists to curate your data sets, you can produce Simon Legree effortlessly with any AI that uses previous data to learn how to sort its decisions (which is to say all of them).
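For anyone who wants to see the "just delete the Race column" failure in code, here is a minimal sketch of the proxy effect described above. It assumes NumPy and scikit-learn are available, and every feature name and number is invented for illustration: the protected attribute is never given to the model, but because the proxy features correlate with it and the labels encode the old biased decisions, the predictions split by group anyway.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Hidden protected attribute -- never shown to the model.
group = rng.integers(0, 2, size=n)

# Proxy features that correlate with the protected attribute
# (neighborhood and income, as in the comment above). Units are arbitrary.
neighborhood = group + rng.normal(0.0, 0.3, size=n)
income = 5.0 - 1.5 * group + rng.normal(0.0, 0.5, size=n)

# "Historical decisions" used as labels: biased against group 1
# regardless of what the individual actually did.
underlying_risk = rng.normal(0.0, 1.0, size=n)
historical_label = (underlying_risk + 1.5 * group > 1.0).astype(int)

# Train WITHOUT the protected column -- proxies only.
X = np.column_stack([neighborhood, income])
model = LogisticRegression().fit(X, historical_label)
pred = model.predict(X)

# The bias comes back anyway, because the proxies encode the group.
print("predicted positive rate, group 0:", pred[group == 0].mean())
print("predicted positive rate, group 1:", pred[group == 1].mean())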
 
2021-12-02 2:04:17 PM  
1 vote:

phalamir: That isn't how it works.  An AI is given a bunch of previous data.  Using the inputs it picks an output.  It is then told what the correct answers are.  Or, more precisely, it is told how closely all of its decisions together match all the previous decisions together.


Yup - and when the training dataset has harsher sentences for poorer (usually, less white) defendants, and "sees" more crime in poorer areas, especially in urban areas (so, generally less white areas), and the "right" answer is "what sentence was handed down" (generally more favorable to richer and whiter defendants)...

It's almost as though the system itself is a little (or more) racist, don't you think? Like there's some element of racism that isn't outright quantified in our current metrics, but which rears its ugly head basically everywhere?

Nahhhh, that's crazy talk.

// Garbage In, Garbage Out
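To make the quoted training loop concrete, here is a hedged sketch in plain NumPy, with every quantity invented for illustration: the only score the model ever gets is how closely its outputs match the historical sentences, so gradient descent converges toward the historical pattern, flat group penalty and all, even though the protected attribute is never used as a feature. The loss never asks whether those sentences were fair, only whether they are matched.

import numpy as np

rng = np.random.default_rng(1)
n = 5_000

offense_severity = rng.uniform(0.0, 10.0, size=n)    # what "should" drive the sentence
group = rng.integers(0, 2, size=n)                   # protected attribute, NOT a feature
neighborhood = group + rng.normal(0.0, 0.3, size=n)  # proxy that IS a feature

# Historical sentences: severity plus a flat extra penalty for group 1.
historical_sentence = 2.0 * offense_severity + 3.0 * group + rng.normal(0.0, 1.0, size=n)

# Linear model over (severity, neighborhood, bias term); gradient descent on squared
# error, i.e. "how closely do my decisions match the previous decisions".
X = np.column_stack([offense_severity, neighborhood, np.ones(n)])
w = np.zeros(3)
for _ in range(5_000):
    residual = X @ w - historical_sentence
    w -= 0.02 * (X.T @ residual) / n

pred = X @ w
# Group 1 comes out noticeably higher, purely via the proxy and the biased labels.
print("mean predicted sentence, group 0:", round(pred[group == 0].mean(), 2))
print("mean predicted sentence, group 1:", round(pred[group == 1].mean(), 2))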
 
2021-12-02 4:42:33 PM  
1 vote:

Geotpf: Um, poorer areas do have more violent crime than richer ones.  (And poor neighborhoods are more likely to be heavily minority than rich ones.)  Sounds like the software is working.  Not sure what the issue here is.


The AI "predicts" more crime in areas of higher population density and lower incomes which, surprise, are frequently also predominantly minority areas. Has anyone looked at the crime rates in those areas before the AI was implemented? I'd bet that those were areas with higher crime rates. The AI might suggest that there will be more crimes in those areas but it doesn't send out the cops.

Before the computers were involved people could look at crime rates and decide, "Gee, there are an awful lot of violent crimes occurring in this neighborhood. Maybe we should consider a higher police presence?" They didn't consider how the police would treat the residents, just that maybe having more cops on patrol might be a deterrent.

Using actuarial tables you can find patterns in the data to help in solving a problem. This was done during the cholera outbreak in 19th century London to track down the source of the pathogen. We still do it today in our pandemic contact tracing. Why is doing something similar for policing considered so offensive? Just as you don't fish where the fish aren't biting, you don't send cops to places where there is no or little crime. It is a waste of resources.

Please keep in mind, I am in no way saying that police presence is always a good thing. It can too often exacerbate a tenuous situation and can lead to horrendous outcomes from trigger-happy, nervous, paranoid, racist, or poorly trained (frequently all five at once) cops. And I am not equating correlation with causation. But you can't dismiss what the AI is recommending unless you look at the historical crime statistics and determine if they have any validity.
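As a rough illustration of the "actuarial" allocation described above (and pretty much the "two lines of code" another commenter joked about), here is a hedged sketch with invented district names and counts: patrols are assigned in proportion to historical reported incidents. Note that the input is reports, which already reflect where police were looking in the first place, which is the concern raised elsewhere in this thread.

from collections import Counter

# Historical incident reports (district -> count). These are *reports*,
# so they partly encode past patrol decisions, not just underlying crime.
reports = Counter({"Downtown": 420, "Riverside": 310, "Hillcrest": 95, "Oakwood": 40})

patrols_available = 20
total = sum(reports.values())

# Allocate patrols proportionally to each district's share of historical reports.
allocation = {
    district: round(patrols_available * count / total)
    for district, count in reports.most_common()
}
print(allocation)  # {'Downtown': 10, 'Riverside': 7, 'Hillcrest': 2, 'Oakwood': 1}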
 
Displayed 7 of 7 comments


This thread is closed to new comments.
