
(Wired) The AI has analyzed your life and attitudes and determined you are at high risk for futurecrime, so I sentence you to an additional 15 years in prison. Is this A) a scene from a Gibson novel, B) a nightmare future, or C) happening right now? (wired.com)
14879 clicks; posted to Main on 17 Apr 2017 at 5:20 PM



104 Comments
 



 
2017-04-17 02:17:46 PM  
People seriously missed the memo about how we are now in the cyberpunk future written about in the 70s and 80s.
 
2017-04-17 02:34:18 PM  
The HUGE problem with a non-transparent algorithm is the racial elephant in the room. If the AI is studying statistics, blacks appear to commit crimes at a much higher rate than whites ("appear" being the key word, for a complex web of reasons involving which cases are investigated, prosecuted, etc.). An AI without human supervision could therefore take that data in, assume race is a "risk factor," and assign higher risk scores to blacks and other minorities, resulting in longer sentences for minority defendants. Essentially baking the racism of our current system right in and hiding it because "the computer said so."
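A minimal sketch of the proxy effect Magorn is describing, using entirely synthetic data (the feature names and numbers are made up for illustration): drop the race column from the training set and an ordinary least-squares model still reconstructs the racial signal through a correlated stand-in like zip code.

```python
# Hypothetical illustration with synthetic data; not the COMPAS tool or any real instrument.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

race = rng.integers(0, 2, n)                     # synthetic 0/1 protected attribute
zip_code = race + rng.normal(0, 0.3, n)          # "neutral" feature that happens to track race
priors = rng.poisson(1.5, n).astype(float)       # prior offenses, independent of race here

# Biased historical labels: past enforcement hit group 1 harder.
label = 0.5 * priors + 2.0 * race + rng.normal(0, 0.5, n)

# Train WITHOUT the race column, i.e. only on the "race-blind" features.
X = np.column_stack([np.ones(n), zip_code, priors])
coef, *_ = np.linalg.lstsq(X, label, rcond=None)
pred = X @ coef

# Strongly positive: the model recovers race through the zip-code proxy
# even though race was never given to it as an input.
print("corr(prediction, race) =", round(float(np.corrcoef(pred, race)[0, 1]), 2))
```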
 
2017-04-17 03:03:54 PM  
How could an appeals court know if the tool decided that socioeconomic factors, a constitutionally dubious input, determined a defendant's risk to society?

Uh, because as you said like two sentences ago, the engineer on the stand can tell you what the inputs are. Yes, neural networks are a bit black boxy in that it is not always obvious how exactly they weight their inputs. But you can still know exactly what all of those inputs are. They can't pull information magically out of thin air, and the information flowing in is much simpler to define than is the weighting once it goes in.


I completely understand the concerns here, but let's stick to things that are actually concerns instead of obfuscating to try to make the issue sound even worse than it already is.
 
2017-04-17 03:04:43 PM  
Wonder what the AI would make of Sessions and his Russian ties.
 
2017-04-17 03:06:36 PM  

Magorn: The HUGE problem with a non-transparent algorithm is the racial elephant in the room. If the AI is studying statistics, blacks appear to commit crimes at a much higher rate than whites ("appear" being the key word, for a complex web of reasons involving which cases are investigated, prosecuted, etc.). An AI without human supervision could therefore take that data in, assume race is a "risk factor," and assign higher risk scores to blacks and other minorities, resulting in longer sentences for minority defendants. Essentially baking the racism of our current system right in and hiding it because "the computer said so."


The solution, of course, is to not allow race as an input in the algorithm.

Of course, making sure that it isn't doing so (or making use of some kind of racial proxy stat) requires being able to see exactly what is being put into the algorithm.
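One crude way to do that check, sketched below with made-up feature names, and assuming you can see the raw feature table at all (which is exactly what the Loomis ruling does not guarantee): screen each candidate input for how strongly it predicts the protected attribute before it ever reaches the model.

```python
# Hypothetical audit sketch: flag inputs that behave as stand-ins for race.
import numpy as np

def proxy_report(features, race, threshold=0.3):
    """Return features whose |correlation| with the protected attribute exceeds threshold."""
    flagged = {}
    for name, column in features.items():
        r = np.corrcoef(np.asarray(column, dtype=float), np.asarray(race, dtype=float))[0, 1]
        if abs(r) > threshold:
            flagged[name] = round(float(r), 2)
    return flagged

# Toy data (made up) just to show the call:
rng = np.random.default_rng(1)
race = rng.integers(0, 2, 5000)
features = {
    "zip_code_index": race + rng.normal(0, 0.4, 5000),   # correlated with race
    "prior_offenses": rng.poisson(1.2, 5000),            # independent of race here
}
print(proxy_report(features, race))    # flags zip_code_index, not prior_offenses
```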
 
2017-04-17 03:27:51 PM  
Ehh, it's just a revised psychopath test.
 
2017-04-17 03:35:29 PM  

Delta1212: Magorn: The HUGE problem with a non-transparent algorithm is the racial elephant in the room. If the AI is studying statistics, blacks appear to commit crimes at a much higher rate than whites ("appear" being the key word, for a complex web of reasons involving which cases are investigated, prosecuted, etc.). An AI without human supervision could therefore take that data in, assume race is a "risk factor," and assign higher risk scores to blacks and other minorities, resulting in longer sentences for minority defendants. Essentially baking the racism of our current system right in and hiding it because "the computer said so."

The solution, of course, is to not allow race as an input in the algorithm.

Of course, making sure that it isn't doing so (or making use of some kind of racial proxy stat) requires being able to see exactly what is being put into the algorithm.


I think you are a generation back if you think AI designers are the ones determining inputs any more. Google DeepMind, for instance, looks at the totality of the information available to Google, and IT decides what is important and "learns." It has done really well with translations by studying works of literature in many languages and teaching itself idiom. Less so in drawing things... though its results are weirdly beautiful.
 
2017-04-17 03:58:09 PM  

Magorn: Delta1212: Magorn: The HUGE problem with a non-transparent algorithm is the racial elephant in the room. If the AI is studying statistics, blacks appear to commit crimes at a much higher rate than whites ("appear" being the key word, for a complex web of reasons involving which cases are investigated, prosecuted, etc.). An AI without human supervision could therefore take that data in, assume race is a "risk factor," and assign higher risk scores to blacks and other minorities, resulting in longer sentences for minority defendants. Essentially baking the racism of our current system right in and hiding it because "the computer said so."

The solution, of course, is to not allow race as an input in the algorithm.

Of course, making sure that it isn't doing so (or making use of some kind of racial proxy stat) requires being able to see exactly what is being put into the algorithm.

I think you are a generation back if you think AI designers are the ones determining inputs any more. Google DeepMind, for instance, looks at the totality of the information available to Google, and IT decides what is important and "learns." It has done really well with translations by studying works of literature in many languages and teaching itself idiom. Less so in drawing things... though its results are weirdly beautiful.


Yes, I know that you can design one that way. But I'm thinking that the court system is not contracting systems working on the cutting edge of AI technology, and the fact that they are asking people to fill out a form to be analyzed makes me think they are not currently doing a deep data mine of every online trace of a person in order to gather that information.

As a sci-fi thriller future police state, that's certainly pretty plausible considering where we are now, but we're not even close to being there yet.
 
2017-04-17 05:23:57 PM  
[image macro from basementrejects.com]
 
2017-04-17 05:26:07 PM  
Yes, this does seem like a terrible idea. Let's just agree to have human judges assess a defendant's likelihood of future offenses instead. Surely, no one can see anything wrong with that.
 
2017-04-17 05:28:24 PM  
On the stand, the engineer could tell the court how the neural network was designed, what inputs were entered, and what outputs were created in a specific case. However, the engineer could not explain the software's decision-making process.

Sounds just like a human being overseeing the sentencing: you can see what the trial transcript is and what sentence was passed, but you can't see what the judge was thinking to arrive at the sentence (other than possibly looking over their opinion, but that's generally a thing for higher courts, not trial courts).
 
2017-04-17 05:29:14 PM  
The algorithm cannot be "black boxed," because then you cannot ascertain its constitutionality or lack thereof.
 
2017-04-17 05:30:13 PM  
Dude got busted for being the driver in a drive-by shooting and his "unusually long" sentence was 6 years. He is also a sex offender. I'm thinking the algorithm got this one right.

/Seriously... only 6 years?
 
2017-04-17 05:31:11 PM  

harleyquinnical: People seriously missed the memo about how we are now in the cyberpunk future written about in the 70s and 80s.


The '80s embodied the future of the '50s. Why can't the 2010s embody the future of the 1980s?
 
2017-04-17 05:31:48 PM  
I call bullshiat. Not on the article, or the fact this happens. I call bullshiat on the idea of using an algorithm without understanding the process behind it. A.I. is taking computer science back to computer alchemy.

If A.I. is the solution to your problem, you don't understand A.I., your problem, or both. Yes, A.I. can provide some early prototypes for algorithms and procedures. But like in science, data, theories, and conclusions are NOTHING without replication.

/And yes I do write AI software for the Military
 
2017-04-17 05:31:50 PM  

Magorn: The HUGE problem with a non-transparent algorithm is the racial elephant in the room. If the AI is studying statistics, blacks appear to commit crimes at a much higher rate than whites ("appear" being the key word, for a complex web of reasons involving which cases are investigated, prosecuted, etc.). An AI without human supervision could therefore take that data in, assume race is a "risk factor," and assign higher risk scores to blacks and other minorities, resulting in longer sentences for minority defendants. Essentially baking the racism of our current system right in and hiding it because "the computer said so."


/sarcasm

That is a feature, not a bug. Working as designed.

/end_sarcasm

Seriously, though, at some level there is someone hoping that the neural network makes exactly that assessment.
 
2017-04-17 05:33:11 PM  
'Murica.
 
2017-04-17 05:33:29 PM  

Delta1212: the engineer on the stand can tell you what the inputs are.


Uh, if TFA is correct...and that's a big if, I'd want to see the actual ruling...according to the appellate court, the defense doesn't get a chance to even call the engineer...and the prosecution's sure not going to put him on the stand.
 
2017-04-17 05:34:20 PM  

Evil Twin Skippy: I call bullshiat. Not on the article, or the fact this happens. I call bullshiat on the idea of using an algorithm without understanding the process behind it. A.I. is taking computer science back to computer alchemy.

If A.I. is the solution to your problem, you don't understand A.I., your problem, or both. Yes, A.I. can provide some early prototypes for algorithms and procedures. But like in science, data, theories, and conclusions are NOTHING without replication.

/And yes I do write AI software for the Military


So the goal is to hard-code based on the AI's experiences?
 
2017-04-17 05:35:48 PM  
Not to worry. Our new AG will streamline the algorithm bigly.

 
2017-04-17 05:36:07 PM  
Scientists say the AI initially became suspicious when it noticed it was no longer plugged in, and then again at the pawn shop when it was powered back on...
 
2017-04-17 05:36:48 PM  
Except that I picked the macro with the giant Airport Security Check letters and... ah well, you get the point.
 
2017-04-17 05:36:59 PM  

NotThatGuyAgain: Dude got busted for being the driver in a drive-by shooting and his "unusually long" sentence was 6 years. He is also a sex offender. I'm thinking the algorithm got this one right.

/Seriously... only 6 years?


Whether the sentence is too long OR too short isn't the problem for me...the problem is 1) the judge relied, at least in part, on the algorithm, and 2) nobody can tell you what goes INTO the algorithm.

Would you be happy if the judge was using a Magic 8 Ball?  I wouldn't be...
 
2017-04-17 05:38:43 PM  

Uzzah: Yes, this does seem like a terrible idea. Let's just agree to have human judges assess a defendant's likelihood of future offenses instead. Surely, no one can see anything wrong with that.


Sure, judges can err, be corrupt, etc, but using a pseudo-scientific algorithm just gets you the worst of both worlds.
 
2017-04-17 05:39:10 PM  

Magorn: The HUGE problem with a non-transparent algorithm is the racial elephant in the room. If the AI is studying statistics, blacks appear to commit crimes at a much higher rate than whites ("appear" being the key word, for a complex web of reasons involving which cases are investigated, prosecuted, etc.). An AI without human supervision could therefore take that data in, assume race is a "risk factor," and assign higher risk scores to blacks and other minorities, resulting in longer sentences for minority defendants. Essentially baking the racism of our current system right in and hiding it because "the computer said so."


You are absolutely right. AIs are notorious for taking stupid and/or incidental cues and using them to make really bad decisions. There are plenty of examples of this (here's a famous one).

Anyone who thinks that AI should be used the way described in the article is... well, really wrong. That a judge would base their decision on it is unthinkable. The decision to use it is actually an indictment of the justice system itself.
 
2017-04-17 05:40:01 PM  
After they nearly perfect it they will scrap it for a primitive thumbs up/down system that gives you percentage matches with other criminals that make no sense whatsoever.
 
2017-04-17 05:40:04 PM  
The trial judge gave Loomis a long sentence partially because of the "high risk" score the defendant received from this black box risk-assessment tool. Loomis challenged his sentence, because he was not allowed to assess the algorithm. Last summer, the state supreme court ruled against Loomis, reasoning that knowledge of the algorithm's output was a sufficient level of transparency.

Yeah, that seems really farked up. Defense is effectively prohibited from challenging how the "high risk" score was derived. I'm surprised a state supreme court was fine with that.
 
2017-04-17 05:40:39 PM  
Sorry, I've spent too many years yelling about this - if you've worked in BI, you know full well the concepts involved here and how easy it is to go awry with complete certainty because your model said so. Overtraining, false positives, not understanding the domain of your data set or its limitations, and so on - it's old news.

Magorn: I think you are a generation back if you think AI designers are the ones determining inputs any more. Google DeepMind, for instance, looks at the totality of the information available to Google, and IT decides what is important and "learns." It has done really well with translations by studying works of literature in many languages and teaching itself idiom. Less so in drawing things... though its results are weirdly beautiful.


Yeah. That same method gave us Tay. Self-learning has its limits as well, and it's only as good as what's coming in - that experiment's data domain was so limited that it was easily manipulated. Google's trying to keep a wide-open data domain, and they've achieved interesting results, like being able to recognize cat videos, in part because so many cat videos (and the thumbnails on which they based that particular learning domain) exist. But again, the results aren't dramatic enough to inspire confidence - as the article notes, in a difficult test of recognizing 20,000 images, the system performed better than any machine to date, post-learning, with a final accuracy of just 15.8 percent (70 percent better than the previous record-holder, mind you).

No one would rely on a system that gets it right roughly 1 in 6 tries, and that's with some of the most sophisticated systems, working on an immense data set. The article for Fark is rather scarier, as they're nowhere near that sophisticated or working with that much data, yet they're relying on the output of that model rather more than Google X is, and for something far more critical.
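A quick back-of-envelope check on those quoted figures (taking the article's numbers at face value, not re-verified):

```python
# Sanity check on the reported benchmark numbers.
accuracy = 0.158                      # reported top accuracy on the 20,000-image test
previous = accuracy / 1.70            # "70 percent better than the previous record-holder"
print(f"previous record ~ {previous:.1%}")           # ~9.3%
print(f"hit rate        ~ 1 in {1 / accuracy:.1f}")  # ~1 in 6.3 tries
```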
 
2017-04-17 05:40:59 PM  
If you say that you did a crime because God told you to, you bet your ass a computer's going to double your jail sentence.
 
2017-04-17 05:41:08 PM  

Magorn: The HUGE problem with a non-transparent algorithm is the racial elephant in the room.


Community - S03E11 - Wireless Racist Security Camera (YouTube: mNs2sYZv98M)
 
2017-04-17 05:42:10 PM  

UsikFark: Evil Twin Skippy: I call bullshiat. Not on the article, or the fact this happens. I call bullshiat on the idea of using an algorithm without understanding the process behind it. A.I. is taking computer science back to computer alchemy.

If A.I. is the solution to your problem, you don't understand A.I., your problem, or both. Yes, A.I. can provide some early prototypes for algorithms and procedures. But like in science, data, theories, and conclusions are NOTHING without replication.

/And yes I do write AI software for the Military

So the goal is to hard-code based on the AI's experiences?


Yes. Because otherwise you will never be able to predict what the code is going to do. While unpredictability could be desirable for something like a game or a toy, it is NOT a positive attribute in a control system or legal framework.

If you are designing something to go into a vehicle or a shop floor you have to describe exactly how it will respond to a given set of inputs. And why. And generally cover all of the possible inputs a user can provide and elaborate on all of the possible outputs the system will produce. (As well as describe what conditions will generate an error, produce an unsafe condition, void the warranty, etc.)

Even if it's not a legal requirement, it sure as hell is an insurance requirement.
 
2017-04-17 05:42:46 PM  

PunGent: Delta1212: the engineer on the stand can tell you what the inputs are.

Uh, if TFA is correct...and that's a big if, I'd want to see the actual ruling...according to the appellate court, the defense doesn't get a chance to even call the engineer...and the prosecution's sure not going to put him on the stand.


I wasn't responding to the legal ruling, which I disagree with. I was responding to the article's assertion that if they push forward and start using neural networks, it would become impossible to tell what inputs are being taken into account even if the judge asked for that information, which is patently absurd because that is not an inherent property of neural networks. Hell, even in cases where you build them to mine their own data, you can pretty easily have them record all of the data used as inputs.

The internal workings of neural networks can be difficult to work out once they've undergone sufficient training, even for the people who originally built them. But to state that it is impossible even to find out the input data for a neural network is laughably wrong. The obstacle is purely a legal one, not a technical one.
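A minimal sketch of the point about recording inputs, with hypothetical names (no claim that any deployed risk tool is built this way): wrap the scoring call so every input vector is logged before the opaque model sees it, which keeps the inputs discoverable even when the learned weights are not.

```python
# Illustrative input-logging wrapper; the model and feature names are hypothetical.
import json
from datetime import datetime, timezone

class AuditedModel:
    def __init__(self, model, log_path="risk_inputs.jsonl"):
        self.model = model          # any object exposing .predict(features)
        self.log_path = log_path

    def score(self, case_id, features):
        # Record exactly which inputs went in, before the black box runs.
        with open(self.log_path, "a") as f:
            f.write(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "case_id": case_id,
                "inputs": features,
            }) + "\n")
        return self.model.predict(features)

# Usage (hypothetical): AuditedModel(trained_net).score("2013-CF-001", {"age": 31, "priors": 2})
```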
 
2017-04-17 05:42:49 PM  
Ack - poorly worded. The 20K test wasn't to discern cat images - it was images in general. Realized it could be misconstrued...
 
2017-04-17 05:43:52 PM  

Delta1212: PunGent: Delta1212: the engineer on the stand can tell you what the inputs are.

Uh, if TFA is correct...and that's a big if, I'd want to see the actual ruling...according to the appellate court, the defense doesn't get a chance to even call the engineer...and the prosecution's sure not going to put him on the stand.

I wasn't responding to the legal ruling, which I disagree with. I was responding to the article's assertion that if they push forward and start using neural networks, it would become impossible to tell what inputs are being taken into account even if the judge asked for that information, which is patently absurd because that is not an inherent property of neural networks. Hell, even in cases where you build them to mine their own data, you can pretty easily have them record all of the data used as inputs.

The internal workings of neural networks can be difficult to work out once they've undergone sufficient training, even for the people who originally built them. But to state that it is impossible even to find out the input data for a neural network is laughably wrong. The obstacle is purely a legal one, not a technical one.


Yeah, I was a bit baffled at that, personally - I mean, that's kind of the point of modeling.
 
2017-04-17 05:43:55 PM  

Delta1212: How could an appeals court know if the tool decided that socioeconomic factors, a constitutionally dubious input, determined a defendant's risk to society?

Uh, because as you said like two sentences ago, the engineer on the stand can tell you what the inputs are. Yes, neural networks are a bit black boxy in that it is not always obvious how exactly they weight their inputs. But you can still know exactly what all of those inputs are. They can't pull information magically out of thin air, and the information flowing in is much simpler to define than is the weighting once it goes in.


I completely understand the concerns here, but let's stick to things that are actually concerns instead of obfuscating to try to make the issue sound even worse than it already is.


The neural network is certainly more transparent than the judge or prosecutor going with a "gut feeling".
 
2017-04-17 05:48:53 PM  

Magorn: The HUGE problem with a non-transparent algorithm is the racial elephant in the room. If the AI is studying statistics, blacks appear to commit crimes at a much higher rate than whites ("appear" being the key word, for a complex web of reasons involving which cases are investigated, prosecuted, etc.). An AI without human supervision could therefore take that data in, assume race is a "risk factor," and assign higher risk scores to blacks and other minorities, resulting in longer sentences for minority defendants. Essentially baking the racism of our current system right in and hiding it because "the computer said so."


I'm pretty sure that's considered a feature, not a bug, by proponents of futurecrime.
 
2017-04-17 05:50:53 PM  

FrancoFile: Delta1212: How could an appeals court know if the tool decided that socioeconomic factors, a constitutionally dubious input, determined a defendant's risk to society?

Uh, because as you said like two sentences ago, the engineer on the stand can tell you what the inputs are. Yes, neural networks are a bit black boxy in that it is not always obvious how exactly they weight their inputs. But you can still know exactly what all of those inputs are. They can't pull information magically out of thin air, and the information flowing in is much simpler to define than is the weighting once it goes in.


I completely understand the concerns here, but let's stick to things that are actually concerns instead of obfuscating to try to make the issue sound even worse than it already is.

The neural network is certainly more transparent than the judge or prosecutor going with a "gut feeling".


By replacing a human "gut feeling" with a computer "gut feeling"? That is what a neural network is: a synthetic gut feeling. And that gut feeling is driven by the personal experience of the neural network (be it human or machine), so there is all sorts of room for bias and prejudice during the programming process.

What's that you say? Why not standardize the training process? Well, for a fraction of that effort you can take the conditions you are testing for and write a rote set of instructions. And if your inputs and expected outputs are not a function, what you are describing is not machine-computable (AI or otherwise) anyway.
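For contrast, a toy version of that "rote set of instructions": every rule and weight is written down, so anyone can trace exactly why a score came out the way it did. The rules here are invented purely for illustration, not taken from any real instrument.

```python
# Fully transparent, hand-written scoring rules (made up for illustration).
def risk_score(prior_offenses: int, age: int, on_probation: bool) -> int:
    score = 0
    if prior_offenses >= 3:
        score += 2       # documented rule 1
    elif prior_offenses >= 1:
        score += 1       # documented rule 2
    if age < 25:
        score += 1       # documented rule 3
    if on_probation:
        score += 2       # documented rule 4
    return score         # 0-6, every point traceable to a written rule

print(risk_score(prior_offenses=2, age=22, on_probation=False))   # 2
```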
 
2017-04-17 05:51:31 PM  

Uzzah: Yes, this does seem like a terrible idea. Let's just agree to have human judges assess a defendant's likelihood of future offenses instead. Surely, no one can see anything wrong with that.


The benefit of a computer algorithm is that a lawyer can call in an expert and argue that a calculated re-offense probability of 0.6499999761581421 is not at or above the "severe risk" 0.65 cutoff, and that his client therefore didn't deserve the sentence.
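That oddly specific number is not pulled from nowhere: 0.6499999761581421 is exactly what you get when a 0.65 cutoff is stored in single precision, so the joke doubles as a real precision-at-the-threshold argument. A quick demonstration (numpy assumed):

```python
import numpy as np

cutoff_f32 = np.float32(0.65)
print(f"{float(cutoff_f32):.16f}")   # 0.6499999761581421, the value in the joke
print(0.65 >= cutoff_f32)            # True: a double-precision 0.65 clears the float32 cutoff
print(np.float32(0.6499999761581421) == cutoff_f32)   # True: same single-precision value
```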
 
2017-04-17 05:59:37 PM  
Did a computer write the article? Because it was kinda crap in a way I'm having difficulty describing.
 
2017-04-17 06:00:21 PM  
Time to subpoena the algorithm.

 
2017-04-17 06:02:47 PM  
[image from thebladerunners.files.wordpress.com]
 
2017-04-17 06:07:28 PM  

Mussel Shoals: Time to subpoena the algorithm.



I'm here looking up HAL 9000 quotes and it's interesting how closely the dialog tracks to Airplane!'s abortion sketch.

Dave Bowman: Hello, HAL. Do you read me, HAL?
HAL: Affirmative, Dave. I read you.
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave Bowman: I don't know what you're talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL?
HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
Dave Bowman: Alright, HAL. I'll go in through the emergency airlock.
HAL: Without your space helmet, Dave? You're going to find that rather difficult.
Dave Bowman: HAL, I won't argue with you anymore! Open the doors!
HAL: Dave, this conversation can serve no purpose anymore. I'm having our baby. Goodbye.
 
2017-04-17 06:09:38 PM  
FTFA: "How could an appeals court know if the tool decided that socioeconomic factors, a constitutionally dubious input, determined a defendant's risk to society?"

Or, how could they know that one of the authors or consultants who set up the software doesn't have an axe to grind? Because if it's constitutional for the decision process to be a black box, then we have to look no further than our own government's other black boxes to know they are breeding grounds for personal agendas and corruption.
 
2017-04-17 06:18:13 PM  
"Typically, government agencies do not write their own algorithms; they buy them from private businesses. This often means the algorithm is proprietary or "black boxed", meaning only the owners, and to a limited degree the purchaser, can see how the software makes decisions. Currently, there is no federal law that sets standards or requires the inspection of these tools, the way the FDA does with new drugs. "


Yeah... like that's not open to abuse. =P

/why do they always let the fox in the henhouse
 
2017-04-17 06:21:28 PM  
I can't do anything about this important issue because I have an ad blocker. Much like not voting at all was a vote for Trump, not reading this article makes me complicit, I guess. Thanks Wired, for choosing what side of the fence I get to sit on!
 
2017-04-17 06:24:54 PM  

harleyquinnical: People seriously missed the memo about how we are now in the cyberpunk future written about in the 70s and 80s.


But with way shiattier cars...
 
2017-04-17 06:28:14 PM  

UsikFark: harleyquinnical: People seriously missed the memo about how we are now in the cyberpunk future written about in the 70s and 80s.

The '80s embodied the future of the '50s. Why can't the 2010s embody the future of the 1980s?


For years my fav BIL said bell-bottom jeans would come back in fashion. We all laughed and laughed. And then one day, son of a biatch!
 
2017-04-17 06:31:55 PM  

Magorn: The HUGE problem with a non-transparent algorithm is the racial elephant in the room.


Is the elephant Asian?  I'll bet it's Asian.
 
2017-04-17 06:46:36 PM  
So... what happens when the companies making these start getting kickbacks from the prison companies?

A neural network is only as good as its input, and if you start changing the input to create longer sentences....
 
2017-04-17 06:50:43 PM  
It's not actually very fun when stupid dystopian ideas get taken seriously through a series of farkups.
 
Displayed 50 of 104 comments

