(Vice)   What does white Obama have to tell us about bias in machine learning?   (vice.com)
    More: Interesting, Racism, Artificial intelligence, pixelated image of Barack Obama, image of a white man, Machine learning, racial bias, Colored, Face Depixelizer  

1302 clicks; posted to Geek » on 23 Jun 2020 at 1:23 PM (13 weeks ago)



48 Comments
 
 
2020-06-23 1:18:34 PM  
It could tell us that the lighting in an image plays an important part in an AI's ability to take an extremely low-resolution image, one with too little information in it, and reconstruct what it may or may not have originally looked like when upscaled.

AI constructs like this have to work with the information they're given.  The upscaled image in TFA clearly isn't Obama, but the skin tone is similar based on the lighting conditions and the dearth of information in what looks to be a, what, 32x32 corner-of-a-postage-stamp image?  As far as I'm concerned, while the end product obviously doesn't look like Obama, it looks like a pretty good guesstimate reconstruction given what it had to work with.
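To put a rough number on that information loss, here is a minimal sketch (Python with Pillow; the file name and original size are assumptions, not from the article) of how little pixel data survives a 32x32 downsample, and why upscaling it back can only interpolate:

```python
# A minimal sketch of how much information a 32x32 downsample throws away.
# Assumes Pillow is installed and "portrait.jpg" is any face photo on hand.
from PIL import Image

img = Image.open("portrait.jpg")              # e.g. a 1024x1024 portrait
small = img.resize((32, 32), Image.BILINEAR)  # the "corner-of-a-postage-stamp" version

orig_px = img.width * img.height
small_px = 32 * 32
print(f"original: {orig_px:,} pixels, downsampled: {small_px:,} pixels")
print(f"only {small_px / orig_px:.4%} of the pixel data remains")

# Upscaling the 32x32 version back up just interpolates; the detail is gone for good.
blurry = small.resize(img.size, Image.BILINEAR)
blurry.save("blurry_guess.png")
```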
 
2020-06-23 1:30:46 PM  
Honestly, if the fact that the computers that WE program and train end up having some serious problems with racial bias* isn't the best evidence for that racial bias being systematically embedded in our society, I don't know what is.

*bias is kind of a shorthand when it comes to machines...  Not like the machine is making personal judgments based upon race... more of an improper weighting of reward structures and a statistical misrepresentation of input datastreams, which were constructed by humans.

/Keep in mind, the majority of testing and training done with this kind of software is done with famous people, the majority of whom are... drumroll...
//White
 
2020-06-23 1:33:30 PM  
Was really hoping for DeepNudes to provide some outputs of Obama before I passed judgement, but they're apparently gone now.
 
2020-06-23 1:35:52 PM  
They should just use the computer program from this documentary.

CSI Zoom Enhance (YouTube: I_8ZH1Ggjk0)
 
2020-06-23 1:36:18 PM  
Algorithms tend to use weighted functions, which are inherently biased toward majority groups.

I'm sure that Chinese algorithms would produce yet another result when asked to extrapolate.

And African algorithms.


The other thing is that since you're asking the algorithm to extrapolate to an image we are all familiar with in high detail, it's not a controlled test - a better test would be to do random sampling of people of mixed race and ask the algorithm and humans to perform the same tasks in terms of assigning race and guessing features.
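For what it's worth, the "weighted toward the majority group" effect is easy to show with a toy example. This sketch (all numbers invented for illustration) fits a single value by minimizing average error over an imbalanced dataset; the result lands near the majority group and far from the minority:

```python
# Toy illustration (made-up numbers): a single scalar "skin tone" value fit by
# minimizing mean squared error over an imbalanced training set.
import numpy as np

rng = np.random.default_rng(0)
majority = rng.normal(loc=0.8, scale=0.05, size=900)  # 900 lighter-toned samples
minority = rng.normal(loc=0.4, scale=0.05, size=100)  # 100 darker-toned samples
data = np.concatenate([majority, minority])

# The MSE-optimal constant prediction is simply the mean of the data.
best_guess = data.mean()
print(f"model's single best guess: {best_guess:.2f}")            # ~0.76, close to the majority
print(f"error on majority group:  {abs(best_guess - 0.8):.2f}")
print(f"error on minority group:  {abs(best_guess - 0.4):.2f}")  # much larger
```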
 
2020-06-23 1:37:28 PM  
Todd Howard? Is that you?
 
2020-06-23 1:39:47 PM  

Psychopusher: It could tell us that the lighting in an image plays an important part in an AI's ability to take an extremely low-resolution image, one with too little information in it, and reconstruct what it may or may not have originally looked like when upscaled.

AI constructs like this have to work with the information they're given.  The upscaled image in TFA clearly isn't Obama, but the skin tone is similar based on the lighting conditions and the dearth of information in what looks to be a, what, 32x32 corner-of-a-postage-stamp image?  As far as I'm concerned, while the end product obviously doesn't look like Obama, it looks like a pretty good guesstimate reconstruction given what it had to work with.


No, I do believe that the present standard of behavior is to burn their institution to the ground.
Finally, my Viking cultural heritage is being accepted.
 
2020-06-23 1:40:45 PM  
We've basically invented computers that fall for optical illusions. The computer can't tell what color it is, it just knows the data about the pixel. If pixel A and pixel B have the same data, they're the same color. Even if Pixel A comes from a photo of Obama and Pixel B comes from Powder the magic Albino.

[Fark user image]
 
2020-06-23 1:43:39 PM  
Obviously any racial bias in the algorithms requires reprogramming to account for genital size.
 
2020-06-23 1:45:59 PM  
It's almost like this algorithm is still a ways off from working as one would expect.
 
2020-06-23 1:47:24 PM  
The AI can put eyes, a nose and a mouth under those pixels, but that's as far as it goes.

Here's the test:
show humans the pixelated pic first, then both original pic and the AI guess, and I'll bet we pick right every time.
 
2020-06-23 1:47:46 PM  
Holy shiat, is that Tim McVeigh?
 
2020-06-23 1:51:01 PM  

Psychopusher: It could tell us that the lighting in an image plays an important part in an AI's ability to take an extremely low-resolution image, one with too little information in it, and reconstruct what it may or may not have originally looked like when upscaled.

AI constructs like this have to work with the information they're given.  The upscaled image in TFA clearly isn't Obama, but the skin tone is similar based on the lighting conditions and the dearth of information in what looks to be a, what, 32x32 corner-of-a-postage-stamp image?  As far as I'm concerned, while the end product obviously doesn't look like Obama, it looks like a pretty good guesstimate reconstruction given what it had to work with.


And therein lies the problem.  Assume that the police are using an algorithm that has been fed nothing but people of color to work with.  No matter the fuzzy input, you'll get back a person of color.  Now realize that any algorithm the police are using will nearly certainly be programmed to return people of color, at least in the US.  Lots of false arrests would be easy to make and likely plenty would be killed in the process.

Do you see the issue now?
 
2020-06-23 1:51:04 PM  

BeesNuts: Honestly, if the fact that the computers that WE program and train end up having some serious problems with racial bias* isn't the best evidence for that racial bias being systematically embedded in our society, I don't know what is.

*bias is kind of a shorthand when it comes to machines...  Not like the machine is making personal judgments based upon race... more of an improper weighting of reward structures and a statistical misrepresentation of input datastreams, which were constructed by humans.

/Keep in mind, the majority of testing and training done with this kind of software is done with famous people, the majority of whom are... drumroll...
//White


They just fed the darn thing too many pictures of white people.
 
2020-06-23 1:52:56 PM  
They should rename it to wimp-lo.
 
2020-06-23 1:55:27 PM  

Explodo: Psychopusher: It could tell us that the lighting in an image plays an important part in an AI's ability to take an extremely low-resolution image, one with too little information in it, and reconstruct what it may or may not have originally looked like when upscaled.

AI constructs like this have to work with the information they're given.  The upscaled image in TFA clearly isn't Obama, but the skin tone is similar based on the lighting conditions and the dearth of information in what looks to be a, what, 32x32 corner-of-a-postage-stamp image?  As far as I'm concerned, while the end product obviously doesn't look like Obama, it looks like a pretty good guesstimate reconstruction given what it had to work with.

And therein lies the problem.  Assume that the police are using an algorithm that has been fed nothing but people of color to work with.  No matter the fuzzy input, you'll get back a person of color.  Now realize that any algorithm the police are using will nearly certainly be programmed to return people of color, at least in the US.  Lots of false arrests would be easy to make and likely plenty would be killed in the process.

Do you see the issue now?


I think you got your knickers twisted the wrong way on this one - if they used this algorithm, they'd arrest Pete Buttigieg after Obama robbed a liquor store.
 
2020-06-23 2:12:16 PM  
Pixellating the image removes LOTS of data so the result image will look like a face. That's all. It will bear little resemblance to the original face because the data are gone. Our brains interpret faces at a very subtle level. We can tell identical twins apart. This algorithm will never produce a face that looks like the original to us because, again, the data are gone.
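That many-to-one collapse can be made concrete: once blocks of pixels are averaged together, very different originals become literally indistinguishable. A toy sketch (made-up 4x4 grayscale patches):

```python
# Two very different 4x4 grayscale patches that pixelate to the exact same value.
import numpy as np

patch_a = np.array([[  0, 255,   0, 255],
                    [255,   0, 255,   0],
                    [  0, 255,   0, 255],
                    [255,   0, 255,   0]])   # checkerboard

patch_b = np.full((4, 4), 127.5)             # flat gray

# "Pixelation" here is just averaging each patch down to a single pixel.
print(patch_a.mean())  # 127.5
print(patch_b.mean())  # 127.5 -- identical, so no algorithm can tell them apart afterward
```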
 
2020-06-23 2:13:53 PM  

Explodo: And therein lies the problem. Assume that the police are using an algorithm that has been fed nothing but people of color to work with. No matter the fuzzy input, you'll get back a person of color. Now realize that any algorithm the police are using will nearly certainly be programmed to return people of color, at least in the US. Lots of false arrests would be easy to make and likely plenty would be killed in the process.

Do you see the issue now?


Oh, that issue I see, and already knew about; it comes back to the old garbage-in, garbage-out scenario. (Note: This is not referring to black people as "garbage", only that, if you feed an algorithm bad or biased data, you're going to get bad or biased data back out of it.  This has been a problem for as long as it has attempted to be a solution.)  In specific use cases like this, the police should either A) stop using this shiat because it doesn't work, or B) have the people that developed the software feed it better datasets and train it better, and in the meantime, stop using this shiat because it doesn't work, and don't start again until it can be proven to do so.  This isn't so much about a problem with the AI as a problem with the humans that trained it.
 
2020-06-23 2:16:14 PM  
O'Bama
 
2020-06-23 2:24:42 PM  
[Fark user image]
It's our bias informing the judgement that the output should be "a person of color."

That looks like a pretty good interpretation if you don't know it's Obama.
 
2020-06-23 2:25:31 PM  

brainlordmesomorph: The AI can put eyes, a nose and a mouth under those pixels, but that's as far as it goes.

Here's the test:
show humans the pixelated pic first, then both original pic and the AI guess, and I'll bet we pick right every time.


The Mark I Eyeball is still damn good.
 
2020-06-23 2:28:56 PM  
I know reading TFA is frowned upon here, but TFA even states that the program is not meant to reproduce the pixelated image, sans pixelation.

Have any further tests been done? How did those come out? Curious what would happen if you pixelated the 2nd picture and fed it back to the machine; it might spit out a lady. Which brings up another quandary: has it confused genders as well yet?

This is kinda why no one accepts the first result of an experiment and why much more testing is required.
 
2020-06-23 2:31:21 PM  
Two guys had a hobby project and released it for free to the public. If you have a problem with it make your own that's better.
Also, have you seen Obama's white grandfather?

[Fark user image]
 
2020-06-23 2:34:44 PM  
Since the input is a greatly reduced data set, I think the correct response would be dozens of images, all different but all compatible with the input. Ranking the output images by some frequency distribution would be OK.
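Something like the sketch below would do it: sample many reconstructions and report each type with its empirical frequency, rather than one "answer". The upscale_once and describe functions here are hypothetical stand-ins for a stochastic upscaler and a coarse labeler:

```python
# Hypothetical sketch: sample many reconstructions instead of trusting one.
# `upscale_once(pixelated, seed)` stands in for any stochastic super-resolution model,
# and `describe(face)` for a coarse labeler (e.g. a classifier's top label).
from collections import Counter

def rank_reconstructions(pixelated, upscale_once, describe, n_samples=100):
    """Run the upscaler many times and tally how often each description appears."""
    tally = Counter()
    samples = []
    for seed in range(n_samples):
        face = upscale_once(pixelated, seed=seed)   # one plausible high-res guess
        samples.append(face)
        tally[describe(face)] += 1
    # Report candidates with their empirical frequency rather than a single "answer".
    return samples, tally.most_common()
```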
 
2020-06-23 2:57:24 PM  
Just deflect, always deflect, and lay blame in blameless places where no actual accountability can be found.

This Image of a White Barack Obama Is a demonstration of the Programmer's Racial Bias Problem In a Nutshell.

The AI is not itself making decisions on its own; any bias it has is there from the hands of those who programmed it or had design control of the machine learning materials the algorithm was provided.

The AI is not biased, it just is whatever it was made to be. If you want to describe who the bias is in, then do so.
But to claim the bias belongs to the AI in the same way a person's bias belongs to them, is fooking BS.

If a 5-year-old kid says/acts out some racist chit in front of you, do you blame them for it?
Or do you have the two brain cells of understanding to know that the child does not yet know and is just copying what their parents/most influential adults have been showing them?

The AI ain't really any different; it's the kid showing us, without understanding, what its parents showed it.
 
2020-06-23 3:11:17 PM  
I doubt any version of Obama has any useful information on the inner workings of machine learning
 
2020-06-23 3:12:04 PM  

BeesNuts: Honestly, if the fact that the computers that WE program and train end up having some serious problems with racial bias* isn't the best evidence for that racial bias being systematically embedded in our society, I don't know what is.

*bias is kind of a shorthand when it comes to machines...  Not like the machine is making personal judgments based upon race... more of an improper weighting of reward structures and a statistical misrepresentation of input datastreams, which were constructed by humans.

/Keep in mind, the majority of testing and training done with this kind of software is done with famous people, the majority of whom are... drumroll...

//White


PvtStash: Just deflect, always deflect, and lay blame in blameless places where no actual accountability can be found.

This Image of a White Barack Obama Is a demonstration of the Programmer's Racial Bias Problem In a Nutshell.

The AI is not itself making decisions on its own; any bias it has is there from the hands of those who programmed it or had design control of the machine learning materials the algorithm was provided.

The AI is not biased, it just is whatever it was made to be. If you want to describe who the bias is in, then do so.
But to claim the bias belongs to the AI in the same way a person's bias belongs to them, is fooking BS.

If a 5-year-old kid says/acts out some racist chit in front of you, do you blame them for it?
Or do you have the two brain cells of understanding to know that the child does not yet know and is just copying what their parents/most influential adults have been showing them?

The AI ain't really any different; it's the kid showing us, without understanding, what its parents showed it.



It's like the AI made to predict whether a criminal will reoffend if they are released. It was made to give judges a tool to look past their own biases. To be extra sure, they never told the AI anything about race.

Of course, it noticed lots of trends among repeat offenders like prior arrests, family members who have been to prison, low income, etc. A whole bunch of things that also happen to correlate highly with being black.

They said it was racist and threw it out. It's a very good thing that they threw it out, but it absolutely was not racist. It merely observed that the system is racist.
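A toy sketch of how that happens even with race withheld (all correlations invented for illustration): the model never sees the protected attribute, yet the proxy features it does see carry much of the same signal.

```python
# Toy illustration: race is never given to the model, but correlated proxy
# features (priors, family incarceration, income) reconstruct much of it anyway.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)  # protected attribute, withheld from the model

# Proxies whose distributions differ by group because the *system* treats groups differently.
prior_arrests = rng.poisson(lam=np.where(group == 1, 2.5, 1.0))
family_prison = rng.binomial(1, np.where(group == 1, 0.30, 0.10))
low_income    = rng.binomial(1, np.where(group == 1, 0.55, 0.25))

# A naive "risk score" built only from the proxies.
risk = 0.5 * prior_arrests + 1.0 * family_prison + 0.8 * low_income

print("mean risk, group 0:", round(risk[group == 0].mean(), 2))
print("mean risk, group 1:", round(risk[group == 1].mean(), 2))  # noticeably higher
print("correlation of score with withheld group:", round(np.corrcoef(risk, group)[0, 1], 2))
```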
 
2020-06-23 3:14:24 PM  

Animatronik: Algorithms tend to use weighted functions, which are inherently biased toward majority groups.

I'm sure that Chinese algorithms would produce yet another result when asked to extrapolate.

And African algorithms.


The other thing is that since you're asking the algorithm to extrapolate to an image we are all familiar with in high detail, it's not a controlled test - a better test would be to do random sampling of people of mixed race and ask the algorithm and humans to perform the same tasks in terms of assigning race and guessing features.


It is actually a consequence of that "intersectionality" business spread throughout the meta-structure of our entire society and woven so deeply into our specific cultural psyche that it can't even rightly be blamed on the programmers who coded the things.

You might be fascinated to learn that Chinese firms have more or less led the charge on facial recognition and that they have the *exact same problem*.  Including misidentifying, get this, Asian faces.

Why?  Because they want to sell it to us, and must meet our testing standards, which are based on studies performed decades ago, which have been taught in schools and refined to systems over a generation.  Standards that are taught the world over.  Methods of testing that are taught the world over.  Databases that are used the world over.

You're speaking to only one side of the problem and missing the forest for the trees as a result.
 
2020-06-23 3:18:44 PM  

Puglio: It's like the AI made to predict whether a criminal will reoffend if they are released. It was made to give judges a tool to look past their own biases. To be extra sure, they never told the AI anything about race.

Of course, it noticed lots of trends among repeat offenders like prior arrests, family members who have been to prison, low income, etc. A whole bunch of things that also happen to correlate highly with being black.

They said it was racist and threw it out. It's a very good thing that they threw it out, but it absolutely was not racist. It merely observed that the system is racist.


This is a good example of the phenomenon I was talking about at work.  Thanks for articulating it.

In the simplest case, you give it a reward function of matching existing criminal demographic data as closely as possible and BAM, it would do just this.  That would also be a very easy way for a software company to meet whatever requirements were being handed to them by whatever LEA they had a contract with.  Hell, if the results deviated too much from that specific metric, I bet nobody would buy it.
I can hear the meeting now...
"We should probably include it just as a control case in the calculations.  Give it a 10% window to wiggle around in between iterations."
 
2020-06-23 3:22:43 PM  
Remember when Google had to remove the ability for their image classification system to say anything was an ape because it wouldn't stop tagging pictures of black people as gorillas?

https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
 
2020-06-23 3:41:32 PM  
How do we, on a continuing push, fark with these programs so that no one wants to put money into them?

Or just fund the ACLU to work for strengthening the 4th amendment.
 
2020-06-23 4:51:44 PM  

BeesNuts: Animatronik: Algorithms tend to use weighted functions, which are inherently biased toward majority groups.

I'm sure that Chinese algorithms would produce yet another result when asked to extrapolate.

And African algorithms.


The other thing is that since you're asking the algorithm to extrapolate to an image we are all familiar with in high detail, it's not a controlled test - a better test would be to do random sampling of people of mixed race and ask the algorithm and humans to perform the same tasks in terms of assigning race and guessing features.

It is actually a consequence of that "intersectionality" business spread throughout the meta-structure of our entire society and woven so deeply into our specific cultural psyche that it can't even rightly be blamed on the programmers who coded the things.

You might be fascinated to learn that Chinese firms have more or less led the charge on facial recognition and that they have the *exact same problem*.  Including misidentifying, get this, Asian faces.

Why?  Because they want to sell it to us, and must meet our testing standards, which are based on studies performed decades ago, which have been taught in schools and refined to systems over a generation.  Standards that are taught the world over.  Methods of testing that are taught the world over.  Databases that are used the world over.

You're speaking to only one side of the problem and missing the forest for the trees as a result.


There's a very simple explanation that has nothing to do with cultural anthropology and everything to do with basic science and data modeling -
in more detail it probably failed because:

1.) It's a lot of extrapolation - the result is not likely to look like Obama no matter what.
2.) A naive training set will naturally be biased toward some ethnicities more than others.  I didn't read up on the algorithm, but let's assume for the sake of argument that it's a neural network algorithm with a training set.  The training set may have a lot more caucasian faces than people of other races, based on directly sampling the population.  That means that when extrapolated, the result is likely to be weighted in favor of what would be more probable facial features in a random sampling of the population, without taking into account better solutions based on groupings of heritable traits.  If you pixellate to the point of it being unrecognizable, that might actually give a result closer to the right answer, on average.

3.) And finally, there is the fact that darker skin tones may require a better algorithm to account for more subtle changes in contrast across features.  Not something I am an expert in. A simple publicly available algorithm may not take that into account - just not good enough.

So you don't have to invoke any fancy explanations like intersectionality theory, etc.  And without a control, there's no way to know how badly it actually failed compared to humans.
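Point 2 can be simulated directly: when the observation is badly degraded, the composition of the training set acts as a prior and decides the answer. A minimal Bayes-rule sketch (all numbers invented):

```python
# Toy sketch of point 2: when the observation is nearly uninformative, the
# training-set prior decides the answer. All numbers are invented.
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

# Training set composition acts as the prior: 95% group A, 5% group B.
prior = {"A": 0.95, "B": 0.05}
# Each group's typical "skin tone" feature; heavy pixelation = huge observation noise.
means = {"A": 0.8, "B": 0.4}
noise_std = 0.5   # so much blur that the tone barely separates the groups

observation = 0.45  # the real person was close to group B's typical value
posterior = {g: prior[g] * gaussian_pdf(observation, means[g], noise_std) for g in prior}
total = sum(posterior.values())
for g in posterior:
    print(g, round(posterior[g] / total, 3))
# Prints roughly A 0.94, B 0.06: the prior swamps the weak evidence.
```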
 
2020-06-23 4:58:23 PM  
Determining hue is an extremely hard problem and even humans are not good at it; look at the blue/gold dress or the optical illusion above. It is not even a matter of what data you feed in to teach it.

This is a matter of a technical problem that took evolution millions of years to get to even the snail level of vision. It is amazing that they can get it to even find a picture of someone close after removing most of the clues as to what the picture is. Just finding the edge of an object is something that took 50 years of work to figure out.

"If a 5-year-old kid says/acts out some racist chit in front of you, do you blame them for it?"

The applications doing this are as far behind a sea slug (18,000 neurons) as a sea slug is behind a five-year-old (8.6×10^10 neurons). Calling anything we are doing right now AI is stupid.

It is not the systems that are at fault. It is the organisations that think that they can use snails for this kind of work.
 
2020-06-23 5:08:43 PM  

God-is-a-Taco: Two guys had a hobby project and released it for free to the public. If you have a problem with it make your own that's better.
Also, have you seen Obama's white grandfather?

[Fark user image 396x480]


That actually is kinda amazing.
 
2020-06-23 5:15:58 PM  

FireSpy: Determining hue is an extremely hard problem and even humans are not good at it; look at the blue/gold dress or the optical illusion above. It is not even a matter of what data you feed in to teach it.

This is a matter of a technical problem that took evolution millions of years to get to even the snail level of vision. It is amazing that they can get it to even find a picture of someone close after removing most of the clues as to what the picture is. Just finding the edge of an object is something that took 50 years of work to figure out.

"If a 5-year-old kid says/acts out some racist chit in front of you, do you blame them for it?"

The applications doing this are as far behind a sea slug (18,000 neurons) as a sea slug is behind a five-year-old (8.6×10^10 neurons). Calling anything we are doing right now AI is stupid.

It is not the systems that are at fault. It is the organisations that think that they can use snails for this kind of work.


YES.
Some humans are not as good at facial recognition. There's actually a big chunk of our brains devoted to this problem, because over the last 1,000,000 years and earlier it's been really important for us to recognize our relatives by sight vs. the people about to club us to death.
 
2020-06-23 5:19:44 PM  

PirateKing: We've basically invented computers that fall for optical illusions. The computer can't tell what color it is, it just knows the data about the pixel. If pixel A and pixel B have the same data, they're the same color. Even if Pixel A comes from a photo of Obama and Pixel B comes from Powder the magic Albino.

[Fark user image 850x646]


That sounds rather opposite of falling for an optical illusion.  It is correctly identifying similar colors/tones and coming to a different conclusion than a human whose brain is processing more ancillary information and using a fancier heuristic - that extra processing being what causes optical illusions for humans.
 
2020-06-23 5:32:14 PM  

Tom Marvolo Bombadil: PirateKing: We've basically invented computers that fall for optical illusions. The computer can't tell what color it is, it just knows the data about the pixel. If pixel A and pixel B have the same data, they're the same color. Even if Pixel A comes from a photo of Obama and Pixel B comes from Powder the magic Albino.

[Fark user image 850x646]

That sounds rather opposite of falling for an optical illusion.  It is correctly identifying similar colors/tones and coming to a different conclusion than a human whose brain is processing more ancillary information and using a fancier heuristic - that extra processing being what causes optical illusions for humans.


The illusion being that the tones ARE in fact different, but due to the flaws in the visual system they're being perceived the same.

Of course the other big problem in facial reconstruction technology like this is that if the training data you give it for your facial recognition is always the 'usual suspects' then whenever you reconstruct a face...
 
2020-06-23 5:44:08 PM  

PirateKing: Of course the other big problem in facial reconstruction technology like this is that if the training data you give it for your facial recognition is always the 'usual suspects' then whenever you reconstruct a face...


Their training data must have been sex offenders because that guy on the right creeps me out.
 
2020-06-23 7:04:22 PM  

Tom Marvolo Bombadil: PirateKing: Of course the other big problem in facial reconstruction technology like this is that if the training data you give it for your facial recognition is always the 'usual suspects' then whenever you reconstruct a face...

Their training data must have been sex offenders because that guy on the right creeps me out.


These days ANYONE on the right creeps me out.
 
2020-06-23 9:09:03 PM  
Discount Daniel Craig... Is that you?
 
2020-06-23 9:47:50 PM  
Huh, that's a coincidence, Timothy McVeigh died on June 11, 2001. June is the sixth month of the year, which upside down and in Arabic numerals is 9. As soon as his cover was closed, Operative Q went to work in Chicago.
 
2020-06-23 11:42:50 PM  

Tom Marvolo Bombadil: That sounds rather opposite of falling for an optical illusion.  It is correctly identifying similar colors/tones and coming to a different conclusion than a human whose brain is processing more ancillary information and using a fancier heuristic - that extra processing being what causes optical illusions for humans.


It's not clear from the pictures in the article, but if you follow the Twitter link you can see that darker-skinned black people with less ambiguous features get some white person's face, just colored darker, at least in the examples I saw. So I'm not sure the algorithm even considers skin color until it makes the chosen face match.

Also, maybe it's cuz I'm sorta Rican myself, but I find AOC has kinda white features to begin with so I don't know why they would expect it to spit out Rosie Perez.
 
2020-06-23 11:54:50 PM  
Why does everyone keep forgetting that Obama is half white?

That picture looks like Stanley Dunham, Obama's grandpa.

[Fark user image]
 
2020-06-24 12:08:18 AM  

LewDux: O'Bama


Alabama
 
2020-06-24 12:14:55 AM  
Also, if you watch boxing, when the dudes are shirtless going toe-to-toe it's pretty common for some white guy to actually be darker than some Latin guy, even if it's just spray-tan -- how we as humans interpret "race" has probably just as much to do with bone structure as skin color. Bone structure that is usually obscured in the pixelated photos, so I don't know how to improve it for light skinned minorities unless you just randomly spit out light-skinned minorities of different varieties x% of the time for all light-skinned people.
 
2020-06-24 4:45:23 AM  

fark'emfeed'emfish: [Fark user image 425x238] It's our bias informing the judgement that the output should be "a person of color."

That looks like a pretty good interpretation if you don't know it's Obama.


Most people aren't white.  White shouldn't be the default output.
 
2020-06-24 5:15:43 AM  
The online tool utilizes an algorithm called PULSE, originally published by a group of undergraduate students at Duke University.

I don't need AI to tell that DUKE SUCKS.
 
2020-06-24 5:32:37 AM  
The problem seems to be a biased data set for training. They used photos off of Flickr. Two quick ways to fix this:

1. A race quota for Flickr. Sorry, person of colour x, your upload has been queued until enough photos of colours y and z have been added.

2. Racial profiling of Flickr uploads. Researchers can make a healthy mix for their dataset.

I don't see any problems with that at all. Option 2, especially, is a treasure trove for scientists!

/s
 

This thread is closed to new comments.
