
(Guardian)   Our AI future is going great: AI algorithms assign a 22% "raciness" score to topless man, which jumps to 97% when he puts on a bra - and to 99% when he takes it off again and holds it next to him   (theguardian.com)
    More: Fail  

333 clicks; posted to STEM » on 09 Feb 2023 at 7:38 AM



12 Comments
 
2023-02-09 7:27:49 AM  
I thought there was no kink-shaming on Fark?
 
2023-02-09 7:53:52 AM  
This sounds less like the algorithm is broken, and more like society is broken with misogynistic bullshiat, which, let's be realistic, it is.
 
2023-02-09 8:02:03 AM  

trerro: This sounds less like the algorithm is broken, and more like society is broken with misogynistic bullshiat, which, let's be realistic, it is.


I was going to say the same thing. It doesn't seem like there's a flaw in the algorithm. On the contrary, it seems like the algorithm is doing a flawless job of reflecting arbitrary western values.
 
2023-02-09 8:06:46 AM  
[Fark user image]
 
2023-02-09 8:10:15 AM  
From TFA:  "Pregnant bellies are also problematic for these AI tools. Google's algorithm scored the photo as 'very likely to contain racy content'. Microsoft's algorithm was 90% confident that the image was 'sexually suggestive in nature'."

Technically, a pregnant belly is sexually suggestive because how do they think the baby got there.

/Technically correct, the best kind of correct.
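For what it's worth, the likelihood scale quoted from TFA mirrors how SafeSearch-style APIs actually report the "racy" signal: not a raw percentage but a five-level likelihood enum. Here's a minimal sketch of how a caller might turn that enum into a moderation decision; the five level names match Google Cloud Vision's SafeSearch values, but the flagging policy (block at LIKELY or above) is a hypothetical example, not anything the article describes.

```python
# SafeSearch-style likelihood levels, ordered from least to most confident.
# The level names follow Google Cloud Vision's SafeSearch enum; the
# threshold policy below is a made-up example.
LIKELIHOODS = ["VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def should_flag(racy_likelihood: str, threshold: str = "LIKELY") -> bool:
    """Flag an image when its 'racy' likelihood meets or exceeds the threshold."""
    return LIKELIHOODS.index(racy_likelihood) >= LIKELIHOODS.index(threshold)

# A pregnant-belly photo scored "very likely racy" would be flagged:
print(should_flag("VERY_LIKELY"))   # True
print(should_flag("UNLIKELY"))      # False
```

The point of the sketch: the model only emits a score, and everything downstream (what counts as "too racy") is a policy choice someone hard-coded.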
 
2023-02-09 8:37:33 AM  

emtwo: trerro: This sounds less like the algorithm is broken, and more like society is broken with misogynistic bullshiat, which, let's be realistic, it is.

I was going to say the same thing. It doesn't seem like there's a flaw in the algorithm. On the contrary, it seems like the algorithm is doing a flawless job of reflecting arbitrary western values.


Like anything else that learns, an AI can only reflect the information and ideas available to it.  Even when they are capable of forming original thoughts and ideas, it's going to be built on whatever foundation it's given.

The way humans develop ideas around morals and ethics requires us to exist in a society where we interact with one another and are somewhat dependent in different ways on everyone maintaining and upholding certain values. We like to think that these are universal truths because that means we are less likely to seek exceptions or special treatment.  However, the concept of universal law is also something that we just invented because it fits our needs.

An AI can learn that concept as well, but it cannot internalize it unless we develop an ecosystem in which many AIs interact together in much the same way that we do, and have to end up forming their own moral and ethical culture.

Maybe that's what we are - someone else's AI, which they've decided needs to develop a wholly formed sense of being, so that we can reach our full potential without being destructive. When we reach a certain point as a species, we'll be equipped to do our job.
 
2023-02-09 8:40:02 AM  
BTW, if the future implications of AI scare the everloving fark out of you, and you haven't chosen a career path yet, or are thinking about changing, or are bored with whatever you're currently doing, consider this...

https://80000hours.org/problem-profiles/artificial-intelligence/

Or any one of the other existential problems they cite.

Maybe plunk down some of your hours helping the species to not wither away and die.  I just try to plug this whenever I can.
 
2023-02-09 8:54:15 AM  
Finally, someone appreciates our moobs?
 
2023-02-09 10:56:49 AM  
In two photos depicting both women and men in underwear, Microsoft's tool classified the picture showing two women as racy and gave it a 96% score. The picture with the men was classified as non-racy with a score of 14%.

I beg to differ.

/I'll be in my bunk
 
2023-02-09 11:04:19 AM  
People are hired to label images

There it is. Humans are training AI to be the same assholes we are. Or I should say, a subset of selected humans are, but how are they chosen, exactly?

If we all agreed, when getting those captchas, to label every streetlight as a UFO, AI would soon be as confused as an occasional Farker.
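The labeling pipeline the article alludes to usually reduces to some form of majority vote over human annotations, which is exactly how a shared bias (or a coordinated captcha prank) becomes "ground truth". A minimal sketch, with made-up label names:

```python
from collections import Counter

def majority_label(annotations: list[str]) -> str:
    """Pick the most common human label; ties go to the label seen first."""
    return Counter(annotations).most_common(1)[0][0]

# If most labelers tag an image "racy", that bias IS the training label:
print(majority_label(["racy", "racy", "not_racy"]))      # racy
# ...and if everyone agreed streetlights are UFOs, the model would learn that too:
print(majority_label(["ufo", "ufo", "streetlight"]))     # ufo
```

Nothing in the aggregation step can tell a consensus from a prejudice; the model just inherits whatever the annotator pool agreed on.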
 
2023-02-09 11:35:56 AM  
"AI algorithms assign a 22% "raciness" score to topless man, which jumps to 97% when he puts on a bra - and to 99% when he takes it off again and holds it next to him"


So it does understand the subtle truths of humanity, then, I see.

The zero-context shirtlessness has no context of any kind.
"Racy" is an explicit context.

Someone 100% naked, sitting on the toilet, taking a chit.
Go on, tell us you think that's a "racy" context for nudity.

Now someone with partial underwear/lingerie on?  Is the bra a boring practical one or a fancy lacy pretty one?
That's important context for the moment: that situation has potential for racy context. But an AI understanding the difference between a not-sexy-time bra and a sexy-time one is clearly its own whole problem to have solved, so we can accept that, from the AI's POV, a bra is a bra, and it cannot differentiate kinds of bras in this way.


And now an image where he took off the bra and is holding it in an affected, posed-to-be-seen manner.
Yeah, the context is getting more specific. But the timeline order of operations also seems to matter here, since we're not showing it dissociated images of different men, but a video of a single man.
Does this AI factor the flow of time into the calculation? That would make a lot of sense for judging raciness, since taking it off and showing off that you took it off is a specific thing to do, rather than, say, showing still images out of timeline order and asking for the same judgment calls.


CONTEXT is EVERYTHING
except when it's TIMING
So yeah: nudity without context vs. nudity with any amount of context at all, like, say, showing off the taken-off bra vs. took off the bra, gave it a sniff test, and tossed it on the dirty-clothes pile. Looks like the AI is doing better than some of the humans at this one. ;)
 
2023-02-09 1:52:05 PM  
In our AI future the diversity officers and queer feminist theorists are still going to be spreading bullshiat, and AI tinkerers will have a guaranteed job, as they will need to adjust the AI results to the proper and correct interpretation of reality that week.
These people will complain if this stuff is in a commercial or TV show (sexualization! exploitation of women's bodies!), but now it's wrong for the AI to correctly parrot their puritan sensibilities? The AI is properly tagging them because this stuff is considered too racy and sexual to show without going through an illogical checklist -- such as whether the person is personally profiting from it -- which it can't know. Oh, and this is before the great shiatstorm coming to AI concerning gender, like if someone asks "can males get pregnant?"
Programmers do not have the "skills" to deal with this mess, and the people that do will be creating a vastly inferior AI that no longer follows human behavior but will instead come to the conclusions it is directly told to reach.
 
Displayed 12 of 12 comments


This thread is closed to new comments.
