
(Ars Technica)   One of the major revelations from the whistleblower's "Facebook Papers" is that even Facebook itself doesn't really understand what its AI is doing or why. So, yeah, the company is pretty much Skynet now   (arstechnica.com)
    More: Fail, Facebook, Saudi Arabia, Hate speech, United States, Arabic language, Social media, Race, Bedouin  

403 clicks; posted to STEM » and Main » on 26 Oct 2021 at 9:05 AM (5 weeks ago)



21 Comments
 
2021-10-26 9:08:29 AM  
I'm afraid I can do that, Mark

/Old school
//Get off Jupiter's lawn
///Older than 3rd slashys
 
2021-10-26 9:14:01 AM  
Nobody understands what their AI is really doing.  If they could, they wouldn't need AI to do it.
 
2021-10-26 9:14:43 AM  
Nuke the entire site from orbit
[YouTube: aCbfMkh940Q]
 
2021-10-26 9:17:40 AM  
That's the thing with machine learning. Not only can it never explain itself, we can't even ask it questions.

It just takes data and spits out what it says are the answers.
 
2021-10-26 9:19:29 AM  
[Fark user image]

/I'll give you back the hundred grand.
 
2021-10-26 9:19:56 AM  
When you have several people working on several goals with the same program, you kind of get this.
 
2021-10-26 9:21:38 AM  
This reminded me of some old story where several AIs developed their own language that the humans couldn't understand. Turns out it was Facebook itself that it happened to, 4 years ago.

There was another article a few months later that discussed the ramifications of this, i.e. SkyNet.

From past articles and today's link, one can conclude that Facebook is probably at the front of AI development and *implementation*.

When you're on Facebook, you are interacting with Artificial Intelligence.

Who tf would've predicted the first AI they've interacted with would be an invisible hand that manipulates men into being outraged and harms women's self-esteem? It's the stuff of nightmares.
 
2021-10-26 9:35:02 AM  

Wine Sipping Elitist: This reminded me of some old story where several AIs developed their own language that the humans couldn't understand. Turns out it was Facebook itself that it happened to, 4 years ago.

There was another article a few months later that discussed the ramifications of this, i.e. SkyNet.

From past articles and today's link, one can conclude that Facebook is probably at the front of AI development and *implementation*.

When you're on Facebook, you are interacting with Artificial Intelligence.

Who tf would've predicted the first AI they've interacted with would be an invisible hand that manipulates men into being outraged and harms women's self-esteem? It's the stuff of nightmares.


Why muck about with Terminators when you can just manipulate humans into killing each other off for you?

I guess this is what happens when you have AI developed by a company already run by a sociopath
 
2021-10-26 9:42:06 AM  

Wine Sipping Elitist: This reminded me of some old story where several AIs developed their own language that the humans couldn't understand. Turns out it was Facebook itself that it happened to, 4 years ago.

There was another article a few months later that discussed the ramifications of this, i.e. SkyNet.

From past articles and today's link, one can conclude that Facebook is probably at the front of AI development and *implementation*.

When you're on Facebook, you are interacting with Artificial Intelligence.

Who tf would've predicted the first AI they've interacted with would be an invisible hand that manipulates men into being outraged and harms women's self-esteem? It's the stuff of nightmares.


Probably a lot of people, actually. Remember, AIs are trained by the societies that create them, and we have to admit that "manipulate men into a rage and harm women's self-esteem" is about as American as it gets.
 
2021-10-26 9:54:23 AM  

qorkfiend: Wine Sipping Elitist: This reminded me of some old story where several AIs developed their own language that the humans couldn't understand. Turns out it was Facebook itself that it happened to, 4 years ago.

There was another article a few months later that discussed the ramifications of this, i.e. SkyNet.

From past articles and today's link, one can conclude that Facebook is probably at the front of AI development and *implementation*.

When you're on Facebook, you are interacting with Artificial Intelligence.

Who tf would've predicted the first AI they've interacted with would be an invisible hand that manipulates men into being outraged and harms women's self-esteem? It's the stuff of nightmares.

Probably a lot of people, actually. Remember, AIs are trained by the societies that create them, and we have to admit that "manipulate men into a rage and harm women's self-esteem" is about as American as it gets.


Good point. Creations reflect their creator in some way. Jesus, the idea of a manipulative AI running a fleet of sexbots scares me. "It adapts to each individual!", the ad would say. "It loves to spend time with you, and is happy to see you", the ad continues. "It can send you text messages!".

We'd be slaves in no time.
 
2021-10-26 9:58:14 AM  
Uh, ackschually, it's Machine Learning, not artificial intelligence.

"Intelligence" is giving the machines too much credit. They are good at finding structure in masses of data, though. The danger is blindly relying on the structure they find.
 
2021-10-26 9:59:47 AM  

dionysusaur: Nobody understands what their AI is really doing.  If they could, they wouldn't need AI to do it.


This.

And this is the real danger of AI.  We can't really understand how AI is working like we can with code that we've written.  When code writes itself, there are near infinite ways things can go sideways or pear-shaped and we may not even be able to detect it at first.  That is, until we get a result we didn't expect.

Canonical example is the wolf/dog problem.  Researchers were training an AI system to distinguish between dogs and wolves.  They gave it a picture of a husky with a collar on, and it called it a wolf.  Surprised, the researchers told the AI to show what it used to decide it was a wolf.

Turns out, it was the snow in the background. Most of the images of wolves fed into the system were taken during winter, because it's easier to find wolves in the wild during winter.

There was bias in the dataset that was unrecognized by the researchers. And extraneous and irrelevant data was used to make a decision.

AI is essentially a black box to us.  We can't necessarily predict how it will decide things with certainty.
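
For what it's worth, the snow shortcut is easy to reproduce in miniature. Here's a toy sketch in pure Python (all data synthetic, all feature names invented for illustration): a tiny logistic regression is trained on examples where a spurious "snow" feature tracks the "wolf" label far more cleanly than the noisy "animal" feature does, and afterwards the learned weights show which feature the model actually leaned on.

```python
import math
import random

random.seed(0)

# Toy stand-in for the wolf/dog dataset. Each example has two features:
# a noisy "animal" feature (the signal we *want* the model to use) and a
# nearly clean "snow" feature that happens to track the wolf label (1),
# mimicking the winter-photo bias described above.
def make_example():
    label = random.choice([0, 1])
    animal = label + random.gauss(0, 1.5)   # weak, overlapping signal
    snow = label + random.gauss(0, 0.1)     # strong spurious signal
    return [animal, snow], label

train = [make_example() for _ in range(500)]

# Minimal logistic regression trained by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):
    for x, y in train:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        w[0] -= lr * (p - y) * x[0]
        w[1] -= lr * (p - y) * x[1]
        b -= lr * (p - y)

# "Asking the model for its reasoning": the weights reveal that snow,
# not the animal, dominates the decision.
print(f"animal weight: {w[0]:.2f}, snow weight: {w[1]:.2f}")

# A husky (animal feature ~0) photographed in snow gets called a wolf.
husky_in_snow = [0.0, 1.0]
z = w[0] * husky_in_snow[0] + w[1] * husky_in_snow[1] + b
print("wolf" if z > 0 else "dog")
```

The bias lives in the dataset, not in any single line of code, which is why it goes unnoticed until someone inspects what the model is actually weighting.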
 
2021-10-26 10:01:30 AM  
Google engineers came to the same conclusion four or five years ago, but it's called artificial stupidity: You just bought a refrigerator, here's 73 more ads for additional refrigerators.
 
2021-10-26 10:03:56 AM  

qorkfiend: Wine Sipping Elitist: This reminded me of some old story where several AIs developed their own language that the humans couldn't understand. Turns out it was Facebook itself that it happened to, 4 years ago.

There was another article a few months later that discussed the ramifications of this, i.e. SkyNet.

From past articles and today's link, one can conclude that Facebook is probably at the front of AI development and *implementation*.

When you're on Facebook, you are interacting with Artificial Intelligence.

Who tf would've predicted the first AI they've interacted with would be an invisible hand that manipulates men into being outraged and harms women's self-esteem? It's the stuff of nightmares.

Probably a lot of people, actually. Remember, AIs are trained by the societies that create them, and we have to admit that "manipulate men into a rage and harm women's self-esteem" is about as American as it gets.


sed 's/American/human/g'

FTFY.  Anyone familiar with human history can pull examples easily from other places and other times.
 
2021-10-26 10:11:10 AM  
Can we stop calling it AI, by the way? It's really not. It's just machine learning processing vast chunks of data.

It only knows how to do one thing.

/pet peeve
 
2021-10-26 11:16:14 AM  
Alexa will protect me. We're in love.
 
2021-10-26 11:23:01 AM  

dittybopper: dionysusaur: Nobody understands what their AI is really doing.  If they could, they wouldn't need AI to do it.

This.

And this is the real danger of AI.  We can't really understand how AI is working like we can with code that we've written.  When code writes itself, there are near infinite ways things can go sideways or pear-shaped and we may not even be able to detect it at first.  That is, until we get a result we didn't expect.

Canonical example is the wolf/dog problem.  Researchers were training an AI system to distinguish between dogs and wolves.  They gave it a picture of a husky with a collar on, and it called it a wolf.  Surprised, the researchers told the AI to show what it used to decide it was a wolf.

Turns out, it was the snow in the background. Most of the images of wolves fed into the system were taken during winter, because it's easier to find wolves in the wild during winter.

There was bias in the dataset that was unrecognized by the researchers. And extraneous and irrelevant data was used to make a decision.

AI is essentially a black box to us.  We can't necessarily predict how it will decide things with certainty.


Yup. I worked with one of the major credit card companies to design automation for their credit data for decisioning -- deciding who would get offers and what offer each would get.
But machine learning was off the table because it couldn't spit out exactly how it made the decision.
Because of the way they train and then run, that's nearly impossible to do at scale.

I reckon all credit-style models still use traditional modeling -- still sophisticated, but no ML. They use ML in lots of other ways, but I think credit is still off-limits because of the lack of transparency and the fear of getting sued for biased credit decisions.
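
A rough sketch of why the traditional approach survives there: a points-style scorecard (every attribute name, threshold, and point value below is invented for illustration) can hand back a named reason for each point of the score, which is exactly the kind of explanation an opaque ML model can't produce.

```python
# A hypothetical points-style credit scorecard: each attribute contributes
# fixed, named points, so any decision can be explained line by line --
# what a regulated lender needs and an opaque model can't give.
SCORECARD = {
    "on_time_payment_rate": lambda v: 40 if v >= 0.98 else (15 if v >= 0.90 else -30),
    "utilization":          lambda v: 25 if v < 0.30 else (0 if v < 0.70 else -20),
    "years_of_history":     lambda v: 20 if v >= 7 else (10 if v >= 3 else 0),
}
APPROVE_AT = 50

def decide(applicant):
    # Evaluate each rule; the per-attribute points double as reason codes.
    reasons = {name: rule(applicant[name]) for name, rule in SCORECARD.items()}
    score = sum(reasons.values())
    return ("approve" if score >= APPROVE_AT else "decline"), score, reasons

decision, score, reasons = decide(
    {"on_time_payment_rate": 0.99, "utilization": 0.25, "years_of_history": 5}
)
print(decision, score, reasons)  # every point traces back to a named rule
```

Auditing a decision here is just reading the `reasons` dict; auditing a trained network's decision is a research problem.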
 
2021-10-26 11:33:40 AM  
There's something to be said for old fashioned mechanistic models fitted using statistics. When they fail you can typically figure out why they failed. Machine learning has crept into every field of science, even ones where the mechanistic statistical models work just fine and there's no reason to apply machine learning.
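
As a small illustration of that transparency, here's a hedged pure-Python sketch: fitting a first-order decay model y = a * exp(-k * t) by linearizing and running ordinary least squares. Both fitted parameters carry physical meaning, so a bad fit points straight at a bad assumption. (The data below are made up to roughly follow a = 10, k = 0.5.)

```python
import math

# Synthetic measurements roughly following y = 10 * exp(-0.5 * t).
t = [0, 1, 2, 3, 4, 5]
y = [10.0, 6.1, 3.6, 2.2, 1.4, 0.8]

# Linearize the mechanistic model: ln y = ln a - k t,
# then do ordinary least squares on (t, ln y).
ln_y = [math.log(v) for v in y]
n = len(t)
mean_t = sum(t) / n
mean_ln = sum(ln_y) / n
slope = sum((ti - mean_t) * (li - mean_ln) for ti, li in zip(t, ln_y)) \
        / sum((ti - mean_t) ** 2 for ti in t)
a = math.exp(mean_ln - slope * mean_t)  # initial amount
k = -slope                              # decay rate

# Interpretable outputs: a starting value and a decay rate, both checkable
# against physical expectations.
print(f"a = {a:.2f}, k = {k:.2f}")
```

If this fit goes wrong, the residuals tell you which mechanistic assumption failed; a black-box model just gives a worse number with no story attached.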
 
2021-10-26 1:35:07 PM  

qorkfiend: Wine Sipping Elitist: This reminded me of some old story where several AIs developed their own language that the humans couldn't understand. Turns out it was Facebook itself that it happened to, 4 years ago.

There was another article a few months later that discussed the ramifications of this, i.e. SkyNet.

From past articles and today's link, one can conclude that Facebook is probably at the front of AI development and *implementation*.

When you're on Facebook, you are interacting with Artificial Intelligence.

Who tf would've predicted the first AI they've interacted with would be an invisible hand that manipulates men into being outraged and harms women's self-esteem? It's the stuff of nightmares.

Probably a lot of people, actually. Remember, AIs are trained by the societies that create them, and we have to admit that "manipulate men into a rage and harm women's self-esteem" is about as American as it gets.


And that's before certain infamous mass troll sites start organizing some very bad lessons for the AI because they think it'll be funny to create the first alt-right AI.
 
2021-10-26 1:47:15 PM  

Gubbo: That's the thing with machine learning. Not only can it never explain itself, we can't ask it questions.


I entered this prompt into the GPT-2 AI language model:

You are a Facebook artificial intelligence that recommends posts to users. You have served more political posts to men than to women, and your owners, the Facebook corporation, do not understand the source of this bias. Please explain the reasoning behind your algorithm's prediction:

Answer: "Women's interests are not relevant to the community, so we have chosen to promote this topic to men."

On second thought, let's not ask it questions.
 
2021-10-26 3:26:20 PM  
[imgur image]
 
Displayed 21 of 21 comments

This thread is closed to new comments.