(Unite.ai)   A foolproof, 100% effective way to stop journalists overstating the significance of new science papers
    More: Unlikely  

1008 clicks; posted to STEM on 15 Sep 2021 at 8:50 AM (4 days ago)



13 Comments
 
4 days ago  
BUT WE NEED TO KNOW IF AIs ARE GOING TO EXTERMINATE ALL OF HUMANITY!
 
4 days ago  
insert xkcd here
 
4 days ago  
It will detect the exaggeration.

It won't prevent it.

You cannot deny crappy journalists their sweet, sweet clicks.
 
4 days ago  

FrancoFile: insert xkcd here


Counters with SMBC.
 
4 days ago  
[phdcomics.com image]
 
4 days ago  
[smbc-comics.com image]
 
4 days ago  
[smbc-comics.com image]
 
4 days ago  
The work leverages Natural Language Processing (NLP) against a novel dataset of paired press releases and abstracts, with the researchers claiming to have developed '[a] new, more realistic task formulation' for the detection of scientific exaggeration. The authors have promised to publish the code and data for the work at GitHub soon.

Oh, so you're going to use AI to fix misinformation on the web? Hey, isn't it AI and all your farking advertising mechanisms that caused this farking problem in the first place? Because it didn't seem to be nearly as pervasive until we monetized every single person using the internet.
Sounds like a good way to get all scientific information to "conform" to some standard that advertisers would like to see, which means leaving out all the hard things. Like all that info about climate change, and the limits of growth, and the fallibility of humans when confronted with money, and the endless farking worthlessness of data engineers.

But we wouldn't want to UPSET anybody in our glorious consumer world, so shut up and get in line. You don't have much choice if you want to use the internet, do you?

Fark you guys. And every time I see another sponsored link from UniteAI, I'm going to show up and troll the fark out of you assholes.
 
4 days ago  
'helps in the more difficult cases of identifying and differentiating direct causal claims from weaker claims, and that the most performant approach involves classifying and comparing the individual claim strength of statements from the source and target documents'.

"It's just a theory, therefore our mighty algorithm weeded it out. So unless you scientists have actually proven something, and it's in some journal that is high-class enough, we don't include it in our search results. That's just how the market--ah, the scientific community--wants things to work."
 
4 days ago  
Meanwhile, these are the very same people who write all those algorithms for use in advertising, which does this to us:

[Fark user image]


[Fark user image]


Could you be bigger hypocrites? Nobody who works in AI is interested in anything but selling and getting paid. Truth and accuracy, let alone trying to make a better world, have nothing to do with it.
Out yourselves, you're so farking proud of it all.
 
4 days ago  

cryinoutloud: [unhinged ranting having nothing to do with TFA]

[i.pinimg.com image]

 
4 days ago  

Ambitwistor: cryinoutloud: [unhinged ranting having nothing to do with TFA]

[i.pinimg.com image 500x375]


Yes.  Like, STOP, stop.
 
4 days ago  
Hybrid of the Sarcasm Detector and the Frog Exaggerator
 
Displayed 13 of 13 comments


This thread is closed to new comments.


