
(Reason Magazine)   Study of pot smokers' brains shows that MRIs cause bad science reporting   (reason.com)
    More: Followup, science reporting, Medical News Today, MedPage Today, Society for Neuroscience, gray matter, Northwestern University, brains  




 
2014-04-21 10:23:01 AM  
Pete Guither notes a scathing assessment of Gilman et al.'s study by U.C.-Berkeley computational biologist Lior Pachter, who calls it "quite possibly the worst paper I've read all year."

http://liorpachter.wordpress.com/2014/04/17/does-researching-casual-marijuana-use-cause-brain-abnormalities/

The scathing assessment is pretty fricking hilarious (and scathing).  I love nerd fights like this.  But you just have to know any study that takes the form of "we found a correlation between _____________ and ______________ after looking at a few dozen people" is going to be meaningless.
 
2014-04-21 10:54:29 AM  
I had 4 MRIs on my head inside of a month and they couldn't find anything at all

/wait...
 
2014-04-21 11:13:14 AM  

MaudlinMutantMollusk: I had 4 MRIs on my head inside of a month and they couldn't find anything at all

/wait...


[img.photobucket.com image]
 
2014-04-21 11:18:49 AM  
The breaking news alert on my phone from FOX and subsequent coverage in the media reeked of bullshiat. Too bad "corrections" is the biggest misnomer in journalism.
 
Skr
2014-04-21 11:28:11 AM  
They should do another double-blind test on drugged spiders' webs. That is where the real science is.
 
2014-04-21 11:32:53 AM  

lennavan: Pete Guither notes a scathing assessment of Gilman et al.'s study by U.C.-Berkeley computational biologist Lior Pachter, who calls it "quite possibly the worst paper I've read all year."

http://liorpachter.wordpress.com/2014/04/17/does-researching-casual-marijuana-use-cause-brain-abnormalities/

The scathing assessment is pretty fricking hilarious (and scathing).  I love nerd fights like this.  But you just have to know any study that takes the form of "we found a correlation between _____________ and ______________ after looking at a few dozen people" is going to be meaningless.


If this was the worst paper Guither has read all year, he hasn't read very many papers.

He criticizes the paper based on just two issues:

The biggest criticism I've seen, and one Guither shares with others, is that the people reporting on the results of the study (including, unfortunately, one of the authors) have over-reached, using language implying causation when all they'd found was a correlation. While the paper apparently did have a single sentence that used the wrong word (implying causation), that's not in and of itself an indication that the paper was poorly done.

For his only other criticism, Guither takes a very hard-line stance when it comes to the multiple comparisons problem when calculating p-values. His stance is flawed; if I followed Guither's approach, I could easily design a study to rule out any association no matter what the data showed. Suppose I wanted to prove that nosebleeds are not correlated with number of punches in the face. I'd start testing linear, quadratic, cubic, etc. terms until the Bonferroni correction he wants to apply drops the threshold p-value low enough, and then state that the original linear relationship was too weak to achieve this level of significance. The paper reported all the data, including corrected and uncorrected p-values. Somehow, reporting both threshold results (one with boldface, one with asterisks) personally offended Guither. He's never seen this before . . . probably because most papers either report one or the other.
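
For what it's worth, here is a minimal sketch (Python assumed, numbers invented) of the arithmetic behind that gameability complaint: a Bonferroni cutoff is the family-wise alpha divided by the number of comparisons, so every extra term you test drags the per-test threshold down until a fixed p-value no longer clears it.

# Illustrative only: a fixed (made-up) p-value stops clearing the Bonferroni
# cutoff as more comparisons are folded into the correction.
alpha = 0.05        # family-wise error rate
p_observed = 0.02   # hypothetical p-value for the original linear term

for m in range(1, 11):                 # m = number of comparisons tested
    threshold = alpha / m              # Bonferroni per-test cutoff
    verdict = "significant" if p_observed < threshold else "not significant"
    print(f"{m:2d} comparisons: cutoff = {threshold:.4f} -> {verdict}")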

I have no idea who Guither is, and can't be arsed to find out, but it's pretty clear he has no real experience in reading or writing medical literature.

/Really? This is the worst he's seen? Come on.
 
2014-04-21 11:35:25 AM  

thismomentinblackhistory: The breaking news alert on my phone from FOX and subsequent coverage in the media reeked of bullshiat. Too bad "corrections" is the biggest misnomer in journalism.


I'm pissed off NPR seemed to be giving it decent consideration.  They might have provided a rebuttal, but the initial few minutes of adoration for the work had me flip the station.

/don't smoke
//hate the state of science reporting and publishing though
 
2014-04-21 11:36:51 AM  
Just skimmed the beginning of the article, but 20 subjects for an MRI study is far, far too few.
 
2014-04-21 11:37:31 AM  

draypresct: lennavan: Pete Guither notes a scathing assessment of Gilman et al.'s study by U.C.-Berkeley computational biologist Lior Pachter, who calls it "quite possibly the worst paper I've read all year."

http://liorpachter.wordpress.com/2014/04/17/does-researching-casual-marijuana-use-cause-brain-abnormalities/

The scathing assessment is pretty fricking hilarious (and scathing).  I love nerd fights like this.  But you just have to know any study that takes the form of "we found a correlation between _____________ and ______________ after looking at a few dozen people" is going to be meaningless.

If this was the worst paper Guither has read all year, he hasn't read very many papers.

He criticizes the paper based on just two issues:

The biggest criticism I've seen, and one Guither shares with others, is that the people reporting on the results of the study (including, unfortunately, one of the authors) have over-reached, using language implying causation when all they'd found was a correlation. While the paper apparently did have a single sentence that used the wrong word (implying causation), that's not in and of itself an indication that the paper was poorly done.

For his only other criticism, Guither takes a very hard-line stance when it comes to the multiple comparisons problem when calculating p-values. His stance is flawed; if I followed Guither's approach, I could easily design a study to rule out any association no matter what the data showed. Suppose I wanted to prove that nosebleeds are not correlated with number of punches in the face. I'd start testing linear, quadratic, cubic, etc. terms until the Bonferroni correction he wants to apply drops the threshold p-value low enough, and then state that the original linear relationship was too weak to achieve this level of significance. The paper reported all the data, including corrected and uncorrected p-values. Somehow, reporting both threshold results (one with boldface, one with asterisks) personally o ...


Fair enough, and thanks for that.  I think confusing correlation with causation is a pretty egregious error myself, but other than that I might have been mistaken.
 
2014-04-21 11:43:44 AM  
Chances of owning a Rolls Royce increase as income increases.  Therefore, buying a Rolls Royce will likely increase your income.

Causation... how does it work?
 
2014-04-21 11:44:46 AM  
Long term heavy pot use might be unhealthy? So what? Until we use that same yardstick to make alcohol / tobacco consumption / distribution into felonies carrying decades long prison sentences it shouldn't be used as a justification for making or keeping pot illegal.
 
2014-04-21 11:46:09 AM  

digistil: Just skimmed the beginning of the article, but 20 subjects for an MRI study is far, far too few.


More than 20?   Do you have any idea how expensive MRIs are?

They're over $3,000 each, going by the rates at my local hospital.  That's already 60 grand!
 
2014-04-21 11:46:19 AM  

Target Builder: Long term heavy pot use might be unhealthy? So what? Until we use that same yardstick to make alcohol / tobacco consumption / distribution into felonies carrying decades long prison sentences it shouldn't be used as a justification for making or keeping pot illegal.


It amazes me most that those most against pot use are the same ones who shout Personal Responsibility at the top of their lungs.
 
2014-04-21 11:47:15 AM  
But look at all the other problems it causes:

[img.fark.net image]
 
2014-04-21 11:49:07 AM  

StreetlightInTheGhetto: I think confusing correlation with causation is a pretty egregious error myself, but other than that I might have been mistaken.


I agree it's a pretty egregious error, however it looks like the paper tried hard to avoid it. The critics can only point to a single offending word in the entire paper; every other criticism on this topic has been on the reporting of the paper, not of the paper itself.

It sounds like one of the authors has headline hunger, and has promoted the paper in ways that have intentionally implied causation. The paper itself, which presumably all the other authors of the paper had some control over, seemed to contain reasonably good science.

/I don't have any control over what my co-authors say either.
 
2014-04-21 11:49:46 AM  

sendtodave: digistil: Just skimmed the beginning of the article, but 20 subjects for an MRI study is far, far too few.

More than 20?   Do you have any idea how expensive MRIs are?

They're over $3,000 each, going by the rates at my local hospital.  That's already 60 grand!



Thanks ObamaCare.
 
2014-04-21 11:51:12 AM  
OOPS! Too late, Uncle Sam already patented THC, in 2003. -->

/ok 'unspecified cannabinoids,' don't get your panties in a bunch, the point is they patented it, now reschedule it!
//neuroprotectant
///antioxidant -- the fascist suppressors of free radicals -- HI JAMIE! ;)
//tinyurl.com 1mn
 
2014-04-21 11:56:00 AM  
Here's what I don't understand:

How did the study authors conclude that the observed brain differences meant that the pot users' brains were somehow "worse"?  Wouldn't they have to also show that these observed physical changes are associated with reduced cognitive ability in some way?

Maybe the changes observed in pot smokers' brains result in greater, say, creativity.  Not all change is necessarily bad.
 
2014-04-21 11:57:55 AM  
[3.bp.blogspot.com image]
 
2014-04-21 12:02:09 PM  

thismomentinblackhistory: The breaking news alert on my phone from FOX and subsequent coverage in the media reeked of bullshiat. Too bad "corrections" is the biggest misnomer in journalism.


I love how, in the Fox report on this a few days ago, they immediately tied it to Obama's statement that marijuana is no more harmful than alcohol, and how wrong he is, and how that line of thinking is extremely dangerous. All Obama hate, all the time.
 
2014-04-21 12:05:15 PM  

stonicus: Target Builder: Long term heavy pot use might be unhealthy? So what? Until we use that same yardstick to make alcohol / tobacco consumption / distribution into felonies carrying decades long prison sentences it shouldn't be used as a justification for making or keeping pot illegal.

It amazes me most that those most against pot use are the same ones who shout Personal Responsibility at the top of their lungs.


Both of these things.

Though technically I'm against pot use, I also favor legalization. Ethanol, THC, nicotine, caffeine... all about the same amount of danger. But only one of those is illegal, while entire industries make billions addicting people to the other three...
 
2014-04-21 12:07:14 PM  

draypresct: He criticizes the paper based on just two issues:


I think the criticisms of incredibly small sample size, only 1 MRI per study member, not accurately controlling for other substance abuse, etc. are all pretty valid too.
 
2014-04-21 12:10:42 PM  

sendtodave: digistil: Just skimmed the beginning of the article, but 20 subjects for an MRI study is far, far too few.

More than 20?   Do you have any idea how expensive MRIs are?

They're over $3,000 each, going by the rates at my local hospital.  That's already 60 grand!


It would be about $600-800 per hour for research; not cheap but not as expensive as a medical MRI.
 
2014-04-21 12:13:50 PM  

draypresct: If this was the worst paper Guither has read all year, he hasn't read very many papers.


Well, presumably like most decent academics, he reads abstracts and triages the crap.  It's not difficult to skip bad papers.  I have no doubt he would have triaged this one but from his response, he took the "I'm a mathematician" claim from the author personally and I understand why.

draypresct: I'd start testing linear, quadratic, cubic, etc. terms until the Bonferroni correction


I agree, his Bonferroni analysis seemed harsh.  Partial disclosure - I'm not a mathematician.  But it seemed he wanted the p value threshold corrected by dividing by 123.  Wowza.

draypresct: I have no idea who Guither is, and can't be arsed to find out, but it's pretty clear he has no real experience in reading or writing medical literature.


He's a mathematician.  That's why his main beef is with the math.  He posted a link to his CV in the comments when someone questioned his credentials.

draypresct: Suppose I wanted to prove that nosebleeds are not correlated with number of punches in the face


Your stance is flawed.  It is exceedingly difficult to prove a negative.  All your study would prove is that your study was unable to identify a correlation between the two.

draypresct: While the paper apparently did have a single sentence that used the wrong word (implying causation), that's not in and of itself an indication that the paper was poorly done.


It's not just the paper; I think the larger beef is with how the authors are talking about this in public.  It is very common for news organizations to stretch conclusions and say causation where there is only correlation.  That's an enormous error that no scientist would ever make.  Except this author did make that error in the paper (which is simply outrageous) and this author continues to make that error in public interviews.

draypresct: He criticizes the paper based on just two issues:


I think you missed some other major flaws.  But now with full disclosure - I'm not at work so I don't have access to journals right now so I haven't read the primary article.  (I could login but I'm too lazy.  At best we have a correlation based on 40 people.  Not worth my time.)

1.  It seems some of the statistics weren't significant even with a p value of 0.05.
2.  It seems one of the significant differences in brain sizes between populations can be explained by a single outlier -- It is worth noting that the removal of the outlier at a volume of over 800 would almost certainly flatten the line altogether and remove even the slight effect.
3. It would have been nice to test this hypothesis but the authors did not release any of their data.

The bolded (his emphasis) is outrageous.
 
2014-04-21 12:16:57 PM  
well duh, those Magneton Reconnaissance Infographs aren't easy to read, for scientists and journalists alike.
 
2014-04-21 12:19:13 PM  

bighairyguy: But look at all the other problems it causes:


[img.fark.net image]

I have an itchy rash on the back of my left calf, I am feeling a bit tired, and I am a bit overweight.  The last time I smoked weed was over 25 years ago.  They're right that the symptoms can last a long time.
 
2014-04-21 12:26:37 PM  

HighZoolander: sendtodave: digistil: Just skimmed the beginning of the article, but 20 subjects for an MRI study is far, far too few.

More than 20?   Do you have any idea how expensive MRIs are?

They're over $3,000 each, going by the rates at my local hospital.  That's already 60 grand!

It would be about $600-800 per hour for research; not cheap but not as expensive as a medical MRI.


That's actually higher than I thought it would be.
 
2014-04-21 12:29:11 PM  

LazarusLong42: I'm against pot use, I also favor legalization. Ethanol, THC, nicotine, caffeine... all about the same amount of danger. But only one of those is illegal, while entire industries make billions addicting people to the other three


Nicotine slowly and horribly kills just under half a million Americans every year when taken in the traditional manner.
Alcohol kills about 25,000 Americans each year, is well noted for its involvement in fights and domestic violence, and is the most commonly used drug in drug-assisted rapes.
THC is not deadly at any consumable dose and opponents to it have struggled to link it to half a dozen or so deaths where it may or may not have been a factor.
Caffeine causes headaches in some people who go cold turkey on it and raises miscarriage risk.
 
2014-04-21 12:32:33 PM  

StreetlightInTheGhetto: thismomentinblackhistory: The breaking news alert on my phone from FOX and subsequent coverage in the media reeked of bullshiat. Too bad "corrections" is the biggest misnomer in journalism.

I'm pissed off NPR seemed to be giving it decent consideration.  They might have provided a rebuttal, but the initial few minutes of adoration for the work had me flip the station.

/don't smoke
//hate the state of science reporting and publishing though


We heard the same thing, from the same station. And I flipped the station too!
 
2014-04-21 12:57:38 PM  

lennavan: draypresct: Suppose I wanted to prove that nosebleeds are not correlated with number of punches in the face

Your stance is flawed. It is exceedingly difficult to prove a negative. All your study would prove is that your study was unable to identify a correlation between the two.


Yes, that is correct. I should have written "Suppose I wanted to show that the data do not support that nosebleeds are correlated with the number of punches in the face." If you'd prefer, suppose I worked for a tobacco company, and I used Guither's method (as I previously outlined) to show that no current study can claim that there is a statistically significant correlation between cigarette use and lung cancer. Would you consider it a valid analysis?

The problem with his approach to the Bonferroni correction is that it's too gameable.

lennavan: I think you missed some other major flaws. But now with full disclosure - I'm not at work so I don't have access to journals right now so I haven't read the primary article. (I could login but I'm too lazy. At best we have a correlation based on 40 people. Not worth my time.)


I don't have access to the original paper either. I agree that this is not something to devote much time to (this is Fark). And I agree that small-scale studies such as this tend to be interesting, but not conclusive. However, I don't think this is completely ignorable. The proper response would be a larger-scale study, not tossing this one away.

1. It seems some of the statistics weren't significant even with a p value of 0.05.
I can't believe the authors, after going to so much trouble (as noted by Guither) to identify the statistics that were associated with a p<0.05, reported results without regard for what the p-values showed.  Guither does not seem to make this claim either. May I ask where you're getting this?

2. It seems one of the significant differences in brain sizes between populations can be explained by a single outlier -- It is worth noting that the removal of the outlier at a volume of over would almost certainly flatten the line altogether and remove even the slight effect.

That is Guither's claim; however, he has not verified this claim. It would be easy enough to do - just eyeball the points, put them into a spreadsheet, and check.

The thing is, that single measurement is _not_ the only observation driving this correlation. The mean of the data shown at "joints per occasion" = 2 or 3 is clearly higher than the mean of the data at joints = 0 or 1, even without this outlier.

3. It would have been nice to test this hypothesis but the authors did not release any of their data.

The bolded (his emphasis) is outrageous.

No, it's really not outrageous for authors to refuse to share potentially identifiable patient data with a random blogger, especially one who is clearly not experienced in working with medical data (i.e. has no idea what identifiable information is, or what is required to keep the information secure).

It's also not outrageous for authors to keep their data from public release until they're done using it for research. Otherwise, you'd get at most 1 paper per study before everyone else jumped in and got all your data for free.

/Guither's headline hunger is just as bad as Breitner's. He really, really wants this study to be wrong.
 
2014-04-21 01:02:57 PM  

lilplatinum: draypresct: He criticizes the paper based on just two issues:

I think the criticisms of incredibly small sample size, only 1 MRI per study member, not accurately controlling for other substance abuse, etc. are all pretty valid too.


The small sample size criticism is why you use p-values. Used properly, they help guide you as to whether the result you're seeing is extreme enough to be likely to have happened by chance.
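
As a rough, generic illustration of that point (a standard formula, nothing taken from the study's data), here is how large a Pearson correlation has to be before it clears two-sided p < 0.05 at various sample sizes; Python with scipy is assumed.

# Illustrative sketch: smallest |r| reaching two-sided p < 0.05 for a given n,
# via the usual t = r * sqrt((n - 2) / (1 - r^2)) transform for Pearson's r.
from math import sqrt
from scipy import stats

def min_significant_r(n, alpha=0.05):
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)   # critical t at n - 2 df
    return t_crit / sqrt(t_crit**2 + n - 2)         # invert the transform

for n in (20, 40, 100, 1000):
    print(f"n = {n:4d}: |r| must exceed roughly {min_significant_r(n):.2f}")

With about 20 subjects, only correlations stronger than roughly 0.44 register at all, which is the sense in which the p-value already accounts for the small sample.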

How many MRIs per person would be valid? Each time I've gotten an MRI (for separate injuries, years apart), they made their diagnosis based on a single MRI per injury.

Re: substance abuse, I'm not quite sure what you're saying. Are you referring to the idea that pot is a 'gateway drug', and that pot users are more likely to be also taking more damaging drugs? Or were you making a different point?
 
2014-04-21 01:07:23 PM  
In other news, air causes bad science reporting.
 
2014-04-21 01:09:23 PM  
The best reporting is bad reporting because good reporting is boring.
 
2014-04-21 01:28:47 PM  

draypresct: However, I don't think this is completely ignorable. The proper response would be a larger-scale study, not tossing this one away.


Fair enough.  Perhaps a better way to put it is it's ignorable for everyone except the authors of the study, who now have reason to do a better, larger scale study.

draypresct: Guither... May I ask where you're getting this?


Not from Guither.  The link I posted that you replied to was actually from Lior Pachter.  I assumed all along you had just confused the two names; otherwise, we might be reading different critiques.

draypresct: to identify the statistics that were associated with a p<0.05, reported results without regard for what the p-values showed.


I made a mistake here.  It wasn't 0.05, it was a corrected value:

The best case was the left nucleus accumbens (Figure 1C) with a corrected p-value of 0.015 which is over the authors' own stated required threshold of 0.0125 (see caption).

[liorpachter.files.wordpress.com image]

This seems to be a major point of the article: you can see the correlation between joints per occasion and volume.  Their p value does not meet their own stated threshold (p<0.0125) for significance.  That means, according to the authors' own criterion, this correlation does not even exist.  What's more, look at the actual points on the graph.  If you ignore the one outlier (2 joints, ~850 mm^3) and eyeball it, there's no increasing trend at all.  The mathematician, Lior Pachter, would really like to do that analysis but he cannot because the raw data was not released.  That's a really scathing critique.  He's saying their entire argument is based on an artifact, an outlier.  That's why I love nerd fights.  They sound so nerdy, yet at their core they're so brutal.
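
Side note: the 0.0125 threshold quoted above happens to equal 0.05 divided by 4, so, purely as a hypothetical illustration of the corrected-vs-uncorrected check being argued about (not a claim about how the authors actually derived their cutoff), here is how a Bonferroni test over four made-up p-values plays out; Python with statsmodels is assumed.

# Hypothetical p-values only, to illustrate a Bonferroni check where the
# per-test cutoff works out to 0.05 / 4 = 0.0125.
from statsmodels.stats.multitest import multipletests

raw_p = [0.005, 0.015, 0.03, 0.20]   # made-up values; 0.015 mirrors the quote
reject, corrected_p, _, bonf_alpha = multipletests(raw_p, alpha=0.05,
                                                   method="bonferroni")
print("per-test cutoff:", bonf_alpha)   # 0.0125 for four comparisons
for p, ok in zip(raw_p, reject):
    print(f"raw p = {p:.3f} -> {'significant' if ok else 'not significant'}")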
 
2014-04-21 01:32:47 PM  
draypresct:
How many MRIs per person would be valid? Each time I've gotten an MRI (for separate injuries, years apart), they made their diagnosis based on a single MRI per injury.

I don't know, if your goal is to demonstrate that pot changes brain structure, wouldn't it be more demonstrative to do multiple MRIs with people as they continue to smoke dope over a period of time?   Or to at least control the studies based on amount smoked?   I've had multiple MRIs before, and they were measuring something they knew they were looking for rather than trying to determine long term ongoing changes due to addiction.

Re: substance abuse, I'm not quite sure what you're saying. Are you referring to the idea that pot is a 'gateway drug', and that pot users are more likely to be also taking more damaging drugs? Or were you making a different point?

The paper nebulously refers to 'recreational smokers', most of whom average 10 joints+ a week (with one guy at 30). That's far beyond a normal recreational user.  It then asserts none of the subjects were "abusing" other drugs, but if they have stretched 'recreational smoker' to that extreme, then it is certainly questionable how adequately they charted additional drug use in a demographic that is probably fairly likely to be using additional drugs (if you are smoking over 4 joints a day, you are either smoking garbage or quite possibly not someone who cares all that much about putting substances in his body).

I don't know the validity of the rebuttal article (which doesn't have any obligation to be a scholarly paper), but there do seem to be enough holes in it, and a pretty huge jump in the original paper's press release, to not take it and jump to conclusions the way myriad news outlets already have.
 
2014-04-21 01:37:57 PM  

lilplatinum: draypresct:


The paper nebulously refers to 'recreational smokers', most of whom average 10 joints+ a week (with one guy at 30). That's far beyond a normal recreational user.


I would be completely comatose if I smoked 10 joints a week.  And broke.  And unemployed.

That's like comparing someone who drinks a couple of beers after work to someone who does a fifth a day.
 
2014-04-21 01:38:50 PM  

draypresct: The mean of the data shown at "joints per occasion" = 2 or 3 is clearly higher than the mean of the data at joints = 0 or 1, even without this outlier.


You simply cannot be making this argument.  Right?  You yourself know to do the statistics:

draypresct: The small sample size criticism is why you use p-values.  Used properly, they help guide you as to whether the result you're seeing is extreme enough to be likely to have happened by chance.

You always use p-values.  The only reason not to is you're an idiot, or alternatively you calculated the p-value and it was not significant so you're trying to sneak something by.  So why did you post to me an analysis comparing means?

draypresct: It's also not outrageous for authors to keep their data from public release until they're done using it for research.


Sure but that's not even close to what's happening here.  Numbers were utilized to generate this graph:

draypresct: No, it's really not outrageous for authors to refuse to share potentially identifiable patient data with a random blogger


Ridiculous.  You have a fundamental misunderstanding of the argument here.

[img.fark.net image]
All that he wants is those actual numbers to be released.  Giving exact numbers in addition to the graph has fark-all to do with patient confidentiality.
 
2014-04-21 02:17:58 PM  
Media overreacts, news at 11

Shocking!


/need a doobie for this...
 
2014-04-21 02:39:17 PM  
Ignorance kills more than any drug.

Examples: anyone who took Jenny McCarthy at her word. Your average fundamentalist.
 
2014-04-21 03:38:43 PM  

lennavan: draypresct: The mean of the data shown at "joints per occasion" = 2 or 3 is clearly higher than the mean of the data at joints = 0 or 1, even without this outlier.

You simply cannot be making this argument.  Right?  You yourself know to do the statistics:

draypresct: The small sample size criticism is why you use p-values.  Used properly, they help guide you as to whether the result you're seeing is extreme enough to be likely to have happened by chance.

You always use p-values.  The only reason not to is you're an idiot, or alternatively you calculated the p-value and it was not significant so you're trying to sneak something by.  So why did you post to me an analysis comparing means?

draypresct: It's also not outrageous for authors to keep their data from public release until they're done using it for research.

Sure but that's not even close to what's happening here.  Numbers were utilized to generate this graph:

draypresct: No, it's really not outrageous for authors to refuse to share potentially identifiable patient data with a random blogger

Ridiculous.  You have a fundamental misunderstanding of the argument here.

[img.fark.net image 246x235]
All that he wants is those actual numbers to be released.  Giving exact numbers in addition to the graph has fark-all to do with patient confidentiality.


Heh, based on that sampling I would say that you need to avoid having 2 a day by having 3 a day.
 
2014-04-21 04:15:19 PM  
when I think of high standards in science reporting, I think of Fark.com

/said no one
 
2014-04-21 05:07:59 PM  

lennavan: draypresct: No, it's really not outrageous for authors to refuse to share potentially identifiable patient data with a random blogger

Ridiculous. You have a fundamental misunderstanding of the argument here. All that he wants is those actual numbers to be released. Giving exact numbers in addition to the graph has fark-all to do with patient confidentiality.


I did misunderstand. I had thought he was asking for the full patient data, including any confounding factors, in order to perform adjusted analyses.

If you're not asking for potentially identifiable information, I really don't know why you and Guither are accusing the authors of hiding the very information they've put out in plain sight on that graph. I mean, really - just eyeball it, enter it into a spreadsheet until it looks close enough, and Guither could easily run the numbers himself. He didn't bother. Instead, he made a patently false claim that the entire relationship is being driven by a single outlier.

lennavan: draypresct: The mean of the data shown at "joints per occasion" = 2 or 3 is clearly higher than the mean of the data at joints = 0 or 1, even without this outlier.

You simply cannot be making this argument. Right? You yourself know to do the statistics:

lennavan: You always use p-values. The only reason not to is you're an idiot, or alternatively you calculated the p-value and it was not significant so you're trying to sneak something by. So why did you post to me an analysis comparing means?


The "analysis comparing means" showed that there was still a trend, even with the outlier removed. This directly contradicts Guither's point that the entire relationship was driven by the outlier. You can tell by eye that, even without that data point, the mean value for 2 or 3 joints per occasion is numerically higher than that for 0 or 1. . . . Oh, what the hell. It takes literally less time to eyeball the data from the graph and do the analysis than it does to type up this response. Here you go:

[img.fark.net image]

The red line is the one without that uppermost 'outlier'. The slope is still positive, and reasonably significant (p = 0.06 v. 0.03 with that point - yes, I probably got a few points a bit off).
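
For anyone who wants to replicate that kind of check, here is a minimal sketch using synthetic stand-in numbers (not digitized from the actual figure): fit a line with and without a single unusually high point and compare the slope and p-value; Python with numpy and scipy is assumed.

# Synthetic stand-in data (NOT the study's measurements): compare a fit with
# and without one high point to see how much a single value moves the result.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
joints = np.repeat([0.0, 1.0, 2.0, 3.0], 5)                   # predictor
volume = 650 + 25 * joints + rng.normal(0, 40, joints.size)   # mild built-in trend
volume[-1] = 850                                              # plant an "outlier"

full = linregress(joints, volume)
trimmed = linregress(joints[:-1], volume[:-1])                # drop the outlier
print(f"with the outlier:    slope = {full.slope:6.1f}, p = {full.pvalue:.3f}")
print(f"without the outlier: slope = {trimmed.slope:6.1f}, p = {trimmed.pvalue:.3f}")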

Why do you think that Guither spent the effort to write that entire column without bothering to do this very, very simple analysis?
 
2014-04-21 05:11:03 PM  

Egoy3k: Heh, based on that sampling I would say that you need to avoid having 2 a day by having 3 a day.


Lol, but I'm gonna get all pedantic and say that there's no real difference between 2/day and 3/day in that graph.
 
2014-04-21 05:48:31 PM  

draypresct: I really don't know why you and Guither


The guy's name is not Guither, it is Lior Pachter.

lennavan: Not from Guither. The link I posted that you replied to was actually from Lior Pachter. I assumed all along you had just confused the two names; otherwise, we might be reading different critiques.


draypresct: a patently false claim that the entire relationship


What relationship?  The authors declared a p value of less than 0.0125 is the threshold for significance.  Then the authors calculated the p value and it was 0.015.  Therefore there is no goddamn relationship, the two are unrelated.

draypresct: The red line is the one without that uppermost 'outlier'. The slope is still positive, and reasonably significant (p = 0.06 v. 0.03 with that point - yes, I probably got a few points a bit off).


The authors calculated a p value of 0.015 with that point.  You calculated a p value twice as large by eyeballing it.  This is where you realize why you simply cannot just eyeball the data, right?

draypresct: Why do you think that Guither spent the effort to write that entire column without bothering to do this very, very simple analysis?


I'm at a loss for words.  You're an idiot.

draypresct: Lol, but I'm gonna get all pedantic and say that there's no real difference between 2/day and 3/day in that graph.


There also is no real (real meaning statistical) difference between 0/day and 3/day in that graph.  You clearly understand the bits and pieces yet you refuse to apply your knowledge.  You're clearly an idiot.
 
2014-04-21 05:53:28 PM  
I am not science smrt, but my understanding was this: Media sees MRI scans and makes bad journalism, not MRI scans make bad science. Is this correct?
 
2014-04-21 06:25:46 PM  

MaudlinMutantMollusk: I had 4 MRIs on my head inside of a month and they couldn't find anything at all

/wait...


I had to have an IQ test the other week. Luckily, it came back negative.
 
2014-04-21 06:42:12 PM  

lennavan: draypresct: The red line is the one without that uppermost 'outlier'. The slope is still positive, and reasonably significant (p = 0.06 v. 0.03 with that point - yes, I probably got a few points a bit off).

The authors calculated a p value of 0.015 with that point. You calculated a p value twice as large by eyeballing it. This is where you realize why you simply cannot just eyeball the data, right?


Getting a value within a couple of percentage points by eyeballing it = close enough. There's a trend - omitting that single data point does not really alter the trend.

lennavan: Pete Guither notes a scathing assessment of Gilman et al.'s study by U.C.-Berkeley computational biologist Lior Pachter, who calls it "quite possibly the worst paper I've read all year."

http://liorpachter.wordpress.com/2014/04/17/does-researching-casual-marijuana-use-cause-brain-abnormalities/


lilplatinum: draypresct:
How many MRIs per person would be valid? Each time I've gotten an MRI (for separate injuries, years apart), they made their diagnosis based on a single MRI per injury.

I don't know, if your goal is to demonstrate that pot changes brain structure, wouldn't it be more demonstrative to do multiple MRIs with people as they continue to smoke dope over a period of time?   Or to at least control the studies based on amount smoked?   I've had multiple MRIs before, and they were measuring something they knew they were looking for rather than trying to determine long term ongoing changes due to addiction.


I completely agree that A) no-one should change their opinions on pot because of this study alone, and B) the ideal study would be to take a very large group of people and perform regular MRI scans on them.

In the large-scale study, they should do as you suggest and look at the changes over time, among those who took up regular pot smoking (to varying degrees) and those who did not. They'd want to record a lot of data about every patient; for example, if people are taking up pot because of medical conditions, they'd want to know about it and either exclude them or statistically account for them in the analyses.

This would be incredibly expensive, and there would need to be some serious justification before getting anywhere near that kind of funding. This study is the first step. The proper second step should be independent validation, preferably by a different research group using a different patient population. The study we described above would take place after there's enough small-scale validation to justify the expense.

In this thread, I'm just arguing that this first step shouldn't be completely ignored. I'm certainly not arguing that it's definitive.
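
To give a rough sense of what "incredibly expensive" means in subject counts (the effect sizes and power target below are assumptions for illustration, not numbers from the paper), a standard two-sample power calculation with statsmodels shows how fast the required sample grows as the expected effect shrinks.

# Hypothetical power calculation (assumed effect sizes / 80% power, not taken
# from the paper): subjects per group needed for a two-sample comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.8, 0.5, 0.2):   # conventional large / medium / small effect sizes
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
                             alternative="two-sided")
    print(f"effect size d = {d}: about {n:.0f} subjects per group")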

Re: substance abuse, I'm not quite sure what you're saying. Are you referring to the idea that pot is a 'gateway drug', and that pot users are more likely to be also taking more damaging drugs? Or were you making a different point?

The paper nebulously refers to 'recreational smokers', most of whom average 10 joints+ a week (with one guy at 30). That's far beyond a normal recreational user.  It then asserts none of the subjects were "abusing" other drugs, but if they have stretched 'recreational smoker' to that extreme, then it is certainly questionable how adequately they charted additional drug use in a demographic that is probably fairly likely to be using additional drugs (if you are smoking over 4 joints a day, you are either smoking garbage or quite possibly not someone who cares all that much about putting substances in his body).


Could be. I've seen different studies about the validity of the concept of pot as a gateway drug. I have no idea if the authors tested the subjects for other drugs, or if they just asked them. That would be certainly something to think about when designing the follow-up study.

I don't know the validity of the rebuttal article (which doesn't have any obligation to be a scholarly paper), but there do seem to be enough holes in it, and a pretty huge jump in the original paper's press release, to not take it and jump to conclusions the way myriad news outlets already have.

If I were one of the co-authors on this paper (I'm not), I'd be pretty embarrassed by Breitner's antics, and by that one word that got past the editors. Absent the media coverage, I don't believe from what I've seen that the authors have anything else to be particularly embarrassed about. They might have found some new science. Maybe it will pan out, maybe it won't.
 
2014-04-21 06:51:15 PM  

bighairyguy: But look at all the other problems it causes:

[img.fark.net image 300x250]


I get all of that when I eat a lot of Taco Bell. Which of course, illustrates that correlation is not causation. Which means, all those symptoms are caused by eating at Taco Bell, and most people who eat at Taco Bell smoke marijuana, but it would be wrong to say that those symptoms are caused by smoking marijuana.
 
2014-04-21 07:09:19 PM  
Whups, got my replies mixed up. Sorry about that, lennavan & lilplatinum. Take my previous reply as being to lilplatinum.

lennavan:
draypresct: I really don't know why you and Guither

The guy's name is not Guither, it is Lior Pachter.

lennavan: Not from Guither. The link I posted that you replied to was actually from Lior Pachter. I assumed all along you had just confused the two names; otherwise, we might be reading different critiques.

You're right - I did misread your post as stating that the critique came from Guither, not Pachter. Guither was quoting Pachter. I apologize for the confusion.

lennavan: draypresct: The red line is the one without that uppermost 'outlier'. The slope is still positive, and reasonably significant (p = 0.06 v. 0.03 with that point - yes, I probably got a few points a bit off).

The authors calculated a p value of 0.015 with that point. You calculated a p value twice as large by eyeballing it. This is where you realize why you simply cannot just eyeball the data, right?


I got literally within a couple of percentage points of the 'correct' p-value by eyeballing. That's close enough to check whether the outlier was what was driving the slope.

Re the 1.5% difference in calculated p-value: Do you really think that if you printed out the graph, took a ruler, measured the points as precisely as possible, and generated a dataset that achieved the original p = 0.015, that removing that outlying data point would change the slope to be flat, as per Pachter?
It is worth noting that the removal of the outlier at a volume of over 800 would almost certainly flatten the line altogether and remove even the slight effect. It would have been nice to test this hypothesis but the authors did not release any of their data.

Spoiler: removing that point will not have the effect that Pachter claimed. Feel free to perform the analysis and prove me wrong.

lennavan: What relationship? The authors declared a p value of less than 0.0125 is the threshold for significance. Then the authors calculated the p value and it was 0.015. Therefore there is no goddamn relationship, the two are unrelated.


lennavan: draypresct: Suppose I wanted to prove that nosebleeds are not correlated with number of punches in the face

Your stance is flawed. It is exceedingly difficult to prove a negative. All your study would prove is that your study was unable to identify a correlation between the two.


You're contradicting yourself here. Think about what that means - are you being objective about this?

The authors did not claim that the results were significant. They claimed they saw a trend in their data on this particular point ('trend towards significance' was their weasel-worded statement). This was one result out of several, and in fact out of the three main areas, it appears to have been the one with the least significant results, looking at table 4. On a number of measures, there appeared to be a significant relationship, with p < 0.01, especially in the shape area.

Pachter picked a fight over the least significant result, and made obviously, checkably false claims in doing so.
 
2014-04-21 11:02:09 PM  

MaudlinMutantMollusk: I had 4 MRIs on my head inside of a month and they couldn't find anything at all

/wait...


[www.monologuedb.com image]

Sympathizes...
 