The Mangled Web of Scientific Misinformation
Next to lusting after Carl Sagan and pining for the Space Race, lamenting the woeful state of science writing is a favourite pastime of science writers and scientists alike.
DiSalvo’s tale has been told many times, and the key ideas can be distilled to these:
Scientists mistrust journalists because the popular market for news can, and very often does, affect how stories are told.
[T]he ambiguity surrounding many scientific findings doesn’t translate well to popular messaging.
In other words, in converting complex science into broadsheet coverage, science writers sometimes slip, over-dramatizing and over-simplifying. Or, in even more other words, bad science journalism is science journalists’ fault. Since the journalist is the last wall standing between the outside world and the edited page, this stance makes sense. But here’s a line you’ll never expect:
It’s not so simple.
Now, don’t get me wrong, I’m all for pointing fingers, particularly those extra-waggly self-deprecating ones. I just think that, sometimes, it can be nice to spread that blame around. Science journalists do not work in isolation, and sometimes errors start further—much further—up the chain. But this is no exploration of the extreme, vilifying that lone, over-hyping researcher who bursts onto the scene in a blaze of press conference-induced glory. Rather, this is an investigation of the mundane; of the modest, common ways mistakes make their way into the news.
Three studies in the past few years have challenged, or at least balanced, the view of the science writer as the source of scientific media misinformation. They all focus on medicine, but as health and medical news dominates popular science coverage (at 90% or more) this seems like a fair concession.
“Fat Gene” found! In a press release…
In their study, “Lost in Translation? A Comparison of Cancer-Genetics Reporting in the Press Release and Its Subsequent Coverage in the Press,” Jean Brechman, Chul-joo Lee and Joseph Cappella explore the differences in tone, story content and the level of “deterministic language” between 23 press releases and 71 related news stories.
Based on a narrow set of news stories from major U.S. newspapers between 2004 and 2007, the authors assessed articles that reported on new scientific research, referenced only one study at a time, and were driven by a press release.
The authors found that genetics stories were over-simplified and deterministic 67.5% of the time, answering DiSalvo when he claimed that, “What isn’t quite clear [is how] any given research study magically becomes ‘A + B = C’ in an article about the study.”
The authors focused on tracking the distortion of the research’s core claim, from: “B gene is associated with X, given Y” to “B causes X” to “B! X! Ahhhhhhhh!” They found that for press release-newspaper story pairs that made basically the same central point, the press release had more deterministic language 34.4% of the time, and the newspaper 33.1%.
Most important of all, however, the authors write,
While claims within popular press often lacked specificity and overinterpreted preliminary findings, other errors commonly attributed to science journalism, such as lack of qualifying details and use of oversimplified language (e.g., “fat gene”), were observed in press releases… Of the 10 cases using the “outcome gene” term, its usage originated in the press release 70% of the time.
Brechman et al. eventually decide that they, “find no evidence to support the claim that the process of language distortion occurs as scientific news is translated from the intermediate press release to coverage in the press.”
Point, science journalists.
It’s not just the ‘how?’ It’s also the ‘what?’
Brechman, Lee and Cappella showed that press releases can do a lot to set the tone of news coverage (Gay Gene! Math Gene!), but these discrepancies are but a pittance compared to the blame we science journalists can dodge thanks to the research of Steven Woloshin, Lisa Schwartz, Samuel Casella, Abigail Kennedy and Robin Larson.
In their study, “Press Releases by Academic Medical Centers: Not So Academic?” Woloshin et al. analyzed 200 of the 989 press releases issued by ten major U.S. medical research institutions in 2005. They split the pile into two main groups: research that was grounded in human trials, and studies based on laboratory or animal testing.
Of the 113 studies focusing on people, only 17% were meta-analyses or randomized trials—the types of studies which give you a close-to-definitive answer to your medical question. The authors found that 40% of the studies were “inherently limited” by small sample sizes or the lack of a control group, and fewer than half of the press releases couched their medically marvellous statements in relevant caveats.
For the other set of press releases, those focused on mice, cell cultures or other tangential subjects, 64 out of 87 suggested that the study’s results were directly relevant to human health, with the majority saying nothing about how ridiculous this is. (Seriously. This is probably scientists’ and science journalists’ biggest pet peeve. Knock it off.)
Now remember, these press releases were written by the staff of major medical institutions. The kind who know what makes a good study. If they think the science is solid enough to go out into the world, why shouldn’t a journalist think it’s ready for the front page (of section D, let’s be serious)?
The authors also found that 12% of press releases were based on unpublished research. As a way forward, they write that,
The quickest strategy for improvement would be for centers to issue fewer releases about preliminary research, especially unpublished scientific meeting presentations, because findings often change substantially—or fail to hold up—as studies mature.
Of the modern scientist-journalist relationship, DiSalvo writes,
“Scientists, as sources to journalists in the maelstrom, have become increasingly fearful that the credibility of their findings is being stretched thin to grab readers’ attention.” But maybe it is time for the scientists to look a bit closer to home.
Perhaps these studies don’t go quite as far to protect the collective journalist ego as I would hope. After all, public relations staff, journalists, and science writers of every colour perform—from the scientist’s point of view—essentially the same function.
But wait… there’s more!
You! In the ivory tower! Yeah, you!*
By far my favourite of the bunch is Francois Gonon, Erwan Bezard and Thomas Boraud’s “Misrepresentation of Neuroscience Data Might Give Rise to Misleading Conclusions in the Media: The Case of Attention Deficit Hyperactivity Disorder.”
The authors were working away, collecting studies on their area of interest: Attention Deficit Hyperactivity Disorder.
“Whilst preparing our review on ADHD we noticed several types and cases of data misrepresentation,” they write. This apparently annoyed them so much that they decided to investigate it.
Here, we point out the misrepresentation of the neurobiological facts at its initial level, i.e. inside individual scientific articles.
The trio analyzed 360 neuroscience research articles and found that journal authors tend to stray from reality in three ways: disagreeing with their own results, over-inflating their results, or over-extending the value of their research.
In two cases, the authors suggest that Volkow et al. and Barbaresi et al.’s conclusions, the ones which appeared in the studies’ abstracts, disagreed with their published results. Gonon et al. found that these two studies were collectively picked up by the media 61 times, with only one reporter highlighting the discrepancy. To make matters worse, the conflicting conclusions were cited by other scientific researchers a few dozen times.
Gonon, Bezard and Boraud also found it to be incredibly common for a researcher to state a finding firmly in the abstract even though, upon closer inspection, the study’s own results do not justify such a bold claim. As an example, the authors focused on studies investigating the link between one specific gene, DRD4, and the frequency of ADHD. The important point, for Gonon and colleagues, was that while the association between DRD4 and ADHD is statistically strong, the extra risk that comes with carrying a flawed version of the gene is relatively small. Of the 159 studies that looked at the link between DRD4 and ADHD, however, only 25 mentioned this fact, an omission the authors consider a major over-simplification. Following up, they found that 82% of the related news stories ran with this over-simplified, and often over-dramatized, view of the relationship between DRD4 and ADHD.
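If the “statistically strong but individually small” distinction seems slippery, a toy calculation helps. This sketch uses made-up numbers, not the actual DRD4 data: with a large enough sample, even a modest relative risk produces an overwhelmingly “significant” association.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    using the standard shortcut formula for 2x2 tables."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts (NOT the real DRD4 figures): 100,000 gene carriers
# vs 100,000 non-carriers, with the condition appearing in 6% vs 5%.
carriers_adhd, carriers_ok = 6_000, 94_000
others_adhd, others_ok = 5_000, 95_000

relative_risk = (6_000 / 100_000) / (5_000 / 100_000)  # 1.2 -- a modest bump
chi2 = chi_square_2x2(carriers_adhd, carriers_ok, others_adhd, others_ok)

print(f"relative risk: {relative_risk:.2f}")
print(f"chi-square: {chi2:.1f} (anything above 3.84 is 'significant' at p < 0.05)")
```

The association is undeniable (a chi-square in the nineties, far past the 3.84 cutoff), yet an individual carrier’s risk is only 1.2 times the baseline. Both statements are true at once, which is exactly the nuance those 134 abstracts dropped.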
Finally, “from our survey of 101 articles [based on animal models] we found that… 23 studies extrapolate to new therapeutic prospects.”
Researchers who, in their own research articles, make the leap from animal models to potential treatments that could change or save lives, they say,
feed [the] illusory short-term hopes in patients and their families. For example, animal studies about cellular therapies for spinal cord injury have been put forward by for-profit institutions selling these therapies to unfortunate patients although these interventions are not yet proven safe and effective by properly conducted clinical trials.
(Interestingly, the tendency of research articles to over-reach from their furry findings to human health scaled with the impact factor of the publishing journal. Huh.)
But of the lowly journalist, they write,
Our examples of data misrepresentation in scientific reports seem to be correlated with similar misrepresentation in the lay media. Thus, we speculate that data misrepresentation in the scientific literature might play a part in the distortion of data into misleading conclusions in the media.
The blame is spread evenly—no one party is the source of scientific misinformation. Science communication is an interacting web, with each participant affecting the quality of those around them. As an optimistic glance to the future, DiSalvo nails it when he says,
If we are willing to extend some trust and work as partners trying to reach the same goal—thoughtful communication of important scientific findings to the public—then the science sandbox needn’t be such a treacherous place to play.
Brechman, J., Lee, C., & Cappella, J. (2009). Lost in Translation? A Comparison of Cancer-Genetics Reporting in the Press Release and Its Subsequent Coverage in the Press. Science Communication, 30(4), 453–474. DOI: 10.1177/1075547009332649
Gonon, F., Bezard, E., & Boraud, T. (2011). Misrepresentation of neuroscience data might give rise to misleading conclusions in the media: The case of attention deficit hyperactivity disorder. PLoS ONE, 6(1). PMID: 21297951
Woloshin, S., Schwartz, L. M., Casella, S. L., Kennedy, A. T., & Larson, R. J. (2009). Press releases by academic medical centers: Not so academic? Annals of Internal Medicine, 150(9), 613–618. PMID: 19414840