
The Mangled Web of Scientific Misinformation

August 9, 2011

Admit it. It *is* a sexy turtleneck.

I’ve done it. As an educated guess, I’d reckon you’ve done it too. Most recently, David DiSalvo at Forbes did it with his story, “Why Scientists and Journalists Don’t Always Play Well Together.”

Next to lusting after Carl Sagan and pining for the Space Race, lamenting the woeful state of science writing is a favourite pastime of science writers and scientists alike.

DiSalvo’s tale has been told many times, and its key ideas can be distilled down to,

Scientists mistrust journalists because the popular market for news can, and very often does, affect how stories are told.

And,

[T]he ambiguity surrounding many scientific findings doesn’t translate well to popular messaging.

In other words, in the act of converting complex science into broadsheet coverage, science writers sometimes slip, over-dramatizing and over-simplifying. Or, in even more other words, bad science journalism is science journalists’ fault. Since the journalist is the last wall standing between the edited page and the outside world, this stance makes sense. But here’s a line you’ll never expect:

It’s not so simple.

Now, don’t get me wrong: I’m all for pointing fingers, particularly those extra-waggly self-deprecating ones. I just think that, sometimes, it can be nice to spread that blame around. Science journalists do not work in isolation, and sometimes errors start further—much further—up the chain. But this is no exploration of the extreme, vilifying that lone, over-hyping researcher who bursts onto the scene in a blaze of press conference-induced glory. Rather, this is an investigation of the mundane; of the modest, common ways mistakes make their way into the news.

Three studies in the past few years have challenged, or at least balanced, the view of the science writer as the source of scientific media misinformation. They all focus on medicine, but since health and medical news dominates popular science coverage (at 90% or more), this seems like a fair concession.

“Fat Gene” found! In a press release…

In their study, “Lost in Translation? A Comparison of Cancer-Genetics Reporting in the Press Release and Its Subsequent Coverage in the Press,” Jean Brechman, Chul-joo Lee and Joseph Cappella explore the differences in tone, story content and level of “deterministic language” between 23 press releases and 71 related news stories.

Based on a narrow set of news stories from major U.S. newspapers between 2004 and 2007, the authors assessed articles that reported on new scientific research, referenced only one study at a time, and were driven by a press release.

The authors found that genetics stories were over-simplified and deterministic 67.5% of the time, speaking directly to DiSalvo’s complaint that, “What isn’t quite clear [is how] any given research study magically becomes ‘A + B = C’ in an article about the study.”

The authors focused on tracking the distortion of the research’s core claim, from “B gene is associated with X, given Y” to “B causes X” to “B! X! Ahhhhhhhh!” They found that for press release–newspaper story pairs that made basically the same central point, the press release had more deterministic language 34.4% of the time, and the newspaper story 33.1% of the time.

Most important of all, however, the authors write,

While claims within popular press often lacked specificity and overinterpreted preliminary findings, other errors commonly attributed to science journalism, such as lack of qualifying details and use of oversimplified language (e.g., “fat gene”), were observed in press releases… Of the 10 cases using the “outcome gene” term, its usage originated in the press release 70% of the time.

Brechman et al. ultimately conclude that they “find no evidence to support the claim that the process of language distortion occurs as scientific news is translated from the intermediate press release to coverage in the press.”

Point, science journalists.

It’s not just the ‘how?’ It’s also the ‘what?’

Brechman, Lee and Cappella showed that press releases can do a lot to set the tone of news coverage (Gay Gene! Math Gene!), but these discrepancies are but a pittance compared to the blame we science journalists can dodge thanks to the research of Steven Woloshin, Lisa Schwartz, Samuel Casella, Abigail Kennedy and Robin Larson.

In their study, “Press Releases by Academic Medical Centers: Not So Academic?” Woloshin et al. analyzed 200 of the 989 press releases issued by ten major U.S. medical research institutions in 2005. They split the pile into two main groups: research that was grounded in human trials, and studies based on laboratory or animal testing.

Of the 113 studies focusing on people, only 17% were meta-analyses or randomized trials—the types of studies that give you a close-to-definitive answer to a medical question. The authors found that 40% of the studies were “inherently limited” by small sample sizes or the lack of a control group, and fewer than half of the press releases couched their medically marvellous statements in the relevant caveats.

For the other set of press releases, those focused on mice, cell cultures or other tangential subjects, 64 out of 87 suggested that the study’s results were directly relevant to human health, with the majority saying nothing about how ridiculous this is. (Seriously. This is probably scientists’ and science journalists’ biggest pet peeve. Knock it off.)

Now remember, these press releases were written by the staff of major medical institutions. The kind who know what makes a good study. If they think the science is solid enough to go out into the world, why shouldn’t a journalist think it’s ready for the front page (of section D, let’s be serious)?

The authors also found that 12% of press releases were based on unpublished research. As a way forward, they write that,

The quickest strategy for improvement would be for centers to issue fewer releases about preliminary research, especially unpublished scientific meeting presentations, because findings often change substantially—or fail to hold up—as studies mature.

Of the modern scientist-journalist relationship, DiSalvo writes,

Scientists, as sources to journalists in the maelstrom, have become increasingly fearful that the credibility of their findings is being stretched thin to grab readers’ attention.

But maybe it is time for the scientists to look a bit closer to home.

Perhaps these studies don’t go quite as far toward protecting the collective journalistic ego as I would hope. After all, public relations staff, journalists, and science writers of every colour perform—from the scientist’s point of view—essentially the same function.

But wait… there’s more!

You! In the ivory tower! Yeah, you!*

By far my favourite of the bunch is François Gonon, Erwan Bezard and Thomas Boraud’s “Misrepresentation of Neuroscience Data Might Give Rise to Misleading Conclusions in the Media: The Case of Attention Deficit Hyperactivity Disorder.”

The authors were working away, collecting studies on their area of interest: Attention Deficit Hyperactivity Disorder.

“Whilst preparing our review on ADHD we noticed several types and cases of data misrepresentation,” they write. This apparently annoyed them so much that they decided to investigate it.

Here, we point out the misrepresentation of the neurobiological facts at its initial level, i.e. inside individual scientific articles.

The trio analyzed 360 neuroscience research articles and found that journal authors tend to stray from reality in three ways: disagreeing with their own results, over-inflating them, or over-extending the value of their research.

In two cases, the authors suggest that the conclusions of Volkow et al. and Barbaresi et al., as they appeared in the studies’ abstracts, disagreed with the published results. Gonon et al. found that these two studies were collectively picked up by the media 61 times, with only one reporter highlighting the discrepancy. To make matters worse, the conflicting conclusions were also cited by other scientific researchers a few dozen times.

Gonon, Bezard and Boraud also found it to be incredibly common for researchers to state a finding firmly in the abstract even though, on closer inspection, the study’s own results do not justify such a bold claim. As an example, the authors focused on studies investigating the link between one specific gene, DRD4, and the frequency of ADHD. The important point, for Gonon and colleagues, is that while the link between DRD4 and ADHD is statistically strong, the added risk that comes with carrying a flawed version of the gene is relatively small. Of the 159 studies that looked at the link between DRD4 and ADHD, however, only 25 mentioned this fact, an omission the authors consider a major over-simplification. Following up, they found that 82% of the related news stories carried through this over-simplified, and often over-dramatized, view of the relationship between DRD4 and ADHD.
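
To make that point concrete, here is a minimal back-of-the-envelope sketch of how a statistically strong link can still mean a small individual risk. The numbers are hypothetical stand-ins for illustration, not figures from Gonon et al.

```python
# Purely illustrative sketch (hypothetical numbers, not data from Gonon
# et al.): a statistically strong gene-disorder association can still
# translate to only a small change in absolute risk.

baseline_risk = 0.05   # assumed population prevalence of ADHD (~5%)
relative_risk = 1.4    # assumed relative risk for carriers of the variant

carrier_risk = baseline_risk * relative_risk

print(f"Risk for non-carriers:  {baseline_risk:.1%}")                  # 5.0%
print(f"Risk for carriers:      {carrier_risk:.1%}")                   # 7.0%
print(f"Absolute risk increase: {carrier_risk - baseline_risk:.1%}")   # 2.0%

# Pooled across tens of thousands of subjects, a 5% -> 7% difference is
# easy to detect (hence "statistically strong"), yet the vast majority
# of carriers never develop ADHD -- the nuance those 134 studies skipped.
```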

Finally, “from our survey of 101 articles [based on animal models] we found that… 23 studies extrapolate to new therapeutic prospects.”

They say that researchers who, in their own research articles, make the leap from animal models to the potential for treatments that could change or save lives,

feed [the] illusory short-term hopes in patients and their families. For example, animal studies about cellular therapies for spinal cord injury have been put forward by for-profit institutions selling these therapies to unfortunate patients although these interventions are not yet proven safe and effective by properly conducted clinical trials.

(Interestingly, the tendency of research articles to over-reach from their furry findings to human health scaled with the impact factor of the publishing journal. Huh.)

But of the lowly journalist, they write,

Our examples of data misrepresentation in scientific reports seem to be correlated with similar misrepresentation in the lay media. Thus, we speculate that data misrepresentation in the scientific literature might play a part in the distortion of data into misleading conclusions in the media.

The blame is spread evenly—no one party is the source of scientific misinformation. Science communication is an interacting web, with each participant affecting the quality of those around them. As an optimistic glance toward the future, DiSalvo nails it when he says,

If we are willing to extend some trust and work as partners trying to reach the same goal—thoughtful communication of important scientific findings to the public—then the science sandbox needn’t be such a treacherous place to play.

———————————————————————————————————————

*Thanks to Hadas Shema for alerting me to this study.

References:

Brechman, J., Lee, C., & Cappella, J. (2009). Lost in translation? A comparison of cancer-genetics reporting in the press release and its subsequent coverage in the press. Science Communication, 30(4), 453–474. DOI: 10.1177/1075547009332649

Gonon, F., Bezard, E., & Boraud, T. (2011). Misrepresentation of neuroscience data might give rise to misleading conclusions in the media: The case of attention deficit hyperactivity disorder. PLoS ONE, 6(1). PMID: 21297951

Woloshin, S., Schwartz, L. M., Casella, S. L., Kennedy, A. T., & Larson, R. J. (2009). Press releases by academic medical centers: Not so academic? Annals of Internal Medicine, 150(9), 613–618. PMID: 19414840

10 Comments
  1. August 10, 2011 1:50 am

    Yeah, that and also most scientists are bad writers.

    • August 10, 2011 11:37 am

      I thought it would’ve taken at least 8 or 9 comments for the ol’ scientists-are-crap-writers proclamation to come out.

  2. August 10, 2011 7:07 pm

    I don’t disagree with anything you’ve said here, but I also know how frustrating it can be to work with clients from large companies/institutions who don’t ‘get’ what PR is or what does actually constitute news. Perhaps a more positive way to approach this would be to compile a checklist of what actually does constitute news worth reporting on the scientific front.

    Some suggestions: funding of a major new study; commencement of clinical trials; completion of clinical trials…. What else?

  3. August 10, 2011 7:29 pm

    Ummm… A couple of points:

    * First, as an aside, I’ve never heard of scientists writing press releases. The “staff of major [medical] institutions” was, at least in one lab I worked at, the secretary. Yep, she booked flights, did the project budgets, bought supplies – and wrote press releases.

    * That’s not the main problem. The main problem is the acknowledgement that it’s OK for “science journalists” to rely on press releases for their coverage. Do good political reporters simply repeat the official statements of the current administration and call it news?

    The quality (or lack of it) of press releases should simply not matter. It’s advertising, not news. An actual science reporter would go to the paper itself. I’d expect a _good_ science reporter to contact the author for clarifications, and another researcher in the field for a less biased take on the whole thing.

    If the quality of the advertising blurb determines the quality of the “reporting”, then good riddance to the hack that falls for the hype.

  4. August 11, 2011 9:12 am

    @Janne Media releases are not advertising, and a PR person would make the papers available to the media on request, as well as making the authors available for interview.

    More generally: so here’s a great example of a very small study that’s been peer reviewed (glowingly) with caveats that the study is very very small:

    http://www.cbc.ca/news/health/story/2011/08/10/leukemia-gene-therapy.html

    Should this not have been reported? The PR process always aligns itself with the organization’s goals (which in this case may be to acquire further funds for research so a larger study can be carried out – it’s not always about glory seeking).

  5. evodevo permalink
    August 11, 2011 5:15 pm

    There’s also the question of funding (present and future) to factor in the process. Who is the researcher trying to impress?

    • August 11, 2011 5:50 pm

      That’s absolutely a factor. But if they are deliberately inflating their study’s worth for economic reasons, then it’s hardly fair to complain when that inflation gets carried through to the news report.

  6. August 17, 2011 9:35 am

    If only people outside of science blogs and the lab would hear this discussion more often. As a former researcher who now writes about children’s health and parenting from a scientific perspective – for the general parenting crowd – I so wish parents (i.e. those not scientifically trained) got better, more nuanced information. Still not sure why they didn’t hear how truly speculative the Wakefield autism study appeared from day one. Not sure either why they don’t learn how relatively small the “effects” (i.e. links!) between breastfeeding and every other outcome variable truly are in the literature. It’s a shame.

