And the winner…
Of the first ever CMBR award for push science journalism is…
David Dobbs for his New York Times article A Depression Switch?
Wait, the what award?
A couple of months ago I started a project to try to figure out whether there is any big disparity between what practicing science journalists are doing and what science communication academics think we should be doing. That project culminated in an article with a series of tips for new science journalists, an overview piece on some of the issues facing science journalism, and this: a critical assessment of examples of strong science journalism.
I made an evaluation matrix to help me remain objective when reading and judging the stories. I wanted to grade the stories on things I had determined would help reach a broad non-science audience. I scored them on eight criteria, including: acknowledging the process of science, limiting the number of new science concepts per story, and using metaphors, simple language, and references to everyday objects.
The stories were pulled from the 2009 AAAS Kavli Science Journalism Awards. And of those, for my own sanity, I stuck to print stories. Also, during interviews with some prominent science journalists, like Carl Zimmer, David Dobbs, and Nicola Jones, I asked them for examples of what they felt was their own best work. (Sorry Ed, I forgot that question.)
The full list I used was:
They’re all, as you’d imagine, really good reads. But this analysis wasn’t about that. I was grading on what I thought would help engage audiences who aren’t usually interested in science. The number in brackets next to each article was its score out of a possible eight. You can see the full breakdown in the spreadsheet I used. I’m not going to pretend this evaluation was super rigorous – there are all sorts of issues with selection bias, and my grading was probably affected by how much my interests aligned with the subject matter, what I had for lunch, and how tired I was. I took some steps to be objective, though – I had a grading key so it wasn’t totally at the whim of my mood, and I used a random number generator to pick my reading order. Either way, this was an excuse to read some good writing, and to take off my press fedora and blogger conquistador helmet and put on my media critic hat.
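For the curious, the mechanics of that process are simple enough to sketch in a few lines of code. This is only an illustration, not the actual spreadsheet: the article titles and criterion names below are hypothetical stand-ins, and the one example mark is made up.

```python
import random

# Hypothetical article titles -- stand-ins, not the actual list from the post.
articles = ["Article A", "Article B", "Article C"]

# Eight binary criteria (1 = present, 0 = absent); names are illustrative only.
criteria = [
    "process of science", "limits new concepts", "metaphors",
    "simple language", "everyday objects", "character",
    "narrative", "placement",
]

# Randomize the reading order so fatigue and mood don't always
# fall on the same pieces.
rng = random.Random(42)  # fixed seed so the order is reproducible
order = articles[:]
rng.shuffle(order)

# Score sheet: one row of 0/1 marks per article, filled in while reading.
scores = {title: dict.fromkeys(criteria, 0) for title in articles}
scores["Article A"]["metaphors"] = 1  # example mark, invented for the demo

# Total out of a possible eight per article.
totals = {title: sum(marks.values()) for title, marks in scores.items()}
print(totals)
```

A grading key would then map each criterion to a written definition, so a mark means the same thing on the first read as on the last.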
I imagine it would be a little beret.
Now one thing about how the results turned out really irked me, and if you look at the scores, it’ll probably jump out at you, too. Carl Zimmer, easily regarded as one of the best and most respected science journalists, totally bombed my test. However, I have a couple of theories about what might be going on that I want to throw out there.
1) Carl Zimmer’s stories were all selections from the AAAS awards. One of these winners, also Zimmer’s self-selected best piece, got a 1.5 out of 8. So here’s what I’m thinking: these stories were all very science-heavy. They presented the science very accurately and did a great job of explaining it in the context of science culture and its development. These are things it would be easy to see the AAAS valuing; their commitment is to science and its associated culture. Unfortunately, these stories were, I think, a bit too science-heavy for a non-science audience to grasp. The firefly story brought in and described many complex interactions: flash patterns, nuptial gifts, predator-prey relations. That also made it difficult to determine exactly what the focus of the story was. This is fine for a scientifically literate and interested audience, but I don’t know how it would fare otherwise. The AAAS selections may themselves carry a selection bias towards these more scientist-friendly stories.
Now some counters to this:
– Gary Wolf’s WIRED article was also an AAAS selection, and it scored a 7.
– Maybe I’ve got it all wrong. The New York Times recently found that some of its most-shared stories come from the science section.
2) Zimmer’s pieces all ran as part of the normal day-to-day affairs of the New York Times. They were newspaper stories, not magazine features like Dobbs’ or Wolf’s. The tighter limits on space (and, one can assume, time) leave less wiggle room for character development and story progression. This point seems to stand up pretty well: feature articles tended to score higher than their shorter counterparts. It may be stupidly obvious that more story and more character make for a more engaging article, but I’m unsure what it means for handling the day-to-day news in science. Maybe that is where we’ll see a bit of a split: keep the science news for the science people, and push features, documentaries, and other long-form stories out to the broader public.
I’m not really sure what is going on, or how to explain it, but some of the numbers I saw in the “Total” row surprised me. I’d love to hear any other theories about my little dilemma.
That being said, I’ve also come across some good counter-points to my own grading criteria since I made this evaluation matrix. One of the assessment criteria I used was story placement – was it in a science section? A science magazine like Nature? I felt this was an important category because I thought a reader who doesn’t consider themselves interested in science would never come across stories, even truly fantastic, engaging stories, if they were buried away in the science section.
But yikes, was I wrong. Ed Yong and others put me in my place! They pointed out that this old model of expecting the reader to seek out your work is long dead. A good story, no matter what section it originated in, will blast its way around the internet through Twitter, Facebook, or Reddit. This means that getting exposure to non-science audiences depends more on story quality than on traditional things like being on the front page or being in a big-circulation paper.
That’s all for now. I’m sure I’ll come up with some new theories soon. I guess I should go ahead and give everyone a +1 to their score, then.