
How readable is your writing?

December 18, 2010

What grade are you in?

If you’re an adult, that’s probably not a question you’ve had to think about in a while. According to the pieces of paper on my wall, I’m in 17th grade. If you have a PhD, you’re a bit higher. If you never had the opportunity to pursue post-secondary education, then you’re in 12th grade, or lower still if you didn’t complete high school. Your level of education isn’t a perfect measure of your reading ability, but it is a decent estimate.

Now, let’s say that two weeks ago you got wrapped up in the excitement about the discovery of a bacterium that can supposedly use arsenic in its DNA. It was intriguing, everyone seemed to be really excited about it, and you wanted to know more.

So, where could you turn?

If you sometimes go by Dr., you probably had a decent chance at working your way through the original study. If you are early in your university career, Ed Yong had you covered. If you are still in high school, and you wanted to know what other scientists thought of the research, then Carl Zimmer had a story just for you.

These suggestions aren’t based on the level of technical depth or the amount of detail in the science, but are instead based on the different stories’ estimated reading levels. One of the most common assessments of reading level (and the one that comes bundled with Microsoft Word) is the Flesch-Kincaid readability test. Using the number of syllables, words and sentences in a piece of writing, the test spits out the school grade for which the writing is best suited.
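The Flesch-Kincaid grade formula itself is public: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. Here's a rough Python sketch; the syllable counter is a crude vowel-group heuristic, so treat the output as an estimate rather than what Word would report:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count vowel groups, with a floor of one."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    # Common English convention: a trailing silent 'e' usually adds no syllable.
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Running it on a few paragraphs of your own writing is usually enough to show whether you're drifting toward long, polysyllabic sentences.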

Newspapers, which try to make their writing as accessible as possible, often aim to keep their reading level at or below that expected of a high-schooler. Improving readability makes it easier for people to understand the point, and will probably improve the odds of keeping the reader around all the way to the end of an article. As with writing that overuses jargon, if readers have to work to get through a piece of writing, they will probably just tune out.

Now, I’m not sure exactly how they do it (and it may not be perfect*), but Google has started letting you filter your searches by readability. The results aren’t as detailed as the Flesch-Kincaid test. Rather, a site’s content is split into three buckets: basic, intermediate, or advanced.

If the writing isn't in the right bucket, it won't be useful for the reader.

A goal of many science bloggers is, much like newspapers, to communicate their information to people who are interested, but lack any background knowledge or experience. Improving readability, then, will at least in theory mean their content is more accessible. Obviously there is more to communicating science than syllable, word, and sentence ratios, but it’s interesting to come across some numbers that can be used to start comparing blog quality in addition to quantity.

The Top Five Most Readable Science Bloggers

Author           Site Name                Network        Readability**
Suzanne Franks   Thus Spake Zuska         Scientopia     1.21
Greg Laden       Greg Laden’s Blog        ScienceBlogs   1.28
Phil Plait       Bad Astronomy            Discover       1.46
PZ Myers         Pharyngula               ScienceBlogs   1.46
Ryan Anderson    The Martian Chronicles   AGU            1.52

**Where Readability = [(1 × % Basic) + (2 × % Intermediate) + (3 × % Advanced)] / 100
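That weighting collapses Google's three buckets into a single number between 1 (all Basic) and 3 (all Advanced). A minimal Python sketch (function name mine), sanity-checked against Ed Yong's Discover numbers quoted in the comments, treating his "<1%" Basic as 1:

```python
def readability_score(basic: float, intermediate: float, advanced: float) -> float:
    """Weighted average of Google's three readability buckets.

    Inputs are percentages (summing to roughly 100); the result ranges
    from 1.0 (every post Basic) to 3.0 (every post Advanced).
    """
    return (1 * basic + 2 * intermediate + 3 * advanced) / 100

print(readability_score(100, 0, 0))   # all Basic -> 1.0
print(readability_score(1, 84, 15))   # Ed Yong at Discover -> 2.14
```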

I’ve collected the readability scores from 80 science bloggers in a handy little spreadsheet. The blogs I looked at were those from the AGU, PLoS, Wired***, Discover, Guardian, and Scientopia blog networks. I also selected a handful of ScienceBlogs and independent bloggers, picking those who showed up on the Science 3.0 most prolific blogger list, or were assessed in some previous research on science bloggers (which I wrote about here).

The readability scores for all of the bloggers fall between 1, where all of the posts are Basic, and 3, where the writing is all considered Advanced. I also calculated network averages, which tend to cluster right around the middle. The differences were really quite small, with a spread of 1.80 to 2.06. The most readable blog network was ScienceBlogs, followed by the Guardian, AGU, Discover, Wired, Scientopia, and PLoS.

It’s important to remember that this is just a readability calculation, and it doesn’t necessarily mean the writing is more informative or interesting. But, it’s important for writers to knock down as many barriers as they can between their ideas and the audience.

One of the most important things I learned in journalism school was to shed the convoluted writing style I picked up writing essays in university. We were taught to write in a straight line. Subject verb object. Repeat. The content and technical depth in two stories can be identical, but if you present it in a way that is easier to read, your information will reach more eyes.

Possible issues with using Google’s readability measurements:

  1. ***Some sites may not have been properly assessed by Google, and will give skewed readings. I’m pretty sure something funny was going on with the Wired network results: each blogger’s posts were found to fall entirely into a single category, rather than being spread out across the three levels of readability. I left the results in the spreadsheet, but I don’t know how much I trust them.
  2. Some bloggers have a lot of throw-away posts, where they write two or three sentences and link to an outside source or post a video. These sorts of posts might push their blogs heavily towards the Basic bucket. Similarly, science bloggers who quote big chunks of scientific studies might get pushed towards Advanced.
  3. Comments might get included in Google’s readability assessment, which doesn’t reflect the blogger’s writing style.
20 Comments
  1. neurospasm permalink
    December 18, 2010 8:57 pm

    Some bloggers have a lot of throw-away posts, where they write two or three sentences and link to an outside source or post a video. These sorts of posts might push their blogs heavily towards the Basic bucket.

    *cough* Leg Graden *cough*

  2. Andrea Kuszewski permalink
    December 18, 2010 9:13 pm

    I do believe the quotes from scientific literature can skew the rating to the higher range, depending on the length of the quote. However, that says nothing about the blogger’s ability to break down that research into a readable, understandable format.

    Interesting results, though. And I don’t trust any scores that fall 100% in one category. Statistically speaking, it’s very improbable—unless the sample size is extremely small.

  3. Andrea Kuszewski permalink
    December 18, 2010 9:19 pm

    A reason why I think the comments affect the rating: this post of mine got a rating of “intermediate” and I only wrote 8 words in the post and linked an xkcd comic. However, I had a monologuer write 2 L O N G comments which probably accounted for that score. Just sayin.

    • December 19, 2010 3:37 pm

      It’s true that these ratings would also include text found in sidebars, comments, ads, headers, etc etc. But, all things equal, the main difference between blogs would seem to be in the actual content. Any differences in the other stuff would probably wash out. I think.

      • Andrea Kuszewski permalink
        December 19, 2010 8:44 pm

        I have several loyal readers who like to comment in epic-length posts that could be, in and of themselves, academic papers. ;)

  4. December 18, 2010 11:29 pm

    Interesting. Some 20 years ago I was writing the Alaska Science Forum (a weekly column that went to media outlets all over Alaska–I was the last scientist to write it.) I was told to keep the readability (as measured by the Word of that time) down to about 6th grade and the length to 600 words. I rarely made either, but I did generally manage to stay below 9th grade level and 700 words. Now I try to use the same ground rules on my blog, though I’m hardly covering cutting-edge science.

  5. December 19, 2010 10:35 am

    Mainstream media is trying to write for 6th grade. Pop sci magazines may allow up to 9th grade.

    But what is good about blogging is that there is freedom to greatly expand the audience both ways – some bloggers can write for little kids, others for postdocs, and everything in-between.

    Instead of a single narrow band of readability, now we have a very broad band, including everyone – from those who can barely read (or have to be read to) to experts. There is now something for everyone.

    Also, someone covering a story at a very basic level can link to a higher-level post for readers who want to learn more. Likewise, the higher-level blogger can link to a simpler post by another blogger for readers who find the post difficult to understand. More the merrier….

  6. December 20, 2010 1:04 am

    I’m with Andrea: I think material from the original lit can skew the results. I like to think that I write at a basic level, but I always include the original reference of the paper, so I’m not sure how this would come out in the wash.

    • December 20, 2010 9:24 am

      There seems to be a common misunderstanding running through a lot of the feedback to this article, so I’ve made some edits to hopefully clear it up.

      The important point is that (I’m fairly certain) Google’s readability rankings don’t really have anything to say about the complexity of the subject matter, the depth at which you write about the science, or other interpretations of Basic, Intermediate and Advanced. If Google’s formula is similar to the Flesch-Kincaid test, what is taken into account is the syllable/word ratio and the word/sentence ratio. This makes the readability of a blog a matter of writing style, and is the reason why I think that writing at a more basic and readable level is always better.

  7. dlende permalink
    December 20, 2010 1:30 pm

    Colin, a fascinating post, and a great way to think about writing and accessibility. I was wondering, however, why Neuroanthropology didn’t get included in the PLoS Blogs?

    Here’s the link:

    • December 20, 2010 1:31 pm

      Oh, I must have missed that one. I had opened up all of the tabs from the sites I was going to include, and must have accidentally closed that one (I’m sure it happened more than once).

  8. December 21, 2010 5:59 am

    For all the many flaws in readability rankings (some of which you rightly point out, Colin), I do find it interesting & heartening that my writing has apparently become simpler over time.

    WordPress: (Basic <1%; Intermediate 56%; Advanced 42%)
    Scienceblogs: (Basic 5%; Intermediate 66%; Advanced 28%)
    Discover: (Basic <1%; Intermediate 84%; Advanced 15%)

    I'm struggling to think of an alternative hypothesis to explain the trend.

  9. December 27, 2010 12:16 pm

    This is pretty cool, thanks for taking time to do this.

    I’ll echo Bora that what’s great about science blogs is how there is such a broad selection of ‘levels’ for the reader (a breadth of depth?). If we were all aiming to reach the biggest audience possible this whole medium would be very redundant and probably pretty boring for those looking for some in-depth expert commentary.

  10. December 29, 2010 4:34 pm

    Wonder how Google readability compares to the FOG Index? According to Google, my blog’s readability is 2.79.

  11. January 2, 2011 11:02 pm

    Ooh metrics and spreadsheets! My heart was pounding with excitement a few lines in.

    And I like this topic…I even wrote a not so great undergraduate paper about it and ironically the paper itself would not be an example of clear writing. Hah! (in my defense I never imagined it would be seen by anyone other than my grading tutor)

    Anyway, of course I immediately rated my blog and was pleased to see no advanced posts at all, because I would hate to alienate my mother and sister–my only regular readers (not that my blog is a science blog per se, but I do springboard many of my posts from studies, or other blog posts about studies.)

    Interesting post!

    Now if only Google would add some humour level analytics…

  12. January 6, 2011 12:57 pm

    Now that I’ve taken a long break from trying to work on my writing skills, this was a nice tool to know about. I’m happy to see my writing so far is a (bizarrely even?) 50% Basic/ 50% Intermediate – precisely the mix I was hoping for!

    I’d like to add that writing at a basic level does not have to equal writing for little kids. That implies such writing is “dumbed down” and therefore not worth being written for any but a baby. I don’t view a single thing I’ve written as dumbed-down or for babies, I view it as trying to reach people who otherwise might not give a crap… mostly.

    There are a lot of intelligent and highly competent adults who simply have no clue about even the basics of biology, chem, physics, etc. That’s who I’m hoping to get at if I ever get my writing in front of people.

    Escorting such folks in without inundating them too quickly with the deeper details is, IMO, a perfectly valid way of approaching people. Not saying it’s the only good way, mind you, just that it is a way that works for some, and so is worth doing.



  1. Rate the readability of your blog | Code for Life
  2. Quick Links | A Blog Around The Clock
  3. Reading about Readability
