01. Readability

This post is about readability scores. As such, I have included the scores for this post at the end of the article.

Whatever you write, someone is eventually going to read it. And the easier it is to read, the better they'll be able to understand it. Naturally, a number of equations and formulae have been developed to assess readability in a more quantifiable way than the heuristic-driven "looks good to me" approach. Two of the more prominent readability assessment tools are the Flesch Reading-Ease Test and the Flesch-Kincaid Grade Level Formula. The former produces a score between 0 and 100, with higher scores meaning easier to read. The latter produces a number that corresponds to the grade level a person would need to have reached to easily understand the writing. For some frame of reference, Dr. Seuss books almost exclusively sit at the 95-100 mark on the FREST (in fact, they can reach numbers above 100), while Time magazine and Moby Dick both sit in the mid-fifties. Plain English is defined as the range between 60 and 70. The FREST score can even fall into the negatives, but this usually only occurs when using the formula on individual sentences that meet certain criteria (see below).

Both tests have really straightforward variables, and there are numerous calculators online that let you paste in your text and immediately receive scores on these two tests and numerous others. I make a point of throwing my work into these calculators whenever I can to get a sense of how I'm doing. I've gotten scores in the mid-teens before, which is a red flag if ever there was one. But there is a caveat here: I mostly write academic manuscripts and papers. How does that affect readability?

Well, let's look at the variables used in these two tests. Both actually use the same variables, but different coefficients and y-intercepts to produce the different scales and ranges (in the FREST a high number is good, while in the Grade Level test you want a low number). The variables are simple (the formulas themselves are sketched just after the list):

  1. The average number of words per sentence
  2. The average number of syllables per word
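
For those who want to play with the numbers themselves, here is a minimal sketch of both formulas in Python. The coefficients are the standard published ones for the Flesch Reading-Ease and Flesch-Kincaid Grade Level tests; the function and variable names are just my own choices for illustration.

```python
def flesch_reading_ease(words_per_sentence: float, syllables_per_word: float) -> float:
    """Flesch Reading-Ease score: higher means easier to read (roughly 0-100)."""
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

def flesch_kincaid_grade(words_per_sentence: float, syllables_per_word: float) -> float:
    """Flesch-Kincaid Grade Level: the U.S. school grade needed to follow the text."""
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
```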

The longer your average sentence, and the longer your average word, the worse your score becomes in both tests. The general takeaway is simple: the more concise you are, the more readable your texts are. But there's a funky twist: syllables per word is HEAVILY WEIGHTED compared to words per sentence. What this means is that your average words/sentence can be above thirty and you can still have a score in the sixties, as long as you keep your average syllables/word under 1.3. The moment your syllables/word average hits about 1.55, it becomes effectively impossible to get your score above sixty. And if sixty is considered Plain English (so, readable by most anyone with a middle school education), then you have a problem when you are dealing with scientific terminology. That previous sentence has a FREST score of 34, in no small part because of the abundance of four-syllable terms (scientific and education; terminology has five!).
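You can see the weighting for yourself by plugging those numbers into the sketch above (again, a rough illustration using the helper functions defined earlier, not anything from the calculators themselves):

```python
# Thirty-word sentences but short words: the score still lands in the sixties.
print(flesch_reading_ease(30, 1.3))    # roughly 66

# Same sentence length at 1.55 syllables per word: well below Plain English.
print(flesch_reading_ease(30, 1.55))   # roughly 45

# The corresponding grade level for the first case.
print(flesch_kincaid_grade(30, 1.3))   # roughly grade 11
```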

So if I'm writing about gamification, or education, or learning, you can see that I suddenly have my work cut out for me if I want to improve my score. The interesting contrast is that the FREST and its friends prioritize (oof, that word will hurt me) longer, simpler sentences over short, concise sentences full of polysyllabic words. But in science and academic endeavors, meaning-rich, multisyllabic words are used explicitly to keep sentences short and to reduce circumlocution (describing things in a long, roundabout way rather than using a technical term). It's a fascinating conflict between how Plain English is measured and what you have to do in scientific writing.

Functionally, this is why there are all of these studies and news articles about how scientific writing is getting harder to read. Of course it's getting harder to read: the more terms we need to use, the more we're practically sabotaging our readability... according to this particular metric. Other readability tests use different inputs, but fundamentally they're usually looking at brevity and common language. They're not designed to tell you whether your scientific writing is readable by other scientists. Certainly you can simply shift your target scores, aiming for maybe an undergraduate reading level, or a score of 40 or better instead of 60 or better (these are the sorts of adjustments I make), but the subjective interpretation of the scores (what counts as Plain English) isn't a useful metric for scientific writing.

In science communication, though, it absolutely is necessary. Short words, short sentences, short articles (it's why most of my posts are under 500 words). But you still eventually run into a problem. If I'm writing about gamification (an example I keep using as a proxy for any polysyllabic term), I can keep my syllables down, but at some point I'm going to just want to start using the word gamification instead of a more "readable" longform description. This is the whole purpose of terms, a point I'll elaborate on eventually in Lexicalamity. Suddenly I'm forced to start using polysyllabic words, and as I introduce more and more of these words to my readership, my score is going to go down. Conceptually, the reading level of my readers is going up in a way that compensates, but there's no metric for that. Terminology is fundamental to good science, but without a communal understanding of what the purpose of terminology is, it is perceived by general audiences as a negative for readability.

Very interesting stuff.

For those of you who are curious:

Flesch Reading Ease score: 53.2, fairly difficult to read.
Gunning Fog: 14, hard to read.
Flesch-Kincaid Grade Level: 11.1, Grade level: Eleventh Grade.
The Coleman-Liau Index: 10, Grade level: Tenth Grade.
The SMOG Index: 10.1, Grade level: Tenth Grade.
Automated Readability Index: 11.3, Grade level: 15-17 yrs. old (Tenth to Eleventh graders).
Linsear Write Formula: 13.6, Grade level: College.
