Readability Formulas

Introduction
Authors who write for large audiences often need to know whether their texts are understandable to their readers. Reading comprehension tests with live participants can provide accurate information about the clarity and accessibility of a document. When such tests are not feasible, however, researchers have sought to create tools that predict the readability of a document. These readability formulas use various forms of linguistic analysis to determine the reading level needed to understand a given text. This level is typically expressed as a readability score, the value of which varies from system to system.[1]

History
In his 1883 work, Analytics of Literature, A Manual for the Objective Study of English Prose and Poetry, L.A. Sherman demonstrated that the average English sentence had shortened significantly in the preceding centuries. In doing so, Sherman also showed how statistical analysis could be applied to written texts. This approach would become important to American educators in the early 20th Century, as the number of "first generation" students in the nation's school systems began to rise. Teachers at the time reported that their pupils found standard textbooks too difficult. Thus, some educators turned to statistical analysis to rate the readability of students' books.

Edward Lee Thorndike

1912, Popular Science Monthly, Vol. 80

Beginning in the early 1920s, Columbia psychology professor Edward L. Thorndike produced a series of books that listed English vocabulary words by order of frequency. Drawing on Thorndike's work, Bertha Lively and S.L. Pressey developed the first readability formula for children in 1923 which they used to determine the reading level of educational materials. Thorndike's work also influenced Columbia professor Irving Lorge, who co-authored a book with Thorndike and would later develop his own readability formula.

Meanwhile, interest in readability was growing outside of the field of education. In 1921, Harry D. Kitson published The Mind of the Buyer, in which he used word syllable counts and sentence length in magazine articles to study readership patterns. Rudolf Flesch, a colleague of Irving Lorge at Columbia, would later draw on Kitson's work in creating the Flesch Reading Ease formula, the forerunner of today's Flesch-Kincaid Formula.[2]

Syllable Counting Formulas

Several early readability metrics determine the readability level of a text by counting the number of syllables in words. This is often combined with an analysis of sentence length.

The Flesch-Kincaid Formula

Rudolf Flesch was born in Austria, but escaped Nazi persecution to settle in the United States in 1938. As a student at Columbia University, he became interested in the role of literacy levels in mass communication. In the 1940s, Flesch developed a readability formula that correlated the number of single-syllable words with average sentence length to rate the difficulty of a text. An updated version of this formula, known as the New Reading Ease Formula, states:

New Reading Ease Score = 1.599 nosw – 1.015 sl – 31.517

Where: nosw = number of one syllable words per 100 words, and

sl = average sentence length in words.
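As a sketch, the calculation above can be expressed in a few lines of Python; the function name and example inputs are illustrative, not taken from the original sources:

```python
def new_reading_ease(nosw, sl):
    """New Reading Ease score, per the formula above.

    nosw -- number of one-syllable words per 100 words
    sl   -- average sentence length in words
    """
    return 1.599 * nosw - 1.015 * sl - 31.517

# Example: 70 one-syllable words per 100 words, 14-word average sentences.
score = new_reading_ease(70, 14)  # roughly 66.2
```

Note that higher scores indicate easier texts: the formula rewards short words (positive coefficient on nosw) and penalizes long sentences.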

In 1976, a study conducted for the U.S. Navy produced a variation on the Flesch Reading Ease Formula that yields a grade-level score, known as the Flesch-Kincaid Formula. The Flesch-Kincaid Formula has become one of the most widely used readability metrics in the English-speaking world. Readability formulas had been incorporated into computer programs since the late 1960s, including a program developed by General Motors for industrial applications.[1] More recently, an automated version of the Flesch-Kincaid Formula was added to Microsoft Word.[15]

A free, online version of the Flesch-Kincaid Formula can be found here: http://www.readabilityformulas.com/free-readability-formula-tests.php

The Gunning Fog Index

Robert Gunning developed the Gunning Fog Index in the 1940s to help "take the fog out of writing." Gunning proposed counting difficult words, which he defined as words with three or more syllables, to measure the readability of a text. Like the Flesch-Kincaid Formula, Gunning's Index correlates syllable count with average sentence length. The Gunning Fog Index states:

Reading Grade Level = 0.4 × (average sentence length + percentage of words of 3 or more syllables)[1] [4]
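The index is simple enough to compute directly once the two inputs are known; a minimal Python sketch (names are illustrative):

```python
def gunning_fog(avg_sentence_length, pct_hard_words):
    """Gunning Fog Index: 0.4 * (ASL + percentage of 3+ syllable words)."""
    return 0.4 * (avg_sentence_length + pct_hard_words)

# Example: 15-word sentences with 10% "difficult" (3+ syllable) words.
grade = gunning_fog(15, 10)  # 10.0, i.e. a tenth-grade reading level
```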

A free, automated version of the Gunning Fog Index is available here: http://www.readabilityformulas.com/free-readability-formula-tests.php

The Fry Readability Graph

Building on the work of Flesch and Gunning, Edward Fry developed a new readability formula in the 1960s. Fry's formula used the following four-step process to plot a text's readability on a graph:

1) Select three one-hundred word passages from the beginning, middle and end of the book. Skip all proper nouns.

2) Count the total number of sentences in each hundred-word passage (estimating to nearest tenth of a sentence). Average these three numbers.

3) Count the total number of syllables in each hundred-word sample. . . . Average the total number of syllables for the three samples.

4) Plot on the graph the average number of sentences per hundred words and the average number of syllables per hundred words. Most plot points fall near the heavy curved line. Perpendicular lines mark off approximate grade level areas.
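The counting in steps 2 through 4 can be sketched in Python. The syllable counter below is a naive vowel-group heuristic, a stand-in assumption for the careful manual count Fry's procedure specifies:

```python
import re

def count_syllables(word):
    # Naive heuristic: one syllable per vowel group (an approximation only).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fry_coordinates(samples):
    """Average sentences and syllables per 100-word sample.

    `samples` holds three (sentence_count, words) pairs: the estimated
    sentence count for each 100-word passage, and its list of words.
    Returns the pair of averages that would be plotted on the Fry graph.
    """
    sentence_avg = sum(count for count, _ in samples) / len(samples)
    syllable_avg = sum(
        sum(count_syllables(w) for w in words) for _, words in samples
    ) / len(samples)
    return sentence_avg, syllable_avg
```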

The Fry Graph

From Wikimedia Commons [Public Domain]

Because it counts sentences per hundred words rather than words per sentence, as the Flesch-Kincaid Formula and the Gunning Fog Index do, the Fry Readability Graph can be used with texts where fragmentary sentences are more common.[5] [6]

A free, automated version of the Fry Graph can be found here: http://www.readabilityformulas.com/free-fry-graph-test.php

Word Frequency Formulas

Some readability formulas use an analysis of word frequency (that is, how commonplace a word is) combined with average sentence length to determine the readability of a text.

The Dale-Chall Formula

The Dale-Chall Formula was first developed in 1948 by Edgar Dale and Jeanne Chall. It provides a readability score based on the familiarity of a text's vocabulary correlated with the average length of its sentences. To determine the familiarity of a text's vocabulary, the Dale-Chall formula relies on an approximately 3,000-word list of common words, known as the Dale List.

To complete the process, 100-word samples are taken from the text. Word familiarity is determined by comparing the words in the sample to the Dale list. Once a percentage of "difficult" words has been determined, this percentage is correlated with average sentence length using the following formula:

Raw Score = 0.1579 × PDW + 0.0496 × ASL

Where PDW = Percentage of Difficult Words, and

ASL = Average Sentence Length in words[7] [8]
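A minimal Python sketch of this procedure, using the published coefficients; the tiny word set below is a hypothetical stand-in for the roughly 3,000-word Dale List:

```python
# Hypothetical stand-in for the ~3,000-word Dale List of familiar words.
DALE_LIST = {"the", "a", "and", "is", "to", "see", "dog", "run"}

def dale_chall_raw(words, avg_sentence_length):
    """Raw Dale-Chall score for a 100-word sample."""
    difficult = [w for w in words if w.lower() not in DALE_LIST]
    pdw = 100 * len(difficult) / len(words)  # percentage of difficult words
    return 0.1579 * pdw + 0.0496 * avg_sentence_length

# Example: 20 unfamiliar words out of 100, 10-word average sentences.
score = dale_chall_raw(["the"] * 80 + ["xylophone"] * 20, 10)  # about 3.65
```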

The first page of the Dale List[14]

A free automated version of the Dale-Chall Formula is available here: http://www.readabilityformulas.com/free-dale-chall-test.php

The Lexile Framework

The Lexile Framework was developed by Jackson Stenner and a team of researchers at MetaMetrics, Inc. and released in 1988. Like the Dale-Chall Formula, it correlates vocabulary frequency with sentence length to arrive at a readability score. However, the Lexile Framework measures vocabulary frequency against a 5-million-word database called the American Heritage Intermediate Corpus before correlating the results with average sentence length. The Lexile Framework is widely used today by educators to measure the difficulty level of books.[9] [10]

Syntax-based Formulas

Some newer formulas use an analysis of syntactical structures to determine a text's readability level.

The Golub Syntactic Density Score

The Golub Syntactic Density Score was developed by Lester Golub in 1974. To calculate the reading level of a text, a sample of several hundred words is taken from the text. The number of words in the sample is counted, as is the number of T-units. A T-unit is defined as an independent clause and any dependent clauses attached to it. Various other syntactical units are then counted and entered into the following table:

The Golub Syntactic Density Table [17]

Each word count is weighted and the results added together. The sum is then divided by the total number of T-units. Finally, the quotient is compared with the Grade Level Conversion Table, which provides a final readability score.[17]
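The structure of the calculation can be sketched as follows; the unit names and weights here are hypothetical placeholders, since Golub's actual density table is not reproduced above:

```python
# Hypothetical weights; Golub's published table assigns its own loading
# to each syntactic unit (words, clauses, passives, and so on).
WEIGHTS = {"words": 0.95, "subordinate_clauses": 0.90, "passives": 0.65}

def syntactic_density(counts, t_units):
    """Weighted sum of syntactic-unit counts divided by the T-unit count.

    The resulting quotient would then be looked up in Golub's Grade
    Level Conversion Table to obtain the final readability score.
    """
    weighted = sum(WEIGHTS[unit] * n for unit, n in counts.items())
    return weighted / t_units
```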

The Coh-Metrix Measurements

Like the Lexile Framework, the Coh-Metrix measurements are designed to work solely as an automated system. The Coh-Metrix system analyzes word frequency as well as syntactic complexity. In effect, the system diagrams each sentence in a sample text and then assigns that sentence a level of complexity based on its syntactical features.[11]

Discourse-based Formulas

For one recent formula, the ETS TextEvaluator, an initial genre determination helps to target the readability testing.

The logo of ETS

From Wikimedia Commons [Public Domain]

The ETS TextEvaluator

The ETS TextEvaluator was developed by ETS (Educational Testing Service), a New Jersey-based educational organization.

Step 1: Genre Analysis

Initially, texts are classified as informative, literary, or mixed.

Step 2: Complexity Analysis

The complexity of the text is measured using a metric designed to match the genre of the text. Lexical, syntactic, and discourse features are correlated to deliver a final readability score.[12]

Professional Applications

Readability formulas have been widely used in education, but their use has also extended into other domains. Various readability formulas have been used to study political literature, corporate annual reports, customer service manuals, drivers' manuals, health information, informed consent forms, lead-poison brochures, online privacy notices, medical journals, and environmental information.[2] Mid-century formulas like the Dale-Chall Formula, the Flesch Reading Ease Formula, and the Gunning Fog Index were widely used in establishing new writing practices in business and government. Indeed, the Flesch-Kincaid Formula itself was a product of readability research for the U.S. military. In addition, both Rudolf Flesch and Robert Gunning worked as consultants for American news services (the Associated Press and the United Press, respectively), thereby influencing the editorial practices of American newspapers.[14]

Tips for Technical Writers

Writing for Syllable-Counting Formulas: Writers trying to lower their readability score for these formulas should reduce their syllable count and potentially also their average sentence length.

Writing for Word Frequency Formulas: Writers attempting to lower their readability score for these formulas should reduce the complexity of their vocabulary and the length of their sentences.

Writing for Syntax and Discourse-based Formulas: Writers hoping to lower their readability score for these formulas may want to consult www.plainlanguage.gov for information on reducing syntactical complexity.

Criticism

Syllable-counting readability formulas have been criticized in particular for relying on simplistic criteria (syllable counts and word counts) while overlooking other important linguistic features.[11] In addition, some studies have found inconsistent results between readability scores delivered by syllable-counting formulas like the Flesch-Kincaid Formula and the Gunning Fog Index and the results of reader comprehension tests.[13] This criticism has inspired the development of more complex readability formulas over time. It has also led some critics to recommend replacing readability formulas altogether with writing protocols and usability testing.

Other critics have pointed out that examining the same text with different readability formulas can yield widely different readability scores, even when these scores are adjusted for the different scoring parameters of each system. This has led to questions about the reliability of the formulas. Supporters of readability formulas have argued that this criticism is founded on a misunderstanding of criterion scores. A criterion score is the percentage of multiple-choice questions that a reader at a particular reading level would be able to answer correctly after reading a text. The Dale-Chall formula, for example, assumes a criterion score of 50%, while the Gunning Fog Index assumes a criterion score of 90%; hence, these two formulas may produce different readability scores for the same text.[2]

Other critics have highlighted the difficulty that text features like hyphens, acronyms, abbreviations, and URLs can create for automated versions of readability formulas, which may also explain the different results they produce.[16]

Conclusion
Readability formulas were made to test school books at a time when new immigrants were filling classrooms. The first formulas were built to be worked out by hand. Thus, they had to be easy to understand. But with the rise of computers, readability formulas have grown more complex and less transparent. Some formulas now take the form of trademarked software. However, many of the measures used by early formulas have survived in new forms. The formulas have also moved beyond the classroom. They have been used in many fields, from news to science writing. And although their precision has been questioned, many readability formulas are in common use today.


 * Author's Note—The preceding paragraph received the following readability scores:

Flesch Reading Ease Score: 64.9 (Average Difficulty)

Flesch-Kincaid Grade Level: 7th Grade

Gunning Fog Index: 10.7 (Hard to read)

The Fry Graph: 13th Grade

The Dale-Chall Formula: 7.4 (Grades 9–10)

Golub Syntactic Density Score: 2.6 (3rd–4th Grade)

Last updated by Adam McBride-Smith on 11/26/18.