Why are SAT and ACT English benchmarks so low?

In my recent post on the timing of the Math section on the digital vs. the paper-based SAT, I alluded to the striking difference in proficiency levels in Math vs. English set by the College Board (530 vs. 480). My colleague Mike Bergin left a comment suggesting that I look a bit deeper into the discrepancy, and I realized that although I’ve mentioned it a number of times since the cutoffs were introduced eight years ago (it’s amazing how time flies!), I’ve never really explored the issue—which turns out to have just as much to do with the state of higher education as it does with college-admission tests.

But first, some background: When the SAT was redesigned in 2016, the College Board introduced College Readiness “Benchmarks” for both English (Reading/Writing) and Math, comparable to those that had long existed for the ACT. Those scores (Math: 22, Science: 23, English: 18, Reading: 22, with the latter two rolled into a single ELA benchmark of 20) were intended to indicate that a student would have a “50% chance of earning a B or higher grade and approximately a 75-80% chance of earning a C or higher grade in the corresponding college course or courses.”

The SAT/ACT concordance charts appear to have been last updated in 2018, and to the best of my knowledge they are still being used. Unfortunately, they do not list correspondences between ACT English/Reading and SAT Writing/Reading on a 36 vs. 1600 scale. It is reasonable, however, to assume that these scores would be roughly in line with the overall concordance.

Looking at these, an ACT score of 18 (the English benchmark alone) is shown to correspond to an overall SAT score of 960-980, or approximately 480-490 per section.

However, an ACT score of 20 (the overall ELA benchmark) corresponds to an SAT score of 1030 to 1050, or approximately 520 per section—only 10 points lower than the SAT Math benchmark, and a full 40 points above the 480 benchmark listed.

The ACT Reading benchmark of 22 corresponds to an overall SAT score of 1100 to 1120, or about 550-560 per section.

So when setting benchmarks, the College Board appears to have taken the absolute lowest possible correspondence score for the English section alone and applied it broadly to the verbal portion of the exam as a whole.
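The arithmetic behind those comparisons can be tabulated. The following is a minimal sketch (an illustration of the figures quoted above, not any official College Board or ACT tool): it pairs each cited ACT benchmark with its 2018 concordance range for the SAT total, then halves the total to estimate a per-section score, rounded to the nearest 10 as SAT sections are scored.

```python
# Concordance figures as quoted in the text (2018 ACT/SAT concordance):
# ACT score -> (low, high) range of concorded SAT total scores.
CONCORDANCE = {
    18: (960, 980),    # ACT English benchmark
    20: (1030, 1050),  # ACT overall ELA benchmark
    22: (1100, 1120),  # ACT Reading benchmark
}

def per_section(sat_total_range):
    """Estimate per-section SAT scores: half the total, rounded to 10s."""
    low, high = sat_total_range
    return (round(low / 2 / 10) * 10, round(high / 2 / 10) * 10)

for act_score, sat_range in CONCORDANCE.items():
    print(f"ACT {act_score} -> SAT {sat_range} -> per section {per_section(sat_range)}")
```

Halving the totals reproduces the per-section estimates cited above: roughly 480-490 for an ACT 18, about 520 for an ACT 20, and 550-560 for an ACT 22, which is what makes the 480 verbal benchmark look like the lowest defensible choice.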

As I’ve written about previously, the logical—and cynical—explanation for this clumsy sleight of hand is that the 2016 SAT redesign was based in large part on the College Board’s goal of muscling its way into the state testing market and edging out the ACT—a hugely lucrative prospect given the almost universal adoption of Common Core standards, to which the SAT (but not the ACT) would be explicitly aligned. Lower benchmarks = higher graduation rates, hence more satisfied school districts. The fact that this approach might actually result in the (further) degradation of the education system was beside the point.

But none of that actually answers the question of why ACT English benchmarks were set so low in the first place.

The Math, Reading, and Science benchmarks range from 22 to 23; English, at 18, is an extreme outlier.

Or, to put it another way, why do many students who lack a basic grasp of how written English works nevertheless have a 50% chance of earning at least a B in a college English course?

Two words: Freshman Composition.

One more word: Adjuncts.

According to the ACT, the English benchmark was based on freshman college grades in “English Composition I,” a class that perhaps most strongly epitomizes the academy’s shift toward reliance on adjunct faculty. Underpaid (sometimes only a few thousand dollars per class), poorly treated by universities, and largely dependent on positive student evaluations, adjuncts are more likely than tenured faculty members to inflate grades. As a result, the level of student work necessary to earn at least a B in their classes may be substantially lower than that required elsewhere.

I was unable to find current statistics for the percentage of freshman composition courses taught by adjunct faculty; however, as far back as 2008, Inside Higher Ed was reporting that just “42 percent of all faculty members teaching English in four-year colleges and universities and only 24 percent in two-year colleges hold tenured or tenure-track positions.

Part-time faculty members now make up 40 percent of the faculty teaching English in four-year institutions and 68 percent in two-year institutions.” https://www.insidehighered.com/news/2008/12/11/adjunctification-english

When part-time Ph.D. holders are not used, graduate students, who are obviously more susceptible to pressure from administrators to assign passing grades, may also be responsible for freshman English. A guide for UCLA TAs, for example, states that instructors should ensure (emphasis mine) that “students who do not come from privileged writing backgrounds can produce satisfactory, good or excellent writing”—something that is effectively impossible to guarantee. Particularly since the UC system stopped considering test scores, it is entirely possible that some entering freshmen will write so far below a college level that no graduate student, no matter how gifted, can bring them up to par in the course of a semester. And obviously, students are responsible for their own work as well.

Given this context, the fact that students stand a significant chance of earning a B or higher in freshman composition even with very low English test scores is not exactly heartening. It also points to the ways in which crises at the tertiary level can trickle down to the secondary level, making entering freshmen less prepared for college-level work (and then, in a vicious cycle, putting increased pressure on precariously positioned faculty to inflate grades). The worryingly low SAT and ACT English benchmarks are both a result and a cause.

The shortest answer isn’t always right—but “shorter is better” is still a good rule to follow

I recently encountered someone who, after many years of hearing tutors advise students to “pick the shortest” answer on ACT English and SAT Writing, decided to see how often that option actually was correct. After going through a bunch of ACTs, she discovered that the shortest answer was in fact correct only a relatively small percentage of the time. She was quite incensed about this fact, and took it as evidence that students should not be encouraged to select their answers based on length.

Now, for a tutor who advises a blunt, just-pick-the-shortest-answer-if-you’re-not-sure approach, this is a reasonable criticism.

Otherwise, however, I think it misses the point.

Fundamentally, “shorter is better” is a general guideline; it is not intended to be an ironclad rule for choosing answers. If the shortest answer were indeed always correct, even just on rhetoric questions, then SAT and ACT grammar would be far too easy to game, and many more students would receive high scores than is actually the case.

How to Use a Dash

Dashes are a form of punctuation that is pretty much guaranteed to show up on both the ACT® English Test and the multiple-choice SAT® Writing Test. Because they tend to be used more frequently in British than in American English, they are typically the least familiar type of punctuation for many students. That said, they are relatively straightforward.

Dashes are tested in three ways. The first is extremely common, the second less common, and the third rare.


1) To set off a non-essential clause (2 Dashes = 2 Commas)

In this case, dashes are used exactly like commas to indicate non-essential information that can be removed without affecting the basic meaning of a sentence. If you have one dash, you need the other dash. It cannot be omitted or replaced by a comma or by any other punctuation mark. This is the most important rule regarding dashes that you need to know.

Incorrect: John Locke—whose writings strongly influenced The Declaration of Independence, was one of the most important thinkers of the eighteenth century.

When do two commas NOT signal a non-essential clause?

Note: I’m addressing this issue in part because a colleague informed me that it’s popped up with regard to my books on Reddit. If anyone comes across those questions, feel free to direct people here.

Among the simplest and most straightforward grammatical rules that students studying for the SAT or ACT learn is that two commas are often used to signal non-essential information: words, phrases, and clauses that are not central to the meaning of a sentence, and that can be crossed out without affecting its basic grammatical structure.

The problem, of course, is that commas can be tested in many ways, and that two commas can be present in a given section for numerous reasons. Much of the time, two commas in an underlined section will in fact signal non-essential information, but if you’re aiming for a very high Writing/English score on the SAT or ACT, you also need to understand when this is not the case. (To read about non-essential information, click here.)

Read until you get to the period (how to avoid a common careless error)

Not too long ago (5/30/18), I happened to post the following Question of the Day on Facebook:


It wasn’t that long ago that putting food in liquid nitrogen was something you’d only see in a high school science class, but it’s also becoming a mainstay of modernist cooking. It’s odorless, tasteless, and harmless because it’s so cold (–320.44°F to be exact), it boils at room temperature and evaporates out of your food as it rapidly chills it.


B. tasteless, and harmless, and because
C. tasteless and harmless, because
D. tasteless, harmless and because,


The exception to the “no verb after whom” rule

Note: this exception is addressed in the 4th edition of The Ultimate Guide to SAT® Grammar and the 3rd edition of The Complete Guide to ACT® English, but it is not covered in earlier versions.


Both SAT Writing and ACT English test two specific aspects of the who vs. whom rule.


1) Who, not whom, should be placed before a verb.


Incorrect: Alexander Fleming was the scientist whom discovered penicillin.

Correct: Alexander Fleming was the scientist who discovered penicillin.