In my recent post on the timing of the Math section on the digital vs. the paper-based SAT, I alluded to the striking difference in proficiency levels in Math vs. English set by the College Board (530 vs. 480). My colleague Mike Bergin left a comment suggesting that I look a bit deeper into the discrepancy, and I realized that although I’ve mentioned it a number of times since the cutoffs were introduced eight years ago (it’s amazing how time flies!), I’ve never really explored the issue—which turns out to have just as much to do with the state of higher education as it does with college-admission tests.

But first, some background: When the SAT was redesigned in 2016, the College Board introduced College Readiness “Benchmarks” for both English (Reading/Writing) and Math, comparable to those that had long existed for the ACT. Those scores (Math: 22, Science: 23, English: 18, Reading: 22, with the latter two rolled into a single ELA benchmark of 20) were intended to indicate that a student would have a “50% chance of earning a B or higher grade and approximately a 75-80% chance of earning a C or higher grade in the corresponding college course or courses.”

The SAT/ACT concordance charts appear to have been last updated in 2018, and to the best of my knowledge they are still in use. Unfortunately, they do not list correspondences between ACT English/Reading and SAT Reading/Writing, only between composite scores on the 36- and 1600-point scales. It is reasonable, however, to assume that the section scores would be roughly in line with the overall concordance.

Looking at these, an ACT score of 18 (the English benchmark alone) is shown to correspond to an overall SAT score of 960-980, or approximately 480-490 per section.

However, an ACT score of 20 (the overall ELA benchmark) corresponds to an SAT score of 1030 to 1050, or approximately 520 per section—only 10 points lower than the SAT Math benchmark, and a full 40 points above the 480 benchmark listed.

The ACT Reading benchmark of 22 corresponds to an overall SAT score of 1100 to 1120, or about 550-560 per section.

So when setting benchmarks, the College Board appears to have taken the absolute lowest possible correspondence score for the English section alone and applied it broadly to the verbal portion of the exam as a whole.

As I’ve written about previously, the logical—and cynical—explanation for this clumsy sleight of hand is that the 2016 SAT redesign was based in large part on the College Board’s goal of muscling its way into the state testing market and edging out the ACT—a hugely lucrative prospect given the almost universal adoption of Common Core standards, to which the SAT (but not the ACT) would be explicitly aligned. Lower benchmarks = higher graduation rates, hence more satisfied school districts. The fact that this approach might actually result in the (further) degradation of the education system was beside the point.

But none of that actually answers the question of why ACT English benchmarks were set so low in the first place.

The Math, Reading, and Science benchmarks range from 22 to 23; English, at 18, is an extreme outlier.

Or, to put it another way: why do many students who lack a basic grasp of how written English works nevertheless have a 50% chance of earning at least a B in a college English course?

Two words: Freshman Composition.

One more word: Adjuncts.

According to the ACT, the English benchmark was based on freshman college grades in “English Composition I,” a class that perhaps most strongly epitomizes the academy’s shift toward reliance on adjunct faculty. Underpaid (sometimes only a few thousand dollars per class), poorly treated by universities, and largely dependent on positive student evaluations, adjuncts are more likely than tenured faculty members to inflate grades. As a result, the level of student work necessary to earn at least a B in classes taught by them may be substantially lower than that required in other classes.

I was unable to find current statistics for the percentage of freshman composition courses taught by adjunct faculty; however, as far back as 2008, Inside Higher Ed was reporting that just “42 percent of all faculty members teaching English in four-year colleges and universities and only 24 percent in two-year colleges hold tenured or tenure-track positions.

“Part-time faculty members now make up 40 percent of the faculty teaching English in four-year institutions and 68 percent in two-year institutions.”

When part-time Ph.D. holders are not employed, freshman English may instead fall to graduate students, who are obviously more susceptible to pressure from administrators to assign passing grades. A guide for UCLA TAs, for example, states that instructors should ensure (emphasis mine) that “students who do not come from privileged writing backgrounds can produce satisfactory, good or excellent writing,” something that is effectively impossible to guarantee. Particularly since the UC system stopped considering test scores, it is entirely possible that some entering freshmen will write so far below a college level that no graduate student, no matter how gifted, can bring them up to par in the course of a semester. And of course, students are responsible for their own work as well.

Given this context, the fact that students stand a significant chance of earning a B or higher in freshman composition even with very low English test scores is not exactly heartening. It also points to the ways in which crises at the tertiary level can trickle down to the secondary level, making entering freshmen less prepared for college-level work (and then, in a vicious cycle, putting increased pressure on precariously positioned faculty to inflate grades). The worryingly low SAT and ACT English benchmarks are both a result and a cause.