Following the first administration of the new SAT, the College Board released a highly unscientific survey comparing 8,089 March 2016 test-takers to 6,494 March 2015 test-takers.

You can read the whole thing here, but in case you don’t care to, here are some highlights:

  • 75% of students said the Reading Test was the same as or easier than they expected.
  • 80% of students said the vocabulary on the test would be useful to them later in life, compared with 55% in March 2015.
  • 59% of students said the Math section tests the skills and knowledge needed for success in college and career.

Leaving aside the absence of some basic pieces of background information that would allow a reader to evaluate just how seriously to take this report (why were different numbers of test-takers surveyed in 2015 vs. 2016? who exactly were these students? how were they chosen for the survey? what were their socio-economic backgrounds? what sorts of high schools did they attend, and what sorts of classes did they take? what sorts of colleges did they intend to apply to? were the two groups demographically comparable? etc., etc.), this is quite a remarkable set of statements.

Think about it: the College Board is essentially bragging — bragging — about how much easier the new SAT is.

Had a survey like this appeared even a decade ago, it most likely would have been in The Onion. In 2016, however, the line between reality and satire is considerably more porous.

To state the obvious, most high school juniors have never taken an actual college class (that is, a class at a selective four-year college), and it is exceedingly unlikely that any of them have ever held a full-time, white-collar job. They have no real way of knowing what skills — vocabulary, math, or otherwise — will actually be relevant to their futures.

Given that exceedingly basic reality, the fact that the College Board is touting the survey as being in any way indicative of the test’s value is simultaneously hilarious, pathetic, and absurd.

So, a few things.

First, I’ve said this before, but I’ll reiterate it here: the assertion that the SAT is now “more aligned with what students are learning in school” overlooks the fact that the entire purpose of the test has been altered. The SAT was always intended to be a “predictive” test, one that reflected the skills students would need in college. Unlike the ACT, it was never intended to be aligned with a high school curriculum in the first place.

Given the very significant gap between the skills required to be successful in the average American high school and the skills necessary to be successful at a selective, four-year college or university, there is a valid argument to be made for an admissions test aligned with the latter. But regardless of what one happens to think about the alignment issue, to ignore it is to sidestep what should be a major component of the conversation surrounding the SAT redesign.

Second, the College Board vs. ACT, Inc. competition illustrates the problem of applying the logic of the marketplace to education.

In order to lure customers from a competitor, a company must of course aim to provide those customers with an improved, more pleasurable experience. That principle works very well for a company that manufactures, say, cars, or electronics.

If your customers are students and your product is a test, however, then the principle becomes a bit more problematic.

The goal then becomes to provide students with a test that they will like. (Indeed, if I recall correctly, when the College Board first announced the redesign, the new test was promoted as offering an improved test-taking experience.)

What sort of test is that? 

A simpler test, of course.

A test that inflates scores, or at least percentile rankings.

A more gameable test: one on which it is technically possible to obtain a higher score by filling in the same letter for every single question than by answering any of the questions for real (a rough sketch of the arithmetic follows below).

A test that makes students feel good about themselves, while strategically avoiding anything that might directly expose gaps in their basic knowledge — gaps that their parents probably don’t know their children possess and whose existence they would most likely be astounded to discover. (Trust me; I’ve seen the looks on their faces.)
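
To put a rough number on the "same letter for every question" point, here is a minimal back-of-the-envelope sketch. The figures are assumptions chosen for illustration rather than official College Board data: a 52-question Reading section with four answer choices per question, and a hypothetical weak reader who genuinely attempts 20 questions, gets 10 right, and leaves the rest blank. The only structural point it leans on is that the redesigned test no longer deducts points for wrong answers.

```python
# Back-of-the-envelope sketch of the "same letter for every question" claim.
# Question count, answer choices, and the hypothetical earnest attempt below
# are all illustrative assumptions, not official figures.

QUESTIONS = 52          # assumed number of Reading questions
CHOICES = 4             # answer choices per question
P_BLIND = 1 / CHOICES   # chance a blind one-letter guess happens to be right

# Expected raw score from bubbling the same letter on every question:
# each question is worth 1 point with probability 1/4 and costs nothing
# when wrong, so the expectation is simply QUESTIONS * 1/4.
blind_expected = QUESTIONS * P_BLIND
print(f"Blind one-letter bubbling, expected raw score: {blind_expected:.0f}")  # ~13

# Hypothetical earnest attempt by a weak reader: works through 20 questions,
# answers 10 correctly, and leaves the remaining questions blank.
earnest_raw = 10
print(f"Earnest-but-weak attempt, raw score: {earnest_raw}")

print("Blind bubbling comes out ahead:", blind_expected > earnest_raw)  # True
```

Under those assumptions, the student who bubbles straight down the column can expect a raw score of about 13, while the weak reader who actually works through the questions walks away with 10. The exact scaled scores depend on the curve, but the incentive is plain enough.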

Most of the passages on the English portion of the ACT are written at around a middle school level, as are the Writing passages on the new SAT. Unlike the ACT, which assigns separate scores to the English and Reading portions, the new SAT takes things a step further and combines the Reading and Writing portions into a single Verbal score. As a result, a strong showing on the easier Writing passages can prop up a weak Reading performance, allowing students reading below grade level to hide their weaknesses much more effectively.

Indeed, I’d estimate that most of my ACT students, many of whom switched from the SAT because the reading was simply too difficult, were reading at somewhere between a seventh- and a ninth-grade level. Those students are pretty obviously the ones the College Board had in mind when it redesigned the verbal portion.

Forgive me for sounding like an old fogey from the dark ages of 1999 here, but should a college admissions test really be pandering to these types of students? (Sandra Stotsky, one of two members of the Common Core validation committee to reject the standards, has suggested that the high school Common Core standards be applied to middle school students as a benchmark for judging whether they are ready for high school.)

And for colleges, do the benefits of collapsing the distinction between solid-but-not-spectacular readers and exceptional ones truly outweigh the drawbacks? Those sorts of differences are not always captured by grades; that is exactly what has traditionally made the SAT useful.

Obviously, the achievement gap is the omnipresent elephant in the room. Part of the problem, however, is that the college admissions system poses such vastly different challenges for different types of students; there’s no way for a single test to meet everyone’s needs.

I’m not denying that for students aiming for elite colleges, the college admissions process can easily spiral out of control. I’ve stood on the front lines of it for a while now, and I’ve seen the havoc it can wreak — although much of the time, that havoc also stems from unrealistic expectations, some of which are driven by rampant grade inflation. An 1100 (1550) SAT was much easier to reconcile with B’s and an occasional C than with straight A’s. 

A big part of the stress, however, is simply a numbers game: there are too many applicants for too few slots at too few highly desirable schools. Changing the test won’t alter that fact. 

If anything, a test that produces more high-scoring applicants will ultimately increase stress levels because yet more students will apply to the most selective colleges, which will in turn rely more heavily on intangible factors. Consequently, their decisions are likely to become even more opaque. 

At the other extreme, the students at the bottom may in fact be lacking basic academic vocabulary such as “analyze” and “synthesize,” in which case it does seem borderline sadistic to test them on words like “redolent” and “obstreperous.”  It’s pretty safe, however, to assume that students in that category will generally not be applying to the most selective colleges. But in changing the SAT so that the bottom students are more likely to do passably well on it, the needs of the top end up getting seriously short shrift. No one would argue that words like “analyze” aren’t relevant to students applying to the Ivy League; the problem is that those students also need to know words like “esoteric” and “jargon” and “euphemism” and “predicated.”

The easiest way to reduce the gap between these two very disparate groups is of course to adjust the test downward to a lower common denominator while inflating scores. But does anyone seriously think that is a good solution? Lopping off the most challenging part of the test, at least on the verbal side, will not actually improve the skills of the students at the bottom. It also fails to expose the students at the top to the kind of reading they will be expected to do. And even if the formerly ubiquitous flashcards disappear and stress levels temporarily dip, the underlying issues will remain, and in one guise or another they will inevitably resurface.

I’m not naive enough to think that the SAT redesign will have an earth-shattering effect on most high school students. The students who have great vocabularies and read non-stop for pleasure won’t suddenly stop doing so because a handful of hard words are no longer directly tested on the SAT. The middling ones who were going to forget all of those flashcards they tried to memorize will come out pretty much the same in the end. The ones who never intended to take the test will sit through it in school because they have no choice, but I know of no research to suggest that they are more likely to complete a four-year degree as a result. Plenty of students whose parents initially thought Khan Academy could replace Princeton Review will discover that their children need some hand-holding after all and sign them up for a class — especially if all of their friends suddenly seem to be scoring above the 95th percentile. Not to mention the thousands of kids who will ignore the redesign altogether and take the ACT, just as they intended to do in the first place.

Rather, my real concern is about the message that the College Board is sending. Launching a smear campaign to rebrand the type of moderately challenging vocabulary that peppers serious adult writing as “obscure” might have been necessary to win back market share, but it was a cheap and irresponsible move. It promotes the view that a sophisticated vocabulary is something to be sneered at; that simple, everyday words are the only ones worth knowing. Even if that belief is rampant in the culture at large, shouldn’t an organization like the College Board have some obligation to rise above it? It suggests that knowledge acquired through memorization is inherently devoid of value. It misrepresents the type of reading and thinking that college-level work actually involves. It exploits the crassest type of American anti-intellectualism by smarmily wrapping it in a feel-good blanket of social justice. And it promotes the illusion that students can grapple with adult ideas while lacking the vocabulary to either fully comprehend them or to articulate cogent responses of their own. 

What is even more worrisome to me, however, is that the College Board’s assertions about the new test have largely been taken at face value. Virtually no one seems to have bothered to look at an actual recent SAT, or interviewed people who actually teach undergraduates (as opposed to administrators or admissions officers), or even stopped to consider whether the evidence actually supports the claims — that whole “critical thinking” thing everyone professes to be so fond of.

And that is a problem that goes far, far beyond the SAT.