Reuters breaks major story on SAT cheating in Asia

As predicted, the College Board’s decision to bar tutors from the first administration of the new SAT had little effect on the security of the test: questions from the March 5th administration quickly made an appearance on various Chinese websites as well as on College Confidential.

Reuters has now broken a major story detailing the SAT “cartels” that have sprung up in Asia, as well as the College Board’s inconsistent and lackluster response to what is clearly a serious and widespread problem.

It’s a two-part series, and it clearly takes the College Board to task for allowing the breaches.

As SAT was hit by security breaches, College Board went ahead with tests that had leaked

How Asian test-prep companies swiftly exposed the brand-new SAT

The fact that old SATs (particularly those used for international administrations) are regularly recycled has been the College Board’s dirty not-so-little secret for a while now. Apparently, that practice will continue with the new exam.

What even people who are aware that the College Board recycles tests do not typically realize, however, is that the organization does so according to a pattern, and thus tutors/companies in the know can often predict which test will be administered in a given location and prepare their students accordingly.

One way of mitigating the problem would of course be to create single-use tests; however, those would be more expensive to produce and would not solve the problem of test-takers in earlier time zones passing questions and answers to test-takers in later ones. 

In addition, SAT test forms have disappeared from the locked boxes in which they were sent (the problem is described in more detail on this College Confidential thread), so the problem also lies with the testing centers and proctors themselves. And since students are reconstructing the tests from memory, this is not something that can be solved by barring tutors from the exam.

Interestingly, this is the first major article I’ve encountered to openly call attention to the stake that the College Board — and colleges themselves — have in allowing the cheating to continue. Because international students almost uniformly pay sticker price, they are a major source of revenue for colleges. They also provide a steady stream of STEM students (although the numbers are higher at the graduate level than at the undergraduate level).

Not coincidentally, China sends more students to the United States than any other country; indeed, the number of Chinese students has nearly doubled from about 160,000 in 2010-11 to around 300,000 in 2014-15.

You can also read the College Board’s response, according to which the leaks are merely the fault of a handful of “bad actors.”

Tellingly, there is not a single mention of the recycled tests. 

Also tellingly, David Coleman failed to comment.

Why is rSAT aligned with high school rather than college?

As I’ve written about recently, the fact that the SAT has been altered from a predictive, college-aligned test to a test of skills associated with high school is one of the most overlooked changes in the discussion surrounding the new exam. Along with the elimination of ETS from the test-writing process, it signals just how radically the exam has been overhauled.

Although most discussions of the rSAT refer to the fact that the test is intended to be “more aligned with what students are doing in school,” the reality is that this alignment is an almost entirely new development rather than a “bringing back” of sorts. The implication — that the SAT was once intended to be aligned with a high school curriculum but has drifted away from that goal — is essentially false. 

In reality, the SAT was created as a tool to allow elite colleges to identify students from non-prep school backgrounds who were performing at a level far above what their circumstances would predict. Because the curriculum at the average public high school in, say, the middle of Ohio, differed significantly from that of an Andover or a Choate, the test was deliberately detached from a specific curriculum.

Whatever their shortcomings, the creators of the original test were in that particular regard a good deal more clear-sighted than their modern counterparts in recognizing the disadvantages a curriculum-aligned test would create.

It is true that in recent years the SAT moved away from the type of extreme abstraction that originally characterized it and took steps toward more real-world applications (for example, the replacement of analogies with sentence completions). Even so, the skills it tested — at least on the verbal side — were largely aligned with what freshmen at the most selective colleges could reasonably be expected to exhibit. The test was written to be challenging even to students at the very top of the pile. At the same time, it gave students reading at a level beyond what their grades would suggest a chance to show what they actually knew. And as grade inflation increased, it gave admissions officers a way of putting all those A’s in context.

The SAT’s shift away from college-level reading and vocabulary is, however, entirely unsurprising. Despite the “college and career readiness” rhetoric, it’s hard to argue that a test that focuses on everyday words like raise (real example) is truly as challenging as one that tests actual — not to mention common — college-level vocabulary such as paradigm, volatile, and scrutinize.

But given the pressure to close the achievement gap and the amount of money expended by wealthy parents on tutoring, the difficulty of the old exam was becoming a serious liability. Colleges were pushing to increase enrollments of first-generation and minority students, a problematic goal test-wise given that students in those groups tended to score well below their better-off peers. As a result, increasing numbers of schools were going test-optional in order to admit classes with the appropriate demographic mix without suffering in the USNWR rankings. The College Board had to do something to induce more colleges to keep the test.

Then, of course, there was the competition with the ACT, which in 2012 surpassed the SAT as the most popular college admissions test.

Much has been made of that last point, I think rightly so. But I also think that only half the story is typically taken into account  — and the other half, always lurking between the lines, seems to me the more interesting half.  

When people cite the ACT as the most popular college admissions test, they’re probably thinking of students who intend to apply to college and who therefore sign up to take the ACT at a designated location on a Saturday morning. Plenty of students do that, of course; however, in a (now-dwindling) number of states, students must take the ACT in order to graduate. Whether or not the ACT was developed for that purpose, it is effectively treated as a state test that students can conveniently also use to apply to college.

The fact that the SAT has now entered into fierce competition with the ACT for the state-test market has of course been widely reported, as has the fact that states’ adoption of the SAT coincided with the release of the new exam.

In general, however, those two developments tend to be presented as only loosely related. Everyone knows that the SAT is fighting for market share with the ACT. Oddly enough, though, no one seems to be suggesting a cause-and-effect relationship — namely, that the SAT was redesigned in part so that it could fight for a share of the state-test market. Those state-test takers were the reason the ACT had surpassed the SAT; targeting them was thus the fastest and easiest way to recapture market share.

Perhaps the implication was so obvious that it seemed unnecessary to spell out, but I think there’s more going on here than is immediately apparent. 

In order to compete effectively in the state test market, the SAT had to be reworked into a high-school aligned test. Indeed, many of the passages on the Critical Reading section of the old SAT were written at such a high level that administering the test to high school students as a graduation requirement would have been positively unthinkable.

This also explains why the ELA “college readiness” benchmarks were lowered so dramatically. The reduction was necessary to allow a sufficient number of students who would not normally even take the SAT to be deemed “college ready” — and, in some states, be guaranteed the right to skip remedial classes in college. That is a very lucrative change for cash-strapped public institutions, since such classes are expensive to administer. Professors, or more likely underpaid part-time adjuncts, will have no choice but to make do with students who are not in fact ready for college-level work. But if the goal is simply to boost college graduation rates without regard for what a college diploma actually means, then that is a perfectly logical decision.

The other piece of the puzzle involves… you guessed it, Common Core. Facing increasing backlash over Common Core tests such as the PARCC, states are understandably looking for an alternative test that meets state standards. And although some states appear to be abandoning Common Core, they are often simply retaining the standards under a new name.

Even if David Coleman is now attempting to distance himself from the mess he created, states are still more or less locked into finding tests aligned with Common Core, or whatever the standards may now be called. The SAT is the obvious solution. In addition to being much, much shorter than PARCC (just under 3 hours vs. 11 hours), it already bears the gold stamp of alignment. And even though Coleman has begun to backtrack, insisting that the new SAT merely reflects the skills most important for advanced high school and early college students, the association between the test and Common Core is pretty much set.

The ACT, in contrast, admitted earlier and more publicly that it is not perfectly aligned with Common Core — perhaps not the best move given the current climate. 

In reality, it is of course questionable whether the SAT was ever intended to be in perfect alignment with Common Core. CCSS.ELA-LITERACY.RI.11-12.2, for example, indicates that students should be able to “determine two or more central ideas of a text and analyze their development over the course of the text, including how they interact and build on one another to provide a complex analysis.” Perhaps I’m missing something, but that particular skill does not appear to be tested on rSAT.

In any case, the result is that schools will effectively be swapping one Common Core test of questionable validity for another. In fact, rSAT bears an uncanny resemblance to the much-loathed PARCC, at least on the ELA side. More of a resemblance, incidentally, than it bears to the old SAT. 

Because the SAT is still branded as the SAT, however, it is likely to be perceived as less controversial. I would imagine that most parents are unaware of the extent to which the test has been diluted. Indeed, backlash is more likely to come from the decision to use a college-admissions test for high school graduation.

Furthermore, the high school market brings with it another advantage: while individual colleges can choose to opt out of the SAT, individual high schools cannot. If the mandate is implemented at the state level, then thousands of students are guaranteed to take the test. The College Board gets paid for what is essentially a captive market. 

How colleges benefit from inflated SAT scores

One common response to the College Board’s attempts to market the redesigned SAT to students and families by focusing on the ways in which it will mitigate stress and reduce the need for paid test preparation is to insist that those factors are actually beside the point: the College Board can market itself to students and families all it wants, but the test is about colleges’ needs rather than students’ needs.

That’s certainly a valid point, but I think that underlying these comments is the assumption that colleges are primarily interested in identifying the strongest students when making admissions decisions. If that were true, a test that didn’t make sufficient distinctions between high-scoring applicants wouldn’t be useful to them. But that belief is based on a misunderstanding of how the American college admissions system works. So in order to talk about how the new SAT fits into the admissions landscape, and why colleges might be so receptive to an exam that produces higher scores, it’s helpful to start with a detour.

To back way up, the modern American university is essentially a hybrid creature. At the graduate and faculty levels, it’s based on the nineteenth-century German research model. Faculty and graduate students are hired/accepted based on their research and publication records (“publish or perish”) and are expected to win grants and advance knowledge in ways that will amplify their institutions’ prestige — academia’s principal form of currency. Intellectual firepower is paramount; that is why graduate students, or at least Ph.D. students, are typically chosen by faculty members in the relevant department.

Then there are undergraduate programs, which operate on a “best graduates” rather than a “best students” model. Excelling academically is of course not frowned upon; however, the real goal is not to admit the most intellectually capable students per se but rather those deemed most likely to ultimately reflect well on their potential alma mater. And since institutions can expend tens of thousands of dollars to educate a single student, they are, to put it crudely, seeking a return on investment. Although few admissions officers would put it that way (at least publicly), their job is effectively to identify the students most likely to provide that return.

In addition, while academics are (ideally) the primary focus of undergraduate life, they are also, by design, only one of many aspects of it. It is understood that only a tiny percentage of students will go on to pursue academic careers; and while there are of course many wonderful, genuinely caring professors who take a serious interest in teaching, there is often a tacit understanding that undergraduates will be kept out of professors’ hair so that the latter can do their research and pursue tenure in peace, while professors will not make undue demands of students who, after all, have numerous extracurricular obligations to fulfill. Managing that balance requires teams of deans, advisors, and graduate students.

Lest you think this is nothing more than a caricature of academia: in the two and a half years I sat in on faculty meetings in a humanities department at a certain Ivy League university in Cambridge, MA (a university whose dean of admissions has come out quite prominently in support of the new SAT), I recall undergraduates being mentioned by name exactly twice — and one of those times involved faculty complaints.

I also recall once receiving a phone call from an admissions officer who wanted to talk about a recent admit (“great kid”) with a particular interest, but didn’t know that the field in question — a common one — was handled by a different department. Admissions people and academic departments are sometimes not even on the same page.  

Given the goal of providing undergraduates with an “experience” rather than just a series of classes and exams, admission to a B.A. program is in some ways more like admission to a club than to an academic institution; it’s a question of “fit,” and the desire to maintain institutional (brand) identity.

At most universities, the people who will actually teach students are largely absent from the admissions process. Indeed, at many schools, those who teach the admitted freshmen are more likely to be underpaid part-time adjuncts than tenured faculty.

For anyone familiar with the college admissions landscape, this probably isn’t news, but when gauging the reception to the new SAT, it bears repeating: colleges are not simply admitting individual students based on academic achievement but rather attempting to shape a class, one with the requisite ethnic, athletic, geographic, extracurricular, monetary, and social attributes. Even if outright quotas are illegal, schools nonetheless have unofficial “targets” to reach. Indeed, admissions directors are in some places now known as “chief enrollment officers.”

Assuming a student meets a minimum academic baseline, always left deliberately vague and contingent upon the student’s background, secondary qualities are in considerable part responsible for determining whether that student is offered a spot in the incoming freshman class. (At least that’s the case at most selective private colleges — it’s a little different at large public universities, which tend to be more numbers-driven.)

This “holistic” approach to admissions of course has a long and sometimes sordid history. As detailed in Jerome Karabel’s The Chosen, among other places, the very concept was invented in order to cap the number of Jewish students admitted to the Ivy League. The 1920s fear of enrolling too many “grinds” has perhaps been softened into platitudes about diversity and well-roundedness, but one can still hear barely concealed echoes of it. Few colleges, after all, would publicly admit to wanting students who spent most of their time studying. 

In recent decades, of course, the purpose of holistic admissions has shifted considerably, but the fact remains that colleges are free to select their students according to their own institutional priorities, and standardized tests are only valuable insofar as they allow schools to create freshman classes with the desired profile.

That, I suspect, is why most universities would never switch to a lottery-based admissions system for applicants who met a minimum set of academic criteria. Even the theoretical possibility of landing fantastic applicants with low scores but all the right other attributes outweighs, in their calculus, the chaos and stress the current system inflicts.

Furthermore, universities are businesses, nonprofit status notwithstanding, and like any business, they have budgetary constraints. While an extremely select group of elite schools (which, ironically, tend to enroll only a small percentage of low-income students) have virtually unlimited funds for financial aid, that is hardly the case for most schools. The higher the number of competitive applicants, the more leeway there is to choose applicants who can pay sticker price.

This is where rSAT comes in.

Even if scores shift generally upward, they can only rise so far; 800 is still the top score on each section. The top students, the ones who would have scored at the top on the old test, will continue to score at the top of the scale on the new test. Students who would have obtained strong but not exceptional scores on the old SAT, however, will now be more likely to score in the stratosphere.

I suspect that this is particularly true for students who are extremely strong in Math and merely above average in Reading. (Many of my former students who obtained ACT Reading scores in the 34-36 range after failing to break 700 on the old SAT would very likely have fallen into that category.) The difference between the very top students and the merely very good ones is thus blurred, leading admissions committees to rely more heavily on “soft” criteria and making the overall process more opaque.

As one commenter on my recent “Race to the Bottom” post pointed out, this is likely intended to be a feature of the system, not a bug.

One problem with the argument that colleges will reject a test that compresses the top end of the curve, making it more difficult to distinguish between high-scoring students, lies in the now-universal acceptance of the ACT. Colleges that had previously resisted accepting that test eventually gave in for fear of losing too many applicants (Harvey Mudd was the last to capitulate, in 2007). And in case I need to spell it out: more applicants = lower acceptance rate = higher USNWR ranking = more applicants the following year (case in point: Northeastern University).

So the decision regarding which test would be accepted was ultimately driven by the applicants, not by the schools themselves. Even if scores rise across the board, individual schools can still tout their newly higher scores as a selling point; parents accustomed to the 1600 scale will assume the scores mean pretty much the same thing they meant in 1985. Not all schools will benefit equally either; schools with features that make them attractive to large numbers of applicants for other reasons (sports, location, etc.) will get the biggest boost. 

It also seems reasonable to assume that two groups of students stand to benefit most from score inflation: motivated underprivileged students from poor-to-mediocre schools who might have scored in the 400s on the old test but who could break the 500 or even the 600 mark on the new one; and somewhat above average middle-class-and-higher students who would likely fall short of the 600 or 700 mark respectively on the old test but manage to clear it on the new one.

The first group will help schools boost the number of minority admits while mitigating some of the pushback over test scores; the second will help them more easily accept full-paying students who in the past might have been eliminated, thus helping to offset the financial needs of the first group. Even if such students are at the low end of the range, they are still more likely to be in the “admissible” pool with the new test. For a college that charges upwards of $60K/year and needs more students paying full-freight, the sudden appearance of more students in that category is a boon. 

In contrast, students who would have scored at the very top on the old exam will find it more difficult to stand out if their abilities are not in evidence elsewhere in their applications. 

At the most selective schools, the superabundance of high-scoring applicants will also give admissions committees increased liberty to select the students with the most desirable non-academic traits without appearing to compromise the quality of their admits.

As things stand, only a relatively small number of students are admitted on academic achievement alone. Those students’ achievements tend to go far, far beyond the SAT, and even in an applicant pool filled with high achievers, they stick out. The SAT isn’t usually that big a deal for them; they’re more concerned with things like cancer research.

For students below that level, however, colleges will have an easier time justifying their decisions to deny applicants with exceptionally high test scores who don’t quite meet the extracurricular/personality bar. Both Harvard and Princeton have recently faced accusations of discrimination from Asian applicants who were rejected despite their extremely high scores and grades, while other, less academically accomplished students were admitted. Although both universities were cleared, that’s not the sort of publicity either school wants to invite. An increase in the number of non-Asian high scorers, which a flattening of the top of the curve would likely produce, would reduce the potential for those sorts of allegations.

Finally, colleges at the lower end of the hierarchy stand to benefit, but in a very different way. Some states have implemented policies to guarantee that students who are enrolled in public colleges and who meet a certain benchmark on the SAT will not take remedial classes. Such classes are costly to administer, and a lowered benchmark would reduce the need for them. 

An analysis of problems with PSAT scores, courtesy of Compass Education

Apparently I’m not the only one who has noticed something very odd about PSAT score reports. California-based Compass Education has produced a report analyzing some of the inconsistencies in this year’s scores.

The report raises more questions than it answers, but the findings themselves are very interesting. For anyone who has the time and the inclination, it’s well worth reading.

Some of the highlights include:

  • Test-takers are compared to students who didn’t even take the test and may never take the test.
  • In calculating percentiles, the College Board relied on an undisclosed sample method when it could have relied on scores from students who actually took the exam.
  • 3% of students scored in the 99th percentile (by definition, only 1% of actual test-takers can).
  • In some parts of the scale, percentile rankings rose by as much as 10 points between 2014 and 2015.
  • More sophomores than juniors obtained top scores.
  • Reading/writing benchmarks for both sophomores and juniors have been lowered by over 100 points; at the same time, the elimination of the wrong-answer penalty permits a student to approach the benchmark while guessing randomly on every single question (a rough sketch of that arithmetic follows this list).
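
To make that last point concrete, here is a minimal back-of-the-envelope sketch in Python. The 96-question verbal total (52 Reading plus 44 Writing and Language) matches the published rSAT format; the linear raw-to-scaled conversion and the idea of a mid-400s benchmark are simplifying assumptions of mine, since actual conversion tables and benchmarks vary by form and by grade level.

    # Expected verbal score from blind guessing, old scoring vs. new.
    # Assumptions (mine, not the College Board's): 96 four-choice
    # questions (52 Reading + 44 Writing & Language, the published rSAT
    # format), a simple linear raw-to-scaled mapping onto 200-800, and
    # a hypothetical benchmark in the mid-400s. Real conversion tables
    # differ from form to form.

    N = 96                                   # verbal questions
    CHOICES = 4                              # answer choices per question

    # New rules: rights-only scoring, no penalty for wrong answers.
    new_raw = N / CHOICES                    # expected ~24 correct
    new_scaled = 200 + 600 * new_raw / N     # ~350 on a linear scale

    # Old rules: 5 choices, minus 1/4 point per wrong answer.
    old_raw = N * (1 / 5) - N * (4 / 5) / 4  # expected exactly 0

    print(f"new rules: raw ~{new_raw:.0f}/{N}, scaled ~{new_scaled:.0f}")
    print(f"old rules: raw ~{old_raw:.0f} (i.e., the scale floor)")

On those assumptions, a student who bubbles entirely at random now lands around 350 rather than at the 200 floor, which is why a benchmark lowered by more than 100 points begins to look approachable by chance alone.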

Race to the bottom

Following the first administration of the new SAT, the College Board released a highly unscientific survey comparing 8,089 March 2016 test-takers to 6,494 March 2015 test-takers.

You can read the whole thing here, but in case you don’t care to, here are some highlights:

  • 75% of students said the Reading Test was the same as or easier than they expected.
  • 80% of students said the vocabulary on the test would be useful to them later in life, compared with 55% in March 2015.
  • 59% of students said the Math section tests the skills and knowledge needed for success in college and career.

Leaving aside the absence of some basic pieces of background information that would allow a reader to evaluate just how seriously to take this report (why were different numbers of test-takers surveyed in 2015 vs. 2016? who exactly were these students? how were they chosen for the survey? what were their socio-economic backgrounds? what sorts of high schools did they attend, and what sorts of classes did they take? what sorts of colleges did they intend to apply to? were the two groups demographically comparable? etc., etc.), this is quite a remarkable set of statements.

Think about it: the College Board is essentially bragging — bragging — about how much easier the new SAT is.

Had a survey like this appeared even a decade ago, it most likely would have been in The Onion. In 2016, however, the line between reality and satire is considerably more porous.

To state the obvious, most high school juniors have never taken an actual college class (that is, a class at a selective four-year college), and it is exceedingly unlikely that any of them have ever held a full-time, white-collar job. They have no real way of knowing what skills — vocabulary, math, or otherwise — will actually be relevant to their futures.

Given that exceedingly basic reality, the fact that the College Board is touting the survey as being in any way indicative of the test’s value is simultaneously hilarious, pathetic, and absurd.

So, a few things.

First, I’ve said this before, but I’ll reiterate it here: the assertion that the SAT is now “more aligned with what students are learning in school” overlooks the fact that the entire purpose of the test has been altered. The SAT was always intended to be a “predictive” test, one that reflected the skills students would need in college. Unlike the ACT, it was never intended to be aligned with a high school curriculum in the first place.

Given the very significant gap between the skills required to be successful in the average American high school and the skills necessary to be successful at a selective, four-year college or university, there is a valid argument to be made for an admissions test aligned with the latter. But regardless of what one happens to think about the alignment issue, to ignore it is to sidestep what should be a major component of the conversation surrounding the SAT redesign.

Second, the College Board vs. ACT, Inc. competition illustrates the problem of applying the logic of the marketplace to education.

In order to lure customers from a competitor, a company must of course aim to provide those customers with an improved, more pleasurable experience. That principle works very well for a company that manufactures, say, cars, or electronics.

If your customers are students and your product is a test, however, then the principle becomes a bit more problematic.

The goal then becomes to provide students with a test that they will like. (Indeed, if I recall correctly, when the College Board first announced the redesign, the new test was promoted as offering an improved test-taking experience.)

What sort of test is that? 

A simpler test, of course.

A test that inflates scores, or at least percentile rankings.

A more gameable test: one on which it is technically possible to obtain a higher score by filling in the same letter for every single question than by answering every question in earnest and getting them all wrong (with four answer choices and no guessing penalty, blind bubbling nets roughly a quarter of the questions; earnest wrong answers net zero).

A test that makes students feel good about themselves, while strategically avoiding anything that might directly expose gaps in their basic knowledge — gaps that their parents probably don’t know their children possess and whose existence they would most likely be astounded to discover. (Trust me; I’ve seen the looks on their faces.)

Most of the passages on the English portion of the ACT are written at around a middle school level, as are the Writing passages on the new SAT. But whereas the ACT assigns separate scores to its English and Reading portions, the new SAT takes things a step further and combines the Reading and Writing portions into a single verbal score. As a result, the SAT allows students reading below grade level to hide their weaknesses much more effectively.

Indeed, I’d estimate that most of my ACT students, many of whom switched from the SAT because the reading was simply too difficult, were reading at somewhere between a seventh- and a ninth-grade level. Those students are pretty obviously the ones the College Board had in mind when it redesigned the verbal portion.

Forgive me for sounding like an old fogey from the dark ages of 1999 here, but should a college admissions test really be pandering to these types of students? (Sandra Stotsky, one of the two members of the Common Core validation committee to reject the standards, has suggested that the high school Common Core standards be applied to middle school students as a benchmark for judging whether they are ready for high school.)

And for colleges, do the benefits of collapsing the distinction between solid-but-not-spectacular readers and the exceptional readers truly outweigh the drawbacks? Those sorts of differences are not always captured by grades; that is exactly what has traditionally made the SAT useful. 

Obviously, the achievement gap is the omnipresent elephant in the room. Part of the problem, however, is that the college admissions system poses such vastly different challenges for different types of students; there’s no way for a single test to meet everyone’s needs.

I’m not denying that for students aiming for elite colleges, the college admissions process can easily spiral out of control. I’ve stood on the front lines of it for a while now, and I’ve seen the havoc it can wreak — although much of the time, that havoc also stems from unrealistic expectations, some of which are driven by rampant grade inflation. An 1100 (1550) SAT was much easier to reconcile with B’s and an occasional C than with straight A’s. 

A big part of the stress, however, is simply a numbers game: there are too many applicants for too few slots at too few highly desirable schools. Changing the test won’t alter that fact. 

If anything, a test that produces more high-scoring applicants will ultimately increase stress levels because yet more students will apply to the most selective colleges, which will in turn rely more heavily on intangible factors. Consequently, their decisions are likely to become even more opaque. 

At the other extreme, the students at the bottom may in fact be lacking basic academic vocabulary such as “analyze” and “synthesize,” in which case it does seem borderline sadistic to test them on words like “redolent” and “obstreperous.”  It’s pretty safe, however, to assume that students in that category will generally not be applying to the most selective colleges. But in changing the SAT so that the bottom students are more likely to do passably well on it, the needs of the top end up getting seriously short shrift. No one would argue that words like “analyze” aren’t relevant to students applying to the Ivy League; the problem is that those students also need to know words like “esoteric” and “jargon” and “euphemism” and “predicated.”

The easiest way to reduce the gap between these two very disparate groups is of course to adjust the test downward to a lower common denominator while inflating scores. But does anyone seriously think that is a good solution? Lopping off the most challenging part of the test, at least on the verbal side, will not actually improve the skills of the students at the bottom. It also fails to expose the students at the top to the kind of reading they will be expected to do. And even if the formerly ubiquitous flashcards disappear and stress levels temporarily dip, the underlying issues will remain, and in one guise or another they will inevitably resurface.

I’m not naive enough to think that the SAT redesign will have an earth-shattering effect on most high school students. The students who have great vocabularies and read non-stop for pleasure won’t suddenly stop doing so because a handful of hard words are no longer directly tested on the SAT. The middling ones who were going to forget all of those flashcards they tried to memorize will come out pretty much the same in the end. The ones who never intended to take the test will sit through it in school because they have no choice, but I know of no research to suggest that they are more likely to complete a four-year degree as a result. Plenty of students whose parents initially thought Khan Academy could replace Princeton Review will discover that their children need some hand-holding after all and will sign them up for a class — especially if all of their friends suddenly seem to be scoring above the 95th percentile. Not to mention the thousands of kids who will ignore the redesign altogether and take the ACT, just as they intended to do in the first place.

Rather, my real concern is about the message that the College Board is sending. Launching a smear campaign to rebrand the type of moderately challenging vocabulary that peppers serious adult writing as “obscure” might have been necessary to win back market share, but it was a cheap and irresponsible move. It promotes the view that a sophisticated vocabulary is something to be sneered at; that simple, everyday words are the only ones worth knowing. Even if that belief is rampant in the culture at large, shouldn’t an organization like the College Board have some obligation to rise above it? It suggests that knowledge acquired through memorization is inherently devoid of value. It misrepresents the type of reading and thinking that college-level work actually involves. It exploits the crassest type of American anti-intellectualism by smarmily wrapping it in a feel-good blanket of social justice. And it promotes the illusion that students can grapple with adult ideas while lacking the vocabulary to either fully comprehend them or to articulate cogent responses of their own. 

What is even more worrisome to me, however, is that the College Board’s assertions about the new test have largely been taken at face value. Virtually no one seems to have bothered to look at an actual recent SAT, or interviewed people who actually teach undergraduates (as opposed to administrators or admissions officers), or even stopped to consider whether the evidence actually supports the claims —  that whole “critical thinking” thing everyone claims to be so fond of. 

And that is a problem that goes far, far beyond the SAT.