Now that the Supreme Court has issued its expected ruling dismantling Affirmative Action, it is reasonable to assume that widescale test-optional admissions are here to stay. While applicants are still free to discuss their ethnic backgrounds in their essays, and colleges may consider that information as part of the holistic review process, the ruling issues a clear warning to schools not to attempt to use such information in an attempt to circumvent either the letter or the spirit of the ruling.

Although the new landscape is murky, and colleges are understandably hesitant to undertake any action that might result in additional court cases, it is also clear that they will take any legally permissible steps necessary to enroll URM applicants. As Bates professor Tyler Harper pointed out in a New York Times piece responding to the ruling, this is likely to result in an admissions process that is even more subjective, opaque, and open to “racial gaming” than it is at present—in the absence of test scores, accusations of unfairly (dis)advantaging certain categories of applicants become that much more difficult to prove in court, and there will undoubtedly continue to be much wailing and gnashing of teeth about How the Woke Mob Is Destroying the Great American University.

However, there is rhetoric and virtue-signaling, and then there is a more complex reality. In fact, the test-optional movement cuts both ways. On a small scale, it will undoubtedly result in the admission of some underrepresented minority students who would not have gained acceptance to particular institutions otherwise. On a very broad scale, in contrast, those effects are likely to be more muted. From universities’ standpoint, there are benefits to dropping standardized testing requirements that are entirely unrelated to promoting equity, and that are likely to benefit the most advantaged applicants.

1) Rankings and Competition 

Simply put, universities that waive their standardized-testing requirements will attract more applicants, and elite colleges cannot afford to fall behind peer institutions in the application-inflation arms race. Selectivity drives prestige, reputation, and desirability—no Ivy League Dean of Admissions wants to be responsible for reporting a 15% drop in applications—and continues to factor into USNWR rankings.

Furthermore, minuscule acceptance rates may paradoxically encourage even larger numbers of under-qualified students to apply. The more random and lottery-like the selection process appears, the greater the likelihood that uncompetitive applicants with poor guidance and little knowledge of the inner workings of the elite admissions process will imagine they have a chance and throw in a what-the-heck application to Harvard.

Institutional bragging rights can also trickle down to things like alumni satisfaction and thus donation rates. Adults who attended universities that were far less selective a decade or two ago may benefit from a belated boost in the perceived prestige of their degree and thus be more likely to contribute money to their alma mater. Although giving rates are no longer factored into USNWR rankings, they may still have a measurable impact on a school’s finances.

Note that in terms of USNWR metrics, MIT is essentially the exception that proves the rule: The decision to go back to requiring test scores was prompted by recognition of the fact that students with SAT Math scores below 700 consistently graduated at lower rates—a serious problem from a USNWR perspective, given that more than 20% of a university’s ranking is determined by its graduation rate (the most heavily weighted factor). By effectively setting a baseline requirement, MIT acted to improve the chance that all of its freshmen, regardless of their background, would be positioned to make it to graduation, thus protecting its spot at or near the top of the list.

MIT is also, however, one of the few institutions unlikely to pressure professors into lowering their standards to accommodate less-prepared students; and given the highly accomplished and self-selected nature of its applicant pool, standardized testing is unlikely to deter admissible candidates from applying.

2) Declining Number of High School Graduates 

Beginning in 2025, the number of high-school graduates is projected to drop, with two-year colleges seeing a 12% decline in attendees and four-year institutions a 9% drop (for the class of 2029, as compared to 2012).

Particularly at less-selective schools with lower endowments, filling seats in the class will be a matter of survival. Going test-optional is a straightforward strategy for keeping applications and enrollment numbers up, along with the application fees and tuition dollars they generate.

3) Greater Leeway to Accept Full-Pay Students 

Although the correlation between wealth and test scores is well established, there are many, many high-income/middle- or low-scoring students. Eliminating the SAT/ACT requirement will give these students a substantial advantage, particularly at less selective institutions but also at relatively prestigious but under-endowed/need-aware ones, where full-pay status can tip an applicant over into the “accept” pile.

Unlike lower-income students, affluent high-grade/low test-score applicants are also more likely to attend private high schools where grade inflation is most pronounced, and to possess the “soft” factors that make them attractive candidates. Indeed, test-optional schools such as Sarah Lawrence, Colby, and Tufts have traditionally been among the most expensive in the country and are notorious for having some of the wealthiest student bodies. Test-optional policies are now opening most other colleges up to an entirely new pool of full- or near-full-pay enrollees.

Essentially, wealthy candidates become doubly advantaged: those who do achieve high scores will remain strong applicants, while those who do not will find their chances boosted as well. In contrast, less affluent students who do not submit test scores and lack detailed recommendations touting their exceptional personal qualities are likely to find themselves disadvantaged in the admissions process. But by couching their move in terms of equity, colleges can have their cake and eat it too.


Beyond these factors, I suspect that the test-optional movement would never have gained such traction were general sentiment toward standardized testing not so negative. Although legacy admissions are beginning to generate increasing backlash—it will be interesting to see how this plays out—that practice has never inspired the sheer hysteria and vitriol among the chattering classes that the SAT and ACT manage to conjure up.

In a front-page New York Times article on the fallout from the Supreme Court decision, for example, Stephanie Saul referred to students being “tortured” by standardized testing. This kind of hyperbole is really striking, not to mention inappropriate. It’s one thing for teenagers to exaggerate this way, but adults should be, shall we say, a tad more judicious in their choice of terminology. “Torture” is something inflicted by Russian soldiers on the Ukrainians whose cities they destroyed; taking a university-entrance exam is a normal, albeit stressful, teenage rite of passage all over the world.

Besides, compared to the Chinese gaokao, British A-levels, or, yes, the Finnish matriculation exam (five tests lasting six hours each!), the SAT and ACT are not particularly demanding. The fact that these tests are perceived as such a daunting hurdle, and that a journalist could feel comfortable describing them in such exaggerated and juvenile terms on the front page of a newspaper of record, underlines the parochialism and lack of seriousness in the American education system. It also raises questions about the extent to which adults are actually stoking students’ anxiety about testing rather than trying to assuage it.

Yes, affluent families can and do pay dearly for help with preparation; however, the tutoring process still requires students to put in the work—sometimes quite a lot of it—and there is no guarantee at the end. Indeed, that lack of a guarantee is exactly why the parents in the Varsity Blues scandal resorted to outright fraud rather than simply hiring tutors. The wealthy may benefit from standardized testing en masse, but on an individual basis they generally dislike it just as much as anyone else.

Essentially, the problem is not that money can buy high test scores, but rather that it cannot do so easily or reliably enough.

When I was working as a tutor, one of the things that struck me about certain students who failed to improve their scores was that they seemed to have no experience with the kind of studying that meaningful progress would have required. (And had they attempted it, no doubt it would have felt like torture.)

These were mostly A-/B+ students who came from well-off families and attended supposedly excellent schools, public as well as private. A number of them had learned to talk the talk about the beauty of learning, sometimes very convincingly, but were rarely held to truly high standards or required to master substantive content (at least in the humanities). Although they were generally quite verbal and projected a veneer of sophistication, they had academic gaps that could be downright shocking. I got the distinct impression that a low or middling SAT or ACT score was their first test result that was genuinely reflective of their knowledge level, and the first with real consequences. Of course they and their parents resented the exam.

In addition, regardless of how radically the SAT has been overhauled over the last century, in the popular imagination it has never fully managed to shake its early associations with IQ testing and the eugenics movement. (Indeed, comments-board proponents of standardized testing are wont to make cringeworthy, Bell Curve-esque pronouncements about “certain groups” being innately smarter than others.) A poor SAT score is therefore not just a piece of feedback indicating that one has not mastered a set of academic competencies, or that one merely needs to study harder. Rather, it is a deeply felt personal insult, an indictment of one’s intelligence and overall “aptitude” for learning.

I suspect that this is why otherwise rational middle-aged adults so often seem to revert to their teenage selves when discussing the SAT.

Moreover, given the privileged position of “choice” in U.S. society, the idea that students could be required to take a test that might reveal unflattering information about them can seem downright un-American. AP tests, which are elective, do not have this taint.

Or, in a more progressive manifestation of hyper-individualism, the soothing mantra that “every child learns in a different way” is used to preemptively shut down any serious questioning of how and how much students are actually learning at purportedly stellar schools. (I understand that this idea has the force of gospel, but allow me to point out that if every student literally had a completely unique learning style, my books would be useless.) This encouraged ignorance is further abetted by $5,000 or even $10,000 neuropsych evaluations and a psychology industry co-opted by the wealthy to smooth the way. Of course, wealthy students benefit disproportionately from testing accommodations, but applying, and sometimes appealing, for extra time is a headache regardless.

In addition, families that are merely affluent but fall well below the 1% and live in high-cost areas quite understandably resent having to shell out for tutoring. True, they could technically choose to opt out, but when everyone else is doing it, it doesn’t really seem like a choice at all.

It is therefore entirely unsurprising that the SAT and ACT would generate the type of backlash they do.

It is also why logical, evidence-based arguments about the advantages of standardized testing have so little impact, as the drafters of the 2020 University of California report discovered. The real opposition to testing, I would posit, is psychological and thus impervious to facts and statistics.

Given this context, it is hard for me not to see all the talk about equity as providing a convenient cover for a certain segment of the population to do what it wanted to do all along—eliminate a stressful and unpleasant experience without which their progeny would be very limited in their college options. That is very different from legacy admissions, which offer no comparable downside.

In many ways, the controversy around standardized testing is a case of history repeating itself. What is often ignored by those who would criticize the SAT on the basis of its origins is that the first tests produced the exact opposite of the intended result: the highest scorers, rather than the expected WASPs from Groton and Exeter, were Jewish boys from public schools on Long Island. As Jerome Karabel explains in The Chosen, his exhaustive history of admissions practices in the Ivy League, when Harvard, Princeton, Yale, and Columbia based their selection process on exclusively academic grounds, they were confronted by an influx of “socially undesirable” students that ultimately led them to create the holistic process that persists today:

The creation of the country’s first Office of Admissions, established at Columbia in 1910, was a direct response to the “Jewish problem.” Headed by Adam Leroy Jones, it used subjective criteria in evaluating candidates as it attempted to create a favorable “mix” in the student body. Through an emphasis on qualities such as “character” and “leadership,” which could not be quantified, as well as the strategic deployment of discretion in determining which candidates who had not passed all the exams might still be admitted, Jones was able to report that the students who enrolled under his tutelage were “very much more desirable” than the ones accepted in previous years.

Sounds familiar, right?

As Karabel also points out, college admissions over the past hundred years have essentially been the story of the fight over the meaning of “merit,” with schools continually redefining the term to suit their needs during any given era. In this context, pleas for college admissions to be both holistic and “merit-based,” while well intentioned, fundamentally misunderstand the system. Subjectivity and opacity are features, not bugs.

The reality is that privileged people do not generally relinquish their privilege willingly. That was true a hundred years ago, and it is just as true today. Regardless of the rhetoric about equity, colleges have a bottom line to protect, and the SAT and ACT could not have been made optional without an incentive to some of their highest-paying potential customers. The almighty dollar, not the striving for social justice, has the larger say.