The Atlantic’s Jeffrey Selingo recently published an article about an entirely predictable consequence of grade- and score-inflation for the selective college admissions process: namely, that the glut of applicants with sky-high GPAs and test scores is making those two traditional metrics less and less reliable as indicators of admissibility.
It’s not that those two factors no longer count, but rather that they are increasingly taken as givens. So while top grades and scores won’t necessarily help most applicants, their absence can certainly hurt.
As Selingo writes:
In the past 15 years, though, these lodestars have come to mean less and less. The SAT has been redesigned twice in that time, making it difficult for admissions officers to assess, for instance, whether last year’s uptick in average scores was the result of better students or just a different test. What’s more, half of American teenagers now graduate high school with an A average, according to a recent study. (Note: grades have risen more significantly among private-school students than among public-school students.) With application numbers at record highs, highly selective colleges are forced to make impossible choices, assigning a fixed number of slots to a growing pool of students who, each year, are harder to differentiate using these two long-standing metrics.
Eighty percent of American colleges accept more than half of their applicants, but at the country’s most selective schools, there is something of a merit crisis: As test scores and GPAs hold less sway, admissions offices are searching for other, inevitably more subjective metrics.
As I wrote way back when the scoring for the redesigned SAT was released, this “uptick” in scores is a feature, not a bug. In fact, the test was redesigned, in no small part, for the explicit purpose of boosting scores. Between 2005 and 2016, for example, Reading scores declined from a high of 508 to 494, only to jump up almost 40 points to 533 in 2017, on the same 800-point scale (with further distortions provided by the combination of Reading and Writing into a single score).
Keep in mind that the 2016 numbers themselves reflect “re-centered” post-1995 scores; the 2016 figure of 494 equates to about a 425 on the pre-1995 exam, and once the additional inflation built into the redesigned test is factored in, the 2017 figure probably equates to something more like a 390 on the pre-1995 scale. (Interestingly, when I searched the College Board website for the pre/post-1995 conversion, I repeatedly got a message that the page was blocked. Make of that what you will.)
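For anyone trying to keep the three scales straight, here is a minimal sketch of the arithmetic above, using only the figures cited in this post (the pre-1995 equivalence is my rough estimate, not an official College Board concordance):

```python
# A quick tally of the score figures cited above. The pre-1995
# equivalence is a rough estimate, not an official concordance.

reading_2005 = 508   # high point, post-1995 ("re-centered") scale
reading_2016 = 494   # low point on that same scale
reading_2017 = 533   # first year of the redesigned SAT

# The jump attributable to the redesign rather than to stronger students:
print(reading_2017 - reading_2016)   # 39 points -- "almost 40"

# Rough pre-1995 equivalent of the 2016 average, as cited above:
pre1995_2016 = 425
print(reading_2016 - pre1995_2016)   # ~69 points added by the 1995 re-centering
```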
The College Board was always fairly upfront about the fact that a main goal of the redesign was to improve the test-taking experience, which is to say, to make students (i.e., customers) happy. And a significant part of that model, of course, involves giving out more high scores. (When the first set of rPSAT scores was released, a colleague who teaches SAT-prep classes recounted to me how he tried to explain to his students that the numbers had been fudged, but they refused to listen.)
At the same time, the increase in higher-scoring applicants would contribute to application bloat, driving acceptance rates down and making selective colleges seem even more desirable, which in turn would give admissions offices even more subjective leeway in deciding whom to accept.
It wasn’t terribly hard to predict what the eventual fallout would look like.
Although the College Board’s move may have produced some initial satisfaction, the downsides (for applicants, at least) are now becoming apparent — and yet people seem surprised.
The problem, of course, is that you can’t have it both ways: when the top of the academic pool is artificially expanded, when the application process effectively encourages students to apply to huge numbers of schools, and when the number of freshman spots at elite schools remains more or less the same, a lot of genuinely qualified applicants are going to end up disappointed.
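To put toy numbers on that squeeze (the applicant and seat counts below are hypothetical, chosen only to show the direction of the effect):

```python
# Hypothetical illustration: seats stay fixed while inflated credentials
# encourage more applications, so the acceptance rate has nowhere to go but down.
seats = 2_000          # hypothetical freshman class size
base_apps = 40_000     # hypothetical application count before score inflation

for label, growth in [("before", 1.0), ("after", 1.5)]:
    apps = int(base_apps * growth)
    print(f"{label}: {apps:,} applications -> {seats / apps:.1%} admitted")
# before: 40,000 applications -> 5.0% admitted
# after: 60,000 applications -> 3.3% admitted
```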