Over the last few days, chatter about the release of the College Board’s new “adversity index”—a number designed to encapsulate the amount of socioeconomic disadvantage applicants have faced—has finally eclipsed talk of the college admissions scandal (well, mostly).
As the NY Times reports:
The College Board announced on Thursday that it will include a new rating, which is widely being referred to as an “adversity score,” of between 1 and 100 on students’ test results. An average score is 50, and higher numbers mean more disadvantage. The score will be calculated using 15 factors, including the relative quality of the student’s high school and the crime rate and poverty level of the student’s neighborhood.
I may be the only person having this reaction, but honestly, I think that this is a whole lot of fuss over what is in some ways a nothingburger. Not a complete nothingburger, mind you—there are some genuinely concerning implications—but also a smaller deal than many people are making it out to be.
First, to be perfectly clear, colleges have for many, many years been well aware of the correlation between socioeconomic status and test scores (correlation, not cause and effect). Standardized test scores are always viewed in the context of applicants’ backgrounds, with the understanding that what constitutes a middling score for a wealthy applicant might represent an exceptional achievement from one with access to few resources. The fact that scores are viewed in context this way does not mean that they are completely meaningless, or that the test is somehow “rigged”—it is a key component of the holistic admissions process as well as a straightforward acknowledgement of the fact that the educational landscape in the United States is far from equal.
Traditionally, the school profile—a document providing information such as the size of the graduating class, percent of students attending two- and four-year colleges, average SATs and grades across subjects, etc.—in addition to a high school’s admissions/enrollment history at a particular college, has provided the necessary context for adcoms. What the College Board’s “adversity index” purports to do is to provide a snappier snapshot of applicants’ demographic position by boiling numerous factors (e.g., neighborhood income, percent of students receiving free and reduced lunch) down into a single score. Whether it does so accurately, or fairly, or for what ends, is a separate question. It’s obviously a very important question, and I’ll get to that part a little later. However, in terms of innovation, it is best seen as part of a continuum rather than some sort of radical new foray into social engineering destined to destroy college admissions as a bastion of meritocracy.
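The College Board has not published the actual formula behind the index, but the basic idea of collapsing many demographic factors into a single 1–100 score with a national average of 50 can be illustrated with a toy sketch. Everything below—the factor names, the weights, the use of z-scores and a normal CDF—is invented for illustration, not the CB's method:

```python
from statistics import NormalDist

# Hypothetical factor values for one applicant's school/neighborhood,
# each expressed as a z-score against a national distribution
# (positive = more disadvantage). Names are invented for illustration;
# the College Board has not disclosed its actual factors or weights.
factors = {
    "neighborhood_poverty_rate": 1.2,
    "neighborhood_crime_rate": 0.4,
    "free_reduced_lunch_pct": 0.9,
    "median_family_income": -0.3,   # sign flipped upstream so + = disadvantage
    "housing_vacancy_rate": 0.1,
}

def adversity_score(zscores):
    """Toy composite: average the z-scores, then map the result onto a
    1-100 scale via the normal CDF, so that a nationally average profile
    lands at 50 and higher numbers mean more disadvantage."""
    mean_z = sum(zscores.values()) / len(zscores)
    percentile = NormalDist().cdf(mean_z)          # 0..1
    return max(1, min(100, round(percentile * 100)))

print(adversity_score(factors))
```

The point of the sketch is simply that any such composite is a lossy summary: once fifteen factors are averaged into one number, the individual circumstances behind them are no longer recoverable.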
Although reports indicate that about 50 colleges, including Yale, have been quietly beta testing the program, I have difficulty imagining that it will make a significant difference in the way most applicants are viewed. A college like Yale has its established feeders, schools like Exeter, Harvard-Westlake, and Stuyvesant, whose students it continues to accept in disproportionate numbers and whose demographic data it has no need to plumb. Ditto for other elite high schools that routinely send students to the Ivies.
Where the index might have some use is for applicants from schools that rarely send students to competitive colleges; a student whose academic profile is starkly out of keeping with their index number might signal to a harried, overworked admissions officer that a particular applicant’s file merits a closer look. And indeed, with admissions officers getting inundated with ever more applications (thanks to an increasingly out-of-control system), it is entirely unsurprising that they would welcome a tool for simplification. A single number is much easier to process than multiple documents.
What is unsurprisingly getting oversimplified and misunderstood in some of these discussions is that no applicant would ever be admitted based solely on this type of shorthand (at least it is extraordinarily unlikely, especially at a school like Yale): every other aspect of the application that would normally get taken into account will still get taken into account. But if a college with 40,000 applicants explicitly sets out a goal of increasing first-gen/low-income enrollment—currently the case at many elite colleges—then the index is a useful tool to help point adcoms to potentially desirable candidates. In this sense, the CB is not the main driver, foisting its wares on unsuspecting colleges; it is responding to a larger shift, one effected as much by colleges themselves.
To the critics claiming that wealthy people will suddenly start moving into low-income neighborhoods, or sending their children to underperforming high schools, just to increase a “disadvantage” score: no, they won’t. Rich people are not suddenly going to start yanking their children out of Scarsdale High School or Horace Mann and sending them to public school in Yonkers to get a slight tip in one small metric that has far less impact than, say, a high school’s track record in sending students to a particular college (Want your kid to go to Harvard? Send them to Boston Latin. Princeton? Send them to Lawrenceville.), or whether a student has taken classes at a post-AP level (classes generally offered only by very elite high schools). That is pure fantasy, not least because colleges already consider student achievement in the context of their high schools—as families savvy about college admissions tend to be aware—and there is no indication that the well-off have moved en masse to poor neighborhoods to game the system in this particular way. Besides, savvy, well-off families also tend to recognize that colleges take many factors into account, and that merely attending a high school that posts low average test scores is unlikely to compensate for a host of other considerations.
All that said, there are a number of very concerning aspects to the index, starting with the fact that the College Board is collecting and reporting extensive data about students on their behalf, without permitting them to view that data. Given the CB’s move toward becoming an unofficial data-collection outlet for big tech, that alone should send major warning signals.
I’m more than a little freaked out by the “family stability” metric listed on the sample report. What precisely does that mean? (Alas, there was no way to view the explanation offered on the report.) What percentage of students in a school/neighborhood come from single-parent families? A combination of parental marital status and housing values and housing vacancies in students’ neighborhoods? This really seems like territory where the CB just doesn’t belong. At best, it seems misguided and tone deaf; at worst, it seems to be veering close to social credit-score territory.
Then of course there’s the fact that a single number, even one based on numerous factors, is by definition reductive and cannot help but fail to capture the nuances of many students’ situations. What about a student who lives in a high-income neighborhood and attends a high-achieving school but has to contend with an abusive parent or an out-of-control sibling with a drug problem? What about the student who lives in a poor neighborhood and attends a school with low test scores, but has a stable, loving, educated family that insists on high academic achievement? Presumably, these are scenarios that simply won’t get taken into account.
At colleges with the staffing and resources to provide in-depth review of each application, students who don’t conform to a particular profile of advantage or disadvantage are less likely to get lost in the shuffle (assuming, of course, that the relevant familial information is included in the application, which may or may not be the case), but it is very easy to see how schools where adcoms are already overtaxed could end up over-relying on the numbers.
Furthermore, numbers have an insidious way of becoming ends in themselves (see: Goodhart’s Law). If colleges are eventually ranked on the index, then it’s easy to see the numbers themselves turning into the goal. Again, the kids from super-elite high schools are unlikely to get penalized because they are already on the inside track in so many other ways—it’s not as if Harvard is about to decide not to accept anyone from Andover—but middle-class applicants who don’t know how to work the system might find themselves at an overall disadvantage.
One of the subtler, more illuminating explanations I found for why the CB has decided to roll this program out came from a commenter on an Inside Higher Ed article (see link at top of page), who pointed out that this may be the CB’s response to the fact that an increasing number of schools are allowing applicants to self-report SAT scores, as opposed to paying for multiple score reports. If the CB can “adjust” the reports to include information that is potentially useful to colleges, then schools are more likely to continue requiring applicants to pay the CB for them.
In addition, the sample reports that have been released contain information about schools’ AP participation/scores, but it isn’t clear whether they contain equivalent information regarding the IB. (Since this is the CB, I would assume not.) Could the CB possibly be using the index as one more way to nudge schools into its biggest cash cow, the AP program? I’m shocked, shocked, I tell you!
Of course, what is present in the index is as important as what’s absent, namely information regarding race or ethnicity. Given the current political direction of the country, the index can be seen as the CB’s attempt to get a jump on what it assumes will soon be a post-affirmative action world (despite some critics insisting that the CB is using all these measures as a way of helping colleges consider race without directly appearing to do so). In that regard, the index can also be seen as a ploy to help the CB stay ahead of the curve—to keep it, so to speak, “relevant.”