As I’ve written about recently, the fact that the SAT has been altered from a predictive, college-aligned test to a test of skills associated with high school is one of the most overlooked changes in the discussion surrounding the new exam. Along with the elimination of ETS from the test-writing process, it signals just how radically the exam has been overhauled.
Although most discussions of the rSAT refer to the fact that the test is intended to be “more aligned with what students are doing in school,” the reality is that this alignment is an almost entirely new development rather than a “bringing back” of sorts. The implication — that the SAT was once intended to be aligned with a high school curriculum but has drifted away from that goal — is essentially false.
In reality, the SAT was created as a tool to allow elite colleges to identify students from non-prep school backgrounds who were performing at a level far above what their circumstances would predict. Because the curriculum at the average public high school in, say, the middle of Ohio, differed significantly from that of an Andover or a Choate, the test was deliberately detached from a specific curriculum.
Whatever their shortcomings, the creators of the original test were in that particular regard a good deal more clear-sighted than their modern counterparts in recognizing the disadvantages a curriculum-aligned test would create.
It is true that in recent years the SAT has moved away from the type of extreme abstraction that originally characterized the test, and taken steps toward more real-world applications (for example, the replacement of analogies with sentence completions). At the same time, though, the things it tested — at least on the verbal side — were largely aligned with the sorts of skills that freshmen at the most selective colleges could reasonably be expected to exhibit. The test was written to be challenging even to students at the very top of the pile. Yet it also gave students reading at a level beyond what their grades would suggest a chance to show what they actually knew. And as grade inflation increased, it gave admissions officers a way of putting all those A’s in context.
The SAT’s shift away from college-level reading and vocabulary is, however, entirely unsurprising. Despite the “college and career readiness” rhetoric, it’s hard to argue that a test that focuses on everyday words like raise (real example) is truly as challenging as one that tests actual — not to mention common — college-level vocabulary such as paradigm, volatile, and scrutinize.
But given the pressure to close the achievement gap and the amount of money expended by wealthy parents on tutoring, the difficulty of the old exam was becoming a serious liability. Colleges were pushing to increase enrollments of first-generation and minority students, a goal the test complicated given that students in those groups tended to score well below their better-off peers. As a result, increasing numbers of schools were going test-optional in order to admit classes with the appropriate demographic mix without suffering in the USNWR rankings. The College Board had to do something to induce more colleges to keep requiring the test.
Then, of course, there was the competition with the ACT, which in 2012 surpassed the SAT as the most popular college admissions test.
Much has been made of that last point, I think rightly so. But I also think that only half the story is typically taken into account — and the other half, always lurking between the lines, seems to me the more interesting half.
When people cite the ACT as the most popular college admissions test, they’re probably thinking of the students who are intending to apply to college, and who therefore sign up to take the ACT at a designated location on a Saturday morning. Plenty of students do that, of course; however, in a (now-dwindling) number of states, students must take the ACT in order to graduate from high school. Whether or not the ACT was developed for that purpose, it is effectively treated as a state test that students can conveniently also use to apply to college.
The fact that the SAT has now entered into fierce competition with the ACT for the state-test market has of course been widely reported, as has the fact that states’ adoption of the SAT coincided with the release of the new exam.
In general, however, those two situations tend to be presented as events that are only loosely related. Everyone knows that the SAT is fighting for market share with the ACT. Oddly enough, though, no one seems to be suggesting a cause-and-effect relationship — namely, that the SAT was redesigned in part so that it could fight for a share of the state test market. Those state-test takers were the reason the ACT had surpassed the SAT; targeting them was thus the fastest and easiest way to recapture market share.
Perhaps the implication was so obvious that it seemed unnecessary to spell out, but I think there’s more going on here than is immediately apparent.
In order to compete effectively in the state test market, the SAT had to be reworked into a high-school-aligned test. Indeed, many of the passages on the Critical Reading section of the old SAT were written at such a high level that administering the test to high school students as a graduation requirement would have been positively unthinkable.
This also explains why the ELA “college readiness” benchmarks were lowered so dramatically. The reduction was necessary to allow a sufficient number of students who would not normally even take the SAT to be deemed “college ready” — and, in some states, to be guaranteed the right to skip remedial classes in college, thus saving their institutions (as well as some taxpayers) substantial amounts of money. For cash-strapped public institutions, that is a very attractive change. Professors, or more likely part-time, underpaid adjuncts, will have no choice but to make do with students who are not in fact ready for college-level work. But if the goal is simply to boost college graduation rates without regard for what a college diploma actually means, then that is a perfectly logical decision.
The other piece of the puzzle involves… you guessed it, Common Core. Facing increasing backlash over Common Core tests such as the PARCC, states are understandably looking for an alternative test that meets state standards. Although some states appear to be abandoning Common Core, they often simply retain the standards under a new name.
Even if David Coleman is now attempting to distance himself from the mess he created, states are still more or less locked into finding tests aligned to Common Core, or whatever the actual standards may now be called. The SAT is the obvious solution. In addition to being much, much shorter than PARCC (just under 3 hours vs. 11 hours), it already bears the gold stamp of alignment. Even though Coleman has begun to backtrack, insisting that the new SAT merely reflects the skills most important for advanced high school and early college students, the association between the test and Common Core is pretty much set.
The ACT, in contrast, admitted earlier and more publicly that it is not perfectly aligned with Common Core — perhaps not the best move given the current climate.
In reality, it is of course questionable whether the SAT was ever intended to be in perfect alignment with Common Core: CCSS.ELA-LITERACY.RI.11-12.2, for example, indicates that students should be able to “Determine two or more central ideas of a text and analyze their development over the course of the text, including how they interact and build on one another to provide a complex analysis.” Perhaps I’m missing something, but that particular skill does not appear to be tested on rSAT.
In any case, the result is that schools will effectively be swapping one Common Core test of questionable validity for another. In fact, rSAT bears an uncanny resemblance to the much-loathed PARCC, at least on the ELA side. More of a resemblance, incidentally, than it bears to the old SAT.
Because the SAT is still branded as the SAT, however, it is likely to be perceived as less controversial. I would imagine that most parents are unaware of the extent to which the test has been diluted. If anything, backlash is more likely to come from the decision to use a college-admissions test for high school graduation.
Furthermore, the high school market brings with it another advantage: while individual colleges can choose to opt out of the SAT, individual high schools cannot. If the mandate is implemented at the state level, then thousands of students are guaranteed to take the test. The College Board gets paid for what is essentially a captive market.
Great post, Erica!
I came here from the NYTimes article on how states are shifting to using the SAT/ACT in lieu of government-sponsored exams. Your analysis of the College Board’s motivation for redesigning the test is highly prescient: they are after the high school test market. I’ve been following the whole redesign brouhaha and remember the bit about state testing from a year ago. But I am one of those who conflated that issue with the more general one of the ACT surpassing the SAT’s market share. Could it be, as you mentioned, that the “dumbing down” of the reading section is to make the test more palatable as a high school test? And that the removal of “obscure” vocabulary, which seems noble, is but a smoke screen?
So thanks for bringing the discussion back to the College Board’s original motivations. It makes the CB’s PR offensive seem all the more offensive.
Could it be, as you mentioned, that the “dumbing down” of the reading section is to make the test more palatable as a high school test? And that the removal of “obscure” vocabulary, which seems noble, is but a smoke screen?
Yes, I think that’s exactly it. The old SAT Critical Reading section is far too hard for a state test; the backlash would have been unbelievable. There was absolutely no way to make rSAT fly in the state testing market without getting rid of all the difficult vocab and more abstract humanities/science pieces.
Can you imagine putting something like Linda Nochlin’s “Why Have There Been No Great Women Artists?” (https://deyoung.famsf.org/files/whynogreatwomenartists_4.pdf), which appeared on the October ’09 exam, on a state test given to kids not even intending to go to college? The passage was modified considerably for the SAT, but it’s still at an advanced college level. There was no other way to get into the state-test market than to take the reading down a good five, maybe ten, notches.
I only made the connection between the SAT and the state tests recently, and by chance. I was looking into the College Board’s bid to replace the ACT in Colorado, and I kept coming across mentions of the fact that the SAT was being proposed as a replacement for the PARCC. That seemed awfully strange, but then when I started looking at PARCC questions, things started to make more sense.
Ah, yes. The “Pablita” essay, as I affectionately dubbed it, was reserved for the end of my “2100 SAT” class. Only a few of these super bright kids really got it. Totally inappropriate (and potentially not very useful statistically) for state tests. That’s what I find interesting. The media is in lockstep over the idea that the reading on the new SAT is unequivocally more difficult. Sure, the passages tend to be longer, and scientific papers by Watson & Crick are hardly Sunday reading, but Nochlin’s essay is on a totally different level. That level of style, complexity of ideas, and advanced vocabulary is simply not found on the new SAT. And the distractors on the new test are a mere fly compared to the swarm of mosquitoes on the old test.
I think it is superficially easy to say the new SAT reading is harder, but few, if any, of those writing about the SAT in the Atlantic and so forth are SAT experts themselves. Interestingly, I don’t think this is what the College Board had in mind at all in redesigning the test. They wanted everyone to say how the reading section had become easier. Once again, the College Board seems out of step with how it is perceived. Speaking of which, I had an op-edish piece published today on Synapse discussing this very issue: https://medium.com/synapse/has-cheating-doomed-the-new-sat-e4aefd23b616#.vw1hy0wsh
Re: the Nochlin essay, yes, only the very strongest readers “got” it. But that was the point: a kid who could read that without a problem was probably a kid who could be thrown into a humanities class at pretty much any top college and not struggle to understand the reading. That’s why the test was useful to admissions committees. It wasn’t necessarily that kids who didn’t do well on the test couldn’t be successful, just that it was a pretty safe bet that a kid who did well on the test would be able to keep up with the work.
I don’t actually agree that the College Board wanted people to think the test was easier. They chose the (buzz)word “relevant” for a reason — I think they wanted people to think the new test was just as rigorous as the old one, if not more so, but that it somehow better reflected what went on in high school classrooms. The fact that “relevant” was actually a euphemism for “easier” is something that simply wouldn’t occur to 99% of people (especially high school kids, who, having never had to study “SAT vocabulary,” would most likely not know what a euphemism was or be able to connect the concept to what the College Board was doing). The new SAT reading is certainly more concentrated and more tedious, but I’m not sure how far into the test a well-read person would have to get before noticing that the passages are considerably more concrete and straightforward.