In the spring of 2015, when the College Board was field testing questions for rSAT, a student made an offhand remark to me that didn’t seem like much at the time but that stuck in my mind. She was a new student who had already taken the SAT twice, and somehow the topic of the Experimental section came up. She’d gotten a Reading section, rSAT-style. 

“Omigod,” she said. “It was, like, the hardest thing ever. They had all these questions that asked you for evidence. It was just like the state test. It was horrible.” 

My student lived in New Jersey, so the state test she was referring to was the PARCC. 

Even then, I had a pretty good inkling of where the College Board was going with the new test, but the significance of her comment didn’t really hit me until a couple of months ago, when states suddenly started switching from the ACT to the SAT. I was poking around the internet, trying to find out more about Colorado’s abrupt and surprising decision to drop the ACT after 15 years, and I came across a couple of sources reporting that not only would rSAT replace the ACT, but it would replace PARCC as well.

That threw me for a bit of a loop. I knew that PARCC was hugely unpopular and that a number of states had backed out of the consortium, but still… something smelled a little funny about the whole thing. Why would states allow PARCC to be replaced by rSAT? They were two completely different tests… right?

I mulled it over for a while, and then something occurred to me: Given that any exam that Colorado administered would have to be aligned with Common Core (or whatever it is that Colorado’s standards are called now), it seemed reasonable to assume that the switch from PARCC to rSAT could only have been approved if the two tests weren’t really that different.

At that point, it made sense to actually look at the PARCC. Like most people in the college-admissions test world, I’d really had no reason to look at it before; state tests were uncharted territory for me.

Luckily, PARCC had recently released a broad selection of 2015 items on its website — more than enough to provide a good sense of what the test is about. After a modicum of fruitless hunting around (not the easiest website to navigate!), I managed to locate the eleventh-grade sample ELA questions. When I started looking through them, the overlap with rSAT was striking. Despite some superficial differences in the lengths of the passages and the way the questions were worded, the two tests were definitely cousins. Close cousins. I asked a couple of other tutors about the Math portion, and they more or less concurred — not identical, but similar enough.

On one hand, that wasn’t at all surprising. After all, both are products of Common Core, their development overseen by Coleman and Co. As such, it’s only natural that they embody the hallmarks of Coleman’s, shall we say, heavy-handed, amateurish, and idiosyncratic approach to analyzing the written word.

On the other hand, it was quite surprising. The PARCC, unquestionably, was developed as a high school exit test; the SAT was a college entrance test. Why should the latter suddenly bear a strong resemblance to the former? 

Just as interesting as what the tests contained was what they lacked — or at least what they appeared to lack, based on the sample questions posted on the PARCC website. (And goodness knows, I wouldn’t want to pull a Celia Oyler and incur the wrath of the testing gods.)

Consider, for example, that both rSAT and PARCC:

  • Consist of two types of passages: one “literary analysis” passage and several “informational texts” covering science and social-science topics but, apparently, no humanities (art, music, theater).
  • Include one passage or paired passage drawn from a U.S. historical document.
  • Focus on a very limited number of question types: literal comprehension, vocabulary-in-context, and structure. Remarkably, no actual ELA content knowledge (e.g. rhetorical devices, genres, styles) is tested. 
  • Rely heavily on two-part “evidence” questions, e.g. “Select the answer from the passage that supports the answer to Part A” vs. “Which of the following provides the best evidence for the answer to the previous question?” The use of these questions, as well as the questionable definition of “evidence” they entail, is probably the most striking similarity between rSAT and PARCC. It is also, I would argue, the hallmark of a “Common Core test.” Considering the number of Standards, the obsessive focus on this one particular skill is quite significant. But more about that in a little bit. 
  • Test simple, straightforward skills in bizarrely and unnecessarily convoluted ways in order to compensate for the absence of substance and give the impression of “rigor,” e.g. “which detail in the passage serves the same function as the answer to Part A?”
  • Include texts that are relatively dense and contain some advanced vocabulary, but that are fairly straightforward in terms of structure, tone, and point of view: “claim, evidence; claim, evidence,” etc. There is limited use of “they say/I say,” or of the type of sophisticated rhetorical maneuvers (irony, dry humor, wordplay) that tend to appear in actual college-level writing. That absence is a notable departure from the old version of the SAT and, contrary to claims that these exams test “college readiness,” is actually misaligned with college work.

Here I’d like to come back to the use of two-part “evidence” questions. Why focus so intensely on that one question type when there are so many different aspects of reading that make up comprehension?

I think there are a few major reasons:

First, there’s the branding issue. In order to market Common Core effectively, the Standards needed to be boiled down into an easily digestible set of edu-buzzwords, one of the most prominent of which was EVIDENCE. (Listening to proponents of CCSS, you could be forgiven for thinking that not a single teacher in the United States — indeed, no one anywhere — had ever taught students to use evidence to support their arguments prior to 2011.) As a result, it was necessary to craft a test that showed its backers/funders that it was testing whether students could use EVIDENCE. Whether it was actually doing such a thing was beside the point. 

As I’ve written about before, it is flat-out impossible to truly test this skill in a multiple-choice format. When students write papers in college, they will be asked to formulate their own arguments and to support them with various pieces of information. While their professors may provide a reading list, students will also be expected to actively seek out sources in libraries, on the Internet, etc., and they themselves will be responsible for judging whether a particular source is valid and for connecting it logically and convincingly to their own, original argument. This skill has only a tangential relationship to even AP-style synthesis essays and almost zero relationship to the ability to recognize whether a particular line from paragraph x in a passage is consistent with main idea y. It is also very much contingent upon the student’s understanding of the field and topic at hand.

So what both the PARCC and rSAT are testing is really not whether students can use evidence the way they’ll be asked to use it in college/the real world, but rather whether they can recognize when two pieces of information are consistent with one another, or whether two statements worded in different ways express the same idea. (Incidentally, the answers to many PARCC “evidence” question pairs can actually be determined from the questions alone.)

The problem is that using evidence the way it’s used in the real world involves facts, but facts = rote learning, something everyone agrees should be avoided at all costs. 

Besides, requiring students to learn a particular set of facts would be so politically contentious as to be a non-starter (what facts? whose facts? who gets included/excluded? why isn’t xyz group represented…? And so on and so forth, endlessly.)

When you only allow students to refer back to the text and never allow them to make arguments that involve anything beyond describing the words on the page in fanciful ways, you sidestep that persnickety little roadblock.

Nor, incidentally, do you have to hire graders who know enough about a particular set of facts to make reliable judgments about students’ discussions of them. That, of course, would be unmanageable from both a logistical and an economic standpoint. Pretending that skills can be developed in the absence of knowledge (or cheerily acknowledging that knowledge is necessary but then refusing to state what knowledge) is the only way to create a test that can be scaled nationally, cheaply, and quickly. Questions whose answers merely quote from the text are also extraordinarily easy to write and fast to produce. If those questions make up half the test, production time gets a whole lot shorter. 

The result, however, is that you never actually get to deal with any ideas that way. You are reduced to stating and re-stating what a text says, in increasingly mind-bending ways, without ever actually arriving at more than a glancing consideration of its significance. If high school classes become dedicated to this type of work, that’s a serious problem: getting to knock around with ideas that are a little bit above you is a big part of getting ready to go to college. 

You can’t even do a good old-fashioned rhetorical analysis because you don’t know enough rhetoric to do that type of analysis, and acquiring all that rhetorical terminology would involve “rote learning” and thus be strictly verboten anyway.

The result is a stultifying mish-mash of formal skills that tries to mimic something kinda high level, but that ends up being a big bucket of nonsense. 

There is also, I think, a profound mistrust of students baked into these tests. A friend of mine who teaches high school tells me that the administrators at her school have, for several years now, been dogging the teachers with the question “how do you know that they know?” Translation: what data have you collected to prove to the powers that be that your students are appropriately progressing toward college and career readiness? In addition to vaguely mimicking a high-level skill, forcing students to compulsively justify their answers in multiple-choice format gives those powers that be quite a lot of data.

There also seems to be a latent fear that students might be trying to pull one over on their teachers, on the administration — pretending to understand things when they’re actually just guessing. That, I suspect, is a side effect of too many multiple-choice tests: when students actually write things out, it’s usually a lot clearer what they do and don’t understand. But of course it’s a lot harder to reduce essays to data points. They’re far too messy and subjective.

So the result is to try to pin students down, force them to read in ways that no one would possibly read in real life (oh, the irony!), and repeatedly “prove” that they understand that the text means what it means because it says what it says… There is something almost pathetic about the grasp for certainty. And there’s something a good deal more pathetic about teachers who actually buy into the idea that this type of low-level comprehension exercise is some sort of advanced critical thinking skill that will magically make students “college ready.” 

But to return to my original point: one of the most worrisome aspects of the whole discussion about the SAT and the PARCC with regard to the state testing market is that the former is presented as a genuine alternative to the latter. Yes, the SAT is a shorter test; yes, it’s produced by the College Board rather than Pearson (although who knows how much difference there is at this point); yes, it’s paper-based. But it’s really just a different version of the same thing. The College Board is banking on the fact that the SAT name will deter people from asking too many questions, or from noticing that it’s just another shoddy Common Core test. And so far, that seems to be working pretty well.