In the spring of 2015, when the College Board was field testing questions for rSAT, a student made an offhand remark to me that didn’t seem like much at the time but that stuck in my mind. She was a new student who had already taken the SAT twice, and somehow the topic of the Experimental section came up. She’d gotten a Reading section, rSAT-style.
“Omigod,” she said. “It was, like, the hardest thing ever. They had all these questions that asked you for evidence. It was just like the state test. It was horrible.”
My student lived in New Jersey, so the state test she was referring to was the PARCC.
Even then, I had a pretty good inkling of where the College Board was going with the new test, but the significance of her comment didn’t really hit me until a couple of months ago, when states suddenly started switching from the ACT to the SAT. I was poking around the internet, trying to find out more about Colorado’s abrupt and surprising decision to drop the ACT after 15 years, and I came across a couple of sources reporting that not only would rSAT replace the ACT, but it would replace PARCC as well.
That threw me a little bit for a loop. I knew that PARCC was hugely unpopular and that a number of states had backed out of the consortium, but still… something smelled a little funny about the whole thing. Why would states allow PARCC to be replaced by rSAT? They were two completely different tests…right?
I mulled it over for a while, and then something occurred to me: Given that any exam that Colorado administered would have to be aligned with Common Core (or whatever it is that Colorado’s standards are called now), it seemed reasonable to assume that the switch from PARCC to rSAT could only have been approved if the two tests weren’t really that different.
At that point, it made sense to actually look at the PARCC. Like most people in the college-admissions test world, I’d really had no reason to look at it before; state tests were uncharted territory for me.
Luckily, PARCC had recently released a broad selection of 2015 items on its website — more than enough to provide a good sense of what the test is about. After a bit of fruitless hunting around (not the easiest website to navigate!), I managed to locate the eleventh-grade sample ELA questions. When I started looking through them, the overlap with rSAT was striking. Despite some superficial differences in the lengths of the passages and the way the questions were worded, the two tests were unmistakably cousins. Close cousins. I asked a couple of other tutors about the Math portion, and they more or less concurred — not identical, but similar enough.
On one hand, that wasn’t at all surprising. After all, both are products of Common Core, their development overseen by Coleman and Co., so naturally they embody the hallmarks of Coleman’s idiosyncratic (shall we say, heavy-handed and amateurish) approach to analysis of the written word.
On the other hand, it was quite surprising. The PARCC, unquestionably, was developed as a high school exit test; the SAT was a college entrance test. Why should the latter suddenly bear a strong resemblance to the former?
Just as interesting as what the tests contained was what they lacked — or at least what they appeared to lack, based on the sample questions posted on the PARCC website. (And goodness knows, I wouldn’t want to pull a Celia Oyler and incur the wrath of the testing gods.)
Consider, for example, that both rSAT and PARCC:
- Consist of two types of passages: one “literary analysis” passage and several “informational texts” covering science and social-science topics but, apparently, no humanities (art, music, theater).
- Include one passage or paired passage drawn from a U.S. historical document.
- Focus on a very limited number of question types: literal comprehension, vocabulary-in-context, and structure. Remarkably, no actual ELA content knowledge (e.g. rhetorical devices, genres, styles) is tested.
- Rely heavily on two-part “evidence” questions, e.g. “Select the sentence from the passage that supports the answer to Part A” vs. “Which of the following provides the best evidence for the answer to the previous question?” The use of these questions, as well as the questionable definition of “evidence” they entail, is probably the most striking similarity between rSAT and PARCC. It is also, I would argue, the hallmark of a “Common Core test.” Considering the number of Standards, the obsessive focus on this one particular skill is quite significant. But more about that in a little bit.
- Test simple, straightforward skills in bizarrely and unnecessarily convoluted ways in order to compensate for the absence of substance and give the impression of “rigor,” e.g. “which detail in the passage serves the same function as the answer to Part A?”
- Include texts that are relatively dense and contain some advanced vocabulary, but that are fairly straightforward in terms of structure, tone, and point of view: “claim, evidence; claim, evidence,” etc. There is limited use of “they say/I say,” or of the type of sophisticated rhetorical maneuvers (irony, dry humor, wordplay) that tend to appear in actual college-level writing. That absence is a notable departure from the old version of the SAT and, contrary to claims that these exams test “college readiness,” is actually misaligned with college work.
Here I’d like to come back to the use of two-part “evidence” questions. Why focus so intensely on that one question type when there are so many different aspects of reading that make up comprehension?
I think there are a few major reasons:
First, there’s the branding issue. In order to market Common Core effectively, the Standards needed to be boiled down into an easily digestible set of edu-buzzwords, one of the most prominent of which was EVIDENCE. (Listening to proponents of CCSS, you could be forgiven for thinking that not a single teacher in the United States — indeed, no one anywhere — had ever taught students to use evidence to support their arguments prior to 2011.) As a result, it was necessary to craft a test that showed its backers/funders that it was testing whether students could use EVIDENCE. Whether it was actually doing such a thing was beside the point.
As I’ve written about before, it is flat-out impossible to truly test this skill in a multiple-choice format. When students write papers in college, they will be asked to formulate their own arguments and to support them with various pieces of information. While their professors may provide a reading list, students will also be expected to actively seek out sources in libraries, on the Internet, etc., and they themselves will be responsible for judging whether a particular source is valid and for connecting it logically and convincingly to their own, original argument. This skill has only a tangential relationship to even AP-style synthesis essays and almost zero relationship to the ability to recognize whether a particular line from paragraph x in a passage is consistent with main idea y. It is also very much contingent upon the student’s understanding of the field and topic at hand.
So what both the PARCC and rSAT are testing is really not whether students can use evidence the way they’ll be asked to use it in college/the real world, but rather whether they can recognize when two pieces of information are consistent with one another, or whether two statements expressed in different ways convey the same idea. (Incidentally, the answers to many PARCC “evidence” question pairs can actually be determined from the questions alone.)
The problem is that using evidence the way it’s used in the real world involves facts, but facts = rote learning, something everyone agrees should be avoided at all costs.
Besides, requiring students to learn a particular set of facts would be so politically contentious as to be a non-starter (what facts? whose facts? who gets included/excluded? why isn’t xyz group represented…? And so on and so forth, endlessly.)
When you only allow students to refer back to the text and never allow them to make arguments that involve anything beyond describing the words on the page in fanciful ways, you sidestep that persnickety little roadblock.
Nor, incidentally, do you have to hire graders who know enough about a particular set of facts to make reliable judgments about students’ discussions of them. That, of course, would be unmanageable from both a logistical and an economic standpoint. Pretending that skills can be developed in the absence of knowledge (or cheerily acknowledging that knowledge is necessary but then refusing to state what knowledge) is the only way to create a test that can be scaled nationally, cheaply, and quickly. Questions whose answers merely quote from the text are also extraordinarily easy to write and fast to produce. If those questions make up half the test, production time gets a whole lot shorter.
The result, however, is that you never actually get to deal with any ideas that way. You are reduced to stating and re-stating what a text says, in increasingly mind-bending ways, without ever actually arriving at more than a glancing consideration of its significance. If high school classes become dedicated to this type of work, that’s a serious problem: getting to knock around with ideas that are a little bit above you is a big part of getting ready to go to college.
You can’t even do a good old-fashioned rhetorical analysis because you don’t know enough rhetoric to do that type of analysis, and acquiring all that rhetorical terminology would involve “rote learning” and thus be strictly verboten anyway.
The result is a stultifying mish-mash of formal skills that tries to mimic something kinda high level, but that ends up being a big bucket of nonsense.
There is also, I think, a profound mistrust of students baked into these tests. A friend of mine who teaches high school tells me that the administrators at her school have, for several years now, been dogging the teachers with the question “how do you know that they know?” Translation: what data have you collected to prove to the powers that be that your students are progressing appropriately toward college and career readiness? In addition to vaguely mimicking a high-level skill, forcing students to compulsively justify their answers in multiple-choice format gives those powers that be quite a lot of data.
There also seems to be a latent fear that students might be trying to pull one over on their teachers, or on the administration — pretending to understand things when they’re actually just guessing. That, I suspect, is a side effect of too many multiple-choice tests: when students actually write things out, it’s usually a lot clearer what they do and don’t understand. But of course it’s a lot harder to reduce essays to data points. They’re far too messy and subjective.
So the result is to try to pin students down, force them to read in ways that no one would possibly read in real life (oh, the irony!), and repeatedly “prove” that they understand that the text means what it means because it says what it says… There is something almost pathetic about the grasp for certainty. And there’s something a good deal more pathetic about teachers who actually buy into the idea that this type of low-level comprehension exercise is some sort of advanced critical thinking skill that will magically make students “college ready.”
But to return to my original point: one of the most worrisome aspects of the whole discussion about the SAT and the PARCC with regard to the state testing market is that the former is presented as a genuine alternative to the latter. Yes, the SAT is a shorter test; yes, it’s produced by the College Board rather than Pearson (although who knows how much difference there is at this point); yes, it’s paper-based. But it’s really just a different version of the same thing. The College Board is banking on the fact that the SAT name will deter people from asking too many questions, or from noticing that it’s just another shoddy Common Core test. And so far, it seems to be working pretty well.
Extremely interesting, thank you. One thing I have found happening more and more is that I struggle to resist letting my contempt for the test be apparent to students. For the most part, they are struggling with the usual host of pressures and believe the test is measuring something much more elusive than it is. I usually tell them to think of it not as a reading test, but as a “matching game” or “word search.” I assure the students who are terrified that they are not “fast readers” that this is not testing appreciation or comprehension; it consists of a search for a limited number of clues, repeated phrases, or synonyms. While this is effective, on some level it is also dispiriting, even to them, that this banal exercise is given so much importance. What is the point of so many “which of the below is NOT mentioned as an example of…?” or “events I, II, III, and IV happened in what order?” questions? Is this critical reading as it used to be understood, which I think meant thoughtful reading?
I think a lot of long-time tutors feel that way right now. I actually know several highly accomplished veterans who are taking the SAT change as a sign that it’s time to walk away from the test entirely. Some of them have tutored for decades, and they simply can’t stomach the new exam. With the old test, at least there was a sense that preparing students for it actually meant teaching them something important. Kids might not *think* they needed all that weird vocab, but making them learn it was certainly a service to their future professors. And even a lot of so-called “tricks” actually involved developing logical reasoning skills, or introducing them to mind-blowing concepts such as “the fact that x is true in one instance does not mean it’s true in all instances.” Now, there’s a definite sense that preparing for the SAT is, ironically, exactly what its critics have always accused it of being: a banal, pointless exercise that only measures how well you can take the test.
Looking at the PARCC, I was struck by how almost pathetically easy a test it is for a certain type of kid who understands instinctively how test-writers want you to think, but what an utter nightmare it must be for a kid who just isn’t tuned into that wavelength (or whose comprehension skills are genuinely lacking). It must be excruciatingly confusing. Given that the ELA exams also test no actual knowledge, I can barely wrap my mind around the fact that some schools have restructured their entire curriculums around these tests. It’s the most unbelievable waste of class time imaginable; there is literally no other way to prep kids than to drill them endlessly on formal strategies. That’s one hefty glass of Kool-Aid these people have drunk.
The similarities are intentional. David Coleman came to the College Board to help PARCC and to ensure that the Common Core survived the PARCC organization’s incompetence, which was already in evidence in 2012.
Upon his arrival, David Coleman began adjusting the SAT specifications to align with PARCC and the CC standards. The intention was to position the SAT as the PARCC high school exam. Coleman was very transparent about his intentions with College Board executives. Many executives mounted principled objections to his strategy. Those who did were neutralized: shunted into high-paying sinecures or offered huge packages to keep quiet.
The size of the exit packages is verifiable on the College Board’s Form 990s. Coleman spent millions from the College Board’s capital reserves to keep his critics quiet.
Of course the similarities are intentional. I was actually very surprised to learn that there was any attempt made to hide the link between CC and rSAT at all. Even if Coleman has recently tried to backtrack on the connection, the CB seemed, if anything, to be playing it *up*. When he was appointed, CC hadn’t generated quite the level of backlash that exists now, and so the repercussions of connecting rSAT to other CC tests weren’t yet apparent. Given that no one was publicly discussing the enormity of the shifts at the CB or the severing of the traditional relationship with ETS, it seemed pretty apparent that people were getting paid off to keep their mouths shut. But it was only a matter of time before someone principled and disgruntled enough decided that the benefits of coming forward outweighed the risks. Assuming Alfaro is for real, the only question is whether anyone in the mainstream media cares to launch an investigation, or has the attention span and/or sufficient interest in reality to analyze the implications of the CB’s shenanigans.
When David Coleman realized that his position would be untenable should he continue to cheerlead for the Common Core, there was a closing of the ranks. His henchmen made a concerted effort to conceal Coleman’s links to various right-wing interests. His financial support of his co-conspirators (Pimentel, Zimba, et al.) was not allowed to be spoken of. All references to Common Core were redacted from the College Board’s communications and publications.
Now David Coleman operates entirely in the shadows. He knows that if his true motives were revealed, the public would revolt.
Thanks, that’s very consistent with my observations. Good to have confirmation. I’ve noticed that the media has stopped highlighting Coleman’s involvement in the development of CC, and my impression was that he had kind of hidden himself away in an effort to avoid any sort of media engagement. The links between Coleman and CC were played up so hard and for so long, though, that I can’t imagine it’s doing much good (although apparently it’s doing some good, since no one seems to have connected all the PARCC and SBAC fiascos to what’s going on at the CB to the point of spurring action). I’m curious about his links to “right-wing” henchmen, though. My impression was that most of his henchmen actually came from the technocratic Left.
The right-wing henchmen are within the College Board. Of course, he did have a coterie of useful-idiot left-wing technocrats on his staff as well, but those left of center were all purged in Coleman’s Stalinist “staff reduction” shortly after his arrival. Coleman is basically a Log Cabin Republican. But he is shrewd enough to tell anybody what they need to hear if it will help him advance his agenda.