The new administration, Common Core, and the new SAT

Reuters’ Renée Dudley has come out with yet another exposé about the continuing mess at the College Board. (Hint: Coleman’s “beautiful vision” isn’t turning out to be all that attractive.)

This time around: what will happen to the new supposedly Common Core-aligned SAT if Common Core disappears under the incoming, purportedly anti-Core presidential administration? 

As Dudley writes:

The Core’s English Language Arts standards call on students to grapple with important readings, including hallowed U.S. documents such as the Declaration of Independence and works of American literature. Coleman’s redesigned SAT embraced the same concept. The Core’s reading standards “focus on students’ ability to read carefully and grasp information … based on evidence in the text” – a pillar of the new SAT. And the Core’s math standards call for “greater focus on fewer topics” – another principle echoed in Coleman’s new SAT.

Former College Board vice president [Hal] Higginbotham was among the first to raise concerns about hitching the SAT’s future to the Common Core. 

In his February 2013 response to Coleman’s “beautiful vision,” Higginbotham noted that some states wouldn’t begin implementing the learning standards until the 2014-2015 school year, the same time period in which Coleman wanted to launch the redesigned SAT. It would take years for teachers and students to get fully up to speed on the new curriculum, he and others argued.

“That circumstance leads me to wonder whether all students will have arrived at the starting line at the same time and whether the playing field for them will be level,” Higginbotham wrote in his memo to Coleman. Some students might be “more comfortable and competent than others in what will be presented” on a test aligned with the Common Core, he wrote.

As a consequence, a Common Core-based SAT “will inadvertently favor students from those geographies that have made the most progress” with the standards, Higginbotham wrote. Such a situation “raises fundamental questions of fairness and equity.”

and later: 

It’s unclear how Trump’s election – and his choice of a Common Core opponent for secretary of education – might affect the SAT and the College Board. Coleman hasn’t spoken publicly about the president-elect’s views.

I’ve followed Dudley’s series of articles on the College Board with great interest, and for the most part, I think she’s done a very valuable service in revealing some of the more serious problems plaguing the new exam — problems that include the recycling of recent exams so that students received tests they had already taken, the leaking of test forms before administration, and the inclusion of items that did not meet the specifications set out by the College Board.

In this case, however, Dudley’s reporting inadvertently (I assume) encourages some fundamental misunderstandings about Common Core, what it actually involves in terms of curriculum, and how it relates to the redesigned SAT. 

A few key points here.

First, with regard to the idea that Common Core could be uniformly rescinded: the federal government’s role in CCSS is limited, at least in terms of imposing the standards. The Core was adopted by individual states, and individual states will decide whether to retain or abandon the Standards (or pretend to abandon them while renaming them State Standards).

To be fair, Dudley does mention that CCSS was adopted on a state-by-state basis; her concern is that anti-Core sentiment at the top may translate into more states dropping the Standards. 

That, however, brings me to my second point. As Diane Ravitch points out, the DOE may be effectively outsourced to Jeb Bush and Co., major proponents of Common Core. Coleman even released an announcement *praising* Betsy DeVos’s appointment as Secretary of Education.

Despite nominal political divisions, all of these people are effectively on the same side, at least where charters, school “reform” (privatization), school choice, etc. are involved. There may be degrees of disagreement over, say, the value of vouchers or the accreditation of for-profit vs. non-profit charters, but they are basically ideologically aligned. 

As Steven Singer has written (link also courtesy of Ravitch), DeVos, who has claimed to be opposed to the Core:

[Is] a board member of Jeb Bush’s pro-Common Core think tank, Foundation for Excellence in Education, where she hangs out with prominent Democratic education reformers like Bill Gates and Eli Broad…

She founded, funds and serves on the board of the Great Lakes Education Project (GLEP), an organization dedicated to the implementation and maintenance of Common Core…

She’s even spent millions lobbying politicians in her home state of Michigan asking them NOT to repeal Common Core…

Next time, Dudley might want to take a piece of edu-speak to heart and “dig deep” before taking anyone in the president-elect’s circle literally. 

Third, the notion that schools can somehow teach a Common Core “curriculum,” and that students who have not used that curriculum (at least on the verbal side) will be at a significant disadvantage, reveals the extent to which popular understanding and coverage of the Core are muddled.

To reiterate: the redesigned SAT does not test any specific body of knowledge related to English, nor does the Core require much concrete knowledge at all, only vague formal skills (comparing and contrasting, identifying main ideas, etc.) whose mastery largely depends on students’ knowledge of the subject at hand.

In the eleventh-grade standards, for instance, U.S. historical documents are provided as examples — Madison’s Federalist No. 10 is cited as a source for analyzing how an author “uses and refines the meaning of a key term or terms over the course of a text” — but the text itself is not actually required reading.

While a handful of documents are mentioned by name (The Declaration of Independence, the Preamble to the Constitution, the Bill of Rights, and Lincoln’s Second Inaugural Address), the primary directive is to analyze “seminal texts” and seventeenth-, eighteenth-, and nineteenth-century foundational U.S. documents of historical and literary significance. (http://www.corestandards.org/ELA-Literacy/RI/11-12/)

As for the new SAT, the majority of the Reading questions on that exam are effectively designed to test whether students understand that texts say what they say because they say it — in other words, comprehension. 

The questions are phrased in a byzantine manner, to be sure, but that is primarily to give the illusion that they are testing skills more sophisticated than the ones they are actually testing (and far less sophisticated than those tested on the old SAT). 

The combination of vague standards and quasi-random selection of historical passages for the exam means that the best-prepared students are those who have prior knowledge of the passages in question.

But because the College Board does not publish a comprehensive list of documents, movements, individuals, etc. with which students should be familiar (that would cross the line from “standards” to “content”), preparation for that portion of the exam largely depends on what students happen to have covered in history class — which in turn depends on individual schools, even individual teachers. And that is a matter of chance, on many levels.

Leveling the playing field? Hardly. 

That’s the fundamental problem with the coy, standards-aren’t-curriculum-but-they-sort-of-are game the College Board is trying to play. Students’ ability to employ skills such as analyzing language, identifying main ideas, or evaluating sources is always to some extent dependent on their knowledge. The unspoken assumption of the Core seems to be that students will of course be learning formal skills in the context of a well-structured, coherent curriculum, but that’s often not at all how things work in practice.

If it is never made clear what specific content students must master, and teachers are trained to focus primarily on formal skills, students probably won’t acquire the knowledge they need to apply the formal skills in any meaningful way.

Failure to understand that means any coherent conversation about the problems with the Core is a non-starter. 

As for the relationship between student performance on the Verbal portion of the SAT and access to a Common-Core-aligned curriculum … Anyone who thinks that a student whose English classes have been devoted to endlessly reiterating the importance of using “evidence” — that is, citing from a text — to “prove” that a book says what it says will necessarily be better prepared for the SAT than a student who has learned something of substance really does not understand the issues at play here at all.

The SAT and zombies

The New York Times op-ed columnist Paul Krugman often talks about zombie ideas – ideas that are unsupported by any evidence but that continue to linger on in the mainstream, where they are kept alive by Very Serious People who should really know better, but who collectively choose to bury their heads in the sand because it suits their needs to do so.

As far as the SAT is concerned, I would like to nominate two myths in particular for zombie status:

1) Arcane vocabulary

As I’ve pointed out countless times before (hey, someone has to keep saying it), virtually all of the supposedly obscure vocabulary tested on the old SAT was in fact the type of moderately sophisticated and relatively common language found in the New York Times.

As I’ve also pointed out before, this is a misconception that could be clearly rectified by anyone willing to simply look at a test, but alas, people who hold very strong convictions are wont to reject or ignore any evidence to the contrary. Call this Exhibit A for confirmation bias. 

2) The guessing penalty

To be clear, there is no automatic correlation between guessing and answering questions correctly vs. incorrectly. A student can guess wildly and still get a question right, or answer with absolute certainty and get it wrong. In theory, it was possible to guess on every single question of the old SAT and still receive a perfect score; likewise, a student could conceivably answer every question confidently without getting a single one right.

The quarter-point deduction for wrong answers on the old SAT was designed as a counterbalance to prevent students from receiving scores that did not reflect their knowledge, and to prevent strategic guessers from exploiting the structure of the test to artificially inflate their scores (as can now be done on both the new SAT and the ACT).
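As a concrete illustration of why the deduction neutralized blind guessing, here is a minimal sketch of the expected-value arithmetic, assuming the old test’s five answer choices and a raw score of +1 per correct answer:

```python
def expected_raw_change(n_choices: int, penalty: float) -> float:
    """Expected raw-score change from a single blind guess."""
    p_right = 1 / n_choices
    return p_right * 1 + (1 - p_right) * (-penalty)

# Old SAT: five choices, 1/4 point deducted per wrong answer.
print(expected_raw_change(5, 0.25))  # 0.0 -> blind guessing gains nothing on average

# New SAT: four choices, no deduction.
print(expected_raw_change(4, 0.0))   # 0.25 -> every blind guess earns points on average
```

In other words, the deduction never punished guessing as such; it simply made random bubbling a break-even proposition rather than a profitable one.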

At least one of these two myths, though, seems to make an appearance in virtually every article discussing the SAT, regardless of how valid the article’s other points are.

For example, in a recent discussion of this year’s slight drop in SAT scores (old test), Nick Anderson of The Washington Post states that “The College Board jettisoned much of the old test’s arcane vocabulary questions, dropped the penalty for guessing and made the essay optional” – a sentence that remarkably contains not just one but two SAT words! 

And in her otherwise excellent Reuters article on the College Board’s failure to ensure that SAT math questions conformed to the test specifications, Renée Dudley makes several references to the “obscure” vocabulary on the old test. Just for grins, I went through her article looking closely at the choice of vocabulary and found nearly a dozen “SAT words” (including some real faves like prescient and succinct).

She also notes that “The new test contains no penalty for guessing wrong, and the College Board encourages students to answer every question.”

As I read Anderson and Dudley’s articles, it occurred to me that the inclusion of these zombie ideas has actually become a sort of rhetorical tic, one that anyone writing about the changes to the SAT is effectively obligated to mention. 

Obviously, these references involve two of the biggest changes to the test and can hardly be avoided, but I think that something more than just that is going on here. 

Consider, for example, what isn’t said: although it is sometimes stated that rSAT math problems are intended to have more of a “real world” basis, the fact that geometry has been almost entirely removed from the exam is almost never explicitly mentioned.

In addition, the kind of disparaging language used to describe SAT vocabulary is notably absent when it comes to math. I have yet to encounter any piece of writing in which geometry was dismissed as an “obscure” subject that lacked any relevance to (c’mon, say it with me) “college and career readiness.” Nor does one regularly read articles sympathetic to students who whine that they’ll never actually use the Pythagorean Theorem for anything outside geometry class.

Why? Because depicting a STEM subject – any STEM subject – that way would be taboo, given the current climate. Even if the College Board has decided that geometry isn’t one of the “skills that matter most,” the virtual elimination of that subject from the test is a matter that must be pussyfooted around. 

On a related note, the arcane vs. relevant discussion also plays to fears that students will be insufficiently prepared to compete in the 21st century economy. The goal in emphasizing “relevant” vocabulary is to provide reassurance that the students won’t fall behind; that the College Board can now be trusted to ensure they are prepared for the real world. 

At the same time, this is essentially a rhetorical sleight of hand designed to disparage the humanities without appearing too obviously to do so – a euphemism for people who do not know what euphemisms are because, of course, such words have been deemed irrelevant, and why bother to learn things that aren’t relevant?

The unspoken implication is that acquiring a genuinely rich, adult-level vocabulary is not really an important part of education; that it is possible to be prepared for college-level reading equipped with only middle school-level words; and that it is possible to develop “high level critical thinking skills” without having a commensurate level of vocabulary at one’s disposal. In short, that it is possible to be educated without being educated.

That is of course not possible, but it provides a comforting fantasy.

Call this the respectability politics of anti-intellectualism – a way of elevating ignorance to the level of knowledge by painting knowledge not as something overtly bad but as something merely irrelevant. That is a much subtler and more innocuous-sounding construction, and thus a far more insidious one.

As for the “guessing penalty” myth… This phrase is in part designed to reinforce a narrative of victimization. Its goal is to elicit pity for the poor, under-confident students whose scores did not reflect what they knew because they were just too intimidated to bring themselves to pick (C), even if they were almost sure it was the answer. 

Framing things in terms of guesses rather than wrong answers makes it much easier to evoke sympathy for these students. After all, why should anyone – especially a member of an already oppressed group – be punished for guessing?

The conflation of guessing and punishment also helps perpetuate a central American myth about education, namely that more confidence = higher achievement. By that logic, it is assumed that students (sometimes implicitly but often explicitly understood as female, underrepresented minority, and first-generation, low-income) would perform better if only they knew they wouldn’t lose additional points for taking a risk. If these students felt more confident, so the argument goes, their scores would improve as well.

In reality, however, there is often an inverse relationship between confidence and knowledge: if anything, the most confident students tend to be the ones who least understand what they’re up against. (True story: the only student who ever told me he was going to answer every question right was scoring in the high 300s.) Helping these students feel more confident does nothing to increase their knowledge and can actually cause them to overestimate their abilities. In fact, when students begin to acquire more knowledge and obtain a more realistic understanding of where they stand, it is common for their confidence to decrease.

The really interesting part about the phrase “guessing penalty,” however, is that it can also be understood in another way – one that directly contradicts the interpretation described above.

An alternate, perhaps more charitable, interpretation of this phrase is that students were formerly penalized for guessing too much. Not realizing that they would lose an extra quarter-point for wrong answers, they would try to answer every question, including ones they had no idea how to do, and lose many more points than necessary.

Understood this way, the term “guessing penalty” refers to the fact that the scoring system made it almost impossible for students to wild-guess their way to a high score. I suspect that this was the original meaning of the term. (As a side note, I can’t help but wonder: when people argued for the elimination of the quarter-point penalty, did they realize that they were actually arguing in favor of making the SAT easier to game?)

According to this view, students who cannot afford tutors or classes to teach them “tricks” about which questions to skip cannot possibly compete with their more privileged peers. Here again, the obvious goal is to frame the issue in terms of equity.

At this point, one might observe a contradiction: students on one hand were described as being so cowed by the thought of losing ¼ of a point that they could not even bring themselves to guess, and yet they were on occasion also presented as being so oblivious to that penalty that they tried to answer every question. 

But back to the subject at hand.

Another reason I suspect the socio-economic argument against the “guessing penalty” has so much traction is that it would seem to be backed up by commonsense reality.

While plenty of students managed to figure out the benefits of skipping sans coaching, it is also true that a certain type of student could benefit significantly from some help in that department. Given two students with the same level of foundational knowledge, starting scores, and ability to integrate new information, the one with the tutor would typically be at an advantage. That’s pretty hard to dispute.

Whether this particular type of help is inherently more problematic than other types of help – help that more privileged students will continue to receive, quarter-point penalty or no quarter-point penalty – is, however, subject to debate.

Based on my experience, I would argue that the quarter-point deduction in fact made the old SAT a harder test to tutor overall than it would have been otherwise, and far less vulnerable to the kind of simple tricks and strategies that mid-range students can, to some extent, use on both the new SAT and the ACT.

The reality is that teaching students to skip questions on the old SAT was not always such a straightforward process; in some cases, it was a downright nightmare. It was only really effective when students had a good sense of which questions they were likely to answer incorrectly – that is, when the only questions they consistently got wrong were the ones they had difficulty answering. Unfortunately, this was usually only the case for about the top 10-15% of students.

In contrast, trying to help a student who was consistently both confident and wrong figure out which and how many questions to skip was often an exercise in futility. Because such students often didn’t know what they didn’t know, and had a corresponding tendency to overestimate their knowledge, there was no clear correlation between how they perceived themselves to be doing and how they were actually doing. This was most problematic on the reading section, where easy and hard questions were intermingled; there was no way to tell them, for example, to focus on the first twenty questions. 

When students’ knowledge was really spotty, it was difficult to determine whether they should even be encouraged to skip more than a few questions on the entire test because there was absolutely no guarantee they’d get enough of the questions they did answer right to save their score from being a complete disaster. And it was also necessary to be careful when discussing which question types to avoid because if students came across one such question phrased in an unfamiliar way, they might not recognize it as something to avoid. 

As a tutor, I came to loathe those situations because they forced me to treat the test as a cheap guessing game, particularly if the students were short-term. Eventually, I stopped tutoring people in that situation altogether because things were so hit-or-miss. Often, their scores did not improve at all, and sometimes they even declined.

In addition, some students flat-out refused to even try skipping, regardless of how much I begged/pleaded with/cajoled them. I had students who repeatedly promised me they would try skipping some questions on their next practice test and then answered every question anyway, every time. I never even managed to figure out how many questions, if any, they should skip, and so I couldn’t advise them.

At the opposite extreme, I had students who knew – knew – that they could skip at most one or two questions suddenly freak out on the real test and skip seven.

The point here is that no matter how much tutoring they had received, and no matter how many thousands of dollars their parents had paid, the kids were the ones who ultimately had to self-assess in the moment and make the decisions about what they likely could and could not answer. Sometimes they stuck to the plan, and sometimes they panicked or got distracted by the kid sitting in front of them tapping his pencil and spontaneously threw out everything we’d discussed. No one could do it for them. And if their assessments were inaccurate and they messed up, their score inevitably took a real hit. The limits of tutoring were exposed in a very blatant way.

One last point:

On top of everything I’ve discussed so far, there is also the issue of which groups of students get compared in discussions about equity. When it comes to test-prep, there is a foundational level below which strategy-based tutoring is largely ineffective. If we’re talking about the most profoundly disadvantaged students, then it’s unlikely the kind of classes or tutoring that are generally blamed for the score gap would bring these students up to anywhere remotely close to the range of their middle-class peers.

Yes, certain individual students might derive considerable benefit, but on the whole, the gains would probably be quite small. The amount of intervention needed to truly close the gap would be staggering, and it would have to start long before eleventh grade. But that’s a deep systemic issue that goes far beyond the SAT, and thus it’s easier to simply make superficial changes to the test.

I suspect – although I do not have any hard evidence to back this up – that the effects of tutoring are felt most strongly somewhere in the middle: between say, the lower-middle class student and the upper-middle class student who attend similarly good schools, take similar classes, and have similar skills and motivation levels – students who stand to benefit more or less equally from tutoring. If the former cannot even afford to take a class while the latter meets with her $150/hr. private tutor twice a week for six months, there’s a pretty good chance the difference will show up in their scores.

This is of course still a problem, but it’s a somewhat different problem than the one that usually gets discussed.

Moreover, the elimination of the wrong-answer penalty will give privileged mid-range students an even larger advantage. Yes, students who do not have access to coaching can now guess randomly without worrying about losing additional points, but students who do have access to coaching can be taught to guess strategically, filling in entire sections with the same letter to guarantee a certain number of points while spending time on the questions they’re most likely to answer correctly.

This is particularly true on the reading section. Because there are fewer question types, and the passages are not divided up over multiple sections, students on the lower end of average who have modest goals can be more easily taught to identify what to spend time on and what to skip than was the case before.
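To see what this asymmetry is worth in practice, here is a hedged sketch of the same-letter strategy on a no-penalty, four-choice test. The numbers are hypothetical (twenty questions blind-bubbled while time is spent on the rest), but the logic follows directly from the new scoring rules:

```python
import random

def same_letter_guessing(n_questions: int, trials: int = 100_000) -> float:
    """Average number of free points from bubbling one letter on
    n four-choice questions with randomly distributed answer keys."""
    total = 0
    for _ in range(trials):
        keys = [random.choice("ABCD") for _ in range(n_questions)]
        total += keys.count("A")  # always bubble the same letter
    return total / trials

# Roughly n/4 correct answers on average, with zero downside --
# exactly the points the old quarter-point deduction used to claw back.
print(same_letter_guessing(20))  # ~5.0
```

A coached student who understands this banks those points systematically; an uncoached one may never realize they are there for the taking.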

The result is that the achievement gap is unlikely to disappear anytime soon, regardless of the College Board’s machinations.

The College Board informant returns (and the College Board goes after him)

This past June, Manuel Alfaro, a former Executive Director of Test Design and Development at the College Board, wrote a stunning series of tell-all posts on LinkedIn in which he detailed the numerous problems plaguing the redesigned SAT as well as the College Board’s attempts to alternately ignore and cover up those problems.

For several weeks, Alfaro posted nearly every day, each time revealing more disturbing details about the College Board’s bumbling ineptitude and equally clumsy attempts to hide it. 

Then, after 16 posts, he disappeared. 

I wrote about Alfaro’s major revelations here and here, so I’m not going to repeat them in this post; if you’d like to read the full series for yourself, you can do so via Alfaro’s LinkedIn page.

Given the accusations, it wasn’t hard to speculate about why Alfaro had gone silent so abruptly: presumably, the College Board’s team of legal vultures had either paid him off or threatened to make his life miserable if he didn’t keep his mouth shut. 

More than one commenter who appeared to have some personal familiarity with Alfaro pointed out that he isn’t the type of person to back down easily. 

As it turns out, both sides were right.

Alfaro has indeed returned, with two new posts (see here and here) revealing yet more lurid details about the College Board’s exploits. I strongly encourage you to read them. 

Unsurprisingly, the College Board has also come after him: Alfaro’s home was apparently raided by the FBI as a result of accusations that he was the person who released hundreds of test items to Reuters. But he also deliberately refrained from posting until after the April (school-day) and May SATs were released.

Why?

Because, he asserts, the administered test did not match the specifications laid out for the redesigned SAT, leaving a significant percentage of test-takers unable to finish the exam. Again, he claims, top College Board officials were aware of the problem but took no steps to rectify it prior to administration.

According to Alfaro, the extraordinary delay in releasing the tests was most likely due to the College Board’s need to rewrite the affected items, post-administration, to make them conform to the specifications. (Perhaps this is what one College Board official meant when he stated that the delay was the result of a problem with the “metadata.”)

He therefore waited to begin posting until after the exams had been released in order to see whether — or rather, how and to what extent — the College Board had doctored them.  

So the issue is not only that the College Board has released an insufficient number of full-length exams; it is that even the exams that have been released may not be representative of the real test.  

Alfaro also states that in order to beat out the ACT for the Colorado state testing contract, the College Board spuriously claimed that the new exam tested scientific reasoning by counting every question that referred to a scientific topic — regardless of whether the question actually tested science in any way. 

I’m not sure what’s more disturbing: that the College Board actually argued that rSAT tested science even though it clearly does no such thing, or that Colorado school officials actually bought the College Board’s claims (as did school officials in Illinois, Michigan, and Connecticut). After all, the only thing they needed to do was spend five minutes looking at the test.

I think that the major takeaway from all this is that the College Board is operating on the very cynical but all too often valid assumption that if one proclaims that something is true loudly and often enough, it ceases to be relevant whether that thing is actually true.

Thus, it is not necessary for the new SAT to actually require students to use evidence (the way the old SAT essay did, for example) — it is merely sufficient to call things “evidence-based.” Likewise, the College Board need only indignantly proclaim its commitment to “transparency,” regardless of whether there is evidence to suggest that any such thing exists in any meaningful way.

Most people — even those in charge of education for hundreds of thousands of students — will not bother to question important-sounding executives in suits who come bearing talking points about equity and slick PowerPoint presentations.

Provided that things are spun correctly and the necessary talking points are adhered to strictly enough, almost any absurdity can be made to sound reasonable. (Gee, who’d have thought that bringing in a McKinsey consultant would result in THAT?! Or maybe that was precisely the point.)

Such is the beauty of a post-fact world.

This isn’t exactly news at this point, but it bears repeating. In politics, enough people are clued into reality to spark a good deal of pushback after a certain threshold of ridiculousness is reached. (I was worried for a while that this wouldn’t be the case, but I was proven wrong). In education, however, people tend to be less informed about the details, and thus the issues are considerably easier to obscure.   

What makes the game the College Board is playing particularly dangerous is that it distorts key terms in the lexicon of education itself (critical thinking, evidence, higher-order thinking) so that they come to mean something far different, or even the opposite, of what they are traditionally understood to mean. Words become unmoored from their definitions.

And if any of this is questioned, the response is always along the lines of “it’s complicated.” Obfuscation is thus recast as nuance.

The result is an exercise in doublespeak in which the College Board says one thing and the public understands another. (“Isn’t it wonderful that students have to use evidence – that will really help them develop those higher-order thinking skills!”)

An organization that has a fundamental responsibility to help students learn to use language correctly is instead teaching a far different lesson, namely the importance of jargon and spin. 

As I’ve said before, from a sociological perspective it is utterly fascinating to watch this phenomenon play out in real time, but it is also terrifying to witness the ease with which people can be induced to ignore what is under their noses and to excuse the propagation of blatant falsehoods. (“I mean, everybody knows that guessing penalty doesn’t really mean there’s a penalty for guessing. It’s just called that.”) 

So is this ultimately the goal: to teach students to repeat a series of platitudes and buzzwords, without any regard for their underlying meanings? I really am beginning to think this is the case.

Critical thinking, for example, is often touted as the most important thing for students to develop, but people who exhibit a nuanced understanding of topics are typically derided as “wonks.” Witness the way the media bemoans Donald Trump’s lack of specifics but then turns around and sneers at Hillary Clinton for having the nerve to discuss her policies in detail, of all things. 

From what I’ve observed, the present goal of the education system seems to be to get students to about a seventh- or eighth-grade level very quickly and then more or less leave them there; real advanced work is for nerds. (And real advanced STEM work is for robotic Asian nerds — yes, there is a racially tinged component here.)  

I maintain that most people who extol the virtues of critical thinking would not much like the real thing if they saw it. It just involves too much work and too many facts. And worse, it’s not always fun.

To be sure, this type of anti-intellectualism has been a consistent feature of American life since the nineteenth century; granted, I’m not an expert, but I’m not sure it has ever been embraced to quite this extent by educators themselves.

The question is, have things progressed so far that the people who run the education system are incapable of noticing these things? And when people do point them out, will they have any effect? 

The SAT vs. ACT decision: how many practice tests do you need to take?

For those of you still deciding between the SAT and the ACT, one factor that you need to take into account is the number of practice tests you’re planning to take. I touched on this point in a recent post, but I’d like to revisit it here from a slightly different angle.

I’m dwelling on this because of a couple of recent tutoring inquiries regarding students who want to start test prep early in junior year and who are looking to raise their reading scores by enormous amounts (in the 200-point range). But this post is also applicable to anyone looking to spend more than a few months prepping.

To be clear, 200-point increases are extraordinarily difficult to achieve — new SAT, old SAT, whatever. But I’ve worked long-term with students who were serious about trying to make those kinds of gains, and if there is one thing they all had in common, it was the sheer number of practice tests they took. In some cases, 25 or more. 

In general, I am most definitely not a fan of the repeated-practice-tests approach. It’s infinitely more effective to work through material concept by concept, finding out where the gaps are and spending time plugging them, than to just take test after test. I’ve had students who took all of two practice tests and met their score goals easily, and students who took 30 (!) tests and never quite got to where they wanted to go. So if you’re worrying that you need to take 25 practice tests just to have a fighting chance at a decent score, don’t worry — that’s probably not the case at all!

Likewise, if you are already a strong reader and/or are only trying to raise your SAT score by a modest amount, or if you are scoring so much better on the SAT than on the ACT that it doesn’t even make sense to look at the latter, this discussion doesn’t concern you so much.

However: if you are trying to raise your verbal score from average to “Ivy League-competitive” and your actual reading skills need work, or if you fall into one of the categories below, this is a real logistical concern that should at least be taken into account.

Most students who study for an extended period, either on their own or with a tutor, end up naturally going through a lot of tests. Even if they’re not doing full tests in one sitting, the individual practice sections can pile up pretty quickly once things really get going. 

Furthermore, some students are genuinely nervous test-takers who need to get as comfortable as possible with the testing process so that they’ll have as few surprises as possible when they take an exam for real. I’ve worked with students who needed to sign up for regular mock-testing at local companies for months on end, just so they wouldn’t have a nervous breakdown on test day. 

I also appreciate that reading tests pose a particular challenge for students who do not come from English-speaking families, or who did not grow up in the United States, but are aiming to gain admission to top colleges. Standardized tests include all sorts of cultural assumptions that American students take for granted, but internationals often do not have that luxury. They usually need to practice more. 

So if any of these things applies to you, and you are still trying to decide between tests, please consider the following:

Yes, the College Board has now released two additional practice tests, but that brings the grand total only to six, plus two PSATs. If you are planning to study for months and months, you will exhaust your supply of authentic tests very quickly.

You cannot compile a stash of old, released exams from your friends and the Internet because no old released exams exist.

If you want to take numerous full-length practice tests, you will either be forced to re-take the Official Guide tests — something I never recommend — or rely on third-party exams, which may or may not accurately reflect the content of the actual exam and which I never endorse either. 

(Note: as per “disgruntled” former College Board employee Manuel Alfaro’s revelations, released College Board exams #5 and #6 may not accurately reflect the content of the administered tests either.)

The bottom line is that if you’re planning to start prep in the fall of junior year (or earlier) for a spring test, you’re going to need a substantial amount of practice material; and if you’re not using official tests, you are likely to miss key issues that could have a noticeable effect on your score. 

And assuming that you can’t get access to the leaked exams, there is no way around it. 

For that reason, I am very strongly encouraging anyone who is scoring more or less comparably on the SAT and the ACT, and is seeking the type of score gains that will likely require long-term tutoring, to please seriously consider the ACT. There is only so much any tutor or student can do with the limited supply of authentic SAT material, and to insist otherwise is unfair to everyone involved.

Yes, that test poses its own challenges, most notably involving speed, but at least it is possible to say that students can be prepared thoroughly and will have the opportunity to practice until they can get things right. Not to mention the fact that you can be sure the released exams you take were the same tests that were actually administered. 

Five reasons to continue avoiding the new SAT

When the redesigned SAT was rolled out this past March, most test-prep professionals expected that there would be a few bumps; however, there was also a general assumption that after the first few administrations of the new test, the College Board would regain its footing, the way it did in 2005, after the last major change.

Unfortunately, that does not appear to be happening. If anything, the problems appear to be growing worse.

If you’ve been following my recent posts, much of this will be familiar. That said, I think it’s worth summing up some of the most important practical concerns about the new test in a single post.

1) Test security

The redesigned SAT has been plagued by security problems since its first administration in March 2016. The College Board has long recycled tests, re-administering exams internationally after they have been given in the United States, a practice that has continued with the new exam and that has created numerous opportunities for cheating.

Predictably, problems appeared as soon as the new test was introduced: on March 28th, Reuters broke a story detailing the College Board’s decision to administer the exam even after it was revealed that it had already been compromised in Asia. 

Disturbingly, this seems to be turning into a pattern. On July 25th, Reuters also reported that hundreds of questions intended for the October 2016 exam had been leaked, raising serious questions about the College Board’s ability to keep tests secure and the testing process fair. 

2) Test validity

As soon as the College Board released the new Official Guide in June 2015, tutors and other test-prep professionals began commenting on the greatly diminished quality of the questions on the new SAT.

After a long silence about just who would be writing the new exam, a College Board representative finally confirmed (via Twitter!) that the SAT would no longer be written by ETS, as it had been since the 1940s, but rather by the College Board itself. That meant the most experienced ETS psychometricians would no longer be doing quality control.  

In June, Manuel Alfaro, a former College Board director of test development, posted a series of tell-all reports on LinkedIn detailing the shockingly disorganized process by which questions were created and vetted.

Among his revelations: questions were being revised after field-testing (meaning that substantially altered questions were effectively being tested out on the actual exam); one test advisory committee member wrote a scathing, 11-page letter stating that the test items were “the worst he had ever seen”; and David Coleman repeatedly ignored pleas from College Board employees concerned about the quality of the items.

The assertion that there is a severe shortage of acceptable test items is borne out by the fact that some students who took the June SAT received exams identical to the March test. Let me reiterate that: some students retook the exact same test only two months after they first sat for it.

The College Board has clumsily tried to get around this problem by barring tutors from non-released exams, and by demanding that students not discuss specific questions publicly.

There is nothing to suggest this problem is going away anytime soon. Assuming the College Board elects not to include the compromised items on the October test (which may or may not be a reasonable assumption), where will they obtain a sufficient number of valid replacement items in time? Will the same exam again be given multiple times in the same year?

3) Score delays

Traditionally, SAT scores have become available around two and a half weeks after test administrations. This year, students have so far had to wait up to two months for their scores. The College Board has not yet publicized score-release dates for 2016-2017, so it is unclear whether these delays will continue.

In addition, the College Board has traditionally released the October, January, and May tests through the Question and Answer Service (QAS). Typically, these exams are made public approximately six weeks after the test is administered. This year, however, two of the exams administered in May were projected to be released only at the end of August, reportedly because a problematic question needed to be replaced — an unprecedented occurrence. (According to a College Board official, they’re still “figuring out the meta-data.” Whatever that means.)

If October scores are delayed because of the security breach, or if some of the items need to be replaced, it is reasonable to expect another long wait for that test.

4) Lack of authentic practice exams

Old SAT: 10 tests in the Official Guide, plus several additional official practice tests released by the College Board.

ACT: Five tests in the previous edition of the Official Guide, plus two entirely new tests in the updated edition. There are also several additional official practice tests floating around the web. 

New SAT: Four tests in the Official Guide. The May exam, which normally would have been released mid-June, is still unavailable as of mid-August. 

It was originally reported that the College Board/Khan Academy would be releasing an additional four tests last fall, but that plan was tacitly shelved at some point. 

So yes, there is ample practice material on Khan Academy, but there is no substitute for using authentic, full-length practice tests to figure out test-taking issues such as pacing and endurance.

5) Inconsistent and distorted scaling/scoring, and unhelpful score reports

In the past, all of the students taking the SAT in the United States received the same test, although different versions of the test presented the nine multiple-choice sections in different orders to hinder cheating.

The new SAT, by contrast, always presents its four sections in the same order; to hinder cheating, different students are instead given entirely different tests.

Because different tests are scaled differently, students who answer the same number of questions correctly may receive different scores. Although students will still have a general idea of how many questions they need to answer correctly in order to achieve their goals, this does make it more difficult to plan strategically.
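A hypothetical illustration of the problem (the conversion numbers here are invented purely for the sake of example):

```python
# Two invented raw-to-scaled conversion curves for two different test forms.
form_a = {58: 800, 57: 790, 56: 770, 55: 760}
form_b = {58: 800, 57: 780, 56: 760, 55: 740}

raw = 56
print(form_a[raw], form_b[raw])  # 770 760: same raw score, different scaled scores
```

Under the old system, everyone taking the test in the United States on a given date faced the same conversion; now the curve itself varies from form to form, and students have no way to see it in advance.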

In addition, percentiles were formerly calculated based only on the scores obtained by students who actually took the SAT. Now, however, the College Board has also created a “National Percentile” category, which compares actual test-takers to all students nationally, even ones who did not take the test. As a result, performance appears inflated: a given score ranks higher against a pool that includes students who never sat for the exam.

Although the College Board has released concordance scales between the old SAT and the new SAT, and between the new SAT and the ACT, it remains unclear how reliable/accurate these scales are, or how colleges will view them. The ACT has also taken the College Board to task, questioning the validity of the SAT-ACT concordance table.

Prior to March 2016, SAT score reports included commonsense, helpful information such as the number of questions answered correctly and incorrectly on each section. 

Now, however, SAT score reports consist primarily of edu-jargon that many students are likely to have difficulty interpreting (e.g. “Make connections between algebraic, graphical, tabular, and verbal representations of linear functions”), making it difficult for students as well as tutors to understand where and why points are being lost, and what specific steps are required for improvement. 

The College Board’s useless SAT reports

The following was sent to me by a colleague, a longtime teacher and tutor who runs her own business; I’m posting it here with her permission. Keep in mind that the College Board has repeatedly touted “transparency” (ha!) as one of the key features of the SAT redesign. 

I have a student who scored in the 400s on her June SAT. Thought I’d look at her report (granted, not a question-and-answer service report) online to see what areas need work. This is what I got.

Your score indicates that you are already likely able to:

  • Select the most appropriate data display that represents the relationship between two variables (PSD)
  • Select an appropriate graphical representation of a context (PSD)
  • Interpret data represented in a graph (PSD)
  • Create an expression or equation in one variable that models a context (HOA)
  • Create a linear function that models a context (HOA)
  • Create a linear equation in two variables that models a context
  • Solve a linear equation in one variable (HOA)
  • Create a ratio based on a context and use the rate to solve a problem (PSD)

PSD (Problem-Solving and Data Analysis)

This component of the SAT focuses on the assessment of students’ ability to use ratios, percentages, and proportional reasoning, as well as describe graphical relationships and analyze data. The Problem Solving and Data Analysis score is the number of questions you answered correctly converted to a scale score. It is a separately scaled score and is not used to compute other scores.

HoA (Heart of Algebra)

This component of the SAT focuses on the assessment of students’ skills with linear equations and systems of linear equations. The Heart of Algebra score is the number of questions you answered correctly converted to a scale score. It is a separately scaled score and is not used to compute other scores.

Improve your skills by focusing on the following suggestions:

  • Make connections between algebraic, graphical, tabular, and verbal representations of linear functions. When given one representation, be able to create any of the other representations.
  • Use the relationship between variables shown on a graph to make predictions and conclusions given a context
  • When multiplying polynomials, first examine the expression for structure and then follow the order of operations.
  • When factoring polynomials, look for relationships that allow the use of the difference of two squares, the square of a binomial, and quadratic trinomials.
  • Use what you know about factoring and the zero product property to solve quadratic equations.

Seriously?  If a student is scoring in the 400s on math, how can s/he be expected to understand most of that education-speak? And how is a parent supposed to help?

Granted, I didn’t major in math, but I teach it all the time and I still feel like I need to translate from a foreign language when I read this. It also irritates me that there is not a little bit more *obvious* information, such as how many questions the student missed on the no-calculator section and how many on the calculator-allowed section. Instead, they put the two sections together.

In my student’s case, I suspect that it is the no-calculator section that she’s struggling with – but I only think this because her PSAT score report tells me that she missed almost all of the no-calculator questions, while she answered well over half of the calculator-allowed questions correctly. This then led me to realize that she has a lot of trouble adding, subtracting, multiplying, and dividing fractions (which would be easy to do on a calculator).

Glad to have all the questions, student answers, and correct answers on the PSAT, but even something relatively basic like the number of questions right/wrong/skipped per section would be helpful (they USED to provide this!).

And then a little bit later…

I just looked at another report (different student whose parent wanted me to give them an idea of what he should be focusing on) and the jargon went on for pages. The problem is that it is not just useless – it’s WORSE than useless. It purports to be informational and “level the playing field” and all that. But in actuality, it reinforces a feeling of helplessness and marginalization among those with less access to sources of help. In other words, the College Board lies. It makes all these claims – work hard in school and use our free resources – and that’s all you need! It’s only those (few) in the know and those with the resources to pay people like you and me who actually know the truth.

And just to beat the dead horse a bit more, I even looked at Khan Academy to see how well the “partnership” would serve my students. *sigh* I got frustrated trying to find efficient, pointed help for specific math problems and wound up on the FAQs page. No exaggeration – 12 of the 20 questions on the page admit to the program’s failings.
 
  • We don’t have a way for teachers and coaches to view their students’ progress in Official SAT Practice. We are planning to add ways…
  • We’re working on making your SAT practice activity show up there and also planning to add…
  • We don’t currently have a way for you to switch to questions from an earlier level without your intentionally missing questions…
  • We don’t currently have a way for you to instantly switch what level….
  • We do not currently have videos or articles about the essay but will be adding those. We are also investigating ways to score…
  • In the diagnostic quizzes, you can tell the system you’re guessing, but you’re right, you can’t currently do that in other places. This is on our list of things to add.
  • Sorry, there’s not a way to reset your diagnostic quizzes….
  • We agree this would be more efficient, but we haven’t built this into the system yet….
  • On the practice and diagnostic tests, it can be tricky to discern whether you are looking at an underlined comma, semicolon, period, or colon…This will not be an issue when you take the SAT itself.
  • (Can I print out all of the practice questions?)  We don’t have plans to do this, though we definitely understand the benefits… [this is something I was really hoping for.  Besides the fact that not all students have reliable and regular online access, the test itself is still pencil-and-paper. Shouldn’t we encourage students to practice that way?]
  • We do not have a dedicated smartphone app for this system…
  • …we still need to create specialized badges that apply well to the SAT practice system

After I read this page, I gave up. It is clear the powers that be don’t care enough to make a program – touted to serve the 1.5+ million students who take the SAT each year – top-notch, or even above average. I actually felt embarrassed for the poor employees who have to answer these questions. I could almost hear them wanting to answer every question with a blanket admission: “You know, you’re right. This whole thing basically stinks.”

My husband told me not to get mad. “Just think of it as job security.”