So I’m in the middle of rewriting the workbook to The Ultimate Guide to SAT Grammar. After finishing the new SAT grammar and reading books, I somehow thought that this one would be easier to manage. Annoying, yes, but straightforward, mechanical, and requiring nowhere near the same intensity of focus that the grammar and reading books required. Besides, I no longer have 800 pages worth of revisions hanging over me — that alone makes things easier.
However, having managed to get about halfway through, I have to say that I’ve never had so much trouble concentrating on what by this point should be a fairly rote exercise. Even writing three or four questions a day feels like pulling teeth. (So of course I’m procrastinating by posting here.)
In part, this is because I have nothing to build on. With the other two books, I was revising and/or incorporating material I’d already written elsewhere; this one I have to do from scratch. I’m also just plain sick and tired of rewriting material that I already poured so much into the first time around.
The problem goes beyond that, though.
I’ve recently come to realize that the tests I’ve written in the past were comparatively easy to construct because the tests they were based on were themselves well-written. Sure, the current workbook was a drag (and don’t even get me started on the trauma of having Word spontaneously decide to create multiple page 45s — I still flinch whenever I see that number), but writing for the new SAT has made me realize just how much I took for granted. I’ve never actually had to mimic a poorly written test.
Although focusing on The Skills That Matter Most sounds like a lovely idea in principle (who would want students tested on skills that don’t matter?), it’s a little less lovely in practice — or at least in the sort of practice that the writers of the new test have cooked up.
As I’ve written about before, one of the most salient features of the new test is that the majority of the questions focus primarily on a “core” group of skills or topics, with a much smaller number of questions testing a much wider range of topics. The major problem with this setup is that students who are aiming for high scores will need to spend a significant amount of time studying topics that stand a relatively small chance of actually appearing on the test.
To use the type of rhetoric normally applied to SAT vocabulary, students will now be required to devote lots of time “memorizing obscure rules that they are unlikely to ever be tested on.”
I knew this in theory before, but I didn’t fully realize how it would play out on a practical level until I started writing full-length sections.
As has been extensively remarked upon by now, the new SAT multiple choice writing section is essentially a cheap ripoff of the ACT English section — kind of like a fake designer handbag. At first glance, it seems pretty much the same, but when you start to look more closely, you can see that the quality just isn’t as good. At some level, I get the sense that even the people writing the test think it’s bullshit. There’s a carelessness to it that I’ve never felt on another exam. (Cat portraits? Seriously?)
Another big issue involves length. Given the fact that all of the reading is now lumped together in a single, seemingly interminable 65-minute section (nearly twice as long as the ACT reading section) and the writing section will immediately follow the reading section, it is understandable that the writing section would be made somewhat shorter. (As a side note, having to do two verbal sections consecutively is going to be a major turnoff for all but the strongest readers.) The problem is that with only 44 questions, 12 of which must involve inserting/deleting information and another 1-3 of which must involve graphics of some sort, there simply isn’t room to test the full range of concepts. As a result, there’s a constant tradeoff: some concepts that I would argue are extraordinarily important — comma splices, for example, or verb tense — are given short shrift.
Furthermore, with room for only one or two questions in a given category per test, there’s very little way to test concepts at various levels of difficulty. To take just one example, the current SAT writing section focuses heavily on subject-verb agreement, with at least three and as many as five questions of varying difficulty levels appearing throughout the test. Most of these questions test that concept in fairly predictable ways: students can be reasonably confident, for example, that they will encounter sentences that place non-essential clauses and prepositional phrases between subjects and verbs, as well as sentences with compound subjects (two singular nouns connected by “and”). Not coincidentally, these are the structures that students are most likely to employ in their own writing. Because of the frequency with which these structures are tested, students have very good reason to spend time mastering them.
Based on the material released by the College Board, however, the new test appears to test subject-verb agreement no more than two times per test, and sometimes not at all. Furthermore, there is no way to tell what forms those questions will take when they do appear: they could be as pedestrian as subject + prepositional phrase + verb, or they could be as out-there as “that/whether/what” as subjects — a construction that few people except professional writers (or pretentious Harvard Crimson reporters) use.
But, you say, isn’t that a good thing? Won’t that make the test harder to “beat”? Well, yes, it will make the test less predictable, but predictability is not always a negative. Here’s the thing: the basics of subject-verb agreement are really, really important. Even if a student struggles with some of the more complex variations, they should at least understand how to avoid the most common errors by the end of high school.
Now, given that these common constructions may or may not be tested, students will have far less incentive to study them to the point of mastery. The fact that something is straightforward to learn does not mean that it isn’t important, or that it shouldn’t be tested. A test that truly focused on “the skills that mattered most,” or one that genuinely gave students a clear blueprint of what to study, simply would not operate this way.
The ACT English test, in contrast, has 75 questions — that’s plenty of room to test a much fuller range of topics in various ways, from a variety of standpoints, and with varying degrees of difficulty. There are numerous rhetoric questions, but they feel far more balanced with the grammar questions. The writers of that test clearly understand that both are important. Looking back, when I’ve written mock ACT English sections, I’ve never felt as if I had to pick and choose; there always seemed to be ample opportunity to include all of the question types I wanted to include. The same went for creating mock SAT Writing tests: there was plenty of room to incorporate everything, and I never felt as if I was having to make constant judgment calls about what to test.
Now, though, I find myself writing and rewriting, shifting things around, trying to figure out what’s important enough to include. Decisions that seem like they should be simple and straightforward are instead tedious and drawn-out. The problem is certainly compounded by test-writing fatigue, and by my tendency toward perfectionism, but still, it seems like this shouldn’t be so hard.
Furthermore, trying to capture the ACT’s earnest, down-home, politically correct hokeyness may be an eye-rolling experience, but at least it’s never made me feel like a shill for the tech industry.
The College Board’s prevailing philosophy seems to be that making students read vapid passages that basically scream “LOOK, WE’RE BEING RELEVANT!” and that present tired clichés about technology and the new economy, with an occasional bone tossed condescendingly to the humanities just so no one can accuse them of being total Philistines (look, it’s ok to take some philosophy courses because they can help you make money!), will somehow make students “college and career ready.”
To be perfectly blunt, that is one of the stupidest ideas I’ve ever encountered. Only people who live in an echo chamber completely divorced from reality could buy into — or think that students would buy into — something so patently condescending.
It’s also such a blatantly desperate ploy to recapture market share that it comes off as more than a little pathetic. (Not that I’m feeling sorry for anyone — you ruin the test, you suffer the consequences.) You can just picture the strategy sessions at which it was decided that “Relevant!” would be the official buzzword of the redesigned SAT, to be repeated ad nauseam lest anyone continue to think that the good people at the College Board were somehow out of touch.
Unfortunately, there doesn’t seem to be a way of creating accurate materials that doesn’t reflect this particular ideological bent. In order to resolve this moral quandary and assuage my conscience, I’m thinking that I may need to put a disclaimer in the book.
Notice: the views expressed in this work are intended to represent those held by the technology-happy testing companies and education “reformers” responsible for the installation of Bill Gates’s lackey as the head of the College Board and the redesign of the SAT. In no way should they be taken to reflect my personal opinions.
I’m only half-kidding.