Of all the discussions floating around about SAT prep, the one I find most irritating — and pointless — is the guessing vs. skipping debate. I’ve heard all the debates by now, and I’m just not interested. I’m not a statistician, but after five years of teaching and writing what are essentially logic questions, I’ve gotten pretty good at spotting fallacies. I’ve also seen how things play out in the real world, where poorly thought-out guesses rarely pan out (at least when I’m watching).
The “guess if you can eliminate x number of answers” approach is based on the assumption that test-takers can reliably identify incorrect answers and that they will not eliminate the correct answer. From what I’ve seen, that is not even remotely a valid assumption. Certainly not for low-scoring students, but often not for higher-scoring ones either.
I think a lot of people would argue that two or three of the answers are usually clearly wrong — wildly off-topic, patently absurd, too extreme, etc. — and can be safely eliminated. The problem with that claim, however, is that it presupposes the ability to understand both the passages and the test in a way that, while obvious to a tutor or a high-scoring student, cannot be taken for granted when it comes to lower-scoring students. My weakest students, the ones who struggle to break 500, often pick answers that most higher-scoring students would immediately recognize as wildly off-base, even without having read the passage.
So why not simply teach those students to “read the test”? Well, because that actually requires some fairly sophisticated skills in and of itself. It does not matter how many times you tell someone to avoid extreme answers if that person cannot actually recognize such answers — you can tell them to avoid words like “always” or “never” or “totally,” but there’s no guarantee that they will consistently remember to do so (persistently low scores are often accompanied by memory and attention issues), and they’re also unlikely to know a lot of the vocabulary words (e.g. “utter contempt”) that indicate extremity.
Exploiting the patterns inherent in the test requires the exact same ability that the SAT tests — namely, the ability to think abstractly and to apply general concepts to unfamiliar situations. Someone who has difficulty with the concept of extremity in the abstract certainly cannot use it as a reliable way of navigating the test, and you can’t exactly give someone a list of every single extreme word and phrase that could possibly appear (and again, you’d probably run into the memory issue).
Then there’s the skipping problem. A year or two ago, when I was less informed about these things, I naively thought that a convenient — and fast — way for low-scoring students to pull up their Critical Reading scores would be for them to simply focus on the questions they could answer most easily, pick a target raw score, and skip all of the questions that would be too difficult or time-consuming. After all, you can theoretically skip a third of the questions and still get a 600.
When I tried out the skipping strategy with my students, however, I encountered a couple of major roadblocks.
1) They refused to skip questions on practice tests
Skipping a lot of questions can be very risky, and I would never suggest that someone try out a new strategy on the real thing — that’s what practice tests are for, and sometimes you have to do a whole lot of them to figure out what works. There’s a steep learning curve, a lot of trial and error, and no shortcut (that whole pesky “no shortcut” thing again). But my students kept getting seduced by the possibility that they might, just this one time, get all the questions right if they tried to answer them, and so neither of us ever got to see what would happen if they actually skipped. For short-term students, this was a disaster.
And, more seriously…
2) They couldn’t figure out which questions to skip
As I kept discovering, it’s pretty much impossible to circumvent the whole “reasoning” aspect of a reasoning test. And the problem with passage-based reading, unlike Math or Writing, is that you don’t know where the hard questions are going to fall. As a result, you have to know beforehand what types of questions you tend to have trouble with, how to recognize them, and when it’s worthwhile to try vs. skip — all of which require a level of self-monitoring that the lowest-scoring students tend to lack. In fact, unsurprisingly, they often show the least understanding of where they actually stand: one of my students who had scored a 390 CR on a practice test after doing virtually no work for months informed me the day before the test that he felt good about the test and was confident he could go in and answer every question correctly!
On top of that, there’s sufficient variation from passage to passage that there’s no guarantee that a type of question that’s troublesome on one passage will be troublesome on another. Yes, Passage 1/Passage 2 relationship questions are reliably hard across the board, as are “support/undermine” and “analogy” questions, but the latter two don’t even show up all that often. And while inference questions are certainly more challenging in principle than literal comprehension questions, the reality is that sometimes, practically speaking, there’s very little difference between them. If you don’t actually understand what the passage is saying, even literal comprehension questions are tough, and if you can’t “read the test,” you can’t even play “what looks like a right answer?”
So in the case of a really weak student who reads slowly and skips lots of questions, there’s no guarantee that they’ll be able to answer the remaining questions correctly, and that can easily translate into a score in the 400s. When someone cannot even reliably judge which questions, or how many questions, they can answer, it’s almost impossible to come up with a reliable strategy; assuming I don’t have two or three years to try to cover all the skills that they’re missing, that makes my job almost impossible.
Higher-scoring students have their own set of issues where guessing and skipping are concerned, however.
My biggest problem with relying on process-of-elimination and guessing is that it fails to acknowledge the relationship between question and answer and reduces prep to a series of cheap tricks. It deliberately ignores the fact that most students do not in fact have the necessary close reading skills to do well on Critical Reading and tries to give students a false sense of superiority by making them believe that the test isn’t really hard, just “tricky.” (It’s a lot easier to say that you got tricked — with the implication that it was the test’s fault — than admitting that you simply didn’t know what you were doing.) The student learns that the correct response is (B) simply because it is not (A), (C), (D), or (E) but understands nothing about the tie between the “weird” or “vague” wording in the answer choice and the specific wording of the passage.
The remarkable thing is that some students can get quite far without having any clear understanding of what the test is testing (I certainly didn’t when I was in high school), but when those 680/700-ish students come to me, I often have my work cut out for me.
Those students have been operating under the unfortunate assumption that they do not really need to do any work upfront; that they do not need to answer the question on their own because they will be able to recognize the answer from among the choices, even though they’re not really thinking through what the more abstract or confusing options actually mean.
Some of my strongest students have consistently crossed out the correct answer first on the questions they got wrong, rendering any further process of elimination utterly moot. When I made them work back through the question — for real — they were usually a little sheepish, but they still resisted working that carefully from the start. Even after (or perhaps because of) my seven thousandth admonition that they had to work more carefully than seemed necessary on every single question, I suspect that in some corner of their minds, they clung to the idea that there was a shorter way.
Sometimes their overall guessing and reasoning skills are also strong enough to mask serious skill gaps — more than one 700-level student had difficulty with big-picture questions because, it emerged, they could not even determine the topic of the passage, nor were they sure where to look to find it. They could handle the detail questions, but when they had to integrate information from a few key places not specifically indicated, they were lost. They didn’t know to look at the introduction or the conclusion or the topic sentences because they didn’t know how to read arguments.
When it comes to those students, changing their mentality is the most challenging aspect. It is not always easy to persuade them that if they want to jump those last 50-100 points, they actually need to refine their close reading skills, not just become better guessers. Unfortunately, if they don’t, they’ll just keep banging their heads against the 700 wall.
Convincing them is particularly difficult because the “tricks” have been working for them thus far, and because it also requires breaking some of the most deeply ingrained beliefs about the SAT and standardized testing in general: doing well on the test is not just a matter of knowing the right tricks; the presence of multiple answer choices does not excuse you from knowing how to work through the questions; and the right answer actually answers the question and can be determined through a concrete, logical process — one that might take more work than what you’re used to but that is ultimately far more reliable. In short, it’s not a guessing game, and if you treat it that way, you’re missing the entire point of the test.