I have to say I never thought I’d write a post singing the virtues of multiple choice tests (well, sort of). Despite the fact that much of my professional life is dictated by such exams, I’ve never had any overwhelming liking for them. Rather, I’ve generally seen them as a necessary evil, a crudely pragmatic way of assessing fundamental skills on a very large scale. Sure, the logic and elimination aspects are interesting, but they’ve always paled in comparison to the difficulty of, say, teaching a student to write out a close reading of a passage in their own words. People might argue that learning to do so is irrelevant (obviously I disagree, but I’m not going into that here), but basically no one is disputing that it’s hard. At any rate, I’ve always assumed that given the alternative between an essay-based test and a multiple-choice one, the former would invariably be superior.
By this point, you’d think I’d have taken a lesson from the SAT and learned not to think in such extreme terms, but from time to time I apparently still fall prey to that kind of thinking. A couple of weeks ago, however, I had a conversation that made me reconsider that assumption. Not toss it out the window, mind you, but look at it in a slightly more nuanced manner.
It all started when a friend who teaches high school French invited me for dinner. When I arrived, she was in the process of grading a batch of tests she’d given her AP students, and she was becoming increasingly concerned. She’d asked them to write a short essay comparing the main themes of two short stories — a perfectly straightforward assignment, she’d thought — but virtually none of the students had actually written statements directly comparing the stories. Instead, they’d simply written every piece of information they knew about the stories, never indicating how it answered the question.
My friend was really and truly shocked. It was not a notably difficult task she’d given them, and they’d discussed the stories in class. But out of a class of over 20 students, only two had made general statements directly comparing the stories.
So now my friend had a dilemma: her students had not actually completed the assignment correctly, but they had written something. If she docked a lot of points, she would have to deal with a gaggle of hysterical sixteen-year-olds and their even more hysterical parents (one of whom had recently sent her a very lengthy email begging her to raise her daughter’s grade to an A- simply because she loved French so much), all insisting that it was unfair of her to have taken the points off because the students had written a lot.
And if she gave out too many low grades, she’d end up shouldering the blame for not having taught them properly. (She’d had a 100% AP pass rate for the last two years, a statistic that included kids who shouldn’t have been taking AP but whom she’d been forced to let in anyway; her ability to teach wasn’t in question.) But on the other hand, 95% of her students hadn’t actually demonstrated mastery of the task at hand. They didn’t deserve to be rewarded for something they hadn’t accomplished.
In the end, she compromised and took off half the number of points she’d originally wanted to take off.
It was an entirely reasonable decision, but it also left her with a bad taste in her mouth. The real problem, of course, was that her students didn’t seem to know how to present arguments in terms of main ideas — at least not in French. But she had so much content she needed to cover before the AP exam that she couldn’t afford to lose even part of a class, never mind a few days or a week, just to try to remedy the issue.
There was also the deeper issue that her students had apparently been conditioned to believe that if they wrote something, anything, it would be good enough, and that they could at least argue for partial credit.
“You know,” my friend said over dinner, “there’s really something to be said for multiple choice tests. For all their shortcomings, they really lay it on the line — you either know it or you don’t.”
And that got me thinking. I agreed with my friend — I’d written as much in who knows how many blog posts — but somehow watching her worry put things in a new light.
The way things stood, she didn’t actually know how much her students understood about the relationship between the stories’ themes. It could have been a language problem, although that seemed unlikely; some of them had managed to write plenty otherwise. It could have been that they’d never learned to write general statements period. It could be that they had learned to do so in English class but that it didn’t occur to them to do so in French. It could be that they understood the relationship between the stories but didn’t see the necessity of spelling it out because they thought that all of those details were sufficient to get the point across.
That last part wouldn’t have surprised me in the least. One of the hardest things about teaching writing is conveying just how explicit one must be in order for the reader to follow the writer’s thoughts. Mastering that skill requires an extremely high level of metacognition — writers must essentially set aside their knowledge of a subject and place themselves in the readers’ shoes, asking themselves objectively whether what they’ve written makes sense. That’s a challenge for a lot of adult writers, never mind sixteen-year-old ones.
But difficulty expressing one’s ideas in writing does not always translate into the inability to identify those ideas in someone else’s writing. I’ve worked with kids who simply couldn’t put answers in their own words but could immediately and unwaveringly identify correct answers when they saw them. They knew the idea they were trying to convey, but they just didn’t yet have the vocabulary or the verbal acumen to get it across in so many words themselves. For students in this category, multiple choice tests can be a very useful vehicle for showing what they know.
Less generously, of course, multiple choice tests can be an extremely useful tool for eliminating the bullshit factor and for determining just what students actually understand. To be clear: given the option between demanding, rigorously graded essay exams and multiple choice tests, I clearly come down in favor of the former. The problem, however, is the “rigorously graded” part. If teachers are pressured (by the administration, by parents, implicitly by college admissions officers) to reward any indication of effort, even when it seriously calls into question the student’s actual mastery of the task at hand, then essay tests can actually be less rigorous than well-designed multiple choice tests.
To be sure, multiple choice tests are not a panacea, but properly used, they can offer a revealing assessment. At best, they make it next to impossible for students to consistently choose correct answers without having mastered particular concepts. The real issue is what teachers can do with feedback such tests provide. If the curricular constraints are such that they can’t address the underlying problems… Well, I’m honestly not sure what to say to that.
People seem to be throwing around the term “rote learning” a whole lot these days in regard to the SAT, without any apparent understanding of what it actually means. So in a modest — and perhaps vain — attempt at cutting through some of this linguistic obfuscation, I offer the following explanation.
This is an example of a question that tests rote knowledge:
The dates of the American Civil War were:
This question does not require any thought whatsoever, nor does it require the answerer to have any actual knowledge of the American Civil War beyond when it occurred. It is simply necessary to have memorized a set of dates, end of story. This is what “rote learning” actually means — memorizing bits and pieces of information, devoid of context, and without consideration of how those particular bits and pieces of information fit into a larger context.
The SAT does not ask questions like this. Ever.
It may be necessary to have internalized a certain amount of knowledge in order to be able to answer SAT questions, but the questions themselves require application of knowledge. Sometimes that application may be quite straightforward, but it will be application nonetheless. And a test that requires the application of knowledge is, by definition, not a test of rote learning.
For example, consider the following passage:
You can make a mess of any language. It all depends on who is doing the talking and how he or she speaks – the speed, rhythm, and tone of voice. When some people open their mouths, the results sound more like yelling than talking. So isn’t it a little presumptuous to claim that one language is beautiful and another is ugly? Isn’t beauty entirely subjective? And what’s more: who actually knows every language and is in a position to make such a definitive judgment? The Japanese – to take just one example of a non-Western culture – seem to see the whole matter differently. A friend who is a professor in Tokyo explained to me that Japanese people generally consider their mother tongue to be the most beautiful, but also have a high opinion of French and the Polynesian languages.
A rhetorical technique used in the passage is
(D) rhetorical questioning
In order to answer this question, the reader must have learned the definition of the correct term (perhaps by rote, although probably not, since English teachers usually teach by example) as well as the definitions of the incorrect terms (usually necessary to prevent second guessing), AND be able to apply that knowledge in the context of an unfamiliar piece of writing.
That is not a “rote” exercise.
To be certain, it is possible to answer this question with a very effective shortcut — since “rhetorical questioning” is listed as an answer choice, it is only necessary to scan the passage for question marks, which, lo and behold, are there — but it is one that still requires the test-taker to remember the “trick” as well as the situation in which it can be applied, and then ignore other, potentially confusing pieces of information that may prevent secure selection of that answer.
From an adult’s perspective, that might well sound like nothing more than a feeble objection to the idea that doing well on the SAT is just about knowing the right “tricks,” but for a kid whose comprehension is only so-so to start with, and who’s trying to recall and apply dozens of other strategies over the course of a 4.5-hour test, remembering to look only for question marks and ignore everything else when an answer choice says “rhetorical questioning” can be a very tall order. (“I know you said to look for question marks, and I saw question marks, but then I thought well, maybe there’s an allusion — I mean, the guy, he like talks about his friend and Japan and stuff…”).
The fact that (D) must be the correct answer because the SAT is a logic test, designed to reward simple and efficient solutions, and the fact that (D) involves the simplest, most efficient way of answering the question don’t even cross their minds because they’re too busy trying to remember what their English teacher said about irony last week. They also tend to lack the metacognitive skills that would allow them to think in those terms.
So now the real question: when people accuse the SAT of testing “rote learning,” what do they actually mean?
Ever since I went on my little rant about Elizabeth Kolbert, I’ve been seriously pondering this matter.
First, I suspect that most of the people who accuse the SAT of promoting “rote learning” have given no thought whatsoever to the actual meaning of the term and are simply repeating (by rote) what they’ve heard because bashing the test makes them feel superior. Since everyone knows that “rote learning” is bad, and since the SAT is a bad test, the SAT must therefore test rote knowledge. Deduction by means of association.
You see, in the education debate, that’s the beauty of never defining your terms — whatever you happen to be against constitutes “rote learning,” and whatever you’re in favor of is “critical thinking.” Moreover, when people actually start to exhibit critical thinking (such as parsing the meanings of words and their implications), you can turn around and attack them for pedantry, for overcomplicating matters, for worrying about pointless, academic minutiae, as opposed to the real issue: lack of critical thinking. Which is of course never defined. And so on.
The result is a conversation that goes nowhere, which is probably the point. Presumably, that’s just how the hedge-funders trying to privatize education want it.
But I digress.
As for the people who have given even a split-second of thought to the meaning of “rote learning,” I suspect they’re using the term to refer to the fact that SAT questions have only one correct answer, and that test-takers must choose an answer phrased in somebody else’s words (lack of creativity, the horror!).
That is not, however, the same thing as testing rote knowledge — and as your friendly neighborhood pedant, I think that it’s important to make that distinction.
The underlying assumption, I think, is that there is no real relationship between SAT questions and their answers, and that questions that have only one correct answer cannot involve critical thinking — that is, multiple steps of reasoning and logic, the identification of supporting evidence (!), the systematic elimination of illogical possibilities based on both the questions themselves and, yes, knowledge of the test’s framework. Answering SAT questions requires all of those things. Moreover, there is usually more than one way to arrive at each answer, and the student has complete autonomy to employ whichever one they see fit. All that matters is that they get to the right answer somehow.
The real criticism, I suspect, is that there is even such a thing as a right answer. If you believe that every answer, no matter how harebrained, has merit, then of course you’re going to have a problem with the SAT — or with any test, for that matter.
If you are going to criticize the SAT for asking questions that only have one right answer, then criticize the SAT for asking questions that have only one right answer. And if you believe that there’s no such thing as a question with only one right answer, then at least have the guts to say that too. (By the way, I’m all in favor of open-ended essay exams, provided that they’re graded with an appropriate level of rigor. Realistically, though, the chance of such a system being accepted in the United States anytime soon is approximately zero.)
But newsflash: In the real world, right answers count; it doesn’t matter how the solution is arrived at. Furthermore, efficient, reliable, and, yes, creative shortcuts tend to be rewarded. When students look at me anxiously and ask me what they’re supposed to do, even though the directions are printed right in front of them (and we discussed those directions the previous session, as well as the session before that), I don’t just worry about how they’re going to do on the SAT — I worry about how they’re going to function in the real world.
When I first started tutoring reading for the SAT and the ACT, I took a lot of things for granted. I assumed, for example, that my students would be able to identify things like the main point and tone of a passage; that they would be able to absorb the meaning of what they read while looking out for important textual elements like colons and italicized words; and that they, at bare minimum, would be able to read the words that appeared on the page and sound out unfamiliar ones.
Over the last few years, however, I’ve progressively shed all those assumptions. When I start to work with someone, I now take absolutely nothing for granted. Until a student clearly demonstrates that they’ve mastered a particular skill, I make no assumptions about whether they have it. And that includes reading the words as they appear on the page.
To be sure, many of the students I’ve worked with do have most of the basics down — I’ve only worked with a few who had striking difficulties sounding out words. But on the other end of the spectrum, I can also count on one hand the number of students I’ve worked with who truly read at an adult level. Unsurprisingly, most of them required no more than a handful of sessions to score in the 750-800 range. In between, of course, there’s a vast, uneven middle ground, usually corresponding to mid-500s to high 600s on the SAT, and 23-28 or so on the ACT.
Within that very large group, though, I’ve noticed that most students fall into one of three general subgroups. The division isn’t clear-cut, but still, I find it’s a helpful way to think about things. Sometimes a student missing some of the lower-level skills will simultaneously have some of the higher-level ones (I once worked with a whip-smart girl who had serious decoding problems but a stellar sense of logic), but very often, a lack of skills at one level translates into an inability to master skills at the next level.
Students in the first group typically learned to read through a whole-language approach and have had minimal exposure to phonics. Having never learned to match letters or combinations of letters to specific sounds, they recognize words by memory and are forced to rely on guesswork when they encounter unfamiliar ones. Often, they think by process of association. If an unfamiliar word starts the same way as a familiar one, they’ll simply plug in the one they know. For example, if they see an unknown word like prodigious, they might read productive, or they might read argument instead of augment. Interestingly, they tend not to notice whether the words they’re plugging in make grammatical sense in context.
Because they’re not reading the words that are actually on the page, these students can completely misinterpret what they’re reading. But in addition to that, it’s almost impossible for them to use roots, prefixes, suffixes, etc. to make logical assumptions about unfamiliar words because they can’t even recognize when roots are being used. Lack of knowledge about how words are put together feeds incomprehension.
At a somewhat more advanced level are the students who can decode competently but lack the vocabulary knowledge to make sense out of what they’re reading. I’ve actually found that many students with learning disabilities fall into this category. Kids without any evident weaknesses can often slide by in school, whether they’ve learned to decode 100% reliably or not, and so their problems go undetected. In contrast, students with obvious reading difficulties are noticed and often given explicit instruction in phonics. The result is that they learn to sound out difficult words beautifully but have no idea what they mean!
For many of these students, direct instruction in vocabulary and roots can boost their comprehension considerably — if, that is, they’re willing to put in the time.
Students in the third group are adept decoders and often have relatively strong vocabularies, but they have difficulty making the leap from comprehending the literal words to understanding their larger significance (which would in turn allow them to identify right answers quickly and securely). Typically, they lack the broader contextual knowledge that would allow them to connect the content of specific passages to larger debates, discussions, and themes in the real world.
While some of these students can learn to strategize well enough to pull themselves into the low 700s, others stay stuck in the mid-to-high 600s on the SAT and below 30 on the ACT because they can’t get to the big picture from the details. But these students only have a reading problem in the sense that they don’t have enough context to pull together all the pieces. What looks like a timing or a strategy problem is actually indicative of something deeper: a knowledge problem. And in the short term, there’s no way to compensate for that.
As long as I’m in full-out combat mode… One more snipe.
Plenty of people love to hate the SAT because of the purported lack of “critical thinking” it requires.
But what about the other side?
I’ve had more than one parent tell me that their child does wonderfully on tests in school because they can just memorize things and spit them back, then forget those things as soon as they’re done.
They say this as if it is a good thing. (For the record, this is not the sort of content-based education I support.)
Their primary problem with standardized tests is that their children cannot simply memorize their way to a high score. They are upset that the SAT and ACT do not test rote memorization.
What they do want, I suspect, is simple: a test on which their children can achieve a high score — one that they’re not embarrassed to utter when their friends inevitably start comparing their children’s scores, and very preferably one high enough to get those children into a college whose name they can slip into polite conversation with oh-so-faux humility (“Boy, she really learned a lesson waiting until the last minute to do all those applications, but wasn’t it great that she could choose between Harvard, Duke and Amherst?” That’s a direct quote, by the way.)
If such a test involves rote memorization, they’re all for it.
I’ve also had more than one student tell me that they’re good at tests that just ask them to plug in the formula but that they don’t do so well when they have to, like, figure things out.
They say the words “figure things out” with striking distaste, and without the slightest sense of irony. Sometimes they even wrinkle their noses as they say it.
Often these are students with serious gaps in their knowledge and a marked tendency to resist any new way of approaching things. That does not, however, stop them from insisting that the SAT/ACT is a stupid test that doesn’t measure anything.
The problem, of course, is that the SAT and the ACT demand flexibility. If you cannot adjust your approach to the task at hand (the way you sometimes have to in real life), you’ll never get past a certain point.
I suspect that this is what a lot of people mean when they say they’re “bad test-takers.”
To be clear, I am not saying that ALL parents or students are like this, or even that most of them are. But it is an attitude I’ve encountered repeatedly, and it’s one that I have less and less patience for.