I have to say I never thought I’d write a post singing the virtues of multiple choice tests (well, sort of). Despite the fact that much of my professional life is dictated by such exams, I’ve never had any overwhelming liking for them. Rather, I’ve generally seen them as a necessary evil, a crudely pragmatic way of assessing fundamental skills on a very large scale. Sure, the logic and elimination aspects are interesting, but they’ve always paled in comparison to the difficulty of, say, teaching a student to write out a close reading of a passage in their own words. People might argue that learning to do so is irrelevant (obviously I disagree, but I’m not going into that here), but basically no one is disputing that it’s hard. At any rate, I’ve always assumed that given the alternative between an essay-based test and a multiple-choice one, the former would invariably be superior.

By this point, you’d think I’d have taken a lesson from the SAT and learned not to think in such extreme terms, but from time to time I apparently still fall prey to that kind of thinking. A couple of weeks ago, however, I had a conversation that made me reconsider that assumption. Not toss it out the window, mind you, but look at it in a slightly more nuanced manner.

It all started when a friend who teaches high school French invited me for dinner. When I arrived, she was in the process of grading a batch of tests she’d given her AP students, and she was becoming increasingly concerned. She’d asked them to write a short essay comparing the main themes of two short stories — a perfectly straightforward assignment, she’d thought — but virtually none of the students had actually written statements directly comparing the stories. Instead, they’d simply written every piece of information they knew about the stories, never indicating how it answered the question.

My friend was really and truly shocked. It was not a notably difficult task she’d given them, and they’d discussed the stories in class. But out of a class of over 20 students, only two had made general statements directly comparing the stories.

So now my friend had a dilemma: her students had not actually completed the assignment correctly, but they had written something. If she docked a lot of points, she would have to deal with a gaggle of hysterical sixteen-year-olds and their even more hysterical parents (one of whom had recently sent her a very lengthy email begging her to raise her daughter’s grade to an A- simply because she loved French so much), all insisting that it was unfair of her to have taken the points off because the students had written a lot.

And if she gave out too many low grades, she’d end up shouldering the blame for not having taught them properly. (She’d had a 100% AP pass rate for the last two years, a statistic that included kids who shouldn’t have been taking AP but whom she’d been forced to let in anyway; her ability to teach wasn’t in question.) But on the other hand, 95% of her students hadn’t actually demonstrated mastery of the task at hand. They didn’t deserve to be rewarded for something they hadn’t accomplished.

In the end, she compromised and docked half the points she’d originally intended to take off.

It was an entirely reasonable decision, but it also left her with a bad taste in her mouth. The real problem, of course, was that her students didn’t seem to know how to present arguments in terms of main ideas — at least not in French. But she had so much content she needed to cover before the AP exam that she couldn’t afford to lose even part of a class, never mind a few days or a week, just to try to remedy the issue.

There was also the deeper issue that her students had apparently been conditioned to believe that if they wrote something, anything, it would be good enough, and that they could at least argue for partial credit.

“You know,” my friend said over dinner, “there’s really something to be said for multiple choice tests. For all their shortcomings, they really lay it on the line — you either know it or you don’t.”

And that got me thinking. I agreed with my friend — I’d written as much in who knows how many blog posts — but somehow watching her worry put things in a new light.

The way things stood, she didn’t actually know how much her students understood about the relationship between the stories’ themes. It could have been a language problem, although that seemed unlikely; some of them had managed to write plenty otherwise. It could have been that they’d never learned to write general statements period. It could be that they had learned to do so in English class but that it didn’t occur to them to do so in French. It could be that they understood the relationship between the stories but didn’t see the necessity of spelling it out because they thought that all of those details were sufficient to get the point across.

That last part wouldn’t have surprised me in the least. One of the hardest things about teaching writing is conveying just how explicit one must be in order for the reader to follow the writer’s thoughts. Mastering that skill requires an extremely high level of metacognition — writers must essentially set aside their knowledge of a subject and place themselves in the readers’ shoes, asking themselves objectively whether what they’ve written makes sense. That’s a challenge for a lot of adult writers, never mind sixteen-year-old ones.

But difficulty expressing one’s ideas in writing does not always translate into the inability to identify those ideas in someone else’s writing. I’ve worked with kids who simply couldn’t put answers in their own words but could immediately and unwaveringly identify correct answers when they saw them. They knew the idea they were trying to convey, but they just didn’t yet have the vocabulary or the verbal acumen to get it across in so many words themselves. For students in this category, multiple choice tests can be a very useful vehicle for showing what they know.

Less generously, of course, multiple choice tests can be an extremely useful tool for eliminating the bullshit factor and for determining just what students actually understand. To be clear: given the option between demanding, rigorously graded essay exams and multiple choice tests, I clearly come down in favor of the former. The problem, however, is the “rigorously graded” part. If teachers are pressured (by the administration, by parents, implicitly by college admissions officers) to reward any indication of effort, even when it seriously calls into question the student’s actual mastery of the task at hand, then essay tests can actually be less rigorous than well-designed multiple choice tests.

To be sure, multiple choice tests are not a panacea, but properly used, they can offer a revealing assessment. At best, they make it next to impossible for students to consistently choose correct answers without having mastered particular concepts. The real issue is what teachers can do with the feedback such tests provide. If the curricular constraints are such that they can’t address the underlying problems… Well, I’m honestly not sure what to say to that.