On the new SAT essay, part I: background

In the past, I haven’t posted much about the SAT essay. Even though my students always did well, teaching the SAT essay was never my favorite part of tutoring, largely because I’m what most people would consider a natural writer (thanks in no small part to the 5,000 or so books I consumed over the course of my childhood), and “naturals” don’t usually make the best teachers. Besides, teaching someone to write is essentially a question of teaching them to think, and that’s probably the only thing harder than teaching someone to read.

The redesigned essay is a little different, first because it directly concerns the type of rhetorical analysis that so much of my work focuses on. In theory, I should like it a lot. At the same time, though, it embodies some of the aspects of the new SAT that I find most problematic – issues that recur throughout the test but seem particularly thorny here.

I’ve been trying to elucidate my thoughts about the essay for a long time; for some reason, I’m finding it exceptionally difficult to disentangle them. Every point seems mixed up with a dozen other points, and every time I start to go in one direction, I inevitably get tugged off in a different one. For that reason, I’ve decided to devote multiple posts to this topic. That way, I can keep myself focused on a limited number of ideas at a time and avoid writing a post so long that no one can get more than halfway through!

Before launching into an examination of the essay itself, some background.

First, it is necessary to understand that the major driving force behind the essay change is the utter lack of correlation between factual accuracy and scores – that is, the rather embarrassing fact that students are free to invent examples (personal experiences, historical figures/battles/acts, novels, etc.) without penalty. Personal examples have been a particular target of David Coleman’s ire because they cannot be assessed “objectively.” The College Board simply could not withstand any more bad publicity for that particular shortcoming. As a result, it was necessary to devise a structure that would simultaneously require students to use “evidence” yet not require – or rather appear not to require – any outside knowledge whatsoever.

As I’ve pointed out before, students have been – and still are – perfectly free to make up examples on the ACT essay, but for some mysterious reason, the ACT never seems to take the kind of flak that the SAT does. Again, marketing.

My feelings on this issue have evolved somewhat over the years; they’re sufficiently complex to merit an entire post, if not more, so for now I’ll leave my opinion out of this particular aspect of the discussion.

Let’s start with the pre-2016 essay.

Despite its very considerable shortcomings, the current SAT essay is as close as possible to a pure exercise in “using evidence” – or at least in supporting a claim with information consistent with that claim (information that may or may not be factually accurate), which is essentially the meaning of “evidence” that the College Board itself has chosen to adopt.  

For all its pseudo-philosophical hokeyness, the current SAT essay does at least represent an attempt to be fair. The questions are deliberately constructed to be so broad that anyone, regardless of background, can potentially find something to say about them. (Are people’s lives the result of the choices they make? Can knowledge be a burden rather than a benefit?) Furthermore, students are free to support their arguments with examples from any area they choose – contrary to popular belief, there is ample room for creativity.

While students may, if they so choose, use personal examples, the top-scoring essays tend to use examples from literature, history, science, and current events. The example of a top-scoring essay in the Official Guide, if memory serves me correctly, is an analysis of the factors leading up to the stock market crash of 1929 – not exactly a personal narrative. In contrast, essays that rely on personal examples, particularly invented ones, tend to be vague, unconvincing, and immature. Yes, there are some students who can pull that type of fabrication off with aplomb, but in most cases, “can” does not mean “should.”

Furthermore, students who make up facts to support other types of examples are rarely able to do so convincingly. The ones who can are, by definition, strong writers who understand how to bullshit effectively – a highly useful real-world skill, it should be pointed out. But in general, the best writers tend to have strong knowledge bases (both being the result of a good education) and thus the least need to make up facts.

That is why the essay, formerly part of the SAT II Writing test, was relatively uncontroversial for most of its existence: only selective colleges required it, and so the only students who took it were students applying to selective colleges – a far, far smaller number than apply today. As prospective applicants to selective colleges, those test-takers were generally taking very rigorous classes and thus had a very solid academic base from which to draw. Remember that this was also in the days before work could be copied and pasted from Wikipedia, and when AP classes were still mostly restricted to the very top students. While plenty of smart-alecks (including, I should confess, me) did of course invent examples, the phenomenon was considerably more limited than it became after 2005, when the essay was tacked onto the SAT I.

Based on what I’ve witnessed, I suspect that the questionable veracity of many current essays also reflects the fact that many students who attempt to write about books, historical events, scientific examples, etc. simply do not know enough facts to support their arguments effectively – either because they are not required to learn them in school at all (the acquisition of factual knowledge being dismissed as “rote memorization” or “mere facts”), or because information is presented in such a fragmentary, disorganized manner that they lack the sort of mental framework that would allow them to retain the facts they do learn.

One of the unfortunate consequences of doing away with lectures, I would argue, is that students are not given the sort of coherent narratives that tend to facilitate the retention of factual information. (Yes, they can read or watch online lectures, but there’s no substitute for sitting in a room with a real, live person who can sense when a class is confused and back up or adjust an explanation accordingly.) 

At any rate, when word got out about just how ridiculous some of those top-scoring essays were… Well, the College Board had a public relations problem on its hands. The essay redesign was thus also prompted by the need to remove the factual knowledge component. 

Now, here’s where it gets interesting. As I forgot until recently, the GRE “analyze an argument” essay actually solves the College Board’s problem quite effectively. (The GMAT and LSAT also have similar essays.) Test-takers are presented with a brief argument – a letter to the editor, a summary of research in a magazine or journal, or a pitch for a new business. While the exact prompt can vary slightly, it is usually something along the lines of this: Write a response in which you discuss what specific evidence is needed to evaluate the argument and explain how the evidence would weaken or strengthen the argument.

The beauty of the assignment is that it has clearly defined parameters – there is effectively no way for students to go outside the bounds of the situation described – yet allows for considerable flexibility. The situations are also general and neutral enough that no specific outside knowledge, terminology, or coursework is necessary to evaluate arguments concerning them.

In short, it is a solid, fair, well-designed task that reveals a considerable amount about students’ ability to think logically, present and organize their ideas in writing, evaluate claims/evidence, and “dialogue” with differing points of view while still maintaining a clear focus on their own argument.

There is absolutely no reason this assignment could not have been adapted for younger students. It would have eliminated any temptation for students to invent (personal) examples while providing an excellent snapshot of analytical writing ability and remaining more or less universally accessible. It also would have been perfectly consistent with the redesigned exam’s focus on “evidence.”

Instead, the College Board essentially created a diluted version of the rhetorical analysis essay from the AP English Language and Composition exam – a very specific, subject-based essay that many students will lack prior experience writing. Students are given 50 minutes (double the current 25) to read a passage of about 750 words and write an essay in response to the following prompt:

As you read the passage below, consider how the author uses

  • evidence, such as facts or examples, to support claims.
  • reasoning to develop ideas and to connect claims and evidence.
  • stylistic or persuasive elements, such as word choice or appeals to emotion, to add power to the ideas expressed.

Write an essay in which you explain how the author builds an argument to persuade his/her audience that xxx. In your essay, analyze how the author uses one or more of the features in the directions that precede the passage (or features of your own choice) to strengthen the logic and persuasiveness of his/her argument. Be sure that your analysis focuses on the most relevant features of the passage.

Your essay should not explain whether you agree with the author’s claims, but rather explain how the author builds an argument to persuade his/her audience.

Before I go any further, I want to make something clear: I am not in any way opposed to asking students to engage closely with texts, or to analyzing how authors construct their arguments, or to requiring the use of textual evidence to support one’s arguments. Most of my work is devoted to teaching people to do these very things.

What I am opposed to is an assignment that directly contradicts claims of increased equity by testing skills only a small percentage of test-takers have been given the opportunity to acquire; that misrepresents the amount and type of knowledge needed to complete the assignment effectively; and that purports to reflect the type of work that students will do in college but that is actually very far removed from what the vast majority of actual college work entails.

In my next post, I’ll start to look at these issues more closely.

The problem with “explain your answer”

I’m not sure how I missed it when it came out, but Barry Garelick and Katherine Beals’s “Explaining Your Math: Unnecessary at Best, Encumbering at Worst,” which appeared in The Atlantic last month, is a must-read for anyone who wants to understand just how problematic some of Common Core’s assumptions about learning are, particularly as they pertain to requiring young children to explain their reasoning in writing.  

(Side note: I’m not sure what’s up with the Atlantic, but they’ve at least partially redeemed themselves for the very, very factually questionable piece they recently ran about the redesigned SAT. Maybe the editors have realized how much everyone hates Common Core by this point and figured it would be in their best interest to jump on the bandwagon – while assuming that the general public hasn’t yet drawn the connection between CC and the Coleman-run College Board?)

I’ve read some of Barry’s critiques of Common Core before, and his explanations of “rote understanding” in part provided the framework that helped me understand just what “supporting evidence” questions on the reading section of the new SAT are really about. 

Barry and Katherine’s article is worth reading in its entirety, but one point struck me as particularly salient:

Math learning is a progression from concrete to abstract… Once a particular word problem has been translated into a mathematical representation, the entirety of its mathematically relevant content is condensed onto abstract symbols, freeing working memory and unleashing the power of pure mathematics. That is, information and procedures that have become automatic free up working memory. With working memory less burdened, the student can focus on solving the problem at hand. Thus, requiring explanations beyond the mathematics itself distracts and diverts students away from the convenience and power of abstraction. Mandatory demonstrations of “mathematical understanding,” in other words, can impede the “doing” of actual mathematics.

Although it’s not an exact analogy, many of these points have verbal counterparts. Reading is also a progression from concrete to abstract: first, students learn that letters are abstract symbols that correspond to specific sounds, which get combined in various ways. When students have mastered the symbol/sound relationship (decoding) and encoded it in their brains, their working memory is freed up to focus on the content of what they are reading, a switch that normally occurs around third or fourth grade.

Amazingly, Common Core does not prescribe that students compose paragraphs (or flow charts) demonstrating, for example, that they understand why c-a-t spells cat. (Actually, if anyone has heard of such an exercise, please let me know. I just made that up, but given some of the stories I’ve heard about what goes on in classrooms these days, I wouldn’t be surprised if someone, somewhere were actually doing that.)

What CC does do, however, is require a slightly higher-level equivalent – namely, the continual citing of textual “evidence.” As I outlined in my last couple of posts, CC, and thus the new SAT, often employs a very particular definition of “evidence.” Rather than use quotations, etc. to support their own ideas about a work or the arguments it contains (arguments that would necessarily reveal background knowledge and comprehension, or lack thereof), students are required to demonstrate their comprehension over and over again by “staying within the four corners of the text,” repeatedly returning to it to cite key words and phrases that reveal its meaning – in other words, to demonstrate their understanding of the (presumably) self-evident principle that a text means what it means because it says what it says. As is true for math, this entire approach to reading confuses the demonstration of a skill with “deep” possession of that skill.

That, of course, has absolutely nothing to do with how reading works in the real world. Nobody, nobody, reads this way. Strong readers do not need to stop repeatedly in order to demonstrate that they understand what they’re reading. They do not need to point to words or phrases and announce that they mean what they mean because they mean it. Rather, they indicate their comprehension by discussing (or writing about) the content of the text, by engaging with its ideas, by questioning them, by showing how they draw on or influence the ideas of others, by pointing out subtleties other readers might miss… the list goes on and on.   

Incidentally, I’ve had adults gush to me that their children/students are suddenly acquiring all sorts of higher-level skills, like citing texts and using evidence, but I wonder whether they’re actually being taken in by appearances. As I mentioned in my last post, although it may seem that children being taught this way are performing a sophisticated skill, they are actually performing a very basic one (“rote understanding”). I think Barry puts it perfectly when he writes that it is “as if the purveyors of these practices are saying: ‘If we can just get them to do things that look like what we imagine a mathematician does, then they will be real mathematicians.’”

In that context, these parents’/teachers’ reactions are entirely understandable: the logic of what is actually going on is so bizarre and runs so completely counter to a commonsense understanding of how the world works that such an explanation would occur to virtually no one who hadn’t spent considerable time mucking around in the CC dirt. 

To get back to my original point, though, the obsessive focus on the text itself, while certainly appropriate in some situations, ultimately prevents students from moving beyond the text, from engaging with its ideas in any substantive way. But then, I suspect that this limited, artificial type of analysis is actually the goal.

I think that what it ultimately comes down to is assessment – or rather the potential for electronic assessment. Students’ own arguments are messier, less “objective,” and more complicated – and thus more expensive – to assess. Holistic, open-ended assessment just isn’t scalable the same way that computerized multiple-choice tests are, and choosing/highlighting specific lines of a text is an act that lends itself well to (cheap, automated) electronic grading. And without these convenient types of assessments, how could the education market ever truly be brought to scale?
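To make that cost asymmetry concrete, here is a minimal sketch in Python of the two grading problems side by side. Everything in it – the answer format, the function names, the scoring rule – is my own invention for illustration; it does not describe any actual College Board or testing-vendor system.

    # Purely illustrative sketch: none of this reflects a real scoring system.

    def grade_citation(response: str, key: str) -> int:
        # A line-citation answer (e.g. "lines 12-14") has exactly one
        # acceptable form, so scoring it is a single string comparison:
        # instant, deterministic, and essentially free at any scale.
        return 1 if response.strip().lower() == key.strip().lower() else 0

    def grade_essay(response: str) -> int:
        # An open-ended argument has no answer key. Scoring it means paying
        # trained human readers (or trusting a language model), neither of
        # which scales the way the comparison above does.
        raise NotImplementedError("requires holistic human scoring")

    print(grade_citation("Lines 12-14", "lines 12-14"))  # prints 1

The first function is the kind of thing a testing company can run millions of times for pennies; the second is a recurring budget line. Once you see the choice in those terms, the appeal of “four corners of the text” assessment becomes much easier to explain.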