Three levels of reading incomprehension

When I first started tutoring reading for the SAT and the ACT, I took a lot of things for granted. I assumed, for example, that my students would be able to identify things like the main point and tone of a passage; that they would be able to absorb the meaning of what they read while looking out for important textual elements like colons and italicized words; and that they, at bare minimum, would be able to read the words that appeared on the page and sound out unfamiliar ones.

Over the last few years, however, I’ve progressively shed all those assumptions. When I start to work with someone, I now take absolutely nothing for granted. Until a student clearly demonstrates that they’ve mastered a particular skill, I make no assumptions about whether they have it. And that includes reading the words as they appear on the page.

Inverse relationships

I’ve come up with a formula

The amount of time a curriculum devotes to teaching critical thinking is inversely proportional to the actual critical thinking skills that the students acquire.

Think of it this way:

Critical thinking skills can only develop as the result of accumulated subject-specific knowledge, not as the result of learning “critical thinking” strategies in the abstract.

The more time students spend learning formal processes (e.g. identifying the main point) designed to teach them “critical thinking” skills in the abstract, the less time they spend obtaining subject-specific knowledge (e.g. biology, history).

Thus, the more time students spend learning formal processes designed to teach them critical thinking skills, the less likely they are to acquire the very knowledge that would allow them to think critically.

Or to put it in mathematical terms, where CT is defined as actual critical thinking ability and ct is defined as abstract, formal processes designed to promote “critical thinking”:

CT ∝ 1/ct

The flip side of the “critical thinking” debate

As long as I’m in full-out combat mode… One more snipe.

Plenty of people love to hate the SAT because of the purported lack of “critical thinking” it requires.

But what about the other side?

I’ve had more than one parent tell me that their child does wonderfully on tests in school because they can just memorize things and spit them back, then forget those things as soon as they’re done.

They say this as if it is a good thing. (For the record, this is not the sort of content-based education I support.)

Ivy League-educated journalist takes test for teenagers, is underwhelmed (or: What exactly does Elizabeth Kolbert consider “critical thinking?”)

So Debbie Stier’s book, The Perfect Score Project, is officially out.

While I don’t agree with every single one of Debbie’s recommendations (we’ve gone back and forth over some of them for months), I am utterly, phenomenally, incredibly proud of her for actually seeing this thing through to the finish. I can’t believe she made it, and I am so, so happy for her.

I’m not just saying this because there’s an entire chapter about me. Seriously. Although I do have to admit she did a pretty good job of capturing my personality, the good and the, uh, prickly.

And for the record, I’ve been expecting the backlash re: “helicopter parenting” à la The Atlantic. I’m planning to post my contribution to the discussion here since the forum moderators were kind enough to remove what I considered a surpassingly civil piece of commentary (perhaps I wasn’t supposed to mention my real name?), on the off chance that someone actually wants to read the real back story behind the sensationalistic headline. In all fairness, though, you slap on a headline like “I Took the SAT Seven Times to Help My Son Get Into College” (NOT Debbie’s decision, and only tenuously related to the real story!), and the crazies are going to come swarming out of the woodwork.

But here I’d like to discuss Elizabeth Kolbert’s article in The New Yorker. As I told Debbie, I found the article surpassingly trite and irritating. Not just because Kolbert actually ends with the SAT cliché to end all clichés (c’mon now, say it with me: The only thing the SAT tests is how well you take the SAT!) but also because she seems more concerned with wallowing in her own anxieties and pre-conceived notions about the test than in actually reading the book that Debbie wrote.

On one hand, I’m sure Kolbert gave readers the article they wanted: no one wants to hear that the SAT is worth something. Thinking about it, I realize that Debbie’s done something incredibly subversive — she’s written a book in which she 1) dares to take the SAT seriously, 2) openly admits that she likes it (oh horror of horrors!), and 3) suggests that doing well on it might actually require not only knowledge but actual work.

Taken together, that trifecta represents such a fundamental attack on received wisdom about the test that it’s a wonder anyone was willing to publish the book at all!

What’s interesting to me about Kolbert’s article, however, is how it embodies some of the central tropes and contradictions that inevitably run through discussions about the SAT. (Yes, I know that last sentence is written in academic-ese, but there is literally no other way to say it.)

Kolbert states:

As an adult, I found the test more difficult than I had as a teen and, at the same time, more disappointing. Many of the questions were tricky; some were genuinely hard. But, even at its most challenging, the exercise struck me as superficial. Critical thinking was never called for, let alone curiosity or imagination.

There are a couple of things to notice here. First, Kolbert invokes the standard straw man argument, criticizing the SAT for failing to do something it was never intended to do. American university applicants — unlike those in virtually every other country in the world — have ample opportunity to demonstrate their curiosity and imagination, and are in fact encouraged to do so. The SAT is intended to give a general snapshot of applicants’ ability to apply basic reading, math, and writing skills in unfamiliar settings. (Sometimes that’s called for in the real world too.) The point is not to test creativity; the point is to test the ability to apply basic knowledge and have a reasonably objective criterion by which to compare applicants from wildly different backgrounds.

More worrisome, however, is Kolbert’s implicit attitude that a test that fails to test imagination and creativity must be bad. Imagination and creativity are of course good things, but in order to get to the point where you can make those things work for you (in college, in life), you have to master a lot of other, more “superficial” skills first. Observing how a text is structured, for example, may seem superficial, but it is a crucial prerequisite to understanding how its argument is organized, and thus to formulating a cogent response. This “basic” skill, however, is one that almost none of my students have mastered. Most of them have never been asked to do it at all. Nevertheless, Kolbert — along with most of the American educational establishment — takes it as a given that an exercise that does not explicitly encourage creativity (as she defines it — I would argue that in certain ways, the SAT demands quite a bit of creativity) must lack value.

In this regard, Kolbert makes the classic mistake of an adult looking at the SAT; she assumes that students have already mastered “rote” skills to the point where they can apply them effortlessly in “creative” and “imaginative” ways, the way an educated adult could. Having read the writing of many, many high school students, however, I can confidently state that this is not the case.

What is most interesting, though, is Kolbert’s use of the term “critical thinking.” Notably, she fails to define the term — apparently she considers it so self-explanatory as to be unworthy of a definition. This is, of course, hardly a surprise; most of the people who criticize schools, the SAT, etc. for failing to promote “critical thinking” rarely bother to give actual examples of what they mean by the term. (Presumably people who argue in favor of critical thinking would acknowledge that it involves supporting one’s arguments, but perhaps that isn’t necessary when one is arguing for so noble a cause.) In this case, however, Kolbert’s rhetorical omission allows her to criticize the test for doing precisely what she argues that it fails to do. This tortured logic becomes apparent when she states:

Soon I came to a reading section, with a long passage about writing and running by Haruki Murakami. Was this passage “analyzing an activity” or “challenging an assumption”? Both seemed valid. Was a phrase in a second reading passage “speculative” or “ironic” or “defensive”? Damned if I knew.

Now, incorrect answers to Critical Reading questions are written to sound eminently plausible — that’s one of the hallmarks of the SAT. The test consists of reading closely to determine which of those plausible-sounding answers is in fact directly supported by the text. Very, very rarely — and I do mean occasionally, as in one vaguely ambiguous question or so every five or six tests — The College Board flubs this up, but for the most part, the right answer is actually the right answer, even if it’s not an answer you expected, or like, or would phrase in a similar way. Having spent around five years dissecting quite literally hundreds of Critical Reading questions and then producing a 380-page tome dedicated to picking apart the skills required to succeed on that section, I think I’ve earned the right to state that like them or not, answers to Critical Reading questions, especially ones to tone questions, are pretty damn accurate.

“Speculative” and “ironic” are also pretty far apart tone-wise. It’s obviously possible for a statement to be both, but the chance of those two things converging in the particular section that a Critical Reading question happens to ask about is well beyond unlikely. But rather than acknowledge, for example, a propensity for reading too far into or outside what the author intended, she relies on the classic strategy of turning the blame on the test. Because everyone knows that SAT answers are tricky and ambiguous, she has no need to justify herself further. She’s simply presenting what for her audience is likely a foregone conclusion.

Furthermore, let’s consider Kolbert’s assertion that “both [answers] seemed valid,” emphasis on seemed. Is distinguishing between things that merely seem to be true and things that are actually true not a crucial component of so-called critical thinking? Or does the fact that it’s the SAT asking Kolbert to make fine distinctions negate the importance of that skill?

I am not just being sarcastic here — how would Kolbert define critical thinking, and how, exactly, do the aspects of the test that she criticizes not actually require it? Or in other words, when she criticizes the SAT for not requiring “critical thinking,” what does she actually mean? Is it the multiple-choice format she dislikes (with its ensuing elimination of any possible way of bullshitting one’s way through a question or of acquiring partial credit)? But then when she does get to write in response to an open-ended prompt, she resents having to take a stance — normally a hallmark of good analytical writing, and one that she has no difficulty demonstrating in her article (presumably she knows better than to make a bunch of vague, unconnected statements, regardless of Debbie’s advice). Is it the tiresome necessity of reading texts literally instead of (no pun intended) speculating about some deeper metaphorical significance?

What does she mean?

I would seriously like to know.

The Knowledge Deficit

So I finally got around to reading The Knowledge Deficit, E.D. Hirsch’s screed against the American educational establishment. I didn’t actually realize that the book was controversial until I mentioned it to a couple of people. I can’t, however, say that I’m surprised it elicited the reaction it did when it was first published; it contradicts pretty much all the received wisdom about what constitutes effective education in the United States, and it does so very, very bluntly. I can’t figure out how it didn’t get on my radar sooner.


The essay paradox (maybe facts aren’t so bad after all!)

Here’s something I find puzzling: the SAT essay consistently comes under fire for allowing kids to make up information without being penalized for it. Presumably, then, the people doing the criticizing believe that knowing facts, and citing them appropriately in one’s writing, is a good thing. But at the same time, those people turn around and criticize schools for promoting “drill and kill” and “rote learning.”

If students were truly exposed to endless “drill and kill,” they would presumably at least know facts. There’s almost no way *not* to remember things after hearing them repeated a certain number of times. But from what I’ve observed, most of my students have difficulty discussing their Essay examples in anything resembling an in-depth manner because they don’t know enough concrete facts — about academic subjects, at least — to be able to discuss history, literature, or current events in detail. As a result, their writing inevitably becomes vague, repetitive, and confused.

Does anyone else see the irony here?

You can’t insist that schools stop teaching facts and then be surprised when students don’t know facts!

To be clear, I understand perfectly well that students learn facts best in context. But the idea that kids are simply sitting and chanting “one times one is one, two times two is four…” is profoundly detached from the reality of American schools in 2014. (Yes, there are plenty of schools that drill kids endlessly in test-taking strategies, but that’s not what I’m talking about here.) More likely they’re clustered in groups so they can “learn from each other,” with one or two diligent kids sitting and doing the work while the others talk about what they did last weekend.

If an administrator happens to poke her head in, they’ll all look like wonderfully active and engaged learners, but the chances that they’ll retain any of what they discussed tomorrow or the next day are pretty slim. Then the ones who can afford it hire tutors to do the drilling they didn’t get in class.

The reality is that even if teachers do present fascinating, engaging, stimulating lessons, kids still need to be held responsible for mastering basic pieces of factual knowledge — the two are not mutually exclusive, and it’s a gross oversimplification to claim that they are. But learning usually involves repetition, sometimes lots of repetition. That’s just how it works. In other domains (sports, music, etc.), that’s still accepted as common sense, but when it comes to academics, all that flies out the window.

Incidentally, I now encourage my students who are big sports fans to just write about sports: a kid who can’t write a coherent argument about The Great Gatsby to save his life suddenly turns into a clear, flowing, and eloquent writer, complete with names, dates, facts, and statistics, when discussing Magic Johnson’s career. And it works: those essays are (by SAT essay standards) interesting to read, relatively painless for the kids to write, and they consistently receive scores of 10+.

Funny that I don’t see anyone complaining about “rote learning” there — if a kid wants to spend hours memorizing batting or shooting statistics, no one seems to have the least problem with it.