I realize that my 2017 blogging record hasn’t exactly been stellar so far, but I’ve been hard at work on a number of projects.
First, I’m excited to announce that I am collaborating with Larry Krieger (of the original Direct Hits and APUSH Crash Course fame) on a vocabulary book for the new SAT.
But, you say, didn’t the College Board get rid of all those (not really) obscure words? Isn’t vocabulary kind of…passé? As it turns out, vocabulary is still quite relevant. Both the Reading and Writing sections still include plenty of words that are unfamiliar to many students, and we’ve found an approach that efficiently targets only the material most relevant to the new exam. Stay tuned for more details.
Second, the Critical Reader website will be getting a makeover. I’m still in the process of determining just how the site will be reorganized, but hopefully the new site will be live in the next couple of months.
In addition, I will soon be releasing an updated version of my Complete GMAT Sentence Correction guide, which integrates more material from both the 2017 Official GMAT Guide and Official GMAT Verbal Guide. Every chapter will now be accompanied by a list of relevant questions in both of these books, along with the specific sub-topics they test, and discussions of specific Official Guide questions will be woven into chapters as well. A new chapter is also devoted to strategies for working through questions, providing a bridge from exercises dealing with individual concepts to dealing with test-style questions testing multiple concepts simultaneously.
Also within the next couple of months, look for my GRE Vocabulary Workbook. Although the book does include some high-frequency word lists, it is not a vocabulary book in the traditional sense. (A bunch of those already exist, and I see no reason to add to the pile.) Rather, it’s designed to give prospective GRE-takers the chance to practice applying all the vocabulary they’ve studied. In addition to detailed strategies for working through both Text Completions and Sentence Equivalences, the book includes nearly 350 GRE-style practice questions targeting ETS’s favorite words.
The New York Times op-ed columnist Paul Krugman often talks about zombie ideas – ideas that are unsupported by any evidence but that linger on in the mainstream, where they are kept alive by Very Serious People who should really know better, but who collectively choose to bury their heads in the sand because it suits their needs to do so.
As far as the SAT is concerned, I would like to nominate two myths in particular for zombie status:
1) Arcane vocabulary
As I’ve pointed out countless times before (hey, someone has to keep saying it), virtually all of the supposedly obscure vocabulary tested on the old SAT was in fact the type of moderately sophisticated and relatively common language found in the New York Times.
As I’ve also pointed out before, this is a misconception that could be clearly rectified by anyone willing to simply look at a test, but alas, people who hold very strong convictions are wont to reject or ignore any evidence to the contrary. Call this Exhibit A for confirmation bias.
2) The guessing penalty
To be clear, there is no automatic correlation between guessing and answering questions correctly vs. incorrectly. A student can guess wildly and still get a question right, or answer with absolute certainty and get it wrong. In theory, it was possible to guess on every single question of the old SAT and still receive a perfect score; likewise, a student could conceivably answer every question confidently without getting a single one right.
The quarter-point deduction for wrong answers on the old SAT was designed as a counterbalance to prevent students from receiving scores that did not reflect their knowledge, and to prevent strategic guessers from exploiting the structure of the test to artificially inflate their scores (as can now be done on both the new SAT and the ACT).
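The arithmetic behind that counterbalance is worth spelling out. Here is a back-of-the-envelope sketch (illustrative only, using the old SAT's five answer choices and quarter-point deduction) of why a blind guess was, on average, worth exactly zero raw points, and why removing the deduction turns every blind guess into an expected gain:

```python
# Expected raw-score change from one blind guess.
# Old SAT: 5 answer choices, +1 for a correct answer, -1/4 for a wrong one.

def expected_guess_value(n_choices=5, penalty=0.25):
    """Average raw points gained per random guess."""
    p_right = 1 / n_choices
    p_wrong = 1 - p_right
    return p_right * 1 + p_wrong * (-penalty)

# With the quarter-point deduction, random guessing is score-neutral:
print(expected_guess_value())        # 0.0

# Without it, every blind guess is worth a fifth of a point on average:
print(expected_guess_value(5, 0.0))  # 0.2
```

In other words, the deduction was calibrated so that pure chance neither helped nor hurt; once it is removed, filling in bubbles at random becomes strictly profitable.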
At least one of these two myths, though, seems to make an appearance in virtually every article discussing the SAT, no matter how valid the article’s other points may be.
For example, in a recent discussion of this year’s slight drop in SAT scores (old test), Nick Anderson of The Washington Post states that “The College Board jettisoned much of the old test’s arcane vocabulary questions, dropped the penalty for guessing and made the essay optional” – a sentence that remarkably contains not just one but two SAT words!
And in her otherwise excellent Reuters article on the College Board’s failure to ensure that SAT math questions conformed to the test specifications, Renee Dudley makes several references to the “obscure” vocabulary on the old test. Just for grins, I went through her article looking closely at the choice of vocabulary and found nearly a dozen “SAT words” (including some real faves like prescient and succinct).
She also alludes to the fact that “The new test contains no penalty for guessing wrong, and the College Board encourages students to answer every question.”
As I read Anderson and Dudley’s articles, it occurred to me that the inclusion of these zombie ideas has actually become a sort of rhetorical tic, one that anyone writing about the changes to the SAT is effectively obligated to mention.
Obviously, these references involve two of the biggest changes to the test and can hardly be avoided, but I think that something more than just that is going on here.
Consider, for example, what isn’t said: although it is sometimes stated that rSAT math problems are intended to have more of a “real world” basis, the fact that geometry has been almost entirely removed from the exam is almost never explicitly mentioned.
In addition, the kind of disparaging language used to describe SAT vocabulary is notably absent when it comes to math. I have yet to encounter any piece of writing in which geometry was dismissed as an “obscure” subject that lacked any relevance to (c’mon, say it with me) “college and career readiness.” Nor does one regularly read articles sympathetic to students who whine that they’ll never actually use the Pythagorean Theorem for anything outside geometry class.
Why? Because depicting a STEM subject – any STEM subject – that way would be taboo, given the current climate. Even if the College Board has decided that geometry isn’t one of the “skills that matter most,” the virtual elimination of that subject from the test is a matter that must be pussyfooted around.
On a related note, the arcane vs. relevant discussion also plays to fears that students will be insufficiently prepared to compete in the 21st century economy. The goal in emphasizing “relevant” vocabulary is to provide reassurance that the students won’t fall behind; that the College Board can now be trusted to ensure they are prepared for the real world.
At the same time, this is essentially a rhetorical sleight of hand designed to disparage the humanities without appearing too obviously to do so – a euphemism for people who do not know what euphemisms are because, of course, such words have been deemed irrelevant, and why bother to learn things that aren’t relevant?
The unspoken implication is that acquiring a genuinely rich, adult-level vocabulary is not really an important part of education; that it is possible to be prepared for college-level reading equipped with only middle school-level words; and that it is possible to develop “high level critical thinking skills” without having a commensurate level of vocabulary at one’s disposal. In short, that it is possible to be educated without being educated.
That is of course not possible, but it provides a comforting fantasy.
Call this the respectability politics of anti-intellectualism – a way of elevating ignorance to the level of knowledge by painting knowledge not as something overtly bad but as something merely irrelevant. That is a much subtler and more innocuous-sounding construction, and thus a far more insidious one.
As for the “guessing penalty” myth… This phrase is in part designed to reinforce a narrative of victimization. Its goal is to elicit pity for the poor, under-confident students whose scores did not reflect what they knew because they were just too intimidated to bring themselves to pick (C), even if they were almost sure it was the answer.
Framing things in terms of guesses rather than wrong answers makes it much easier to evoke sympathy for these students. After all, why should anyone – especially a member of an already oppressed group – be punished for guessing?
The conflation of guessing and punishment also helps perpetuate a central American myth about education, namely that more confidence = higher achievement. By that logic, it is assumed that students (sometimes implicitly but often explicitly understood as female, underrepresented-minority, and first-generation or low-income) would perform better if only they knew they wouldn’t lose additional points for taking a risk. If these students felt more confident, so the argument goes, their scores would improve as well.
In reality, however, there is often an inverse relationship between confidence and knowledge: if anything, the most confident students tend to be the ones who least understand what they’re up against. (True story: the only student who ever told me he was going to answer every question right was scoring in the high 300s.) Helping these students feel more confident does nothing to increase their knowledge and can actually cause them to overestimate their abilities. In fact, when students begin to acquire more knowledge and gain a more realistic understanding of where they stand, it is common for their confidence to decrease.
The really interesting part about the phrase “guessing penalty,” however, is that it can also be understood in another way – one that directly contradicts the way described above.
An alternate, perhaps more charitable, interpretation of this phrase is that students were formerly penalized for guessing too much. Not realizing that they would lose an extra quarter-point for wrong answers, they would try to answer every question, including ones they had no idea how to do, and lose many more points than necessary.
Understood this way, the term “guessing penalty” refers to the fact that the scoring system made it almost impossible for students to wild-guess their way to a high score. I suspect that this was the original meaning of the term. (As a side note, I can’t help but wonder: when people argued for the elimination of the quarter-point penalty, did they realize that they were actually arguing in favor of making the SAT easier to game?)
According to this view, students who cannot afford tutors or classes to teach them “tricks” about which questions to skip cannot possibly compete with their more privileged peers. Here again, the obvious goal is to frame the issue in terms of equity.
At this point, one might observe a contradiction: students on one hand were described as being so cowed by the thought of losing ¼ of a point that they could not even bring themselves to guess, and yet they were on occasion also presented as being so oblivious to that penalty that they tried to answer every question.
But back to the subject at hand.
Another reason I suspect the socio-economic argument against the “guessing penalty” has so much traction is that it would seem to be backed up by commonsense reality.
While plenty of students managed to figure out the benefits of skipping sans coaching, it is also true that a certain type of student could benefit significantly from some help in that department. Given two students with the same level of foundational knowledge, starting scores, and ability to integrate new information, the one with the tutor would typically be at an advantage. That’s pretty hard to dispute.
Whether this particular type of help is inherently more problematic than other types of help – help that more privileged students will continue to receive, quarter-point penalty or no quarter-point penalty – is, however, subject to debate.
Based on my experience, I would argue that the quarter-point deduction in fact made the old SAT a harder test to tutor overall than it would have been otherwise, and far less vulnerable to the kind of simple tricks and strategies that mid-range students can, to some extent, use on both the new SAT and the ACT.
The reality is that teaching students to skip questions on the old SAT was not always such a straightforward process; in some cases, it was a downright nightmare. It was only really effective when students had a good sense of which questions they were likely to answer incorrectly – that is, when the only questions they consistently got wrong were the ones they had difficulty answering. Unfortunately, this was usually only the case for about the top 10-15% of students.
In contrast, trying to help a student who was consistently both confident and wrong figure out which and how many questions to skip was often an exercise in futility. Because such students often didn’t know what they didn’t know, and had a corresponding tendency to overestimate their knowledge, there was no clear correlation between how they perceived themselves to be doing and how they were actually doing. This was most problematic on the reading section, where easy and hard questions were intermingled; there was no way to tell them, for example, to focus on the first twenty questions.
When students’ knowledge was really spotty, it was difficult to determine whether they should even be encouraged to skip more than a few questions on the entire test because there was absolutely no guarantee they’d get enough of the questions they did answer right to save their score from being a complete disaster. And it was also necessary to be careful when discussing which question types to avoid because if students came across one such question phrased in an unfamiliar way, they might not recognize it as something to avoid.
As a tutor, I came to loathe those situations because they forced me to treat the test as a cheap guessing game, particularly when I was working with students short-term. Eventually, I stopped tutoring people in that situation altogether because things were so hit-or-miss. Often, their scores did not improve at all, and sometimes they even declined.
In addition, some students flat-out refused to even try skipping, regardless of how much I begged/pleaded with/cajoled them. I had students who repeatedly promised me they would try skipping some questions on their next practice test and then answered every question anyway, every time. I never even managed to figure out how many questions, if any, they should skip, and so I couldn’t advise them.
At the opposite extreme, I had students who knew – knew – that they could skip at most one or two questions suddenly freak out on the real test and skip seven.
The point here is that no matter how much tutoring they had received, and no matter how many thousands of dollars their parents had paid, the kids were the ones who ultimately had to self-assess in the moment and make the decisions about what they likely could and could not answer. Sometimes they stuck to the plan, and sometimes they panicked or got distracted by the kid sitting in front of them tapping his pencil and spontaneously threw out everything we’d discussed. No one could do it for them. And if their assessments were inaccurate and they messed up, their score inevitably took a real hit. The limits of tutoring were exposed in a very blatant way.
One last point:
On top of everything I’ve discussed so far, there is also the issue of which groups of students get compared in discussions about equity. When it comes to test-prep, there is a foundational level below which strategy-based tutoring is largely ineffective. If we’re talking about the most profoundly disadvantaged students, then it’s unlikely the kind of classes or tutoring that are generally blamed for the score gap would bring these students up to anywhere remotely close to the range of their middle-class peers.
Yes, certain individual students might draw considerable benefit, but on the whole, the results would probably be quite small. The amount of intervention needed to truly close the gap would be staggering, and it would have to start long before eleventh grade. But that’s a deep systemic issue that goes far beyond the SAT, and thus it’s easier to simply make superficial changes to the test.
I suspect – although I do not have any hard evidence to back this up – that the effects of tutoring are felt most strongly somewhere in the middle: between say, the lower-middle class student and the upper-middle class student who attend similarly good schools, take similar classes, and have similar skills and motivation levels – students who stand to benefit more or less equally from tutoring. If the former cannot even afford to take a class while the latter meets with her $150/hr. private tutor twice a week for six months, there’s a pretty good chance the difference will show up in their scores.
This is of course still a problem, but it’s a somewhat different problem than the one that usually gets discussed.
Moreover, the elimination of the wrong-answer penalty will give privileged mid-range students an even larger advantage. Yes, students who do not have access to coaching can now guess randomly without worrying about losing additional points, but students who do have access to coaching can be taught to guess strategically, filling in entire sections with the same letter to guarantee a certain number of points while spending time on the questions they’re most likely to answer correctly.
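How much is this strategy worth? A quick simulation makes the point (illustrative numbers only, assuming four answer choices and a roughly even answer-key distribution, neither of which is claimed by the original article):

```python
# Rough simulation of "same-letter" strategic guessing on a no-penalty,
# 4-choice test: always bubble the first choice for every skipped question.
import random

def average_points_from_same_letter(n_guessed=20, n_choices=4,
                                    trials=10_000, seed=0):
    """Average raw points earned by blind-guessing the same letter
    on n_guessed questions with a uniformly random answer key."""
    rng = random.Random(seed)
    choices = list(range(n_choices))
    total = 0
    for _ in range(trials):
        key = [rng.choice(choices) for _ in range(n_guessed)]
        total += sum(1 for ans in key if ans == 0)  # always answer "A"
    return total / trials

print(average_points_from_same_letter())  # ~5.0, a quarter of 20 guesses
```

Because the key is (approximately) uniform, picking one letter throughout captures about a quarter of the guessed questions with zero risk, while the saved time goes to questions the student can actually answer – exactly the exploit the old deduction was designed to neutralize.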
This is particularly true on the reading section. Because there are fewer question types, and the passages are not divided up over multiple sections, students on the lower end of average who have modest goals can be more easily taught to identify what to spend time on and what to skip than was the case before.
The result is that the achievement gap is unlikely to disappear anytime soon, regardless of the College Board’s machinations.
In a Washington Post article describing the College Board’s attempt to capture market share back from the ACT, Nick Anderson writes:
Wider access to markets where the SAT now has a minimal presence would heighten the impact of the revisions to the test that aim to make it more accessible. The new version [of the SAT], debuting on March 5, will eliminate penalties for guessing, make its essay component optional and jettison much of the fancy vocabulary, known as “SAT words,” that led generations of students to prepare for test day with piles of flash cards.
Nick Anderson might be surprised to discover that “jettison” is precisely the sort of “fancy” word that the SAT tests.
But then again, that would require him to do research, and no education journalist would bother to do any of that when it comes to the SAT. Because, like, everyone just knows that the SAT only tests words that no one actually uses.
That reminds me of a discussion I had a couple of days ago with a teacher friend who was complaining that her students clung too rigidly to one side of an argument, that they had trouble understanding nuances.
“V.,” I said. “Your students probably don’t even know what nuance is. The concept is foreign to them. It’s a word they see on a flash card when they’re studying for the SAT and forget five seconds later.”
The people with us, both highly educated adults (and one the parent of a high school student), laughed. It had never occurred to either of them that “nuance” could be considered a difficult word.
I was therefore obliged to give them my standard spiel about how many of the words that a 16-year-old who never picks up a book would consider obscure are in fact perfectly common to educated 50-year-olds. But since those 50-year-olds can only remember the SAT from the perspective of a high school student, they remain convinced that the words it tests are actually obscure.
They continued to laugh, but uncomfortably. I think they were both a little horrified.
So sorry, Nick. Next time you write about the SAT, you might want to actually look at some tests and see what kind of vocabulary gets tested there. Either that, or you should make sure to avoid words like “jettison” and “stagnated” (which I happened to notice you used later in the article).
Or maybe you should just make some flash cards for your readers.
I’ve been following Diane Ravitch’s blog for a while now. I think she does a truly invaluable job of bringing to light the machinations of the privatization/charter movement and the assault on public education. (I confess that I’m also in awe of the sheer amount of blogging she does — somehow she manages to get up at least three or four posts a day, whereas I count myself lucky if I can get up one every couple of weeks.)
I don’t agree with her about everything, but I was very much struck by this post, entitled “The Reformers’ War on Language and Democracy.”
Maybe it is just me, but I find myself outraged by the “reformers'” incessant manipulation of language.
“Reform” seldom refers to reform.
“Reform” means privatization.
“Reform” means assaults on the teaching profession.
“Reform” means eliminating teachers’ unions, which fight for better salaries and working conditions.
“Reform” means boasting about test scores by schools that have carefully excluded the students who might get low scores.
“Reform” means using test scores to evaluate teachers even though this practice has negative effects on teacher morale and fails to identify better or worse teachers.
“Reform” means stripping teachers of due process rights or any other job security.
“Reform” means that schools should operate for-profit and that private corporations should be encouraged to profit from school spending.
“Reform” means acceptance of privately managed schools that operate without accountability or transparency.
“Reform” means the incremental destruction of public education.
Reading Ravitch’s post, I couldn’t help but think about the linguistic games that the College Board is playing — the College Board under David Coleman having become a central player in the “reform” movement.
As Ravitch points out, however, the word “reform” has become a euphemism for a whole host of destructive practices.
The point of a euphemism is to make an unpleasant or potentially offensive reality more palatable by presenting it in neutral or even positive terms. “Reform” is, of course, a nice, neutral/positive word, which is why it makes such an effective euphemism, and thus why it was seized upon in the first place.
Now, “euphemism” is a word that is tested on the current SAT. It falls into the categories of both “hard” vocabulary and content knowledge: it’s a pretty sophisticated word, but it’s also the sort of specific rhetorical device that students are presumably (or at least should be) learning in English class.
Presumably, it’s also the sort of word that is now considered “irrelevant.” And that got me once again thinking about just what the College Board means by “relevant.”
When I considered the words held up as examples — analyze, synthesize, hypothesis — it occurred to me that relevant also means something like “neutral.” No one would argue that these words aren’t important in school, but they are also exceedingly inoffensive, and I don’t think that’s an accident.
In contrast, when I look back through recent SATs, I’m struck by the number of “loaded” words that appear on the exam — words like partisan, obsequious, polemic, pundit, jargon, convoluted, deference, transparency, obfuscation.
These are incredibly negative words, not to mention incredibly political ones. While these are certainly not the kinds of words most high school juniors encounter on a daily basis, in the classroom or out, they are most certainly “relevant.” They are words that educated people use to critique politicians and corruption and so-called reform movements. People — teenagers — do not “naturally” or spontaneously acquire the vocabulary to understand and follow these types of adult phenomena. Gaining access to these words means gaining access to these concepts. How could someone make sense out of Rush Limbaugh without the word pundit?
The conflation of “relevant” with “neutral,” I think, reflects a world view that conflates neutral language, or neutral tone, with objective reality — that there is only one answer, that what is simply is, and any possibility of criticism is therefore precluded. Moreover, anyone who does attempt to criticize can be dismissed as fringe, unstable, “irrelevant,” etc., and therefore unworthy of serious consideration.
Interestingly, by asking students to identify the author’s attitude in very neutral-sounding passages, the current SAT makes the point that sounding neutral is not the same as being neutral. That’s a subtle but exceedingly important idea: in reality, people can use extremely neutral language to propose all sorts of crazy things. The fact that their tone is reasonable does not mean that their ideas are reasonable (ahem, Ben Carson). Learning to think critically involves acquiring the tools to distinguish between those two things, and to spot inconsistencies.
The new SAT, in contrast, barely deals with tone and attitude, never mind the distinction between them. (Because, of course, appearance is the same as reality, and things should be taken at face value, right?)
Furthermore, the exclusive focus on second meanings is now beginning to strike me as suspect as well. Obviously, yes, a number of very common words in English have multiple meanings, and understanding when words are used in non-literal ways is an important component of comprehension. (I once had a student completely misinterpret a section of a passage because he thought execute meant “get rid of” rather than “carry out.”)
Most “hard” words have one very specific meaning that is used to add a very specific connotation; learning how to use these words appropriately means gaining the ability to write in a more nuanced and sophisticated way. In contrast, the point of focusing on second meanings is essentially that words can be used to mean whatever an author wants them to mean.
By that logic:
“Black” can mean “white,”
“Reform” can mean “privatize,”
“Honor” can mean “destroy.”
A very, very long time ago — so long ago that many of the people who stumble across this post were probably in, gasp, middle school — I wrote a post about the infamous marshmallow experiment. For those of you unfamiliar with the experiment, it involved giving a group of preschool students a marshmallow and then telling them they could either eat it right then or, if they wanted to wait, could have a second marshmallow. A follow-up study revealed that the children who had elected to wait had higher SAT scores than those who ate the marshmallow immediately, thus suggesting a correlation between the ability to delay gratification and long-term academic achievement.
That correlation is something I observe pretty regularly. A student who jumps to choose the first answer she thinks sounds plausible, without really considering what it’s saying, is obviously going to have difficulty doing well. (By the way, I’m not just trying to be politically correct by using the female pronoun here — interestingly, I’ve actually seen this problem occur more frequently among girls than boys.) But the one place on the entire SAT where I consistently see this problem most clearly is in sentence completions.
Oddly enough, I wasn’t fully conscious of how that weakness presented itself until I started writing dozens (and dozens and dozens) of sentence completions for my Sentence Completion Workbook (yes, that’s a shameless plug). The more time I spent analyzing how answer choices were constructed, though, the more I realized how those questions are set up to exploit students’ tendency to jump to conclusions before fully thinking things through.
Let’s try an experiment. Look at the following question:
There has been little ——- written about de la Mare; indeed, that which has been written is at the two extremes, either appallingly ——- or bitterly antagonistic.
(A) hostile . . ambiguous
(B) recent . . illogical
(C) fervent . . complimentary
(D) objective . . sycophantic
(E) temperate . . censorious
This isn’t the easiest question, but it’s pretty doable if someone has a solid vocabulary and, much more importantly, can stay calm long enough to figure out what the sentence is actually saying.
The second blank is a little more straightforward than the first, so it makes sense to start with it. It’s the opposite of “bitterly antagonistic,” which has to be something good. Even if you don’t know what “antagonistic” means, you can make an educated guess because good things aren’t normally described as “bitterly.”
Now, when a lot of solid students scoring in the 500-600 range look at the right-hand blank, something like this happens:
(A) no, ambiguous means “unclear”
(B) no, “illogical” just doesn’t make sense
(C) “complimentary” is good, so it fits! It’s the answer. Ok, done.
When students do bother to look at (D) and (E), they can often get rid of (E) because they know that “censor” is bad. Then they look at (D), and I hear something like, “Well, I don’t know what ‘sycophantic’ means, but ‘syco-’ sounds like something bad (like a psycho), so it must be (C).”
Which of course it isn’t; otherwise, I never would have chosen this question to discuss.
(C) vs. (D) is actually a classic case of easy synonym vs. hard synonym. It is, shall we say, an ETS favorite, primarily because it plays on the oh-so-common tendency to grab at the first thing that looks like it could work.
In reality, “sycophantic” means “excessively complimentary” — as in, so over the top that it’s borderline creepy. The second side of either (C) or (D) could thus work; the answer hinges on the first blank, which is opposed to “two extremes.” That word must mean something like “not extreme,” and between “fervent” and “objective,” only the latter fits (“fervent” means “passionate”).
There is, however, an interesting phenomenon that can be observed when one looks only at the right-hand answers.
(A) . . ambiguous
(B) . . illogical
(C) . . complimentary
(D) . . sycophantic
(E) . . censorious
The words in (A), (B), and (E) have nothing to do with one another. They’re somewhat random, even if they are all negative. (C) and (D), however, have similar meanings — (D) is simply much stronger than (C). In addition, it’s much more obscure, and that’s the part that counts. Given the choice between a word that clearly fits and a word that could mean anything, most people will choose the word that clearly fits.
Furthermore, it’s not a coincidence that “complimentary” is presented before “sycophantic.” Plenty of test-takers stop as soon as they hit that word; it doesn’t occur to them that there could be another possibility later on.
But here’s the rule: Different answers to two-blank sentence completions typically contain “easy” and “hard” synonyms that could work equally well for one of the blanks. When this occurs, the more difficult synonym is usually correct. This is particularly true as you get closer to the end of the section (unless, of course, a second meaning is involved) — the answer to number two might be something very straightforward, but the answer to number seven…? Probably not.
So the bottom line:
One, don’t choose an answer until you’ve looked through ALL of your options.
Two, don’t choose an answer just because you know what it means, especially if the word for the other blank doesn’t quite fit.
And three, if you’re close to the end of a section and happen to spot an easy/hard synonym pair in different answer choices, it’s usually a safe bet to start out by assuming that the answer that contains the harder word is right. You can always reevaluate if necessary.