by Erica L. Meltzer | Jun 30, 2013 | Blog, SAT Critical Reading (Old Test)
It’s relatively common knowledge that “extreme” answers (ones that contain words like “always” or “never”) tend to be incorrect, but what doesn’t often get discussed is just why those sorts of answers are often wrong and how that reason relates to what the SAT as a whole is trying to test.
As I wrote about recently, one of the great themes of Critical Reading is that there’s “always a however” — that is, that arguments are not black-and-white, that they contain nuances. Simply saying “well, x is obviously always right, and everyone who disagrees is an idiot” is not a particularly effective mode of argumentation. That’s part of the reason Passage 1/Passage 2 exists.
Ever notice that when one author presents one side of an argument in black-and-white terms, the other author will come back arguing exactly the opposite side? That’s not a coincidence. The point that the test is trying to make is that stating that something is true very strongly does not actually make it true, nor does it negate the fact that there are plenty of arguments against it. If you want to make a solid argument, you have to seriously consider the opposite side and think about how it fits into your argument — that goes a degree beyond what the SAT tests, but the skills that P1/P2 tests are the basis for that ability.
Note, by the way, that considering degrees of nuance is exactly the OPPOSITE of what most people are taught when it comes to the SAT essay (and, might I add, in school). That’s not to say you can’t earn a very high score arguing in extreme, black-and-white terms (indeed, it’s usually much easier to do so in the space of 25 minutes), simply that training yourself to look at only one side of an argument is a stellar way to blunt your understanding of how to approach Critical Reading.
So that’s point one.
The second, closely related point is that individual experiences cannot necessarily be generalized, and that different people will hold different positions shaped by their background, social circumstances, etc. The assumptions you hold about the world might not be true for the kid who sits next to you in Bio, or who lives down the street or on another continent — even though you can all be classified as members of the category “teenager” or perhaps “teenagers studying for the SAT.” In other words, what is true about a particular person or type of person (teenager, artist, scientist, anthropologist) cannot automatically be extended to every single other person or type of person — and the failure to fully grasp that idea is one of the fallacies on which many wrong Critical Reading answers are based. This is why answers that are “too broad” or “too extreme” are frequently (but not always) incorrect. It’s also why so many SAT rhetorical strategy questions ask you to notice personal anecdotes; relying on your own experience as your sole support for a claim does not a particularly solid argument make.
One of the things that I’ve noticed while tutoring is that my students have a tendency to conflate the specific and the general — so, for example, if a passage discusses a particular painter and I ask them what the passage is about, I’ll often get a response like “painters.” At some level, they don’t seem to completely grasp the importance of the distinction between the singular and the plural; I know that because they often roll their eyes when I ask them how many painters the passage actually talked about. “Ok, fine,” I can hear them thinking, “it was only one painter, but who cares? Isn’t that, like, basically the same thing?”
Actually no, it’s completely different.
What’s really interesting to me, however, is the way in which this type of faulty reasoning replicates itself in discussions about SAT prep — the irony is really quite impressive. On one hand, this isn’t at all surprising: if people have trouble with the SAT because their logical reasoning skills are lacking, that weakness is going to manifest itself equally outside the test. But just for grins, let’s look at some of the most common claims. Notice the extreme language common to all of them:
-If I can do it, anyone can do it
Umm… Let’s examine that claim. Maybe you studied for a couple of years and raised your score 500 points to a 2350 — it certainly happens — but presumably you didn’t have a serious decoding problem (that is, you could actually read the words on the page and weren’t constantly guessing what unfamiliar words meant), were at some point capable of perceiving the underlying patterns in the test, had a good enough memory to retain all the vocabulary you were learning, and already knew many of the elementary and middle-school level words that the SAT tests (e.g. permanent, compromise, tendency). And that’s just a handful of skills for Critical Reading.
Having worked with kids who had serious trouble with all of the above, I can state pretty confidently that those are not things that can be taken for granted for every student. If a student has trouble with all of them, it’s a pretty safe bet that a 2350 — or even a 2000 — is not a realistic goal. An 1850 is actually a pretty solid score to start with; it’s usually an indicator that someone primarily needs to learn to take the test and that the underlying skills are more or less ok. In that context, 500 points is a lot, but given the steepness of the curves on Writing and Math, it’s a difference of not all that many questions.
But there is a whole huge category of kids whose basic skills cannot be taken for granted and for whom raising their score even 100 points is a massive struggle. Kids with high-achieving peers and tutors with high-achieving students tend not to be aware of their existence and therefore fall prey to another common logical misstep: if I haven’t seen it, it doesn’t exist (the underlying assumption being that they’ve already been exposed to the full range of achievement levels, or that their particular experience is somehow representative of the high school population as a whole).
Having worked with kids across pretty much the full spectrum (mid-300s to mid-700s), I can say pretty securely — and perhaps coldly — that not every kid has what it takes to improve by hundreds of points. Sometimes the deficits are just too extreme.
-SAT prep classes/tutors are completely worthless because I did really well just studying on my own
I’ve worked with lots of very smart kids who did fabulously once they were actually taught just what the test was asking them to do but who might not have figured it out by themselves. It’s nice that you did really well on your own, but you can’t conclude anything other than that tutors and/or prep classes would have been completely worthless for you. (Besides, even if you did amazingly, you might have also learned something from a tutor — for example, plenty of people manage to do very well on CR without really understanding what it’s testing.)
-All SAT prep classes/tutors are completely worthless because they just teach you all the same strategies you could read in the books
That has an element of truth when it comes to strategy-based prep (at least in terms of classes), but if you need work on the actual skills, you’re probably better off sitting with someone who can actually teach them to you.
-Since I wouldn’t have done well on the SAT without a tutor, everyone who doesn’t have one must be at a huge disadvantage. Therefore, the only thing the SAT tests is how much money you have to prep.
Again, there’s a kernel of truth — as everyone knows, there is a direct correlation between SAT scores and income. Are kids who grow up in seriously educationally deficient environments at a huge disadvantage? Of course. Are wealthy kids whose parents can afford thousands of dollars for private tutoring at an advantage? Of course. Is that fair? Of course not.
But those are generalizations, and correlation does not equal causation: there are kids who can’t afford tutoring but who can sit down independently with a prep book and figure out everything they need to know; likewise, there are kids who get tutored for a year or more and don’t increase their scores at all (or worse, see them go down). Both are outliers, but they do exist.
There are also kids whose parents can afford tutoring but who are perfectly happy to sit down on their own with a prep book. If some of them score very well, it’s usually in large part because they’ve grown up in an enriched environment and gotten all the skills they needed just from their parents and school.
The fact that a tutor helped one particular person do well can by no means be generalized into the assumption that every person needs a tutor to do well, or that it’s impossible to do well unless you come from a well-off background.
And my personal fave:
-The only thing the SAT tests is how well you take the SAT
Does anyone truly believe that a kid who doesn’t know what “permanent” means can be expected to perform at the same level as one who’s studied three languages, reads Dickens in her spare time, and holds a position in the National Classical League? Or that a kid who struggles to get B’s in Algebra II (like I did) is seriously competitive with a national Math Olympian? Somehow I don’t think so. Yes, everybody has their strengths and weaknesses, but at the extreme ends of ability, the differences are so gaping that they’re pretty much impossible to ignore.
It’s also possible to be very smart and still be missing important skills, and it’s a lot easier to blame the test for your shortcomings rather than own up to the possibility that you might not know as much as you think.
by Erica L. Meltzer | Jun 24, 2013 | Blog, General Tips
Of all the discussions floating around about SAT prep, the one I find most irritating — and pointless — is the guessing vs. skipping debate. I’ve heard all the debates by now, and I’m just not interested. I’m not a statistician, but after five years of teaching and writing what are essentially logic questions, I’ve gotten pretty good at spotting fallacies. I’ve also seen how things play out in the real world, where poorly thought-out guesses rarely pan out (at least when I’m watching).
The “guess if you can eliminate x number of answers” approach is based on the assumption that test-takers can reliably identify incorrect answers and that they will not eliminate the correct answer. From what I’ve seen, that is not even remotely a valid assumption. Certainly not for low-scoring students, but often not for higher-scoring ones either.
by Erica L. Meltzer | Jun 21, 2013 | Blog, General Tips
As you may have heard, June SAT scores are back. And if you’re reading this, chances are you’re looking to improve your score this summer and thinking about what might be possible. Maybe you want to crack 1800…or 2000…or even 2300. You’ve got about three months, which is plenty of time to accomplish…something. If you’re unhappy with what you’ve managed to accomplish on your own, you might be thinking about taking a class or working with a tutor, or maybe you’re just planning to keep plugging away on your own. Regardless of your (or your child’s) situation, however, there are some things you should keep in mind.
So, a little reality check. Some of this is going to sound awfully blunt, and probably more than a little harsh, but here goes:
The SAT is hard.
Good grades are no guarantee of a high score.
Your score is the result of what you know, whether you can apply that knowledge instantaneously and under pressure, and how well you manage yourself on the particular test you happen to get; it is not something you are entitled to because you attend a particular school or have spent x amount of time or money being tutored, or even because you work hard.
Most people will, by definition, score somewhere around average.
If there really were tricks you could use to ace the exam, lots of people would get perfect scores instead of about 300 out of 1.5 million.
However hard you think you are working, there are other students out there who are putting in much, much more. If you want to equal or surpass them, you need to be willing to work just as hard.
You are being compared to hundreds of thousands of your peers, including the very top students in the United States and some in other countries; the curve is designed to reflect that.
A tutor is not a miracle worker.
There is a skill level below which short-term strategy-based prep is usually not effective. A score below 600 after a significant amount of prep is usually a good indicator that there are a number of fundamentals missing, although higher scorers are often missing particular key skills (e.g. identifying the topic of a passage) to various degrees.
If you are missing skills, getting to the next level will require a huge amount of work, whether your score goal is 1600, 2000, or 2400. It’s as much about where you’re starting from as it is about where you want to go.
The SAT does not work like tests in school. It’s designed to gauge how well you can apply your knowledge, not whether you can simply cram in a bunch of words and formulas, to be forgotten as soon as you walk out of the exam room. If you haven’t mastered skills to the point where they’re automatic, you will not be able to apply them — or even be able to figure out when to apply them — to the test.
Today’s eleventh and twelfth grade textbooks are written at the same level that ninth grade textbooks were written at fifty years ago. If you don’t read anything other than textbooks and Sparknotes summaries, with the occasional Wikipedia article thrown in, you will most likely not be prepared for Critical Reading.
No tutor can compensate for two or five or ten years of accumulated deficits in a couple of months, never mind four or five sessions, and it is not fair or realistic to expect one to do so. A student who doesn’t know words like “surrender” and “compromise” and “permanent,” or who has reached the age of 17 without being able to consistently recognize the difference between a sentence and a fragment, is going to hit a wall unless they are willing to spend huge amounts of time filling in some of those gaps on their own.
Now that I’m starting to see lots of students who are missing important middle-school vocabulary and some who are missing basic elementary school vocabulary, I realize that Stanley Kaplan knew what he was talking about when he said that SAT prep should begin in kindergarten.
While the majority of my students improve, sometimes very dramatically, some of them do not; occasionally, their scores even go down. And students who come to me for a handful of sessions with middling scores, genuine knowledge gaps, and an unrealistic sense of just how much work they’ll need to put in to get to the next level rarely see any significant progress. (Note: taking three or four practice tests doesn’t count for much when there are people taking twenty or thirty…or more.) On the other hand, someone who has all the basics in place and just needs a little push to get to the next level might get where they want to be in a session or two. I’ve seen it happen more than once, but those people really did have things pretty much in order to start with.
I do my best to be really clear about just what I can and cannot likely accomplish in a given timeframe, but it’s a very fine line between being honest and being discouraging. I don’t want to turn away someone I could genuinely end up helping. I’ve seen enough kids pull off huge and unexpected jumps to know that it’s not my place to judge what someone is or is not ultimately capable of doing, but I don’t want to encourage people to harbor unrealistic expectations either.
I realize, by the way, that I probably shouldn’t admit all of this publicly — doing so can’t possibly be good for business — but given how convinced everyone seems to be about the existence of quick fixes, I feel responsible for saying something.
At some level, I think that the test-prep industry’s claim that there really are little “tricks” has become so ingrained in people’s psyches that they don’t fully grasp just how hard it is to raise a score, especially a Critical Reading score, until they see that 490 or 550 or 570 staring at them — again — from the computer screen. It seems impossible that they should have done what seemed (to them) like a huge amount of work and paid a lot of money, only to end up right back where they started. They don’t understand just how precisely the test has been calibrated to keep producing the same results. They hire a tutor because they think there really is some sort of magic shortcut (more than one parent has said I must know “all the tricks,” wink-wink, nudge-nudge) and are consequently very rudely shocked by just how hard they or their child will have to work to break through to the next threshold.
No matter how upfront I am about the limits of my abilities, though, I still feel responsible (and vaguely disingenuous, even though I’ve made it clear that I can promise nothing) when a student doesn’t improve. Then I start to wonder whether other tutors really do have secrets that I don’t know about.
Believe it or not, I’m not trying to discourage anyone who’s less than over-the-moon about their SAT or ACT score. If you’re planning to study this summer (and yes, I will post some actual test tips, not just whine about the decrepit state of the American school system, although I might have to get a few more posts about that in before I move on), by all means go for it: you might actually succeed in raising your score by hundreds of points.
Occasionally, like yesterday, I’ll get an email from a kid who did nothing other than work through my books and practice diligently, but who nevertheless managed to raise his CR and Writing scores by 400 points. Emails like that make my day, actually my week. They reassure me that people who put in the work actually can improve by that much, regardless of what the College Board claims.
Basically, you get out what you put in, tutor or no tutor. Most of my best students, the ones who make the 100, 150+ point improvements per section, have been incredibly self-driven. They experimented with strategies, hunted down old exams on the Internet, and read Oliver Sacks for pleasure; and when they came to me with questions, it was because they had worked through things as far as they possibly could and were genuinely stuck. The ones who were dragged by their parents, who would clearly rather have been somewhere else… Well, some of them actually improved rather impressively, too (didn’t think I was going to say that, did you?), but they did stop short of their potential. The ones who did the work in only the most perfunctory manner, however, the ones who showed no interest in really understanding the test, and who expected me to give them a secret that would allow them to reach their goal without really having to think… Would you really be surprised if I told you that they almost always ended up disappointed?
So don’t think that improving is impossible. Lots of people do it, sometimes by quite a bit. But don’t expect a 200-point improvement to fall in your lap either. Or, for that matter, be hand delivered to you on a silver platter.
by Erica L. Meltzer | Jun 18, 2013 | Blog, SAT Reading
A while back, a student who was trying to raise his Reading score came to me with a complaint: “Everyone always says that the answer is right there in the passage,” he told me. “But I feel like that’s not always the case.”
He was right, of course. He’d also hit on one of the many half-truths of SAT prep, one that frequently gets repeated with the best of intentions but that ends up confusing the heck out of a lot of people.
As I’ve written about before, most SAT prep programs spend a fair amount of time drilling it into their students’ heads that the only information necessary to answer Critical Reading questions can be found in the passages themselves, and that test-takers should never, under any circumstances, use their own knowledge of a subject to try to answer a question. They’re right (well, most of the time, but the exceptions are sufficiently rare and apply to so few people that they’re not worth getting into here).
In trying to avoid one problem, however, they inadvertently create a different one. The danger in that piece of advice is that it overlooks a rather important distinction: yes, the answer can be determined solely from the information in the passage, but the answer itself is not necessarily stated word-for-word in the passage.
A lot of the time, this confusion stems from the fact that people misunderstand what the SAT is actually testing: among other things, the ability to move from concrete to abstract, that is, to draw a connection between specific wordings in the passage and their role within the argument (emphasize, criticize, assert, etc.). The entire POINT of the test is that the answers to some questions can’t be found directly in the passage.
Correct answers tend to either describe what is occurring rhetorically in the passage or paraphrase its content using synonyms (“same idea, different words”). You need to use the information in the passage and then make a cognitive leap. It’s the ability to make that leap, and to understand why one kind of leap is reasonable and another one isn’t, that’s being tested. If you only look at the answer choices in terms of the passage’s content, they won’t make any sense, or else they’ll seem terribly ambiguous. Only when you understand how they relate to the actual goal of the question do they begin to make sense. In other words, comprehension is necessary but not sufficient.
While knowing all this won’t necessarily help you figure out any answers, it can, at the very least, help to clarify just what the SAT is trying to do and just why the answers are phrased the way they are. If you can shift from reading just for content to reading for structure — that is, understanding that authors use particular examples or pieces of evidence to support one argument or undermine another — the test starts to make a little more sense. And if you know upfront that the answer is unlikely to be found directly in the passage and that you need to be prepared to work it out yourself, you won’t waste precious time or energy getting confused when you do look at the choices. And sooner or later, you might even get to the point of being able to predict some of the answers on your own.
by Erica L. Meltzer | Jun 17, 2013 | Blog, Issues in Education
From The Faulty Logic of The Math Wars:
A mathematical algorithm is a procedure for performing a computation. At the heart of the discipline of mathematics is a set of the most efficient — and most elegant and powerful — algorithms for specific operations. The most efficient algorithm for addition, for instance, involves stacking numbers to be added with their place values aligned, successively adding single digits beginning with the ones place column, and “carrying” any extra place values leftward.
What is striking about reform math is that the standard algorithms are either de-emphasized to students or withheld from them entirely. In one widely used and very representative math program — TERC Investigations — second grade students are repeatedly given specific addition problems and asked to explore a variety of procedures for arriving at a solution. The standard algorithm is absent from the procedures they are offered. Students in this program don’t encounter the standard algorithm until fourth grade, and even then they are not asked to regard it as a privileged method
…
It is easy to see why the mantle of progressivism is often taken to belong to advocates of reform math. But it doesn’t follow that this take on the math wars is correct. We could make a powerful case for putting the progressivist shoe on the other foot if we could show that reformists are wrong to deny that algorithm-based calculation involves an important kind of thinking.
What seems to speak for denying this? To begin with, it is true that algorithm-based math is not creative reasoning. Yet the same is true of many disciplines that have good claims to be taught in our schools. Children need to master bodies of fact, and not merely reason independently, in, for instance, biology and history. Does it follow that in offering these subjects schools are stunting their students’ growth and preventing them from thinking for themselves? There are admittedly reform movements in education that call for de-emphasizing the factual content of subjects like biology and history and instead stressing special kinds of reasoning. But it’s not clear that these trends are defensible. They only seem laudable if we assume that facts don’t contribute to a person’s grasp of the logical space in which reason operates.
In other words, reform movements are largely based on the rejection of a “reality-based” concept of education. We couldn’t possibly have anything as piddling as facts interfering with the joy and beauty of learning. If a child wants to believe that 2 + 2 = 5, shouldn’t they be praised for thinking independently?
In all seriousness, though, there’s something borderline sadistic about schools refusing to teach actual, well-established knowledge, knowledge that makes learning easier. Not every student is a genius capable of re-deriving the Pythagorean theorem on their own. Yes, by all means, teach students to understand why things are true – I’ve heard from math tutors who constantly encounter kids who do just fine in calculus because they’ve learned when to plug in about four formulas but who fall down on comparatively basic SAT math because they don’t really understand why things work the way they do, or how to apply simple formulas when they’re presented in unfamiliar ways. The point is, teach them something; don’t just let them flail around trying to figure it out on their own.
What’s the point in all those centuries of accumulated knowledge if schools are just going to toss it out the window?
by Erica L. Meltzer | Jun 16, 2013 | Blog, Issues in Education
Apparently tracking is making a comeback. I was actually unaware that it had disappeared in the first place, but given that I generally try my hardest to remain immune to the latest fads emanating from education schools, that’s not exactly a surprise. As the product of twelve years of tracked classes, however, I find the subject somewhat interesting. Granted, my deliberate (and obstinate) ignorance of educational theory leaves me with little to offer beyond personal anecdote, but as someone who got to see tracking from both the top and the bottom — and who got to see both the advantages and the disadvantages of that system — I think I can offer a few insights.
I attended a high school that tracked strictly, beginning in ninth grade: all subjects were divided into standard and honors tracks, with some subjects further broken down into Basic, Standard, Honors, and AP.
by Erica L. Meltzer | Jun 13, 2013 | Blog, The Mental Game
When people ask me whether I enjoy my job, my usual response is something along the lines of, “Some people do crossword puzzles, I write SATs” — the implication being that I view the test as a sort of amusing intellectual game. The other implication, of course, is that I don’t actually do crossword puzzles.
Or, well, didn’t.
A couple of weeks ago, while I was walking downtown with a friend, I got hungry and made him sit with me in Koreatown while I indulged a late-night craving for kimbap. In return, he proceeded to pull out the NYT crossword puzzle and insist that I help him with it. I groaned and told him for the thousandth time that I’m just not good at crossword puzzles (I write SATs, isn’t that enough?!), but he wouldn’t take no for an answer, and after I managed to figure out a couple of clues (“River’s movement?” Ebb and flow), I sort of had to admit that I was having fun.
by Erica L. Meltzer | Jun 12, 2013 | Blog, Issues in Education
Stupidity from the New York Times opinion page. According to NY public middle school teacher Claire Needell Hollander:
New teachers may feel so overwhelmed by the itemization of skills in the Common Core that they will depend on prepared materials to ensure their students are getting the proper allotment of practice in answering “common core-aligned” questions like “analyze how a drama’s or poem’s form or structure … contributes to its meaning.” Does good literary analysis even answer such questions or does it pose them?
Excuse me? Studying the relationship between form and meaning is the point of literary analysis. An English teacher who doesn’t understand that has no business teaching English, no matter how “geeky” or enthusiastic she might be about her subject. Talking about feelings is what you do at a book club. Or in therapy. Ms. Hollander’s question calls to mind a teenager’s reaction when faced with a concept she doesn’t quite understand — rather than admit her perplexity, she clumsily tries to suggest that the whole thing never made sense in the first place.
To be fair, I understand her fear that schools will strip the (few) remaining bits of life from classrooms across the United States, but at least in theory, the Core’s emphasis on understanding that texts don’t magically come into existence, that they convey meaning through a series of specific choices about structure, diction, imagery, register, and so on, is one of the things that it gets right! Without understanding how texts are constructed, how things like irony, wordplay, and metaphor work, students have no tools for making literal sense out of challenging works. After years of teachers like Ms. Hollander, they have literally *never* been asked to read a text closely — fiction, non-fiction, nothing. As I’ve heard from so many students staring down baffling Critical Reading passages, “it’s just a bunch of words.”
I would be interested to know what Ms. Hollander proposes for a student who doesn’t have an emotional reaction to what she’s reading. What would she suggest? That the student just keep reading until she feels something? Eventually, she’ll just learn to fake it, but she certainly won’t learn anything. Worse yet, what if a student can’t even really understand what he’s reading? (I haven’t met many middle-schoolers — or, for that matter, many high school juniors — who could really “get” The Color Purple, never mind grapple with the issues it raises in anything but the most clichéd manner, but perhaps Ms. Hollander’s students are an exceptionally precocious bunch.) Or what about a student who had the “wrong” kind of emotion (presumably one who didn’t feel sufficiently upset about Celie’s victimhood)? What would Ms. Hollander do then?
To be clear, I’m not advocating an approach that mechanically reduces literature down to a series of dry rhetorical figures in order to avoid any discussion of the actual ideas it contains — when I was studying in Paris, I loathed that aspect of the French system — but rather one that takes into account the fact that understanding how texts say what they say is a crucial part of appreciating what they say. The best teachers I had, both in English and otherwise, were intensely passionate about their subjects, and they conveyed that passion in ways that made what they had to say unforgettable. But they never confused their love for their subjects with the kind of facile touchy-feelyness advocated here.
Thank you to Emory University English professor Mark Bauerlein for pointing out that this kind of “therapeutic” approach is actually quite manipulative. Describing a typical middle-school assignment, he writes:
Most specimens of narrative writing in the [Demonstrating Character] units involve some sort of personal experience, reflection, or opinion. One from a 7th-grade unit on Civil Rights may be the very worst, which asks students to pretend they were witnesses to the horrific bombing of the 16th Street Baptist Church and a friend was seriously injured. “What emotions are you feeling?” it proposes. “How will these events affect your future? What will you do to see that justice is served?”
As a college teacher of freshman English, I can see no sense in these assignments. They don’t improve critical aptitude, and they encourage a mode of reading and writing that will likely never happen in a college major or their eventual job. There is a theory behind it, of course, holding that only if students can relate to their subjects will they do their best and most authentic writing, not to mention explore and develop their unique selves.
The notion sounds properly student-centered, the motives educational, but in practice few 14-year-olds have the intellectual and emotional equipment to respond. Puberty turns them inside out, the tribalisms of middle school confound them, the world seems awfully big, the messages of youth culture impart fantastical versions of peers, and they’re not sure who they are.
What lurid imaginings do we throw them into when we tell them to witness a bombing? Do we really expect 7th graders to ruminate upon their integrity? Ponder these assignments closely and they start to look less benevolent and more coercive. One of them in an 8th grade unit on “Adolescent identities” mentions a short story involving self-sacrifice, then says,
Think of a time in your life where you have put someone else’s needs or wants, like a family member or friend, ahead of your own desires. Convey to an audience of your peers what the circumstances of that time were, who you sacrificed for and what led you to that decision.
A 14-year-old receiving it must wonder just how self-sacrificing he must appear. If the student doesn’t remember too much and still has to fill more pages, she will fabricate the necessary details. Should he admit to having resented the self-sacrifice? Should she congratulate herself for her good deeds? The whole exercise involves so many tricky expectations that the student wonders what implicit lesson he should take from it. (https://educationnext.org/the-me-curriculum/)
And furthermore:
Without focused training in deep analysis of literary and non-literary texts, students enter college un-ready for its reading demands. Students generally can complete low-grade analytical tasks such as identifying a thesis, charting evidence at different points in an argument, and discovering various biases. But college level assignments ask for more. Students must handle multi-layered statements with shifting undertones and overtones. They must pick up implicit and explicit allusions. They must expand their vocabulary and distinguish metaphors and ironies and other verbal subtleties.
Those capacities come not from contextualist orientations (although “outside” information helps), but from slow, deliberate textual analysis. The more teachers slip away from it, the more remediation we may expect to see on college campuses, a problem already burdening colleges with developing capacities that should have been acquired years earlier. Indeed, when ACT pored over college-readiness data from 2005, it found that “the clearest differentiator in reading between students who are college ready and students who are not is the ability to comprehend complex texts.” More reader response exercises for 9th-11th-graders are only going to exacerbate the problem. (https://educationnext.org/not-just-which-books-teachers-teach-but-how-they-teach-them/)
Amen.
by Erica L. Meltzer | Jun 7, 2013 | ACT English/SAT Writing, Blog
When a lot of students start studying for ACT English/SAT Writing, one of the first things they often wonder is whether they actually need to read the entire passage, or whether it’s ok to just skip from question to question.
My answer?
A resounding yes and no. That is, yes, they have to read everything; no, they can’t just skip from question to question.
Here’s why:
ACT English and SAT multiple-choice Writing are context-based tests. Sometimes you’ll be asked about grammar, and sometimes you’ll be asked about content and structure. Both kinds of questions are often dependent on the surrounding sentence, however. A question testing verb tense may have four answers that are acceptable in isolation but only one answer that’s correct in context. If you don’t look at the surrounding sentences and see that they’re in the past, you might not realize that the verb in question has to be in the past as well.
Furthermore, it’s often impossible to answer rhetoric questions without a general knowledge of the paragraph or passage. If you’ve been reading the passage all along, you’re a lot more likely to be able to spot answers immediately since you’ll be able to tell whether a given choice is or is not consistent with the passage. If, on the other hand, you only jump back and read the surrounding sentences when a question requires it, you’re more likely to miss important information because you won’t have the full context for them.
by Erica L. Meltzer | Jun 7, 2013 | ACT Reading
ACT Reading questions that ask about dates or time periods often appear deceptively easy. It’s easy to assume that all you have to do is go back to the passage and pick out the appropriate date. Even in a reading that includes a number of dates or years, that’s pretty straightforward once you find the correct spot in the passage, right?
Wrong.
These kinds of questions are actually inference questions in disguise, and answering them often requires you to take information from various parts of the passage and perform some very basic calculations.
For instance, one ACT passage asks about the time period when a particular kind of glass structure was least likely to be built in the United States.
Nowhere in the passage does the author actually come out and state the answer; (s)he only tells us that in the post-World War II period, many glass structures were built in the US, but that since 1973, most glass structures have been built in Europe.
We can therefore infer that after 1973, most glass structures were LESS likely to be built in the US than they were before. The answer, however, is 1975-1985 — only an approximation of what’s stated in the passage. A lot of people get confused because they can’t find a spot in the passage that states the year directly, and often they end up trying to justify a response that’s way off base.
I don’t want to suggest that the correct answer will never be directly stated in the passage; sometimes it will. But before you pick an answer just because you remember seeing it in the passage, make sure that it really does fit.
by Erica L. Meltzer | Jun 4, 2013 | Blog, SAT Grammar (Old Test)
I spend a lot of time teaching people to stop looking so hard at the details. Not that details are so bad in and of themselves — it’s just that they’re not always terribly relevant. There’s a somewhat infamous SAT Critical Reading passage that deals with the qualities that make for a good physicist, and since the majority of high school students don’t have particularly positive associations with that subject, most of them by extension tend to dislike the passage.
The remarkable thing is, though, that the point of the passage is essentially the point of the SAT: the mark of a good physicist is the ability to abstract out all irrelevant information.
Likewise, the mark of a good SAT-taker is the ability to abstract out all unimportant information and focus on what’s actually being asked.
One of the things that people tend to forget is that the SAT is an exam about the big picture — for Writing as well as Reading.
I say this because very often smart, detail-oriented students have a tendency to worry about every single thing that sounds even remotely odd or incomprehensible, all the while missing something major that’s staring them in the face. Frequently, they blame this on the fact that they’ve been taught in school to read closely and pay attention to all the details (and because they can’t imagine that their teachers could be wrong, they conclude that the SAT is a “stupid” test).
Well, I have some news: not all books are the kind you read in English class, and different kinds of texts and situations call for different kinds of reading. When you find yourself in a college social sciences class with a 300-page reading assignment that you have two days to get through, you won’t have time to annotate every last detail — nor will your professors expect you to do so. Your job will be to get the big picture and perhaps focus on one or two areas that you find particularly interesting so that you can show up with something intelligent to say.
But back to the SAT.
On CR, it’s fairly common for people to simply grind to a halt in a passage when they encounter an unfamiliar turn of phrase. For example, most people aren’t quite accustomed to hearing the word “abstract” used as a verb: the ones who ignore that fact and draw a logical conclusion about its meaning from the context are generally fine. The other ones, the ones who can’t get past the fact that “abstract” is being used in a way they haven’t seen before, tend to run into trouble. They read it and realize they haven’t quite understood it. So they go back and read it again. They still don’t quite get it, so they reread it yet again. And before they know it, they’ve wasted two or three minutes just reading the same five lines over and over again.
Then they run out of time and can’t answer all of the questions.
The problem is that ETS will always deliberately choose passages containing bits that aren’t completely clear — that’s part of the test. The goal is to see whether you can figure out their meaning from the general context of the passage; you’re not really expected to get every word, especially not the first time around. The trick is to train yourself to ignore things that are initially confusing and move on to parts that you do understand. If you get a question about something you’re not sure of, you can always skip over it, but you should never get hung up on something you don’t know at the expense of something you can understand easily. If you really get the gist, you can figure a lot of other things out, whereas if you focus on one little detail, you’ll get . . . one little detail.
The question of relevant vs. irrelevant plays out a lot more subtly in the Writing section, where people often aren’t quite sure just what it is they’re supposed to be looking for, especially when it comes to Error-IDs. As a result, they want to understand the rule behind every underlined word and phrase, regardless of whether it’s really relevant. And because about 95% of the rules tested are predictable and fixed from one test to the next, a lot of the rules they worry about aren’t terribly relevant. Worrying about every little rule makes the grammar being tested appear much more complex than it actually is.
The reality is that if you only look for errors involving subject-verb agreement, pronoun agreement, verb tense/form, parallel structure, logical relationships and comparisons, prepositions, and adjectives and adverbs, you’re going to get most of the questions right. And if an error involving one of those concepts doesn’t appear, there’s a very good chance that there’s no error at all. Thinking like that is a lot more effective than worrying about why it’s just as correct to say “though interesting, the lecture was also very long” as it is to say, “though it was interesting, the lecture was also very long.”
I’m not denying that understanding why both forms are correct is interesting or ultimately useful. I’m simply saying that if you have a limited amount of time and energy, you’re better served by zeroing in on what you really need to know.
by Erica L. Meltzer | Jun 4, 2013 | Blog, SAT Critical Reading (Old Test)
The distance between a high CR score and a truly outstanding one rarely runs along a linear path. Unlike Math and Writing, which are essentially based on a number of fixed rules and formulas and which can therefore be improved by the mastery of discrete concepts, Critical Reading cannot necessarily be improved by memorizing a few more rhetorical terms or vocabulary words. On the contrary, for someone stuck in the high 600s/low 700s on CR, raising that score into the 750+ range frequently involves completely rethinking their approach.
Given two students with identical solid comprehension skills and 650-ish scores at the beginning of junior year, the one who is willing to try to understand exactly how the SAT is asking them to think and adapt to that requirement will see rapid and dramatic improvement (often 100+ points). The other one will flounder, maybe raising their score 30 or 50 points, but probably not much higher. Occasionally, their score won’t budge at all or will even drop. They’ll get stuck and get frustrated because they just know that they deserve that 750+ score, but the one thing they will absolutely not do is change their approach. And by change their approach I mean assume that their ability to recognize correct answers without thoroughly working through the questions is considerably weaker than they imagine it to be. In other words, they have to take a step back and assume that they know a lot less than they actually do.
Let me explain: one of the things I continually find fascinating is that people can spout on for extended periods of time about the supposed “trickiness” of the SAT, yet when it comes down to it, they won’t actually take concrete steps to prevent themselves from falling for “trick” answers (i.e. answers that contain mistakes that someone who is rushing or can’t be bothered to fully read the question would likely make).
The best way I know of to reduce the possibility of getting “tricked” is to actually attempt to answer the question before looking at the answers — or at least to determine the general idea that is probably contained in the right answer. Working this way, however, requires you to abandon the assumption that you’ll be able to spot the right answer when you see it, even if you’ve made no attempt to figure it out beforehand.
Now, in case you haven’t noticed, answers to SAT CR questions are deliberately worded in a confusing manner. Unless you really know what you’re looking for, things that aren’t necessarily the case may suddenly sound entirely plausible, and things that are true may sound utterly implausible. You need to approach the answers with that knowledge and consciously be on your guard before you even start to read them. But in order to do that, you need to be willing to admit a few things:
1) Your memory probably isn’t as good as you think it is
Just because you think you remember what the passage said doesn’t mean you actually remember what the passage said — at least not all the time. Even if you remember well enough almost all of the time, it only takes a handful of slips to get you down from 800 to 720. Throw in a missed vocab question or two on each section and bang, you’re back at 680. If you want to get around the memory issue, you need to write down every single step of your process. It doesn’t have to be neat or even legible to someone other than you, but it needs to be there for the times you don’t actually remember.
2) Your thought process probably isn’t as unique as you imagine it to be
The test-writers at ETS are not stupid, and they know exactly how the average eleventh grader thinks — questions and answers are tested out extensively before they show up on the real test, and the wrong answers are there because enough high-scoring students have chosen them enough times. Don’t assume you won’t do the same. I also say this because many of my students are astonished when I trace the precise reasoning that led them to the wrong answer — before they’ve told me anything about why they chose it. They were laboring under the illusion that their thought process was somehow distinct to them. It wasn’t.
3) Sometimes, there is no shortcut
That’s a little secret that most people in the test-prep industry would rather not admit. A lot of students who are accustomed to using common answer patterns (e.g. get rid of anything that’s too extreme) to get to around 650-700 are shocked to discover that this technique won’t get them any further and that they actually just have to understand pretty much everything. Sometimes spotting the “shortcut” also requires very advanced skills that even relatively high scorers don’t possess. On CR, the ability to determine the function of a paragraph from a single transition in its first sentence is a highly effective shortcut, but it involves a level of sensitivity to phrasing that most sixteen-year-olds — especially ones who don’t read non-stop — haven’t yet developed.
4) Getting a very top score is hard
There’s a reason that only about 300 people – out of 1.5 million – get perfect scores each year. If acing the test were just about learning the right “tricks,” there would be a lot more 2400s.
If you really want to get your score up to the 750-800 range, you need to respect that the SAT is in fact difficult and that it is your job to conform to it, not the other way around. If you don’t understand why a particular answer is correct, stop before you jump to blame the test for not making it what you think it should be. It doesn’t matter that you take hard classes. It doesn’t matter that your AP English teacher thinks your essays are brilliant. There’s something in your process that went awry, and it’s your job to identify and fix it.
Reading this over, I realize that a lot of what I’ve written in this post may sound fairly harsh. But I also know from experience that overconfidence is one of the biggest problems that can hold you back from attaining the scores you’re capable of achieving. It’s hard — I’m not denying it — but if you can take a step back and start to admit that you might not know everything you think you do, you might just have a fighting chance at an 800.
by Erica L. Meltzer | Jun 4, 2013 | SAT Critical Reading (Old Test)
The single most important strategy you can use to get through SAT Critical Reading is to find the main point of every passage you read. Can this be annoying? Of course. You just want to jump to the questions and get them over with. Unfortunately, if you work this way, there’s always a chance that you’ll get thrown off by a distractor answer, no matter how good you are and no matter how well you think you’ll recognize the right answer when you see it.
The SAT rewards those who work through problems — Critical Reading as well as Math — very, very carefully. Most Critical Reading questions ask about the relationship between various details and the author’s overall point, and if you don’t take 15 seconds and define that point explicitly for yourself, sooner or later you will look right past it when it appears.
But where to find it?
The answer, it turns out, is pretty simple. There are three places it’s likely to be:
1) Last sentence, first paragraph (the classic place for a thesis)
2) First sentence, second paragraph
3) The last sentence of the passage
In general, you should automatically underline the last sentence of every passage you read — don’t think, just underline. It’ll usually sum up the point in some fashion, and if you find yourself totally lost, it gives you something specific to look back at.
Furthermore, do not ever, ever fail to circle/underline/notate in some fashion the phrase “the point.” It shows up far more often than you’d think. If the author is telling you what the point is, it’s the point. Really.
by Erica L. Meltzer | Jun 3, 2013 | Blog, Time Management
I recently posted about the necessity of learning to think quickly on the SAT, but lest you think I’m advocating rushing through the test at warp speed, I’d like to qualify that advice a bit. Learning to manage time on the SAT is not fundamentally about learning to do everything quickly but rather about learning which things can be done quickly and which must be done slowly.
When I go over Critical Reading material with my students and they ask me to explain a question they had difficulty with, one of the things I always point out to them as I read the question out loud is how slowly I move through it. I actually take a fraction of a second to absorb each word and make sure that I’m processing it fully. Sometimes I rephrase it for myself two or three times out loud, in progressively simpler versions. If necessary, I write down the simplified version. The end result, while not excessively time-consuming, involves considerably more effort than what my students are likely to have put into understanding the question.
Usually by the second time I rephrase the question, however, my students start to get that oh-so-exquisite look of teenage boredom on their faces; I can almost see the little thought bubble reading “ok, fine, whatever, can she just get on with it already?” pop out from their heads. As I do my best to impress upon them, however, I’m not simply reading the question that slowly to torture them; I have to read it that way because if I don’t, I’m likely to miss something important. Sure, if I just breezed through it, I might get it right anyway, but I might also not — and I’m not taking any chances. The fact that I recognize my own potential for weakness and take steps to address it is, I also stress, one of the reasons I almost never get anything wrong. (Usually they just say “yeah” and roll their eyes.)
The other thing I stress, however, is that reading questions slowly will not create a timing problem for them if they’ve used their time to maximum efficiency elsewhere. If they haven’t lingered over words or answer choices whose meanings they’re really not sure of; if they haven’t stared off into space instead of taking active steps to distinguish between those last two answers, then they can afford to spend fifteen or twenty seconds making sure they’ve read every word of a question carefully. The whole point is that they have to adjust their approach to the particular task at hand. Flexibility is, I would argue, a key part of what the SAT tests, and building that flexibility is a key part of the preparation process. You can’t predict every guise that a particular concept will appear in — that’s part of what makes the SAT the SAT — but if you know how to resist getting sucked into things that confuse you, you’ll at least have some measure of control.
by Erica L. Meltzer | May 27, 2013 | Blog, SAT Critical Reading (Old Test), Tutoring
If you look in the Official College Board Guide, 2nd edition (aka the Blue Book), you’ll see that the sample essays in the front of the book are written in response to a prompt that asks whether there is always a “however” (i.e., are there always two sides to every argument?).
It recently occurred to me that the College Board’s choice of that particular prompt for inclusion in the Official Guide was not an accident; on the contrary, it’s a sort of “clue” to the test, an inside joke if you will. And in classic College Board style, it’s laid bare, in plain sight, for everyone to see, thereby virtually guaranteeing that almost everyone will overlook it completely.
Let me back up a bit. When I took the SAT in high school, one of the Critical Reading strategies I devised for myself was, whenever necessary, to write a quick summary of the arguments that the author of a passage was both for and against. So if, for example, a question asked how a particular author would be likely to view the “advocates” of a particular idea (let’s say string theory, just for grins), I would write something like this:
Author: ST = AMAZING! (string theory is amazing)
Advocates: ST = WRONG! (string theory is wrong)
Therefore, author disagrees w/advocates, answer = smthg bad
It never struck me as anything but utterly logical to keep track of the various arguments that way. As a matter of fact, I took the process of identifying and summarizing various points of view so much for granted that it never really occurred to me that keeping track of all those different points of view was actually more or less the point of the test. Of course I knew it at some level, but not in a way that led me to address it quite so explicitly as a tutor. I assumed that it was sufficient to tell my students that they needed to keep track of the various points of view; not until about a year ago did it truly dawn on me that my students couldn’t keep track of those points of view. They were having trouble with things like main point because they couldn’t distinguish between authors’ opinions and “other people’s” opinions, and therefore I needed to explain some very basic things upfront:
1) Many SAT passages contain more than one point of view.
2) The fact that an author discusses an idea does not necessarily mean that the author agrees with that idea.
3) Passages contain more than one point of view because authors who write for adults often spend a lot of time “conversing” with people — sometimes imaginary people — who hold opposing opinions. Authors are essentially writing in response to those “other people.”
4) There are specific words and phrases that a reader can use to identify when an author is talking about his or her own ideas vs. someone else’s ideas.
5) The fact that authors discuss other people’s ideas does not make them “ambivalent” or mean that they do not have ideas of their own.
6) It is also possible for authors to agree with part of someone else’s idea and disagree with other parts. Again, this does not mean that the author is ambivalent.
In other words, there’s always a “however,” and if the author of Passage 1 doesn’t give it to you, the author of Passage 2 almost certainly will.
Not surprisingly, I have Catherine Johnson to thank for this realization. A while back, she posted an excerpt from Gerald Graff and Cathy Birkenstein’s book, They Say/I Say: The Moves That Matter in Academic Writing, on her blog, and reading it was a revelation for me. I’d already touched on the “they say/I say” model in a very old post (SAT Passages and “Deep Structure”), but Graff and Birkenstein’s book explained the concept in a far more direct, detailed, and explicit manner. It also took absolutely nothing about students’ knowledge for granted.
I’d already written a first draft of The Critical Reader at that point, but when I read that excerpt, something clicked and I thought, “that’s it — that’s actually the point of Critical Reading. THAT’S what the College Board is trying to get at.” To be sure, Critical Reading tests a number of other things, but I think that this is one of the most — if not the most — important. If you understand the strategies that authors use to suggest agreement and disagreement with arguments, you can sometimes understand almost everything about a passage — its content, its structure, its themes — just from reading a few key lines. “They Say/I Say” provided me with the thread that bound the book together. It also provided the very important link between reading on the SAT and reading in the real world (or at least in college) — a link that some critics of the SAT (!) insist rather stridently does not exist.
Then, in a colossal “duh” moment a couple of days ago, it occurred to me that the point of the quote before the essay prompt is to provide students with the option of using the “they say/I say” format in their essays (if they so wish) — it’s just that the students have so little experience with that format (if they even know it exists) that it never even occurs to them to use it!
Just how little experience students have with it became clear, incidentally, when I was working with students on the synthesis essay for the AP French exam. As is the case for AP Comp, students are given three sources and expected to compose a thesis-driven essay, integrating the sources into their writing. There’s no way to earn a high score without using all of the sources, and since the sources cover all sides of the argument (pro, contra, neutral), at least one source will contradict the student’s position. So basically, the point of the exercise is to force them to integrate opposing viewpoints into their writing.
As I discussed the essay with my students, however, I made two intriguing discoveries:
1) They did not really understand that the essay was thesis-driven and that it was ok for them to express their own opinions.
They equated having to include multiple sides of an argument with not having an opinion. They were stunned — and relieved — to discover that it was ok for them to actually write what they thought instead of simply summarizing what all the various sources said. Incidentally, their teacher had told them that more than once, but I think the concept was too foreign for them to fully grasp.
2) They did not know how to integrate other people’s words and ideas into their own arguments in anything resembling a fluid manner.
Instead of writing things like “As Sorbonne Professor Jean-Pierre Fourrier convincingly argued in a May 2009 article that appeared in Le Monde, the encroachment of English into the French language is nothing new,” they would write something like this: “Jean-Pierre Fourrier, a professor at the Sorbonne, says ‘the encroachment of English into the French language is nothing new’ (Source 1).”
When I showed one of my students (a very smart girl and a strong writer) how to do the former, she was thrown off guard. “Oh,” she said. “I didn’t know that.” “Of course you didn’t know that,” I said matter-of-factly. “No one taught you how to do it. So I’m teaching you now.”
That was another lightbulb moment for me. The thought had drifted across my consciousness before, but it hadn’t quite pushed its way to the surface. Reading and writing are two sides of the same coin. Students who haven’t been taught how to make use of certain strategies explicitly in their own writing are therefore unlikely to recognize those strategies in other people’s writing. Ergo, when an author interweaves his or her opinions with someone else’s opinions in the same passage or paragraph, sometimes even in the same sentence, students have limited means of distinguishing between the two points of view.
I think that this is something that should be covered very explicitly and thoroughly in AP Comp class, but something tells me that it isn’t. I certainly didn’t learn it in high school; instead, I picked it up in college by reading lots of academic articles and simply copying what professional scholars did.
So what’s the solution? It is in part, I think, They Say/I Say — or something like it (note the very subtle plug for The Critical Reader here ;). I’ve said it before, and I’ll say it again: the only way to prepare for a college-level test is to read things meant for college students, which They Say/I Say certainly is. So if you’re taking the SAT next Saturday and are reading this in the hopes of picking up some last-minute miracle tips for Critical Reading, here’s my advice: read Cathy Birkenstein and Gerald Graff’s introduction to They Say/I Say. It won’t give you any SAT-specific “tricks,” but it will explain to you, clearly and bluntly, just what it is that most of the writing you’ll encounter on the SAT is trying to accomplish. It might not solve all your problems, but it might demystify the test a bit and make Critical Reading seem a little less weird.
by Erica L. Meltzer | May 18, 2013 | Blog, Issues in Education
Occasionally I inadvertently find myself in the crossfire between what teachers think students know and what students actually know. From this peculiar vantage point, I’m often struck by the way the assumptions on both sides fail to line up — high school teachers often take for granted that their students can “connect the dots” on their own, and high school students assume their teachers know that they need everything explained very explicitly. What looks from one side like teachers failing to teach important information, and from the other side like students being lazy or clueless, is actually a classic case of faulty assumptions.
Let me explain.
For the past couple of weeks, I’ve been wrapped up in AP French prep. The AP exam was revised last year to include a “synthesis” essay that requires students to read an article, interpret a graph, and listen to an audio clip, then write a thesis-driven essay (all in French) about a given question (e.g. “Should the French language be protected from English?”).
One of the sources always takes the “pro” side, one the “con,” and one is neutral. The audio is usually the most intimidating source because it involves authentic French spoken quickly by a native speaker, and it’s almost impossible for someone who hasn’t lived in a francophone environment to pick up on the nuances. Most kids are just flipped out about whether they’ll be able to figure out what’s going on.
Here’s the thing, though: it’s pretty easy to figure out which sides the article and the graph are taking, and they’re always presented before the audio. So by default, the audio has to take the side that the other two haven’t. Logically, a person can determine the point of the audio before they even begin listening to it.
Incidentally, I didn’t realize this until I had to calm down a panicked junior who was terrified she wasn’t even going to be able to figure out which side the audio was taking. When I inquired about the order in which the sources were presented and she told me that the audio was always last, I realized that she could deduce the position the speaker would take before she even listened to it. When I told her that… let’s just say that it was a proverbial lightbulb moment.
Now this is where it gets interesting: her teacher is a good friend of mine, and I mentioned the exchange to her. Now, for the record, my friend is a fabulous teacher with a 100% pass rate on the French AP — a major feat in a huge NYC public school (albeit a very selective one). She’s nothing if not clear. But somehow it had never occurred to her that her students needed to be told explicitly that the audio was taking the position that the other two weren’t. It just seemed too obvious. But after I told her about the student’s realization, she made a point of mentioning it in class.
The next time I saw my student, she proudly announced that Madame had taught the class the “trick” she’d learned from me the previous week. “But,” she sniffed indignantly, “she really should have told us that before.”
That moment threw into sharp relief everything I’ve been thinking about recently. I’m increasingly aware of the disconnect between what teachers think teenagers know and what teenagers actually know, and of the fact that high school students, given 2 + 2, won’t necessarily think to put them together to make 4.
More recently, I was explaining to a friend (a Ph.D. in Classics with years of teaching experience) that my students often have trouble figuring out when an author is discussing his or her own ideas vs. someone else’s ideas, and she asked me to repeat the statement because she found it so astonishing. She couldn’t even conceive that a person could have such a problem, never mind the fact that I could be so matter-of-fact about it.
I don’t have any grand solutions for any of this. I do know that I approach the SAT with fewer and fewer assumptions about what people actually know (although every now and then I still get thrown — how exactly can someone make it through life without knowing the meaning of “permanent”?).
I know, for example, that a kid scoring 700 might not consistently be able to identify the topics of SAT passages.
I know that even kids scoring above 700 often have significant trouble figuring out what an author believes when that author spends time considering opposing points of view.
I know that kids often have trouble with tone because they can’t draw a relationship between how the words appear on the page and how they sound. I also know that sometimes they can’t sound out words in the first place because they were never taught phonics.
In short, I’ve learned to start from zero. Better for me to be pleasantly surprised than the contrary.
by Erica L. Meltzer | Apr 6, 2013 | ACT Reading, Blog
This is a nifty little strategy I learned about five years ago, when I first started tutoring the ACT. It requires a tiny bit of time upfront, but it can pay off quite a bit. It’s also fairly easy to adapt to your interests and strengths.
Here it is:
As soon as you start the Reading Comprehension section, quickly leaf through all four passages and start with the one that seems easiest/most interesting. Then do the next most interesting, then the next, and save the least interesting/most difficult for last.
Yes, you will have to spend maybe 30 or 45 seconds initially figuring this out, but you don’t have to read a lot — you can usually tell from a sentence or two whether the passage is going to be reasonably ok or utterly impossible.
Working this way has a couple of major advantages:
1) Time
Easier passages tend to go more quickly, meaning that you’re less likely to get behind on time from the start. You also don’t waste time on questions you might not get right, then get easier questions wrong toward the end because you’re running out of time and panicking.
2) Confidence
If you start out with something interesting, your level of engagement will be higher. You don’t start thinking “this section sucks, I hate this, I’m never going to finish on time, I wish it were just over already” two minutes into the test, then miss easier things later because you’re discouraged. You’ll be more focused and more likely to know you’re answering things correctly, which will boost your confidence and make the rest of the section seem more manageable. If you get stuck on the last passage, well… it’s the last passage. You’ve already answered lots of questions correctly, so it won’t ruin you. You might get a 28 rather than a 30, but you probably won’t get a 23.
Know your strengths and weaknesses:
I find that most people taking the ACT tend to have pronounced strengths and weaknesses on the reading passages — those who are more math/science-oriented tend to find the Science and Social Science passages easier and more enjoyable, whereas people who are more humanities-oriented tend to prefer Prose Fiction and Humanities. And when people have a least favorite passage, it’s almost always either Prose Fiction or Science.
If this applies to you, you’re in luck because your decision is basically made for you. If you know that one type of passage always gives you trouble, don’t even look at it initially; just save it for last. If you always find one type of passage relatively easy, start with it. When you’re done, look at the two remaining passages and do whichever one you like better first.
by Erica L. Meltzer | Apr 5, 2013 | Blog, Issues in Education
From The New York Times:
Imagine taking a college exam, and, instead of handing in a blue book and getting a grade from a professor a few weeks later, clicking the “send” button when you are done and receiving a grade back instantly, your essay scored by a software program.
And then, instead of being done with that exam, imagine that the system would immediately let you rewrite the test to try to improve your grade.
EdX, the nonprofit enterprise founded by Harvard and the Massachusetts Institute of Technology to offer courses on the Internet, has just introduced such a system and will make its automated software available free on the Web to any institution that wants to use it. The software uses artificial intelligence to grade student essays and short written answers, freeing professors for other tasks.
Umm… Exactly what other tasks do professors need to be freed to do? Ok, writing, research, whatever, fine, but call me crazy, isn’t grading student work also supposed to be an integral part of their job? Wait, perhaps this is just an excuse to close down writing programs and humanities departments (to have more money to give to STEM fields) in order to save the pittance that would otherwise go toward paying the adjuncts who do the real grunt work of teaching writing? Or maybe it’s really just a dastardly ploy by overpaid aging leftists to outsource their work so that they can go lounge on the beach in Tahiti.
Obviously elite colleges will never actually adopt this technology for their undergrads (can you imagine the howls of protest?), although I can see it being used for some of the online, open-enrollment classes.
The real issue is the possibility that non-elite colleges, not to mention high schools in poor states with poor education systems (the article mentions Utah and Louisiana) will use this technology to replace actual human teachers, thus further exacerbating the gap between the educational “haves” and “have nots.”
As pretty much all 934 Times commenters have pointed out, there are just too many things a computer program cannot read for, like veracity, logic, subtlety, and yes, creativity, both in terms of content and structure. And professors don’t exactly seem eager to be relieved of the drudgery of teaching writing either (guess they’re not nearly as lazy as everyone seems to think).
Not to mention this:
“It allows students to get immediate feedback on their work, so that learning turns into a game, with students naturally gravitating toward resubmitting the work until they get it right,” said Daphne Koller, a computer scientist and a founder of Coursera.
First of all, there’s something seriously wrong when 20-year-olds can’t learn unless “it’s like a game.” Learning to write is hard work — it can certainly be enormously interesting and rewarding, but it also takes a long time to master. It’s the polar opposite of a video game, and any system built around the notion that the process can be circumvented by a few cheap tricks reflects absolutely no understanding of what learning to write involves.
There’s also quite a lot to be said for not getting instant feedback. One of my high school English teachers would return essays covered in reams of meticulous red scrawl; sometimes his notes were almost as long as our papers. Needless to say, he took a lot longer than a few minutes to grade our essays. Those comments could be very harsh, but they showed that he took our work very, very seriously — probably a lot more seriously than most of us took it. I probably wouldn’t be able to do what I do now if not for that class.
Besides, the primary thing this technology will teach kids is how to write something that games the system — regardless of whether it makes sense. The SAT essay, which is still (presumably) scored by human beings, is evidence enough of the kind of gag-inducing jumbled prose that a computerized grader would likely reward.
But of course, who cares a whit about whether students actually learn to write as long as the test scores are good? After all, technology is the solution for everything, and test scores are what education is really about. Right?