When it comes to talking about improving students’ reading, one of the factors that makes having a coherent conversation so challenging is that the word “reading” itself has two meanings: it can refer to decoding—that is, the literal process of matching squiggles on a page to their corresponding sounds in the English language—or it can refer to the much more sophisticated process of comprehension, which is also dependent on things like vocabulary, ability to navigate various types of syntax, and background knowledge. Although the same word is used to describe both of these abilities, the first meaning does not necessarily imply the second.
And as if that weren’t already complicated enough, there’s yet another factor that is often overlooked: listening. (more…)
The University of Chicago’s recent decision to go test-optional got me thinking: what if Bob Schaeffer over at FairTest got his wish, and the SAT and ACT were not merely made optional but flat-out abolished? Let’s assume – as seems reasonable – that the rest of the system would remain unchanged.
So picture it: a world in which every one of an elite college’s 50,000+ applicants would be judged entirely on his or her specific merits, as a totally unique and authentic individual, and given full and complete consideration unmarred by input from the ACT or the College Board.
Wouldn’t the result be a better system, a fairer system, a system that no longer punished disadvantaged students who couldn’t afford expensive test-prep classes?
Probably not. (more…)
(photo by Bryce Lanham, Wikimedia Commons)
The University of Chicago has become the first of the truly elite schools to adopt a test-optional policy, which will take effect for the class of 2023.
From UChicago’s website:
The University of Chicago on June 14 launched the UChicago Empower Initiative, a test-optional admissions process to enhance the accessibility of its undergraduate College for first-generation and low-income students.
A strategic initiative to address key barriers encountered by underserved and underrepresented students, the UChicago Empower Initiative has three areas of focus: the use of technology for greater flexibility in the admissions process, including making submissions of standardized test scores optional; increased financial support, on-campus programming and online resources for first-generation, rural and underrepresented students, with full tuition aid for students whose families earn less than $125,000; and new scholarships and access programs to recognize those who serve our country and local communities. Each aims to empower historically underrepresented communities in the highly selective admissions process by increasing equity and access. (https://news.uchicago.edu/story/uchicago-launches-test-optional-admissions-process-expanded-financial-aid-scholarships)
Chicago’s justification for going test-optional is similar to that of other test-optional schools, but I do think that something a little more interesting is going on here – rhetorically at least. (more…)
Recently, a colleague who is a foreign-language classroom teacher told me the following story: since she started teaching around a decade ago, she’s always made sure to introduce her beginning-level classes to the concept of cognates – words that are very similar in English and the Romance language she teaches, and that are derived from a common root.
Every previous year, her students had been perfectly receptive to the concept, but this year they would have none of it: they mocked the term cognate as an obscure “SAT word” and insisted that they shouldn’t be forced to learn it.
My colleague then asked her students how they expected to be able to read high-level material in high school and college without a strong vocabulary.
Nothing. Blank stares. (more…)
Not too long ago (5/30/18), I happened to post the following Question of the Day on Facebook:
It wasn’t that long ago that putting food in liquid nitrogen was something you’d only see in a high school science class, but it’s also becoming a mainstay of modernist cooking. It’s odorless, tasteless, and harmless because it’s so cold (–320.44°F to be exact), it boils at room temperature and evaporates out of your food as it rapidly chills it.
A. NO CHANGE
B. tasteless, and harmless, and because
C. tasteless and harmless, because
D. tasteless, harmless and because,
The Atlantic’s Jeffrey Selingo recently published an article about an entirely predictable consequence of grade- and score-inflation on the selective college admissions process — namely, that the glut of applicants with sky-high GPAs and test scores is making those two traditional metrics increasingly less reliable as indicators of admissibility.
It’s not that those two factors no longer count, but rather that they are increasingly taken as givens. So while top grades and scores won’t necessarily help most applicants, their absence can certainly hurt. (more…)
While browsing through Daniel Willingham’s blog the other night, I came across a link to an intriguing — and very worrisome — article about students’ media literacy that seemed to back up some of my misgivings about the redesigned SAT essay and, more generally, about the peculiar use of the term “evidence” in Common Core-land.
The authors describe one of the studies as follows:
One high school task presented students with screenshots of two articles on global climate change from a national news magazine’s website. One screenshot was a traditional news story from the magazine’s “Science” section. The other was a post sponsored by an oil company, which was labeled “sponsored content” and prominently displayed the company’s logo. Students had to explain which of the two sources was more reliable…
We administered this task to more than 200 high school students. Nearly 70 percent selected the sponsored content (which contained a chart with data) posted by the oil company as the more reliable source. Responses showed that rather than considering the source and purpose of each item, students were often taken in by the eye-catching pie chart in the oil company’s post. Although there was no evidence that the chart represented reliable data, students concluded that the post was fact-based. One student wrote that the oil company’s article was more reliable because “it’s easier to understand with the graph and seems more reliable because the chart shows facts right in front of you.” Only 15 percent of students concluded that the news article was the more trustworthy source of the two. (https://www.aft.org/ae/fall2017/mcgrew_ortega_breakstone_wineburg)
Think about that: the sponsored content was explicitly labeled as such, but still the vast majority of the students thought it was true because it had a cool graphic and presented information in a way that was easy for them to comprehend. If that many students had trouble with content that was labeled, what percentage would have trouble with content that wasn’t labeled?
These findings link up with a big part of what I find so disturbing about the rhetoric surrounding “evidence” on the redesigned SAT.
The College Board’s contention is that learning to interpret data in graph(ic) form is about learning how to “use evidence” — but at least insofar as the test is concerned, there is exactly zero consideration of where that data comes from, what viewpoints or biases its sources may espouse, and whether it is actually valid. In other words, all of what using evidence effectively actually entails. The only thing that matters is whether students can figure out what information the graph is literally conveying.
There’s a word for that: comprehension.
As I’ve written about recently, one of the things I find most concerning about the redesigned SAT essay is students’ tendency to write things like, Through the use of strong diction, metaphors, and statistics, author x makes a compelling case for why self-driving vehicles should be embraced.
The problem is that students are given no information about the source of the statistics, and as the excerpt above clearly illustrates, the use of statistics (even lots of them!) by itself says absolutely nothing about whether a source is trustworthy. But students are in no way penalized for suggesting that citing lots of numbers automatically makes an author’s argument strong. And that is a huge, whopping, not to mention potentially dangerous, misunderstanding.
Moreover, students are also permitted to project their misconceptions onto the audience as a whole, e.g., Readers cannot fail to be impressed by the plethora of large numbers the author cites. (Actually, yes, some readers probably can fail to be impressed, but only if they actually know something about the subject.)
Taking these kinds of subtleties into account is largely beyond the scope of the graders — but then, the entire scoring model itself is part of the problem.
To be sure, this type of fallacy is a lot less over-the-top — and therefore less interesting to the media — than the patent absurdities students could get away with on the old essay, but it’s a lot more insidious and in its own way just as damaging (if not more so).
Condoning these types of statements only encourages students to conflate appearance and reality — exactly the sort of thing that leaves them open to being easily manipulated. That is exactly the opposite of fostering critical thinking.
In the real world, as I think is obvious by now, these types of misconceptions can have pretty huge consequences.
If you’re just starting to look into test-prep for the SAT or ACT, the sheer number of options can be a little overwhelming (more than a little, actually). And if you don’t have reliable recommendations, finding a program or tutor that fits your needs can be a less-than-straightforward process. There are obviously a lot of factors to consider, but here I’d like to focus on one area in which companies have been known to exaggerate: score improvement.
To start with, yes, some companies are notorious for deflating the scores of their diagnostic tests in order to get panicked students to sign up for their classes. This is something to be very, very wary of. For the most accurate baseline score, you should use only a diagnostic test produced by the College Board or the ACT. Timed, proctored diagnostics are great, but using imitation material at the start can lead you very far down the wrong path. (more…)
A while back, I happened to find myself discussing the AP® craze with a colleague who teaches AP classes, and at one point, she mentioned offhandedly that with the push toward data collection and continual assessment, schools are increasingly eliminating the type of cumulative final exams that used to be standard in favor of frequent small-scale quizzes and tests that can be easily plotted for administrators’ consumption.
I poked around and discovered that some schools have also eliminated cumulative mid-term or final exams because such assessments are insufficiently “authentic” (read: not fun) or because of concerns about stress, or because so much time is already devoted to state tests.
I wasn’t really aware of that shift when I was tutoring various SAT II and AP exams, but it explained some of what I encountered: students had been exposed to key concepts, but they hadn’t been given sufficient practice for those concepts to really sink in. They were learning only what they needed to know for a particular quiz or test and then promptly forgetting the material.
As discussed in my previous post, application inflation seems to be hitting ever greater heights. With the online Common App allowing students to apply to 15+ schools at the click of a button, it can be hard for applicants to gauge their real chances at a particular school: there’s no way to know just how many of those 40,000 applicants are serious contenders. With so many competing for so few slots, sometimes getting rejected isn’t a matter of doing anything in particular wrong. It’s just “great kid, but only if room” – which, of course, there isn’t.
That said, there are still some specific, common reasons why the college application process can produce less-than-stellar results. So if you want to know what NOT to do, I offer you the following list of 10 ways to get rejected from college. (more…)
Every year around this time, posts inevitably appear on College Confidential that go something like this:
I applied to every Ivy, Stanford, MIT, Duke, Northwestern, Johns Hopkins, and the University of Nebraska, and I got rejected everywhere except my safety school. I have a 4.5 GPA, 35 ACT, and good activities. I wasn’t sure about HYPSM, but I thought I was totally set for Northwestern and Hopkins. What do I do???? Help!!!!
This year, there’s a whole long thread on the Parents Forum entitled “Why applicants overreach and are disappointed in April,” and I would strongly encourage anyone just beginning the college search to read through it, before the madness sets in and you fall in love (or your child falls in love) with a school that accepts only 5% of its applicants.
That is, 5% overall — the RD admission rate might in reality be closer to 2%. (more…)
The Critical Reader: AP® English Language and Composition Edition is now available for purchase on Amazon. The book is carefully aligned with the revised (post-2014) version of the AP Lang/Comp exam and provides a comprehensive review of all the reading and writing skills tested.
- A complete chapter dedicated to each type of multiple-choice reading question and essay type.
- Numerous multiple-choice practice questions covering literal comprehension, purpose, tone/attitude, rhetorical strategies, and footnotes.
- Common essay pitfalls, with detailed examples of what to do and what not to do.
- Sample student essays with in-depth scoring analyses.
I’ve also posted a preview so that you can see for yourself.
Now, a couple of notes:
First, I learned a LOT about how essay scoring works while writing this book. While the essays are scored holistically, I cannot overemphasize how helpful it is to understand the various criteria that get factored in and how they affect the overall score. I spent an enormous amount of time picking apart the sample essays and analyses provided on the AP Central site, and pinpointing the various features that corresponded to different score levels. (Christine Hyzinski, who teaches AP English at Montgomery High School in New Jersey, also helped me with some of the sample essay scoring, for which I am immensely grateful.) By the time I got done with the book, I was pretty sure I could do a bang-up job as a grader for ETS — not that I’d ever be allowed to, but still…
Second, to those of you who already have the main (SAT) Critical Reader, I realize you might be wondering whether this is basically the same book. The answer is no.
While the AP book, like the current SAT reading book, is adapted from the original (2013) edition of The Critical Reader, redesigned SAT reading and AP Lang/Comp reading are two entirely different animals. Although some of the same concepts are discussed by necessity, all of the practice passages and questions in the new book are different, and the books have very different emphases. In addition, there is nothing remotely comparable to the essay chapters in the SAT book.
So if you’re concerned about just getting a repeat of what you already have, rest assured that’s not the case.
Harvard University has announced that it will be dropping the SAT/ACT Essay requirement, beginning with the class of 2023. Along with Princeton, Yale, and Stanford, Harvard was one of the last holdouts to require that students submit this component.
I wrote a series of critiques of the redesigned essay when the new test was first rolled out, and I still believe that it is deeply problematic – I think colleges are justified in viewing it with suspicion. At the same time, however, I believe that there are very compelling reasons for schools to continue requiring some sort of writing sample completed under proctored conditions.
Although some of my initial concerns about the SAT essay were unfounded, the principal issue remains that it is fundamentally a nonsense assignment, one presented in muddled language that says one thing and means something else. It asks students to analyze how an author uses “evidence” to build an argument, but seeks to remove outside knowledge from the equation. In reality, this is an absurd proposition: any even slightly substantive analysis of “evidence” is impossible without actual knowledge of a topic. (more…)
Wiki Ezvid, a video-based research site, has named The Complete Guide to ACT English, Third Edition, one of the best ACT prep books of 2018.
It’s ranked #1 in the “high end” (!) category, and #6 overall.
You can see the full list here.
If you look at many lists of GMAT® idioms, you’ll likely find dozens upon dozens of preposition-based constructions, e.g. insist on, characteristic of, correlate with. Although the GMAT does sometimes test these types of idioms, it is important to understand that they are not the primary focus of the test. Because of an increase in the number of international students taking the exam, the GMAC has elected to shift the focus away from idiomatic American usage and toward issues involving overall sentence logic.
That said, there are still a handful of fixed constructions that the GMAT does regularly test. Many, but not all, of these fall into the category of word pairs (aka correlative conjunctions). Particularly if you are not a native English speaker, you are best served by focusing on these constructions, which stand a high chance of appearing, as opposed to memorizing dozens of preposition-based idioms that have only a minuscule chance of being tested on any given exam. (more…)
If you’re studying for the GRE® and want to learn some words for which ETS has, shall we say, traditionally shown a strong predilection (i.e., a proclivity, penchant, propensity, bent), I’m starting a Word of the Day email program. One email with a top word, a GRE-level example sentence, and a list of must-know synonyms/antonyms, every day, direct to your inbox, plus periodic quizzes. (more…)
Note: this exception is addressed in the 4th edition of The Ultimate Guide to SAT® Grammar and the 3rd edition of The Complete Guide to ACT® English, but it is not covered in earlier versions.
Both SAT Writing and ACT English test two specific aspects of the who vs. whom rule.
1) Who, not whom, should be placed before a verb.
Incorrect: Alexander Fleming was the scientist whom discovered penicillin.
Correct: Alexander Fleming was the scientist who discovered penicillin. (more…)
I’m putting up this post because I’ve received a number of queries from people who are interested in The Complete GMAT® Sentence Correction Guide but who aren’t really sure what differentiates it from other guides on the market or whether it meets their needs. So instead of continuing to respond to people on a case-by-case basis, I thought I’d address some of the most common questions/concerns all in one place.
While the book does by necessity cover many of the same general concepts and strategies as the other books on the market, albeit with a different organization, there are a handful of key points that bear emphasizing.
First and most importantly: the book is designed as a “bridge” to the actual exam. All of the rules covered are derived exclusively from an in-depth study of GMAC-produced questions, and each chapter ends with a list of relevant questions from the Official Guide and Official Verbal Guide. In addition, specific questions are periodically referenced during in-chapter discussions. Although there are categorized Official Guide question lists circulating online, there is no other published guide that includes this type of concept-by-concept breakdown. (more…)