When scores for the June SAT were released last month, many students were in for a rude surprise: although their raw scores were higher than on their previous exam(s), their scaled scores were lower, in some cases significantly so.
An article in The Washington Post recounted the story of Campbell Taylor, who in March scored a 1470—20 points shy of the score he needed to qualify for a scholarship at his top-choice school:
[T]he 17-year-old resolved to take the test again in June and spent the intervening months buried in SAT preparation books and working with tutors. Taylor awoke at 7:30 a.m. Wednesday and checked his latest score online. The results were disappointing: He received a 1400.
He missed one more question overall in June than in March but his score, he said, dropped precipitously. And in the math portion of the exam, he actually missed fewer questions but scored lower: Taylor said he got a 770 in March after missing five math questions but received a 720 in June after missing just three math questions. (more…)
If you’re a high school junior or senior, there’s a pretty good chance you’ve been inundated by emails, postcards, and perhaps even free “express” applications practically begging you to apply. Some of these schools you’ve heard of, and others you, well…haven’t. At any rate, the sheer volume of mail is pretty intense, if not downright overwhelming. And then there are the schools your guidance counselor recommended, and the ones you found in your Fiske guide, or maybe your copy of Colleges That Change Lives. How on earth do you sort through all the possibilities and winnow them down into a manageable list? (more…)
The Native Society, an online platform for innovation and entrepreneurship, recently interviewed me about my experience founding The Critical Reader as part of its NativeAdvice series.
From the interview:
How did you get into the industry?
In 2008, I was tutoring a student for the Writing section of the SAT. I didn’t want her to use up all the questions in the Official Guide, and so I went to the bookstore looking for additional practice material. I looked through the standard offerings and was pretty shocked at how poorly they reflected the actual test. I’d already written practice questions for a bunch of independent companies, but until then, it had never occurred to me that I could write my own materials. But as I looked through the guides on the shelves, I thought, “I can do so much better than this.” (more…)
image by Brendan Church
By the summer before senior year, many students find themselves in the following situation: they’ve been prepping for the SAT or ACT for months and have already taken it two or three times. But despite all the work they’ve put in, they just can’t seem to reach their goals. Perhaps their scores are just a bit too low across the board, or perhaps one section remains stubbornly resistant to improvement.
It’s not surprising that many students who find themselves in this situation start to wonder whether they should switch from the SAT to the ACT or, somewhat less commonly, from the ACT to the SAT.
I worked with a few students who did ultimately switch tests, and I saw it go both ways. (more…)
Inevitably, Princeton, Brown, and now the University of Michigan have followed Harvard’s lead and announced that beginning with the class of 2023, they will no longer require applicants to submit the SAT or ACT with essay.
On one hand, the decision is understandable. As I’ve written about, the SAT essay is, to put it bluntly, a terrible assignment that bears virtually no relationship to the type of writing done in college. On the other hand, it serves to reveal whether a student is capable of cobbling together reasonably coherent, grammatical prose, which is unfortunately not something that can be taken for granted. Even if an essay score provides very limited information, the actual essay can provide important insight into an applicant’s writing skills. It also provides a check on the personal statement, allowing adcoms to view writing that is indisputably not padded by a parent or tutor. (more…)
Every now and then, I’ll get a plaintive email from a student who has been diligently prepping for the SAT or ACT for months but can’t quite seem to get their test-day scores to match their practice test scores. Often, they’ve worked through my books and don’t seem to have any problem applying the concepts when they take practice exams. When it comes to the real thing, though, they just can’t seem to make everything work.
This is obviously a very frustrating situation: the fact that these students are able to score well when the test doesn’t count suggests that they’re capable of scoring well when it does count – but in some ways, that just makes things worse. The goal seems so close, yet so far away. (more…)
(image from Wiki Commons)
For those of you who haven’t been following the case, a group called Students for Fair Admissions is suing Harvard for discrimination against Asian-American applicants. The suit follows a similar claim brought against Princeton.
As the New York Times reports:
Harvard consistently rated Asian-American applicants lower than others on traits like “positive personality,” likability, courage, kindness and being “widely respected,” according to an analysis of more than 160,000 student records filed Friday by a group representing Asian-American students in a lawsuit against the university.
Asian-Americans scored higher than applicants of any other racial or ethnic group on admissions measures like test scores, grades and extracurricular activities, according to the analysis commissioned by a group that opposes all race-based admissions criteria. But the students’ personal ratings significantly dragged down their chances of being admitted, the analysis found. (more…)
When it comes to talking about improving students’ reading, one of the factors that makes having a coherent conversation so challenging is that the word “reading” itself has two meanings: it can refer to decoding—that is, the literal process of matching squiggles on a page to their corresponding sounds in the English language—or it can refer to the much more sophisticated process of comprehension, which is also dependent on things like vocabulary, ability to navigate various types of syntax, and background knowledge. Although the same word is used to describe both of these abilities, the first meaning does not necessarily imply the second.
And as if that weren’t already complicated enough, there’s yet another factor that is often overlooked: listening. (more…)
The University of Chicago’s recent decision to go test-optional got me thinking: what if Bob Schaeffer over at FairTest got his wish, and the SAT and ACT were not merely made optional but flat-out abolished? Let’s assume – as seems reasonable – that the rest of the system would remain unchanged.
So picture it: a world in which every one of an elite college’s 50,000+ applicants would be judged entirely on his or her specific merits, as a totally unique and authentic individual, and given full and complete consideration unmarred by input from the ACT or the College Board.
Wouldn’t the result be a better system, a fairer system, a system that no longer punished disadvantaged students who couldn’t afford expensive test prep classes?
Probably not. (more…)
(photo by Bryce Lanham, Wikimedia Commons)
The University of Chicago has become the first of the truly elite schools to adopt a test-optional policy, which will take effect for the class of 2023.
From UChicago’s website:
The University of Chicago on June 14 launched the UChicago Empower Initiative, a test-optional admissions process to enhance the accessibility of its undergraduate College for first-generation and low-income students.
A strategic initiative to address key barriers encountered by underserved and underrepresented students, the UChicago Empower Initiative has three areas of focus: the use of technology for greater flexibility in the admissions process, including making submissions of standardized test scores optional; increased financial support, on-campus programming and online resources for first-generation, rural and underrepresented students, with full tuition aid for students whose families earn less than $125,000; and new scholarships and access programs to recognize those who serve our country and local communities. Each aims to empower historically underrepresented communities in the highly selective admissions process by increasing equity and access. (https://news.uchicago.edu/story/uchicago-launches-test-optional-admissions-process-expanded-financial-aid-scholarships)
Chicago’s justification for going test-optional is similar to that of other test-optional schools, but I do think that something a little more interesting is going on here – rhetorically at least. (more…)
Recently, a colleague who is a foreign-language classroom teacher told me the following story: since she started teaching around a decade ago, she’s always made sure to introduce her beginning-level classes to the concept of cognates – words that are very similar in English and the Romance language she teaches, and that are derived from a common root.
Every previous year, her students had been perfectly receptive to the concept, but this year they would have none of it: they mocked the term cognate as an obscure “SAT word” and insisted that they shouldn’t be forced to learn it.
My colleague then asked her students how they expected to be able to read high-level material in high school and college without a strong vocabulary.
Nothing. Blank stares. (more…)
Not too long ago (5/30/18), I happened to post the following Question of the Day on Facebook:
It wasn’t that long ago that putting food in liquid nitrogen was something you’d only see in a high school science class, but it’s also becoming a mainstay of modernist cooking. It’s odorless, tasteless, and harmless because it’s so cold (–320.44°F to be exact), it boils at room temperature and evaporates out of your food as it rapidly chills it.
A. NO CHANGE
B. tasteless, and harmless, and because
C. tasteless and harmless, because
D. tasteless, harmless and because,
The Atlantic’s Jeffrey Selingo recently published an article about an entirely predictable consequence of grade- and score-inflation on the selective college admissions process — namely, that the glut of applicants with sky-high GPAs and test scores is making those two traditional metrics increasingly less reliable as indicators of admissibility.
It’s not that those two factors no longer count, but rather that they are increasingly taken as givens. So while top grades and scores won’t necessarily help most applicants, their absence can certainly hurt. (more…)
While browsing through Daniel Willingham’s blog the other night, I came across a link to an intriguing — and very worrisome — article about students’ media literacy that seemed to back up some of my misgivings about the redesigned SAT essay and, more generally, about the peculiar use of the term “evidence” in Common Core-land.
The authors describe one of the studies as follows:
One high school task presented students with screenshots of two articles on global climate change from a national news magazine’s website. One screenshot was a traditional news story from the magazine’s “Science” section. The other was a post sponsored by an oil company, which was labeled “sponsored content” and prominently displayed the company’s logo. Students had to explain which of the two sources was more reliable…
We administered this task to more than 200 high school students. Nearly 70 percent selected the sponsored content (which contained a chart with data) posted by the oil company as the more reliable source. Responses showed that rather than considering the source and purpose of each item, students were often taken in by the eye-catching pie chart in the oil company’s post. Although there was no evidence that the chart represented reliable data, students concluded that the post was fact-based. One student wrote that the oil company’s article was more reliable because “it’s easier to understand with the graph and seems more reliable because the chart shows facts right in front of you.” Only 15 percent of students concluded that the news article was the more trustworthy source of the two. (https://www.aft.org/ae/fall2017/mcgrew_ortega_breakstone_wineburg)
Think about that: the sponsored content was explicitly labeled as such, but still the vast majority of the students thought it was true because it had a cool graphic and presented information in a way that was easy for them to comprehend. If that many students had trouble with content that was labeled, what percentage would have trouble with content that wasn’t labeled?
These findings link up with a big part of what I find so disturbing about the rhetoric about “evidence” surrounding the redesigned SAT.
The College Board’s contention is that learning to interpret data in graph(ic) form is about learning how to “use evidence” — but at least insofar as the test is concerned, there is exactly zero consideration of where that data comes from, what viewpoints or biases its sources may espouse, and whether it is actually valid. In other words, all of what using evidence effectively actually entails. The only thing that matters is whether students can figure out what information the graph is literally conveying.
There’s a word for that: comprehension.
As I’ve written about recently, one of the things I find most concerning about the redesigned SAT essay is students’ tendency to write things like, Through the use of strong diction, metaphors, and statistics, author x makes a compelling case for why self-driving vehicles should be embraced.
The problem is that students are given no information about the source of the statistics, and as the excerpt above clearly illustrates, the use of statistics (even lots of them!) by itself says absolutely nothing about whether a source is trustworthy. But students are in no way penalized for suggesting that citing lots of numbers automatically makes an author’s argument strong. And that is a huge, whopping, not to mention potentially dangerous, misunderstanding.
Moreover, students are also permitted to project their misconceptions onto the audience as a whole, e.g., Readers cannot fail to be impressed by the plethora of large numbers the author cites. (Actually, yes, some readers probably can fail to be impressed, but only if they actually know something about the subject.)
Taking these kinds of subtleties into account is largely beyond the scope of the graders — but then, the entire scoring model itself is part of the problem.
To be sure, this type of fallacy is a lot less over-the-top — and therefore less interesting to the media — than the patent absurdities students could get away with on the old essay, but it’s a lot more insidious and in its own way just as damaging (if not more so).
Condoning these types of statements only encourages students to conflate appearance and reality — exactly the sort of thing that leaves them open to being easily manipulated. That is exactly the opposite of fostering critical thinking.
In the real world, as I think is obvious by now, these types of misconceptions can have pretty huge consequences.
If you’re just starting to look into test prep for the SAT or ACT, the sheer number of options can be a little overwhelming (more than a little, actually). And if you don’t have reliable recommendations, finding a program or tutor that fits your needs can be a less-than-straightforward process. There are obviously a lot of factors to consider, but here I’d like to focus on one area in which companies have been known to exaggerate: score improvement.
To start with, yes, some companies are notorious for deflating the scores of their diagnostic tests in order to get panicked students to sign up for their classes. This is something to be very, very wary of. For the most accurate baseline score, you should use a diagnostic test produced only by the College Board or the ACT. Timed, proctored diagnostics are great, but using imitation material at the start can lead you very far down the wrong path. (more…)
A while back, I happened to find myself discussing the AP® craze with a colleague who teaches AP classes, and at one point, she mentioned offhandedly that with the push toward data collection and continual assessment, schools are increasingly eliminating the type of cumulative final exams that used to be standard in favor of frequent small-scale quizzes and tests that can be easily plotted for administrators’ consumption.
I poked around and discovered that some schools have also eliminated cumulative mid-term or final exams because such assessments are insufficiently “authentic” (read: not fun) or because of concerns about stress, or because so much time is already devoted to state tests.
I wasn’t really aware of that shift when I was tutoring various SAT II and AP exams, but it explained some of what I encountered: students had been exposed to key concepts, but they hadn’t been given sufficient practice for those concepts to really sink in. They were learning only what they needed to know for a particular quiz or test and then promptly forgetting the material.
As discussed in my previous post, application inflation seems to be hitting ever greater heights. With the online Common App allowing students to apply to 15+ schools at the click of a button, it can be hard for applicants to gauge their real chances at a particular school: there’s no way to know just how many of those 40,000 applicants are serious contenders. With so many competing for so few slots, sometimes getting rejected isn’t a matter of doing anything in particular wrong. It’s just “great kid, but only if room” – which, of course, there isn’t.
That said, there are still some specific, common reasons why the college application process can produce less-than-stellar results. So if you want to know what NOT to do, I offer you the following list of 10 ways to get rejected from college. (more…)
Every year around this time, posts inevitably appear on College Confidential that go something like this:
I applied to every Ivy, Stanford, MIT, Duke, Northwestern, Johns Hopkins, and the University of Nebraska, and I got rejected everywhere except my safety school. I have a 4.5 GPA, 35 ACT, and good activities. wasn’t sure about HYPSM, but I thought I was totally set for Northwestern and Hopkins. What do I do???? Help!!!!
This year, there’s a whole long thread on the Parents Forum entitled “Why applicants overreach and are disappointed in April,” and I would strongly encourage anyone just beginning the college search to read through it, before the madness sets in and you fall in love (or your child falls in love) with a school that accepts only 5% of its applicants.
That is, 5% overall — the RD admission rate might in reality be closer to 2%. (more…)