Recently, a colleague who is a foreign-language classroom teacher told me the following story: since she started teaching around a decade ago, she’s always made sure to introduce her beginning-level classes to the concept of cognates – words that are very similar in English and the Romance language she teaches, and that are derived from a common root.
Every previous year, her students had been perfectly receptive to the concept, but this year they would have none of it: they mocked the term cognate as an obscure “SAT word” and insisted that they shouldn’t be forced to learn it.
My colleague then asked her students how they expected to be able to read high-level material in high school and college without a strong vocabulary.
Nothing. Blank stares.
Yesterday (5/30/18), I happened to post the following Question of the Day on Facebook:
It wasn’t that long ago that putting food in liquid nitrogen was something you’d only see in a high school science class, but it’s also becoming a mainstay of modernist cooking. It’s odorless, tasteless, and harmless because it’s so cold (–320.44°F to be exact), it boils at room temperature and evaporates out of your food as it rapidly chills it.
A. NO CHANGE
B. tasteless, and harmless, and because
C. tasteless and harmless, because
D. tasteless, harmless and because,
The Atlantic’s Jeffrey Selingo recently published an article about an entirely predictable consequence of grade- and score-inflation on the selective college admissions process — namely, that the glut of applicants with sky-high GPAs and test scores is making those two traditional metrics increasingly less reliable as indicators of admissibility.
It’s not that those two factors no longer count, but rather that they are increasingly taken as givens. So while top grades and scores won’t necessarily help most applicants, their absence can certainly hurt.
While browsing through Daniel Willingham’s blog the other night, I came across a link to an intriguing — and very worrisome — article about students’ media literacy that seemed to back up some of my misgivings about the redesigned SAT essay and, more generally, about the peculiar use of the term “evidence” in Common Core-land.
The authors describe one of the studies as follows:
One high school task presented students with screenshots of two articles on global climate change from a national news magazine’s website. One screenshot was a traditional news story from the magazine’s “Science” section. The other was a post sponsored by an oil company, which was labeled “sponsored content” and prominently displayed the company’s logo. Students had to explain which of the two sources was more reliable…
We administered this task to more than 200 high school students. Nearly 70 percent selected the sponsored content (which contained a chart with data) posted by the oil company as the more reliable source. Responses showed that rather than considering the source and purpose of each item, students were often taken in by the eye-catching pie chart in the oil company’s post. Although there was no evidence that the chart represented reliable data, students concluded that the post was fact-based. One student wrote that the oil company’s article was more reliable because “it’s easier to understand with the graph and seems more reliable because the chart shows facts right in front of you.” Only 15 percent of students concluded that the news article was the more trustworthy source of the two. (https://www.aft.org/ae/fall2017/mcgrew_ortega_breakstone_wineburg)
Think about that: the sponsored content was explicitly labeled as such, but still the vast majority of the students thought it was true because it had a cool graphic and presented information in a way that was easy for them to comprehend. If that many students had trouble with content that was labeled, what percentage would have trouble with content that wasn’t labeled?
These findings link up with a big part of what I find so disturbing about the rhetoric about “evidence” surrounding the redesigned SAT.
The College Board’s contention is that learning to interpret data in graph(ic) form is about learning how to “use evidence” — but at least insofar as the test is concerned, there is exactly zero consideration of where that data comes from, what viewpoints or biases its sources may espouse, and whether it is actually valid. In other words, all of what using evidence effectively actually entails. The only thing that matters is whether students can figure out what information the graph is literally conveying.
There’s a word for that: comprehension.
As I’ve written about recently, one of the things I find most concerning about the redesigned SAT essay is students’ tendency to write things like, Through the use of strong diction, metaphors, and statistics, author x makes a compelling case for why self-driving vehicles should be embraced.
The problem is that students are given no information about the source of the statistics, and as the excerpt above clearly illustrates, the use of statistics (even lots of them!) by itself says absolutely nothing about whether a source is trustworthy. But students are in no way penalized for suggesting that citing lots of numbers automatically makes an author’s argument strong. And that is a huge, whopping, not to mention potentially dangerous, misunderstanding.
Moreover, students are also permitted to project their misconceptions onto the audience as a whole, e.g., Readers cannot fail to be impressed by the plethora of large numbers the author cites. (Actually, yes, some readers probably can fail to be impressed, but only if they actually know something about the subject.)
Taking these kinds of subtleties into account is largely beyond the scope of the graders — but then, the entire scoring model itself is part of the problem.
To be sure, this type of fallacy is a lot less over-the-top — and therefore less interesting to the media — than the patent absurdities students could get away with on the old essay, but it’s a lot more insidious and in its own way just as damaging (if not more so).
Condoning these types of statements only encourages students to conflate appearance and reality — exactly the sort of thing that leaves them open to being easily manipulated. That is exactly the opposite of fostering critical thinking.
In the real world, as I think is obvious by now, these types of misconceptions can have pretty huge consequences.
If you’re just starting to look into test prep for the SAT or ACT, the sheer number of options can be a little overwhelming (more than a little, actually). And if you don’t have reliable recommendations, finding a program or tutor that fits your needs can be a less-than-straightforward process. There are obviously a lot of factors to consider, but here I’d like to focus on one area in which companies have been known to exaggerate: score improvement.
To start with, yes, some companies are notorious for deflating the scores of their diagnostic tests in order to get panicked students to sign up for their classes. This is something to be very, very wary of. For the most accurate baseline score, you should use a diagnostic test produced only by the College Board or the ACT. Timed, proctored diagnostics are great, but using imitation material at the start can lead you very far down the wrong path.
A while back, I happened to find myself discussing the AP® craze with a colleague who teaches AP classes, and at one point, she mentioned offhandedly that with the push toward data collection and continual assessment, schools are increasingly eliminating the type of cumulative final exams that used to be standard in favor of frequent small-scale quizzes and tests that can be easily plotted for administrators’ consumption.
I poked around and discovered that some schools have also eliminated cumulative mid-term or final exams because such assessments are insufficiently “authentic” (read: not fun) or because of concerns about stress, or because so much time is already devoted to state tests.
I wasn’t really aware of that shift when I was tutoring various SAT II and AP exams, but it explained some of what I encountered: students had been exposed to key concepts, but they hadn’t been given sufficient practice for those concepts to really sink in. They were learning only what they needed to know for a particular quiz or test and then promptly forgetting the material.