While browsing through Daniel Willingham’s blog the other night, I came across a link to an intriguing — and very worrisome — article about students’ media literacy that seemed to back up some of my misgivings about the redesigned SAT essay and, more generally, about the peculiar use of the term “evidence” in Common Core-land.

The authors describe one of the studies as follows:

One high school task presented students with screenshots of two articles on global climate change from a national news magazine’s website. One screenshot was a traditional news story from the magazine’s “Science” section. The other was a post sponsored by an oil company, which was labeled “sponsored content” and prominently displayed the company’s logo. Students had to explain which of the two sources was more reliable…

We administered this task to more than 200 high school students. Nearly 70 percent selected the sponsored content (which contained a chart with data) posted by the oil company as the more reliable source. Responses showed that rather than considering the source and purpose of each item, students were often taken in by the eye-catching pie chart in the oil company’s post. Although there was no evidence that the chart represented reliable data, students concluded that the post was fact-based. One student wrote that the oil company’s article was more reliable because “it’s easier to understand with the graph and seems more reliable because the chart shows facts right in front of you.” Only 15 percent of students concluded that the news article was the more trustworthy source of the two. (https://www.aft.org/ae/fall2017/mcgrew_ortega_breakstone_wineburg)

Think about that: the sponsored content was explicitly labeled as such, yet the vast majority of the students still judged it the more reliable source because it had a cool graphic and presented information in a way that was easy for them to comprehend. If that many students were taken in by content that was labeled, what percentage would be taken in by content that wasn’t?

These findings link up with a big part of what I find so disturbing about the rhetoric of “evidence” surrounding the redesigned SAT.

The College Board’s contention is that learning to interpret data in graph(ic) form is about learning how to “use evidence” — but at least insofar as the test is concerned, there is exactly zero consideration of where that data comes from, what viewpoints or biases its sources may espouse, or whether it is actually valid. In other words, everything that using evidence effectively actually entails. The only thing that matters is whether students can figure out what information the graph is literally conveying.

There’s a word for that: comprehension.

As I’ve written recently, one of the things I find most concerning about the redesigned SAT essay is students’ tendency to write things like, “Through the use of strong diction, metaphors, and statistics, Author X makes a compelling case for why self-driving vehicles should be embraced.”

The problem is that students are given no information about the source of the statistics, and as the excerpt above clearly illustrates, the use of statistics (even lots of them!) by itself says absolutely nothing about whether a source is trustworthy. But students are in no way penalized for suggesting that citing lots of numbers automatically makes an author’s argument strong. And that is a huge, whopping, not to mention potentially dangerous, misunderstanding.

Moreover, students are permitted to project their misconceptions onto the audience as a whole, e.g., “Readers cannot fail to be impressed by the plethora of large numbers the author cites.” (Actually, yes, some readers probably can fail to be impressed, but only if they actually know something about the subject.)

Taking these kinds of subtleties into account is largely beyond what graders can realistically be expected to do — but then, the entire scoring model is itself part of the problem.

To be sure, this type of fallacy is a lot less over-the-top — and therefore less interesting to the media — than the patent absurdities students could get away with on the old essay, but it’s a lot more insidious and in its own way just as damaging (if not more so).

Condoning these types of statements only encourages students to conflate appearance and reality — exactly the sort of thing that leaves them open to being easily manipulated. That is exactly the opposite of fostering critical thinking.

In the real world, as I think is obvious by now, these types of misconceptions can have pretty huge consequences.