As I’ve written about recently, the College Board (and David Coleman in particular) appears to have a somewhat tenuous relationship with the concept of evidence. It is therefore entirely unsurprising that the new SAT reflects this muddled understanding of the term.
Consider the following:
In a normal academic context, the word “evidence” refers to information (facts, statistics, anecdotes, etc.) used to support an argument.
An argument is, by definition, a debatable statement. The point of using evidence is to provide support for one side or the other. On the other hand, reality-based statements that cannot be argued with, at least under normal circumstances, are generally considered facts.
For example, the assertion “In the wake of the Volkswagen scandal, we should remotely monitor vehicles’ emissions” is an argument because someone could argue – reasonably or not – that the Volkswagen scandal is not in fact grounds for monitoring vehicles’ emissions. The word “should” indicates that this is a statement of opinion.
On the flip side, the statement “Barack Obama was elected President in 2008 and again in 2012” is a fact. No matter how much someone might dislike the president or disagree with his policies, it would be difficult to argue against this statement from a reality-based standpoint.
Now, answers to questions on standardized tests cannot be arguments – that is, they cannot be debatable. If they were debatable, there would be no way to establish answers that were objectively correct. And a standardized test without objectively correct answers would be completely useless.
It is, of course, possible for a standardized test to include questions about the evidence that various people whose viewpoints are discussed within a passage use to support their arguments, as well as what sort of information would be consistent/inconsistent with their claims or the author’s claims. The current SAT includes some such questions, and it seems that the new test will include a few as well.
While test-takers generally find those questions somewhat annoying, there’s nothing particularly problematic about them. In fact, when constructed well, they do an excellent job of testing the ability to “track” arguments throughout a passage and understand how they could be strengthened or weakened. Indeed, the reading portions of graduate-level exams such as the GRE and the GMAT largely revolve around these types of questions.
The problem, however, is that the vast majority of “supporting evidence” questions on the redesigned SAT – that is, “command of evidence” questions, which ask test-takers which lines in a passage support the answer to the previous question – are not really about supporting evidence at all. Rather, they are literal comprehension questions asked in an unnecessarily roundabout and convoluted way. In reality, they merely ask test-takers to identify where in the passage the answer is located.
For example, a typical “command of evidence” pair is as follows:
The author of Passage 1 indicates that space mining could have which positive effect?
A) It could yield materials important to Earth’s economy.
B) It could raise the value of some precious metals on Earth.
C) It could create unanticipated technological innovations.
D) It could change scientists’ understanding of space resources.
Which choice provides the best evidence for the answer to the previous question?
A) Lines 18-22 (“Within . . . lanthanum”)
B) Lines 24-28 (“They . . . projects”)
C) Lines 29-30 (“In this . . . commodity”)
D) Lines 41-44 (“Companies . . . machinery”)
The correct answer to the first question is a statement of fact, not an arguable claim. Assuming that the question is valid, the author does objectively indicate that one of the answers is true and does not indicate that the other three are true.
Essentially, the only thing these questions are doing is asking test-takers to demonstrate that they understand that the text means what it means because it says what it says – a skill known as “comprehension.”
If a student does not go back to the text to understand its literal meaning, then obviously he or she needs to be taught/reminded to do so. However, understanding that texts use words, sentences, and paragraphs to convey particular ideas and pieces of information – and that it is necessary to look at the page and read those words to know what those ideas and pieces of information are – is the definition of knowing how to read. It is not a “higher order” skill by any stretch of the imagination.
Moreover, a student who consults the given line references and argues otherwise does not have a problem using “evidence.” Rather, he or she is misunderstanding the text – which in turn could be a result of poor vocabulary, difficulty making sense out of complicated/unfamiliar syntax, lack of background knowledge, or any combination thereof. Those are not problems that can be solved through endless drilling of formal skills.
The construction and appearance of these questions may be more complicated – that part would be hard to dispute – but they do not fundamentally test more sophisticated skills.
So why bother creating such convoluted questions in order to test what is in reality a straightforward skill – one that can easily be tested by simply referring students to the appropriate section of the text, as is the case on the current test? Well, see this post for some thoughts on that.