At this point, I’ve spent many months picking apart released dSAT questions and attempting to use their logic and patterns to construct hundreds of my own. I realize that I haven’t offered much in the way of opinions about the digital exam, but that’s mostly because I’ve been spending so much of my time over the last six-plus months trying to get the new editions of my SAT books finished. I assure you that I have been thinking very, very hard about the alterations.
So, that said, there are seven key changes—both positive and negative—that I think are particularly deserving of attention, and I’d like to discuss them here.
1) Short-Passage, One-Question Format
Without a doubt, this change makes the exam more streamlined and is ideally suited to the online format. Longer passages would require test-takers to continually scroll up and down, making the process of answering questions somewhat awkward and increasing the chances that important information will be overlooked.
This shift also brings the SAT closer to exams such as the GRE and GMAT, which have been offered electronically for many years and feature primarily short passages.
As much as I appreciate the practical aspects of this decision, however, I still find it worrisome because it so clearly panders to ever-decreasing expectations about the amount that students in both high school and college should be expected/able to read. Anecdotally speaking, several colleagues who teach at the secondary and post-secondary levels have told me that they can now only assign a fraction of the reading they regularly assigned even five or ten years ago, and one (non-tenured) friend who teaches at an extremely selective university was recently reduced to bargaining with her students about the amount of reading they would tolerate—not for one assignment, but for the entire semester.
This really does not bode well.
In theory, at least, dealing with complex arguments in writing is one of the main purposes of college, and most serious ideas cannot be boiled down into a paragraph. Call me old-fashioned, but from my perspective a student who seriously cannot handle even 500-750 words of text is not ready for a university-level education.
In addition, one under-appreciated aspect of longer passages is that they force students to practice reading efficiently—a crucial skill for success in college. Complaints about long passages being boring, about taking too much time, etc., fundamentally miss the goal of the test. The point has never been to make students scrutinize every word but rather to zero in on key information while skipping over less-important details. (I tutored more than one student who struggled in college because they had never learned to read this way and simply could not keep up.)
Incidentally, the short passages on the digital test can generally be approached the same way—in some cases, the answer can be determined from a single sentence—but their conciseness, along with the generous time allotment, will result in many students being able to get through the test reading at a slower pace. As a result, Reading/Writing scores may be somewhat less predictive of preparedness for long reading assignments in college.
2) Vocabulary
Very discreetly, the College Board has added back this most traditional aspect of the SAT, albeit in slightly simplified form. Unlike the pre-2016 version of the test, which featured two-blank sentence completions, the dSAT will include questions with only one blank.
In terms of difficulty, these questions appear to sit fairly close to those on the pre-2016 test. If some of the most challenging words are unlikely to appear, that is probably counterbalanced by the complex language in the passages. For many students, just figuring out the logical definitions of the words in the blanks will be a real challenge.
While I would like to think that the revival is due to the recognition that a working knowledge of the moderately sophisticated, New York Times-level vocabulary (aka “obscure” words) traditionally included on the exam is in fact kind of important, I suspect that it has more to do with practical matters.
First, single-blank sentence completions are both easy to produce and ideally suited to the dSAT’s short-passage/single-question format. And second, the College Board presumably has reams of psychometric data assessing student performance on vocabulary items, facilitating the development of a scoring and scaling system for the digital test.
The fact that the College Board has drawn no attention to the return of these questions is entirely unsurprising. The shift to a focus on second meanings of common words (or, as the CB termed them, “relevant” words—an Orwellian formulation if ever there was one) was one of the biggest selling points of the 2016 redesign. Walking it back after only seven years does not paint the organization in a particularly flattering light.
3) The Elimination of Historical Documents and the Addition of Poetry and Drama
This is a shift I found somewhat surprising, given that, again, the inclusion of historical documents was one of the big selling points of the 2016 test. Because these texts are all in the public domain, there are no potential copyright/distribution issues, and it certainly would have been easy enough to excerpt 100-150 words with a clear focus.
Perhaps their removal from the exam signals an attempt by the College Board to step away from the idea that the SAT is a curriculum-aligned test (an absurdity, given the vast differences in curriculum across schools) as well as to shed some of its associations with Common Core. It is also possible that students performed exceptionally poorly on these passages, or in a way that might have somehow affected the scaling.
But why add poetry and drama? To differentiate the test from the ACT? Or perhaps because psychometric data from the AP Lit test and (retired) Literature SAT II could be used to benchmark certain questions? Or there could be another reason entirely.
At any rate, it turns out the Great Global Conversation wasn’t quite so great after all.
4) The Elimination of “Evidence” Pairs
Another little-mentioned change to the digital version of the test is the removal of “evidence” question pairs, which were introduced during the 2016 redesign. On a practical level, the reason behind it is obvious: these questions always come in pairs (test-takers must identify the section of the passage that provides “evidence” for the answer to the previous question), whereas each passage on the digital exam can only be accompanied by one question.
This change is not enough to fully counter my concerns about the elimination of long passages, but it definitely helps.
I find it almost impossible to overstate my distaste for these questions: wordy and tortuous, complicated for the sake of complexity, created for no purpose other than to give a false impression of sophistication (“higher-order thinking,” in edu-speak) and, in the process, a distorted picture of what evidence is.
I have made this argument before, but it bears repeating here: A text means what it means because it includes specific words, written in a specific order, conveying particular ideas and not others. At a basic, literal level, meaning is not something that a reader needs to continually “prove,” or “offer evidence for”; the compulsive rearticulation of it makes the reading process stultifying and trapped at a persistently low level.
The new test, in contrast, merely asks students to identify answers that “support a claim” (prose fiction or poetry). Although the claims involved are very simple and are more accurately characterized as descriptions of the text than arguments per se, this at least moves away from the warped Common Core terminology and toward a use of language that is more firmly grounded in reality.
This is a very good thing.
On the other hand, the “higher-order” game of smoke and mirrors continues with…
5) “Student Notes” and Graph/Chart Questions
While it might seem odd to group text- and graphic-based questions together, they appear to serve the same fundamental purpose—to make the SAT appear to test students’ ability to “synthesize” information (“higher-order” thinking), even though in the vast majority of cases they are reading comprehension questions only, with no actual synthesis required.
I discovered this as I was working through the initial released tests—after bouncing back and forth between the bullet points/graphics on the first few questions I encountered, becoming progressively more confused, I realized that in almost every case, I could find the answer based solely on the wording in the passage or question.
So why bother to include so much irrelevant and confusing information? I suspect that the answer largely involves the College Board’s reliance on the state testing market. To be adopted as a high-school graduation exam, the SAT must appear to be in alignment with various state standards (essentially Common Core renamed), on which “synthesis” is undoubtedly a feature. In addition, some state standards now feature an independent student-research component, divorced from any specific class or body of knowledge, and “notes” questions evidently represent an attempt to test this feature in a quantifiable way. But because this is an abstract formal skill, there is no way to assess it meaningfully SAT-style.
This barely disguised trickery strikes me as kind of, well, tacky, not to mention hugely unfair. Given that the College Board is unlikely to publicize (via Khan) the fact that these questions do not really test what they appear to be testing, and in fact has a vested interest in making them out to be artificially complex, students who have access to only CB-affiliated prep will be at a real disadvantage.
The irony is that the test developed a reputation for trickiness when there were no actual trick questions, then added them in after a campaign proclaiming that the redesigned SAT would not include them. Given this behavior, I really do have to sympathize with the anti-testing folks sometimes.
6) Text Completions and Support/Undermine Questions
These are closer to straight-up logic questions, cousins of the logical reasoning passages that appear on the GMAT and LSAT. While support/undermine questions do appear occasionally on the current version of the test, they play a much more prominent role on the digital version. They seem generally quite well constructed (to the point where I actually wondered whether the College Board had slipped responsibility for writing the test back over to ETS), and I found no major issues with them. Good readers with a strong sense of logic will be fine, but a lot of kids will struggle with them.
7) The Adaptive Scoring System
For previous iterations of the SAT, the College Board has always released a clear scaled scoring chart: x number wrong on a particular test date corresponds to a scaled score between 200 and 800. There might be minor variations from exam to exam, but everything sits within a narrow range within a given scoring system (“recentered” upward in both 1995 and 2016 so that a 1200 today is probably a couple of hundred points higher than it would have been in the early 1990s).
From what I understand, the adaptive scoring system is something of a black box, with certain questions—it is unclear which ones—being weighted more heavily than others. The sample score report struck me as low on specifics and largely devoid of meaningful information, although it did seem to suggest that the curve for Reading/Writing is larger than the one for Math. This opacity will presumably allow the College Board to quietly and continually adjust the scale as necessary to ensure scores do not drop so low as to further threaten the test’s market share.
To sum up:
Despite the issues discussed above, the digital exam actually strikes me as a significant improvement over the 2016 paper-based version. It’s lost some of its Common-Core tediousness and heavy-handedness, and it seems overall more reflective of the skills that college-bound students should be able to demonstrate. It also represents a return to an emphasis on logical reasoning. On the whole, the questions I worked through seemed appropriately challenging for high-school juniors and seniors, although the simplified format may mislead some test-takers into thinking the test is easier than is in fact the case. Students who want to significantly improve their scores will still need to put in meaningful study time; the question, given that they can choose simply not to submit scores, is whether they will find it worthwhile to do so.
Thank you for this interesting article! Not to mention that there will no longer be the QAS (Question and Answer Service) with the digital SAT…that’s disappointing…And also, what do you think about the fact that the digital SAT is computer-based vs. the old paper-based version? Do you think some students will prefer paper-based and go to the ACT (in the US, that is)?
Great summary of the new test. Having taken all four “official” digital SAT exams now, I’d say that the digital test is “slightly” easier, but not much. For students in the 900-1200 scoring bands, I suspect their scores will be almost the same on both the current pencil/paper version and the new digital one. The big gain is really PR for the College Board: because it’s now only 2 hours 16 minutes, it FEELS easier for students to take, and the perception may help SAT enrollment a bit.
On the upper end of SAT scorers, I foresee more variance because of two cross-currents: “slightly” easier verbal questions make it easier to get 100% correct, but there is a GREATER penalty for each wrong answer. On the last digital test, one wrong verbal question = a 770 score. Careless work will be severely punished for those seeking >1500 scores.
On the math, I actually observed some harder questions (different from the paper/pencil ones), but with a more generous curve. Overall, the test is better and still retains (with the notable exception of the new note-taking questions, as Erica points out) some relevance to college and business-world skills.