Among the alterations made to the digital version of the SAT are changes to the amount of time per question. The current, paper-based version allows for just over a minute per question in Reading (65 mins./52 questions) and Math (25 mins./20 questions) vs. a bit under a minute for Writing (35 mins./44 questions).

However, the digital exam greatly increases the amount of time for both Math (35 mins./22 questions) and Writing (now integrated into Reading/Writing modules, with 32 mins./27 questions), whereas the amount of time per reading question actually decreases very slightly.
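For readers who want the per-question arithmetic spelled out, here is a quick sketch in Python, using the section times and question counts cited above (figures are from this post, not from any official College Board data feed):

```python
# Minutes per question on the paper vs. digital SAT,
# computed from the section times and question counts cited above.
sections = {
    "paper Reading": (65, 52),
    "paper Math": (25, 20),
    "paper Writing": (35, 44),
    "digital Reading/Writing": (32, 27),
    "digital Math": (35, 22),
}

for name, (minutes, questions) in sections.items():
    print(f"{name}: {minutes / questions:.2f} min/question")
```

The numbers bear out the summary: digital Math jumps from 1.25 to about 1.59 minutes per question, Writing from roughly 0.80 to about 1.19, while Reading dips slightly from 1.25 to about 1.19.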

From an equity standpoint, the proportion of students with questionable diagnoses now receiving extra time has become so high that the move is perhaps designed to tacitly level the playing field somewhat. At the same time, by offering more generous timing, the College Board is obviously seeking to salvage what it can of the shrinking testing market and lure more students away from the ACT, whose timing has not changed in decades, and whose average scores are now at a 30-year low. (Note that the College Board’s periodic “recentering” of the SAT scoring scale has prevented the organization from having to release a comparable report.) The more forgiving Math timing is also presumably designed to help more students meet the benchmark—50 points higher than the Reading/Writing one—and thus to bolster graduation rates in states where the SAT is used as a high-school exit test. But if the move relieves some of the pressure on students, it may also make test results less meaningful.

In response to the change, Adam Grant, a professor of organizational psychology at Penn’s Wharton School and periodic New York Times contributor, published an op-ed piece in which he recounts his experience helping his daughter with math, beginning with the observation that her performance improved significantly when time constraints were removed. Combining this observation with his undergraduates’ reports of struggling with time on the SAT, he concludes that the time pressure on standardized tests “rewards students who think fast and shallow — and punishes those who think slow and deep,” and praises the College Board for moving to help students show what they “really know.”

Even though the article was published nearly a month ago (9/20/23), which is practically ancient history in internet time, I think it’s worth responding to because it reinforces a kind of appealingly truthy narrative that makes an amalgam of several distinct issues (rushing vs. working efficiently; the role of speed in routine, lower-level skills vs. complex ones) while reinforcing some very alluring misconceptions about standardized testing. It also points to some disturbing trends in education that I think teachers and tutors as well as parents should be aware of.

Let me start with this: One of the earliest realizations I had as a tutor was that timing problems were almost never about speed but rather about knowledge (in fact, this was the topic of one of my earliest blog posts). Yes, rushing and careless mistakes might take a few points off here and there, but as I repeatedly found, the main reason students felt anxious and pressed for time was virtually always that they did not really understand the material being tested.

When I worked with a new student, I always began by having them do a test untimed, marking where they were when official time ran out, just to separate out the knowledge and time issues—and not once in nearly 10 years did I see a dramatic difference between timed and untimed scores. There was some variation, of course, but it fell within a relatively narrow range. The untimed version did, however, allow me to see “what students really knew”—or, rather, it exposed just how much they didn’t. In many instances, what students perceived as careless mistakes, or unlucky guesses when they couldn’t decide between two answers, were in fact indicative of major knowledge gaps.

Once students had mastered the requisite material—not just become familiar with it, but really internalized it—the time element essentially took care of itself. This was particularly true for Writing. Although I integrated speed into discussions of particular concepts, I found it largely unnecessary to cover it as a separate element (the exception being ACT Reading, in which time was the main concern).

As for the number of Grant’s students at Wharton who had difficulty finishing the exam… Many of my colleagues assumed that Grant had to be exaggerating on this point, and my first inclination was to agree. Then, however, I realized that his students were in high school during Covid. If they were attending school via Zoom and had grown accustomed to having no tests, along with virtually unlimited time to complete assignments and high grades for little effort, then it is entirely possible that a disproportionate number of them did in fact have trouble finishing the SAT or ACT. Their difficulties could thus be attributed more to the highly relaxed demands of their schoolwork than to the unreasonable rigors of a standardized test. And if schools have continued to emphasize projects, posters, and other types of fuzzy assignments designed to hide how far students have fallen behind, it is unsurprising that loosened time constraints would be touted as a selling point for the SAT.

I worry that a vicious cycle is occurring here: The more unaccustomed to moderate academic stresses students become, the less well they will be able to tolerate them, and the more well-meaning adults will race to remove them (because mental health). Teenagers will consequently receive even less practice managing challenges, further decreasing their resilience and encouraging adults to smooth the way even more.

Next point: Because curriculums vary so dramatically, and because the SAT and ACT double as high-school exit exams in some states, the concepts they assess cannot exceed a moderate level of sophistication. In order for the tests to have any validity, and to be kept in multiple-choice format and at a manageable length, answers must be straightforwardly correct or incorrect. Although they do not require straight-up “rote” knowledge—questions always involve some type of application and may require several steps of logic—they are not designed to require deep thought in the pondering-the-meaning-of-the-universe sense. In terms of these types of routine skills, there is in fact a direct correlation between knowledge and speed. Simply put, people who have truly mastered them can perform them without having to stop and think very hard—the steps necessary to find the answer are clear. The speed, though, is not in itself the point; rather, it is a side effect. And the lack of thought is not a sign of superficiality but of mastery.

Of course, people with very strong skills may indeed make careless errors if they rush or get nervous, and indeed they may sometimes need to learn to work more slowly (for the record, I count myself in this group); however, their pace will remain relatively brisk even when they are being careful and methodical, and they are unlikely to have difficulty finishing within the allotted time.

In contrast, students with a weaker level of understanding will generally hem and haw, fish around and plug in, get distracted, grasp at strategies and explanations that are only loosely related to the task at hand, and misunderstand essential concepts. This is not “deep thinking” in any meaningful sense, even if it may seem that way to a casual observer. While students in this category may eventually fumble their way to some right answers, they are also likely to get stuck on easier questions and then either race to finish or run out of time. Indeed, this is why the time constraints exist.

I certainly understand the seductiveness of Grant’s explanation which, after all, serves to flatter the sensibilities of New York Times readers; however, it strikes me as a version of the “people who can perform basic skills automatically are like unthinking robots, incapable of higher-level thought” argument, which flies so hard in the face of observable reality as to be offensive. Besides, after years of having to urgently get students working after they finished reading a question but before their eyes glazed over and they began staring into space—at which point I’d lost them completely—I think it greatly overestimates the average teenager’s level of knowledge and self-management ability.

Anecdotally, colleagues have also told me that they can no longer administer exams that they routinely gave 10 years ago without the slightest problem, because their students can no longer keep up. Moreover, they’ve observed students who once rushed slightly to finish tests become much slower after receiving extra time, without improving significantly—rather than work steadily and systematically through questions, they would sit and stare at answer choices for long stretches, “neurotically” (as one teacher described it) going back and forth between two options.

And interestingly, I came across a 2011 M.A. thesis (“Automaticity of Basic Math Facts: the Keys to Success?”) from the University of Minnesota that reported a whopping 90% (!) of undergraduates surveyed there had not achieved automaticity in elementary-level math facts. Furthermore, students who took longer to answer questions did not perform better on the given probe. In fact, the opposite was true:

Some mistakes were to be expected, of course, especially because speed was demanded and some participants might have felt stressed. Still, it was surprising how many mistakes were made, even among those who took considerable time with their answers (with fluency in the lower tiers). This argued against the proposition that students “know” their math facts even when their fluency is low. Further evidence of this was the fact that some students made the same mistakes on subsequent problems involving the same factors, suggesting their mistakes were due to not knowing those particular facts rather than just random errors. The frequency and types of mistakes showed that many students not only lacked automaticity, but were in fact unable to accurately construct the answers. 

So I think some skepticism is justified here.

In terms of more complex skills or problems, yes, of course, people with a high level of knowledge may perform more slowly because they are able to draw from a wider frame of reference; consider connections to other situations; and attend to details and nuances that a person with less experience would overlook. That said, people who struggle with fundamentals generally cannot complete advanced tasks at a particularly high level and often are unable to tackle them at all, regardless of how fast or slow they may happen to think.

In short, working slowly may be a sign of depth, but it may also signal a lack of understanding; and working quickly may be a sign of superficiality, but it may also signal a very high level of knowledge.

One could, in fact, assert that one of the key signs of mastery is the ability to work either quickly and accurately, or slowly and accurately, combined with the capacity to adjust one’s approach depending on the situation.

To give a personal example, I can blaze through a 75-question ACT English section in about 10 minutes. On the other hand, in my own writing I often work at a pace so plodding that I wager most people would find it intolerable: it is not uncommon for me to spend several hours writing a single practice question and, in the absence of any time constraints whatsoever, I can spend days or weeks—or even, in a few instances, months—composing one blog post. My first book, which was under 200 pages, took me two-and-a-half years.

So, please tell me: am I a “fast and shallow” thinker, or a “slow and deep” one?

If I were feeling snarky—which frankly I am—I would point out that Grant did not have “unfair” time constraints imposed on his piece and yet managed to make only a very shallow argument.

But moving back to the SAT and ACT, it is also important to understand that these tests are used for their predictive ability: their validity rests in part on their assessment of how many questions test-takers can answer correctly under constraints designed to correlate with success in first-year college classes. The College Board has decades of research on this topic, and so it is reasonable to assume that decisions regarding time constraints have not been made arbitrarily.

While the concepts assessed on the SAT and ACT do not on their own merit deep thought, they are essentially the sub-components of more complex undertakings. Only when these skills have been mastered to the point at which they can be done on autopilot is the brain freed up to think deeply about genuinely sophisticated concepts in a meaningful, coherent way. A student who struggles to finish SAT sections because they are “deeply” mulling whether, say, a statement that begins with “it” can be a sentence, or whether to add or multiply first when solving an equation, is obviously at higher risk for struggling in college than one for whom these skills are automatic.

In terms of reading, brute decoding speed is also an essential component of mastery: Accurately processing words at the speed of speech, or around 200 words per minute, is essential to translating them into meaningful language, to grasping things like tone and connotation. A person who reads significantly more slowly is essentially trudging through a series of disconnected words, something that makes it virtually impossible to get through dense reading assignments and that can severely restrict one’s choice of major.

In terms of Math, my knowledge of the research is less extensive, but I’m aware that there is a significant body of research demonstrating a link between speed, fluency, and higher-level achievement.

Now for the part I find most troublesome.

When I was searching for Grant’s article on the New York Times website while writing this post, I discovered that the Education section had posted a link to the piece under the very (mis)leading title “Is It Time to Get Rid of Timed Tests?”, and that students from several high schools in various parts of the U.S. had been assigned to comment on it (under their real names and high schools, something that strikes me as inappropriate if not overtly unethical).

Predictably, the vast majority of students not only agreed with Grant but asserted that timed tests should be abolished completely. As I read through the comments, though, I also became aware of three strains of thought that strike me as deeply concerning, and that I think should worry anyone involved in education.

1) Lack of understanding = deep thought

When slow thinking is taken as conclusive evidence of deep thinking, and fast thinking of superficiality, students who struggle with time on exams because they have not really learned the material, or because they have wasted time staring off into space—as several students openly admitted to doing—are encouraged to believe that they are intellectually superior to those fast workers who, they are encouraged to assume, only know how to think on a surface level.

According to this twisted logic, ignorance can be viewed as something better than knowledge. And the less someone knows, the more convenient an ego-booster this explanation becomes. Indeed, the smug tone of the commenters who proclaimed themselves “slow, deep thinkers” was impossible to miss—nor were their numerous errors in the basic mechanics of writing (although presumably being a “deep thinker” places one above such trivial concerns).

2) Speed = Identity

Obviously, yes, some people naturally work more quickly than others, but speed is by no means a fixed trait. People can and do get faster at things all the time.

In this view, however, the trope that “everyone does things in a different way” morphs into “the speed at which a person does things is a fundamental aspect of their identity, so requiring students to work more quickly represents an attack on their deepest selves.”

As a result, providing students with as much time as they need—even if that means eliminating time constraints entirely—is seen as the only acceptable way to respond.

3) The victimization complex

This tendency is perhaps the most disturbing of the three, and I could not help but notice that several student commenters latched very hard onto it. Here, slower workers are not merely held to be penalized by time constraints but are encouraged to proclaim themselves “victims”—that is the literal word they used.

The vindictiveness in their words was clear: teachers or other adults who impose “unfair” time limits are perpetrators, with the subtext that they must be punished.

For the record, my gripe is not with the kids here—teenagers have, after all, moaned about tests since time immemorial—but rather with the adults who encourage this type of thinking.

This does not bode well.

A couple of weeks after Grant’s op-ed ran, his Times Opinion colleague Jessica Grose published an article exploring why the teaching profession is hemorrhaging members at such an alarming clip. The piece provoked a flood of comments from teachers describing the extraordinary level of pushback they encountered from students as well as parents when they attempted to impose a minimum of academic accountability.

Now, I strongly doubt that Adam Grant intended for his piece to add fuel to this situation, or that he even considered the role his piece might play in it. Indeed, it is hard for many people to imagine just how fraught the situation has become. But instructors who do insist on pushing kids academically, even in the most caring way, must walk an increasingly narrow line, always careful not to do or say anything that might provoke threats of legal action. Who would want to enter such a profession? Or remain in it for decades?

This is not a tenable situation. And for many teachers, it is becoming less tenable by the day.