When proclaiming that the SAT and the ACT are not tests that can be effectively coached, the College Board and the ACT like to trot out the following statistic, courtesy of the National Association for College Admission Counseling:

Existing academic research suggests average gains as a result of commercial test preparation are in the neighborhood of 30 points on the SAT and less than one point on the ACT, substantially lower than gains marketed by test preparation companies.

Let’s take a moment and unpack this assertion. First, one of the key words here is “commercial test preparation” (e.g., Kaplan and Princeton Review); nowhere is tutoring through “boutique” companies or private tutoring mentioned. As someone who has helped more than one student raise their verbal score alone by 350+ points on the SAT and 10+ points on the ACT, I have some grounds for disputing the idea that the shortcomings of commercial test prep extend to test prep in general.

That’s not, however, what I really want to focus on here. What interests me, rather, is the idea of average gains and the way in which that average was determined. I’ve been thinking about this thanks to Debbie Stier, who posted a link on her blog to a very interesting article by cognitive psychologist Daniel Willingham, a professor at the University of Virginia.

Willingham makes the point that:

When a teacher presents a reading strategy to students, we can assume that there are three types of students in the class: students who have already discovered the strategy (or something similar) on their own, students who are not fluent enough decoders to use the strategy, and students who are good decoders but don’t know the strategy. Only the last group of students will benefit from reading strategy instruction. When a researcher finds an average effect size of d=0.33 for teaching students the strategy, that effect is probably actually composed of many students who showed no benefit and a smaller number of students who showed a large benefit.

I think that something very similar is going on in many strategy-based prep classes. My guess is that only around half of the people who take those tests (those scoring 500+) actually have literal comprehension skills solid enough to make any sort of strategy-based prep worthwhile.

What this means is that if someone’s comprehension skills are truly up to par (meaning, more or less, that they can pick up a College Board Critical Reading passage at random, understand the gist of it, and summarize the main point and tone), they actually stand to benefit immensely from strategy-based prep. It probably won’t help the ones scoring 750+ from the start because they’re already using many of the standard strategies, even unconsciously, but for many in the still-small percentage scoring in the 600 to low-700 range, the increase can be very substantial. Many of the ones who persistently score in the 500s, however, won’t succeed in raising their scores at all because they lack the core skills on which to base the strategies they learn.

This points to a disturbing conclusion: the real problem isn’t that people can game the test by learning strategies (aka “tricks”) but rather that many test-takers don’t even have strong enough comprehension skills to be helped by those strategies in the first place.