If you’re just starting to look into test prep for the SAT or ACT, the sheer number of options can be a little overwhelming (more than a little, actually). And if you don’t have reliable recommendations, finding a program or tutor that fits your needs can be a less-than-straightforward process. There are obviously a lot of factors to consider, but here I’d like to focus on one area in which companies have been known to exaggerate: score improvement.

To start with, yes, some companies are notorious for deflating the scores of their diagnostic tests in order to get panicked students to sign up for their classes. This is something to be very, very wary of. For the most accurate baseline score, you should use only a diagnostic test produced by the College Board or the ACT. Timed, proctored diagnostics are great, but using imitation material at the start can lead you very far down the wrong path.

If you or your child is signed up for a practice test at a local testing center, find out what type of test will be used, and if it isn’t official material, ask to bring your own. Yes, I know that sounds a little over-the-top, but I cannot stress enough how important this is in terms of figuring out what areas actually need work. A practice exam not produced by the test-makers may fail to pick up on areas where there are real problems, or indicate problems where there are none (for example, on a topic that doesn’t even show up on the real test). And that could ultimately cost you months of time and money.

Even though I haven’t tutored in several years, I still get emails in April from panicked parents whose children have been prepping for months and still have scores that are all over the place. Please believe me when I say that a few well-placed precautions upfront can be the difference between a relatively straightforward test-prep process and one marked by frustration, anxiety, and parent-adolescent discord.

But all that said, there is another, much subtler scenario that doesn’t normally get discussed: namely, the inclusion of comparison scores obtained long before junior year.

To be fair, this is probably an issue primarily for companies and tutors who work with the kinds of serious studiers who have taken the SAT in middle school for the Johns Hopkins or Duke programs, but I think it’s worth taking into account for anyone looking to gauge a tutor or program’s efficacy.

To state what should be an obvious point: far from being tests of some sort of generalized, abstract intelligence, the SAT and ACT are tests intended for high school juniors and seniors. Not really tenth graders, not ninth graders, and certainly not seventh or eighth graders. While a handful of truly exceptional students may achieve high scores on these exams well before junior year, twelve- and thirteen-year-olds are, as a rule, not supposed to do well.

As a result, when students do take these exams at a young age, it is not uncommon for them to receive scores that are hundreds of points lower than the ones they receive as high school juniors or seniors. Colleges understand this perfectly well and do not penalize applicants in the least if a set of low scores from a middle-school exam happens to be submitted along with a much higher set of junior- or senior-year scores. (Ok, College Confidential people, do you hear that? Harvard won’t reject you because you scored a 590 when you were twelve. No one cares about your middle school scores. Seriously.)

The problem comes when tutors or companies use those lower scores as a benchmark. This isn’t technically lying, but it’s massaging the numbers in a way that’s deliberately misleading. Again, I don’t think this phenomenon is hugely widespread, but if a company has a disproportionate number of students making what seem like unusually large gains (particularly in areas where it isn’t uncommon for test prep to start early), it’s something to investigate.

Say a student scored a 1250 Math/Verbal combined on a “let’s-just-see-how-you-do” test at the start of tenth grade, and then scored a 1450 without prep at the beginning of junior year, after covering a lot of the unfamiliar material in class. If, after three months of prep, that student scores a 1550, it really is more accurate to say that the score rose by 100 points, not 300.

On the other hand, a student who begins junior year at 1100 and raises that score to a 1400 after six or eight months can genuinely claim to have improved by 300 points. And that is a major achievement.

Obviously, companies have an incentive to use scores from as far back as possible in order to boost their stats; it’s also not unheard of for parents to latch onto earlier scores for bragging rights about how much their offspring improved.

But unless they are exceptionally high or low, scores obtained before the end of sophomore year are typically of limited value at best and entirely useless at worst.

So if you’re in the process of interviewing a company or tutor, don’t just ask about their average score improvement (obviously an imperfect metric, but still useful for getting a general idea). Also ask how improvement is gauged. Are baseline scores typically from late sophomore or early junior year? Or are scores from earlier tests counted as well?

It’s a two-second question, but the answer you get should be revealing in more ways than one.