When the redesigned SAT was rolled out this past March, most test-prep professionals expected that there would be a few bumps; however, there was also a general assumption that after the first few administrations of the new test, the College Board would regain its footing, the way it did in 2005, after the last major change.

Unfortunately, that does not appear to be happening. If anything, the problems appear to be growing worse.

If you’ve been following my recent posts, much of this will be familiar. That said, I think it’s worth summing up some of the most important practical concerns about the new test in a single post.

1) Test security

The redesigned SAT has been plagued by security problems since its first administration in March 2016. The College Board has long recycled tests, re-administering exams internationally after they have been given in the United States, a practice that has continued with the new exam and that has created numerous opportunities for cheating.

Predictably, problems appeared as soon as the new test was introduced: on March 28th, Reuters broke a story detailing the College Board’s decision to administer the exam even after it was revealed that the test had already been compromised in Asia.

Disturbingly, this seems to be turning into a pattern. On July 25th, Reuters also reported that hundreds of questions intended for the October 2016 exam had been leaked, raising serious questions about the College Board’s ability to keep tests secure and the testing process fair. 

2) Test validity

As soon as the College Board released the new Official Guide in June 2015, tutors and other test-prep professionals began commenting on the greatly diminished quality of the questions on the new SAT.

After a long silence about just who would be writing the new exam, a College Board representative finally confirmed (via Twitter!) that the SAT would no longer be written by ETS, as it had been since the 1940s, but rather by the College Board itself. That meant the most experienced ETS psychometricians would no longer be doing quality control.  

In June, Manuel Alfaro, a former College Board director of test development, posted a series of tell-all reports on LinkedIn detailing the shockingly disorganized process by which questions were created and vetted.

Among his revelations: questions were being revised after field-testing (meaning that substantially altered questions were effectively being tested out on the actual exam); one test advisory committee member wrote a scathing, 11-page letter stating that the test items were “the worst he had ever seen”; and David Coleman repeatedly ignored pleas from College Board employees concerned about the quality of the items.

The assertion that there is a severe shortage of acceptable test items is borne out by the fact that some students who took the June SAT received exams identical to the March test. Let me reiterate that: some students retook the exact same test only three months after they first sat for it.

The College Board has clumsily tried to get around this problem by barring tutors from taking non-released exams, and by demanding that students not discuss specific questions publicly.

There is nothing to suggest this problem is going away anytime soon. Assuming the College Board elects not to include the compromised items on the October test (which may or may not be a reasonable assumption), where will they obtain a sufficient number of valid replacement items in time? Will the same exam again be given multiple times in the same year?

3) Score delays

Traditionally, SAT scores have become available around two-and-a-half weeks after each test administration. This year so far, students have had to wait up to two months for their scores. The College Board has not yet publicized score-release dates for 2016-2017, so it is unclear whether these delays will continue.

In addition, the College Board has traditionally released the October, January, and May tests through the Question and Answer Service (QAS). Typically, these exams are made public approximately six weeks after the test is administered. This year, however, two of the exams administered in May are projected to be released at the end of August, reportedly because a problematic question needed to be replaced — an unprecedented occurrence. (According to a College Board official, they’re still “figuring out the meta-data.” Whatever that means.)

If October scores are delayed because of the security breach, or if some of the items need to be replaced, it is reasonable to expect another long wait for that test.

4) Lack of authentic practice exams

Old SAT: 10 tests in the Official Guide, plus several additional official practice tests released by the College Board.

ACT: Five tests in the previous edition of the Official Guide, plus two entirely new tests in the updated edition. There are also several additional official practice tests floating around the web. 

New SAT: Four tests in the Official Guide. The May exam, which normally would have been released mid-June, is still unavailable as of mid-August. 

It was originally reported that the College Board/Khan Academy would be releasing an additional four tests last fall, but that plan was quietly shelved at some point.

So yes, there is ample practice material on Khan Academy, but there is no substitute for authentic, full-length practice tests for figuring out test-taking issues such as pacing and endurance.

5) Inconsistent and distorted scaling/scoring, and unhelpful score reports

In the past, all of the students taking the SAT in the United States on a given date received the same test, although different test booklets presented the nine multiple-choice sections in different orders to hinder cheating.

The new SAT, by contrast, always presents the four sections in the same order; instead, different students sitting for the same administration are now given entirely different tests.

Because different tests are scaled differently, students who answer the same number of questions correctly may receive different scores. Although students will still have a general idea of how many questions they need to answer correctly in order to achieve their goals, this does make it more difficult to plan strategically.
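As a purely hypothetical illustration of how this works (the conversion values below are invented and are not actual College Board scaling data), two forms of the test might convert the same raw score differently:

```python
# Hypothetical raw-to-scaled score conversions for two different test forms.
# The numbers are invented for illustration only; real conversion tables come
# from the College Board's equating process and vary from form to form.

form_a = {56: 750, 57: 760, 58: 780}   # a form scaled relatively generously
form_b = {56: 730, 57: 740, 58: 760}   # a form scaled relatively strictly

raw_score = 57
print(form_a[raw_score])  # 760
print(form_b[raw_score])  # 740 -- same raw score, different scaled score
```

The point is simply that the raw-to-scaled conversion is form-specific, so two students with identical raw performance can walk away with different scores.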

In addition, percentiles were formerly calculated based only on the scores obtained by the students who actually took the SAT. Now, however, the College Board has also created a “National Percentile” category, which compares actual test-takers to all students nationally, even ones who did not take the test. As a result, the reported percentiles are inflated.
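To see why broadening the comparison group pushes percentiles upward, here is a minimal hypothetical sketch; the scores below, and the assumption that non-test-takers would score lower on average, are invented for illustration and do not reflect the College Board’s actual methodology:

```python
# Hypothetical illustration: the same score lands at a higher percentile when
# the comparison group also includes students who never took the test
# (assumed here, purely for illustration, to score lower on average).

def percentile(score, population):
    """Percent of the population scoring at or below the given score."""
    at_or_below = sum(1 for s in population if s <= score)
    return 100 * at_or_below / len(population)

# Made-up composite scores for students who actually sat for the exam
test_takers = [900, 1000, 1050, 1100, 1200, 1250, 1300, 1400, 1450, 1500]

# Made-up estimated scores for students who did not take the exam
non_test_takers = [700, 750, 800, 850, 900, 950]

student_score = 1200
print(percentile(student_score, test_takers))                    # 50.0
print(percentile(student_score, test_takers + non_test_takers))  # 68.75
```

With these made-up numbers, the same 1200 sits at the 50th percentile among actual test-takers but at roughly the 69th percentile once non-test-takers are folded in, which is exactly the kind of inflation described above.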

Although the College Board has released concordance scales between the old SAT and the new SAT, and between the new SAT and the ACT, it remains unclear how reliable/accurate these scales are, or how colleges will view them. The ACT has also taken the College Board to task, questioning the validity of the SAT-ACT concordance table.

Prior to March 2016, SAT score reports included commonsense, helpful information such as the number of questions answered correctly and incorrectly on each section. 

Now, however, SAT score reports consist primarily of edu-jargon that many students are likely to have difficulty interpreting (e.g. “Make connections between algebraic, graphical, tabular, and verbal representations of linear functions”), making it difficult for students as well as tutors to understand where and why points are being lost, and what specific steps are required for improvement.