The article, which takes as its starting point the question of what role taxpayer funds should play in supporting a nominally non-profit private organization, goes far beyond what its rather dry, technocratic title would seem to imply. In fact, the implications are so head-spinning that I actually had to read the piece several times to absorb it in full. It pulls together a lot of the threads I’ve been attempting to trace over the last couple of years, and provides a plausible answer to the question of how the College Board has continued to bounce back from scandal after scandal in a way that most other organizations in its position could not.
The SAT and ACT have released their scores for the class of 2018, accompanied by the predictable wailing and gnashing of teeth about persistently low levels of STEM achievement.
As Nick Anderson of the Washington Post reports:
Forty-nine percent of students in this year’s graduating class who took the SAT received a math score indicating they had a strong chance [75%] of earning at least a C in a college-level math class, according to data made public Thursday. That was significantly lower than on the reading and writing portion of the tests: 70 percent of SAT-takers reached a similar benchmark in that area.
What the article quite remarkably fails to mention is that the benchmark verbal score, 480, is a full 50 points lower than that for math. Given the discrepancy, it is entirely unsurprising that fewer students met the benchmark in math.
Let’s try some basic — and I do mean basic — critical thinking with statistics, shall we?
To understand what a 480 verbal score on the redesigned SAT actually means, consider that it translates into about 430 on the pre-2016 exam, which in turn translates into about a 350 (!) on the pre-1995 SAT.
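To make the chain of conversions concrete, here is a minimal sketch. Only the three anchor values cited above (480 on the redesigned test, roughly 430 pre-2016, roughly 350 pre-1995) come from the text; these are not official College Board concordance tables, and a real converter would interpolate across full tables.

```python
# Illustrative score equivalences for a single verbal score, based on the
# approximate conversions cited above. These are NOT official concordance
# tables; only the 480 -> ~430 -> ~350 chain comes from the text.
REDESIGNED_TO_PRE2016 = {480: 430}  # redesigned SAT (2016+) -> pre-2016 scale
PRE2016_TO_PRE1995 = {430: 350}     # pre-2016 scale -> pre-1995 (pre-recentering) scale

def historical_equivalent(redesigned_verbal: int) -> int:
    """Map a redesigned-SAT verbal score back to its rough pre-1995 equivalent."""
    pre2016 = REDESIGNED_TO_PRE2016[redesigned_verbal]
    return PRE2016_TO_PRE1995[pre2016]

print(historical_equivalent(480))  # -> 350
```

The point of tracing the chain is that the same nominal "benchmark" number has drifted roughly 130 points over two redesigns of the scale.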
This is not “college ready” in any meaningful sense of the term. In my experience, students scoring in this range typically struggle to do things such as identify when a statement is a sentence, or grasp the concept that texts are making arguments as opposed to “just saying stuff.” But to reiterate one of my favorite points, this is in part why the SAT was changed: the decline in reading/writing scores was becoming embarrassing. And if you can’t change the students, the only other option is to change the test, and the scoring system along with it.
The last couple of weeks have seen some new developments in the most recent SAT scandal. Initial reports stated that some questions from the August 2018 test administered in the U.S. had been leaked in Asia before the exam. Mercedes Schneider did a little bit of digging, however, and discovered that wasn’t exactly the case. In reality, the problem goes a lot deeper, and in this case it doesn’t lie with Asian testing centers or students.
When scores for the June SAT were released last month, many students were in for a rude surprise. Although their raw scores were higher than on their previous exam(s), their scaled scores were lower, in some cases dramatically so.
An article in The Washington Post recounted the story of Campbell Taylor, who in March scored a 1470—20 points shy of the score he needed to qualify for a scholarship at his top-choice school:
[T]he 17-year-old resolved to take the test again in June and spent the intervening months buried in SAT preparation books and working with tutors. Taylor awoke at 7:30 a.m. Wednesday and checked his latest score online. The results were disappointing: He received a 1400.
He missed one more question overall in June than in March but his score, he said, dropped precipitously. And in the math portion of the exam, he actually missed fewer questions but scored lower: Taylor said he got a 770 in March after missing five math questions but received a 720 in June after missing just three math questions.
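The pattern Taylor describes, missing fewer questions but receiving a lower scaled score, is a product of equating: each administration gets its own raw-to-scale conversion table, so identical raw performance can yield different scaled scores on different test dates. A rough sketch of how that works, with the caveat that only the two anchor points (five wrong for a 770 in March, three wrong for a 720 in June) come from the article, and the 58-question math section count and table structure are illustrative assumptions:

```python
# Hypothetical raw-to-scaled conversion tables illustrating equating.
# Only the two anchor points (March: 5 wrong -> 770, June: 3 wrong -> 720)
# come from the article; the rest is an assumption for the sketch.
MATH_QUESTIONS = 58  # assumed question count for the redesigned SAT math section

march_table = {53: 770}  # raw score (questions answered correctly) -> scaled score
june_table = {55: 720}   # June's "harsher" curve for the same section

def scaled_score(table: dict, questions_missed: int) -> int:
    """Convert a count of missed questions to a scaled score via that date's table."""
    raw = MATH_QUESTIONS - questions_missed
    return table[raw]

# Missing fewer questions in June still yields a lower scaled score than March.
assert scaled_score(june_table, 3) < scaled_score(march_table, 5)
```

In other words, the scaled score depends not just on how many questions a student misses, but on which conversion table the College Board applies to that administration.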
New, inflated SAT scores cause confusion, happiness (or: what the media doesn’t say about the new SAT)
Nick Anderson at The Washington Post reports that the scoring of the redesigned SAT is causing some confusion:
The perfect score of yore — 1600 — is back and just as impressive as ever. But many students could be forgiven these days for puzzling over whether their own SAT scores are good, great or merely okay.
The first national report on the revised SAT shows the confusion that results when a familiar device for sorting college-bound students is recalibrated and scores on the admission test suddenly look a bit better than they actually are.
Reuters’ Renée Dudley has come out with yet another exposé about the continuing mess at the College Board. (Hint: Coleman’s “beautiful vision” isn’t turning out to be all that attractive.)
This time around: what will happen to the new supposedly Common Core-aligned SAT if Common Core disappears under the incoming, purportedly anti-Core presidential administration?
As Dudley writes:
The Core’s English Language Arts standards call on students to grapple with important readings, including hallowed U.S. documents such as the Declaration of Independence and works of American literature. Coleman’s redesigned SAT embraced the same concept. The Core’s reading standards “focus on students’ ability to read carefully and grasp information … based on evidence in the text” – a pillar of the new SAT. And the Core’s math standards call for “greater focus on fewer topics” – another principle echoed in Coleman’s new SAT.
Former College Board vice president [Hal] Higginbotham was among the first to raise concerns about hitching the SAT’s future to the Common Core.
In his February 2013 response to Coleman’s “beautiful vision,” Higginbotham noted that some states wouldn’t begin implementing the learning standards until the 2014-2015 school year, the same time period in which Coleman wanted to launch the redesigned SAT. It would take years for teachers and students to get fully up to speed on the new curriculum, he and others argued.
“That circumstance leads me to wonder whether all students will have arrived at the starting line at the same time and whether the playing field for them will be level,” Higginbotham wrote in his memo to Coleman. Some students might be “more comfortable and competent than others in what will be presented” on a test aligned with the Common Core, he wrote.
As a consequence, a Common Core-based SAT “will inadvertently favor students from those geographies that have made the most progress” with the standards, Higginbotham wrote. Such a situation “raises fundamental questions of fairness and equity.”
It’s unclear how Trump’s election – and his choice of a Common Core opponent for secretary of education – might affect the SAT and the College Board. Coleman hasn’t spoken publicly about the president-elect’s views.
I’ve followed Dudley’s series of articles on the College Board with great interest, and for the most part, I think she’s done a very valuable service in revealing some of the more serious problems plaguing the new exam — problems that include the recycling of recent exams so that students received the same exam they had already taken, the leaking of test forms before the exam, and the inclusion of items that did not meet the specifications set out by the College Board.
In this case, however, Dudley’s reporting inadvertently (I assume) encourages some fundamental misunderstandings about Common Core, what it actually involves in terms of curriculum, and how it relates to the redesigned SAT.
A few key points here.
First, with regard to the idea that Common Core could be uniformly rescinded: the federal government’s role in CCSS is limited, at least in terms of imposing the standards. CC was adopted by individual states, and individual states will decide whether to retain or abandon the Standards (or pretend to abandon them while renaming them State Standards).
To be fair, Dudley does mention that CCSS was adopted on a state-by-state basis; her concern is that anti-Core sentiment at the top may translate into more states dropping the Standards.
That, however, brings me to my second point. As Diane Ravitch points out, the DOE may be effectively outsourced to Jeb Bush and Co., major proponents of Common Core. Coleman even released an announcement *praising* Betsy DeVos’s appointment as Secretary of Education.
Despite nominal political divisions, all of these people are effectively on the same side, at least where charters, school “reform” (privatization), school choice, etc. are involved. There may be degrees of disagreement over, say, the value of vouchers or the accreditation of for-profit vs. non-profit charters, but they are basically ideologically aligned.
She’s even spent millions lobbying politicians in her home state of Michigan asking them NOT to repeal Common Core…
Next time, Dudley might want to take a piece of edu-speak to heart and “dig deep” before taking anyone in the president-elect’s circle literally.
Third, the notion that schools can somehow teach a Common Core “curriculum,” and that students who have not used that curriculum (at least on the verbal side) will be at a significant disadvantage, reveals the extent to which popular understanding and coverage of the Core are muddled.
To reiterate: the redesigned SAT does not test any specific body of knowledge related to English, nor does the Core require significant concrete knowledge beyond vague formal skills (comparing and contrasting, identifying main ideas, etc.) whose mastery largely depends on students’ knowledge about the subject at hand.
In the eleventh-grade standards, for instance, U.S. historical documents are provided as examples — Madison’s Federalist No. 10 is cited as a source for “analyz[ing] how an author uses and refines the meaning of a key term or terms over the course of a text,” but the text itself is not actually required reading.
While a handful of documents are mentioned by name (the Declaration of Independence, the Preamble to the Constitution, the Bill of Rights, and Lincoln’s Second Inaugural Address), the primary directive is to analyze “seminal texts” and “seventeenth-, eighteenth-, and nineteenth-century foundational U.S. documents of historical and literary significance.” (http://www.corestandards.org/ELA-Literacy/RI/11-12/)
As for the new SAT, the majority of the Reading questions on that exam are effectively designed to test whether students understand that texts say what they say because they say it — in other words, comprehension.
The questions are phrased in a byzantine manner, to be sure, but that is primarily to give the illusion that they are testing skills more sophisticated than the ones they are actually testing (and far less sophisticated than those tested on the old SAT).
The combination of vague standards and quasi-random selection of historical passages for the exam means that the best-prepared students are those who have prior knowledge of the passages in question.
But because the College Board does not publish a comprehensive list of documents, movements, individuals, etc. with which students should be familiar (that would cross the line from “standards” to “content”), preparation for that portion of the exam largely depends on what students happen to have covered in history class — which in turn depends on individual schools, even individual teachers. And that is a matter of chance, on many levels.
Leveling the playing field? Hardly.
That’s the fundamental problem with the coy, standards-aren’t-curriculum-but-they-sort-of-are game the College Board is trying to play. Students’ ability to employ skills such as analyzing language, identifying main ideas, or evaluating sources is always to some extent dependent on their knowledge. The unspoken assumption of the Core seems to be that students will of course be learning formal skills in the context of a well-structured, coherent curriculum, but that’s often not at all how things work in practice.
If it is never made clear what specific content students must master, and teachers are trained to focus primarily on formal skills, students probably won’t acquire the knowledge they need to apply the formal skills in any meaningful way.
Failure to understand that means any coherent conversation about the problems with the Core is a non-starter.
As for the relationship between student performance on the Verbal portion of the SAT and access to a Common Core-aligned curriculum: anyone who thinks that a student whose English classes have been devoted to endlessly reiterating the importance of using “evidence” — that is, citing from a text — to “prove” that a book says what it says will necessarily be better prepared for the SAT than a student who has learned something of substance does not understand the issues at play here at all.