Update on the August SAT scandal

The last couple of weeks have seen some new developments in the most recent SAT scandal. Initial reports stated that some questions from the August 2018 test administered in the U.S. had been leaked in Asia before the exam. Mercedes Schneider did a little bit of digging, however, and discovered that wasn’t exactly the case. In reality, the problem goes a lot deeper—and in this case, the problem doesn’t lie with Asian testing centers or students.

What went wrong with June SAT scores?

When scores for the June SAT were released last month, many students found themselves in for a rude surprise. Although their raw scores were higher than on their previous exam(s), their scaled scores were lower, in some cases very significantly so.

An article in The Washington Post recounted the story of Campbell Taylor, who in March scored a 1470—20 points shy of the score he needed to qualify for a scholarship at his top-choice school:

[T]he 17-year-old resolved to take the test again in June and spent the intervening months buried in SAT preparation books and working with tutors. Taylor awoke at 7:30 a.m. Wednesday and checked his latest score online. The results were disappointing: He received a 1400.

He missed one more question overall in June than in March but his score, he said, dropped precipitously. And in the math portion of the exam, he actually missed fewer questions but scored lower: Taylor said he got a 770 in March after missing five math questions but received a 720 in June after missing just three math questions.

New, inflated SAT scores cause confusion, happiness (or: what the media doesn’t say about the new SAT)

Nick Anderson at The Washington Post reports that the scoring of the redesigned SAT is causing some confusion:

The perfect score of yore — 1600 — is back and just as impressive as ever. But many students could be forgiven these days for puzzling over whether their own SAT scores are good, great or merely okay.

The first national report on the revised SAT shows the confusion that results when a familiar device for sorting college-bound students is recalibrated and scores on the admission test suddenly look a bit better than they actually are.

The new administration, Common Core, and the new SAT

Reuters’ Renée Dudley has come out with yet another exposé about the continuing mess at the College Board. (Hint: Coleman’s “beautiful vision” isn’t turning out to be all that attractive.)

This time around: what will happen to the new supposedly Common Core-aligned SAT if Common Core disappears under the incoming, purportedly anti-Core presidential administration? 

As Dudley writes:

The Core’s English Language Arts standards call on students to grapple with important readings, including hallowed U.S. documents such as the Declaration of Independence and works of American literature. Coleman’s redesigned SAT embraced the same concept. The Core’s reading standards “focus on students’ ability to read carefully and grasp information … based on evidence in the text” – a pillar of the new SAT. And the Core’s math standards call for “greater focus on fewer topics” – another principle echoed in Coleman’s new SAT.

Former College Board vice president [Hal] Higginbotham was among the first to raise concerns about hitching the SAT’s future to the Common Core. 

In his February 2013 response to Coleman’s “beautiful vision,” Higginbotham noted that some states wouldn’t begin implementing the learning standards until the 2014-2015 school year, the same time period in which Coleman wanted to launch the redesigned SAT. It would take years for teachers and students to get fully up to speed on the new curriculum, he and others argued.

“That circumstance leads me to wonder whether all students will have arrived at the starting line at the same time and whether the playing field for them will be level,” Higginbotham wrote in his memo to Coleman. Some students might be “more comfortable and competent than others in what will be presented” on a test aligned with the Common Core, he wrote.

As a consequence, a Common Core-based SAT “will inadvertently favor students from those geographies that have made the most progress” with the standards, Higginbotham wrote. Such a situation “raises fundamental questions of fairness and equity.”

and later: 

It’s unclear how Trump’s election – and his choice of a Common Core opponent for secretary of education – might affect the SAT and the College Board. Coleman hasn’t spoken publicly about the president-elect’s views.

I’ve followed Dudley’s series of articles on the Common Core with great interest, and for the most part, I think she has done a very valuable service by revealing some of the more serious problems plaguing the new exam — problems that include the recycling of recent exams, so that students received tests they had already taken; the leaking of test forms before administration; and the inclusion of items that did not meet the specifications set out by the College Board.

In this case, however, Dudley’s reporting inadvertently (I assume) encourages some fundamental misunderstandings about Common Core, what it actually involves in terms of curriculum, and how it relates to the redesigned SAT. 

A few key points here.

First, with regard to the idea that Common Core could be uniformly rescinded: the federal government’s role in CCSS is limited, at least in terms of imposing the standards. CC was adopted by individual states, and individual states will decide whether to retain or abandon the Standards (or pretend to abandon them while renaming them State Standards).

To be fair, Dudley does mention that CCSS was adopted on a state-by-state basis; her concern is that anti-Core sentiment at the top may translate into more states dropping the Standards. 

That, however, brings me to my second point. As Diane Ravitch points out, the DOE may be effectively outsourced to Jeb Bush and Co., major proponents of Common Core. Coleman even released an announcement *praising* Betsy DeVos’s appointment as Secretary of Education.

Despite nominal political divisions, all of these people are effectively on the same side, at least where charters, school “reform” (privatization), school choice, etc. are involved. There may be degrees of disagreement over, say, the value of vouchers or the accreditation of for-profit vs. non-profit charters, but they are basically ideologically aligned. 

As Steven Singer has written (link also courtesy of Ravitch), DeVos, who has claimed to be opposed to the Core:

[Is] a board member of Jeb Bush’s pro-Common Core think tank, Foundation for Excellence in Education, where she hangs out with prominent Democratic education reformers like Bill Gates and Eli Broad…

She founded, funds and serves on the board of the Great Lakes Education Project (GLEP), an organization dedicated to the implementation and maintenance of Common Core…

She’s even spent millions lobbying politicians in her home state of Michigan asking them NOT to repeal Common Core…

Next time, Dudley might want to take a piece of edu-speak to heart and “dig deep” before taking anyone in the president-elect’s circle literally. 

Third, the notion that schools can somehow teach a Common Core “curriculum,” and that students who have not used that curriculum (at least on the verbal side) will be at a significant disadvantage, reveals the extent to which popular understanding and coverage of the Core are muddled.

To reiterate: the redesigned SAT does not test any specific body of knowledge related to English, nor does the Core require significant concrete knowledge beyond vague formal skills (comparing and contrasting, identifying main ideas, etc.) whose mastery largely depends on students’ knowledge about the subject at hand.

In the eleventh-grade standards, for instance, U.S. historical documents are provided as examples — Madison’s Federalist No. 10 is cited as a source for “analyz[ing] how an author uses and refines the meaning of a key term or terms over the course of a text” — but the text itself is not actually required reading.

While a handful of documents are mentioned by name (The Declaration of Independence, the Preamble to the Constitution, the Bill of Rights, and Lincoln’s Second Inaugural Address), the primary directive is to analyze “seminal texts” and seventeenth-, eighteenth-, and nineteenth-century foundational U.S. documents of historical and literary significance. (http://www.corestandards.org/ELA-Literacy/RI/11-12/)

As for the new SAT, the majority of the Reading questions on that exam are effectively designed to test whether students understand that texts say what they say because they say it — in other words, comprehension. 

The questions are phrased in a byzantine manner, to be sure, but that is primarily to give the illusion that they are testing skills more sophisticated than the ones they are actually testing (and far less sophisticated than those tested on the old SAT). 

The combination of vague standards and quasi-random selection of historical passages for the exam means that the best-prepared students are those who have prior knowledge of the passages in question.

But because the College Board does not publish a comprehensive list of documents, movements, individuals, etc. with which students should be familiar (that would cross the line from “standards” to “content”), preparation for that portion of the exam largely depends on what students happen to have covered in history class — which in turn depends on individual schools, even individual teachers. And that is a matter of chance, on many levels.

Leveling the playing field? Hardly. 

That’s the fundamental problem with the coy, standards-aren’t-curriculum-but-they-sort-of-are game the College Board is trying to play. Students’ ability to employ skills such as analyzing language, identifying main ideas, or evaluating sources, is always to some extent dependent on their knowledge. The unspoken assumption of the Core seems to be that students will of course be learning formal skills in context of a well-structured, coherent curriculum, but that’s often not at all how things work in practice. 

If it is never made clear what specific content students must master, and teachers are trained to focus primarily on formal skills, students probably won’t acquire the knowledge they need to apply the formal skills in any meaningful way.

Failure to understand that means any coherent conversation about the problems with the Core is a non-starter. 

As for the relationship between student performance on the Verbal portion of the SAT and access to a Common-Core-aligned curriculum … Anyone who thinks that a student whose English classes have been devoted to endlessly reiterating the importance of using “evidence” — that is, citing from a text — to “prove” that a book says what it says will necessarily be better prepared for the SAT than a student who has learned something of substance, really does not understand the issues at play here at all. 

The SAT and zombies

The New York Times op-ed columnist Paul Krugman often talks about zombie ideas – ideas that are unsupported by any evidence but that continue to linger on in the mainstream, where they are kept alive by Very Serious People who should really know better, but who collectively choose to bury their heads in the sand because it suits their needs to do so.

As far as the SAT is concerned, I would like to nominate two myths in particular for zombie status:

1) Arcane vocabulary

As I’ve pointed out countless times before (hey, someone has to keep saying it), virtually all of the supposedly obscure vocabulary tested on the old SAT was in fact the type of moderately sophisticated and relatively common language found in the New York Times.

As I’ve also pointed out before, this is a misconception that could be clearly rectified by anyone willing to simply look at a test, but alas, people who hold very strong convictions are wont to reject or ignore any evidence to the contrary. Call this Exhibit A for confirmation bias. 

2) The guessing penalty

To be clear, there is no automatic correlation between guessing and answering questions correctly vs. incorrectly. A student can guess wildly and still get a question right, or answer with absolute certainty and get it wrong. In theory, it was possible to guess on every single question of the old SAT and still receive a perfect score; likewise, a student could conceivably answer every question confidently without getting a single one right.

The quarter-point deduction for wrong answers on the old SAT was designed as a counterbalance to prevent students from receiving scores that did not reflect their knowledge, and to prevent strategic guessers from exploiting the structure of the test to artificially inflate their scores (as can now be done on both the new SAT and the ACT).
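The arithmetic behind that design is worth spelling out. Here is a minimal sketch (the helper function is mine, for illustration; the five-choice/quarter-point and four-choice/no-penalty figures correspond to the old and new formats):

```python
# Expected raw-score change from one blind guess, old SAT vs. new SAT.
# Illustrative sketch only; function name and structure are mine.

def expected_guess_value(n_choices: int, wrong_penalty: float) -> float:
    """Expected raw points from a single random guess."""
    p_correct = 1 / n_choices
    # Gain 1 point if right; lose `wrong_penalty` points if wrong.
    return p_correct * 1 - (1 - p_correct) * wrong_penalty

old_sat = expected_guess_value(n_choices=5, wrong_penalty=0.25)
new_sat = expected_guess_value(n_choices=4, wrong_penalty=0.0)

print(old_sat)  # 0.0  -> blind guessing neither helped nor hurt
print(new_sat)  # 0.25 -> each blind guess is now worth a quarter of a raw point
```

In other words, the quarter-point deduction made random guessing a wash in expectation, whereas under the new rules every blind guess carries a positive expected value, so leaving anything blank is simply a mistake.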

At least one of these two ideas, though, seems to make an appearance in virtually every article discussing the SAT, regardless of how valid the article’s other points are.

For example, in a recent discussion of this year’s slight drop in SAT scores (old test), Nick Anderson of The Washington Post states that “The College Board jettisoned much of the old test’s arcane vocabulary questions, dropped the penalty for guessing and made the essay optional” – a sentence that remarkably contains not just one but two SAT words! 

And in her otherwise excellent Reuters article on the College Board’s failure to ensure that SAT math questions conformed to the test specifications, Renée Dudley makes several references to the “obscure” vocabulary on the old test. Just for grins, I went through her article looking closely at the choice of vocabulary and found nearly a dozen “SAT words” (including some real faves like prescient and succinct).

She also alludes to the fact that “The new test contains no penalty for guessing wrong, and the College Board encourages students to answer every question.” 

As I read Anderson and Dudley’s articles, it occurred to me that the inclusion of these zombie ideas has actually become a sort of rhetorical tic, one that anyone writing about the changes to the SAT is effectively obligated to mention. 

Obviously, these references involve two of the biggest changes to the test and can hardly be avoided, but I think that something more than just that is going on here. 

Consider, for example, what isn’t said: although it is sometimes stated that rSAT math problems are intended to have more of a “real world” basis, the fact that geometry has been almost entirely removed from the exam is almost never explicitly mentioned.

In addition, the kind of disparaging language used to describe SAT vocabulary is notably absent when it comes to math. I have yet to encounter any piece of writing in which geometry was dismissed as an “obscure” subject that lacked any relevance to (c’mon, say it with me) “college and career readiness.” Nor does one regularly read articles sympathetic to students who whine that they’ll never actually use the Pythagorean Theorem for anything outside geometry class.

Why? Because depicting a STEM subject – any STEM subject – that way would be taboo, given the current climate. Even if the College Board has decided that geometry isn’t one of the “skills that matter most,” the virtual elimination of that subject from the test is a matter that must be pussyfooted around. 

On a related note, the arcane vs. relevant discussion also plays to fears that students will be insufficiently prepared to compete in the 21st century economy. The goal in emphasizing “relevant” vocabulary is to provide reassurance that the students won’t fall behind; that the College Board can now be trusted to ensure they are prepared for the real world. 

At the same time, this is essentially a rhetorical sleight of hand designed to disparage the humanities without appearing too obviously to do so – a euphemism for people who do not know what euphemisms are because, of course, such words have been deemed irrelevant, and why bother to learn things that aren’t relevant?

The unspoken implication is that acquiring a genuinely rich, adult-level vocabulary is not really an important part of education; that it is possible to be prepared for college-level reading equipped with only middle school-level words; and that it is possible to develop “high level critical thinking skills” without having a commensurate level of vocabulary at one’s disposal. In short, that it is possible to be educated without being educated.

That is of course not possible, but it provides a comforting fantasy.

Call this the respectability politics of anti-intellectualism – a way of elevating ignorance to the level of knowledge by painting knowledge not as something overtly bad but as something merely irrelevant. That is a much subtler and more innocuous-sounding construction, and thus a far more insidious one.

As for the “guessing penalty” myth… This phrase is in part designed to reinforce a narrative of victimization. Its goal is to elicit pity for the poor, under-confident students whose scores did not reflect what they knew because they were just too intimidated to bring themselves to pick (C), even if they were almost sure it was the answer. 

Framing things in terms of guesses rather than wrong answers makes it much easier to evoke sympathy for these students. After all, why should anyone – especially a member of an already oppressed group – be punished for guessing?

The conflation of guessing and punishment also helps perpetuate a central American myth about education, namely that more confidence = higher achievement. By that logic, it is assumed that students (sometimes implicitly but often explicitly understood as female, underrepresented minority, and first-generation, low-income) would perform better if only they knew they wouldn’t lose additional points for taking a risk. If these students felt more confident, so the argument goes, their scores would improve as well.

In reality, however, there is often an inverse relationship between confidence and knowledge: if anything, the most confident students tend to be the ones who least understand what they’re up against. (True story: the only student who ever told me he was going to answer every question right was scoring in the high 300s.) Helping these students feel more confident does nothing to increase their knowledge and can cause them to overestimate their abilities. In fact, when students begin to acquire more knowledge and gain a more realistic understanding of where they actually stand, it is common for their confidence to decrease.

The really interesting part about the phrase “guessing penalty,” however, is that it can also be understood in another way – one that directly contradicts the way described above.

An alternate, perhaps more charitable, interpretation of this phrase is that students were formerly penalized for guessing too much. Not realizing that they would lose an extra quarter-point for wrong answers, they would try to answer every question, including ones they had no idea how to do, and lose many more points than necessary.

Understood this way, the term “guessing penalty” refers to the fact that the scoring system made it almost impossible for students to wild-guess their way to a high score. I suspect that this was the original meaning of the term. (As a side note, I can’t help but wonder: when people argued for the elimination of the quarter-point penalty, did they realize that they were actually arguing in favor of making the SAT easier to game?)

According to this view, students who cannot afford tutors or classes to teach them “tricks” about which questions to skip cannot possibly compete with their more privileged peers. Here again, the obvious goal is to frame the issue in terms of equity.

At this point, one might observe a contradiction: students on one hand were described as being so cowed by the thought of losing ¼ of a point that they could not even bring themselves to guess, and yet they were on occasion also presented as being so oblivious to that penalty that they tried to answer every question. 

But back to the subject at hand.

Another reason I suspect the socio-economic argument against the “guessing penalty” has so much traction is that it would seem to be backed up by commonsense reality.

While plenty of students managed to figure out the benefits of skipping sans coaching, it is also true that a certain type of student could benefit significantly from some help in that department. Given two students with the same level of foundational knowledge, starting scores, and ability to integrate new information, the one with the tutor would typically be at an advantage. That’s pretty hard to dispute.

Whether this particular type of help is inherently more problematic than other types of help – help that more privileged students will continue to receive, quarter-point penalty or no quarter-point penalty – is, however, subject to debate.

Based on my experience, I would argue that the quarter-point deduction actually made the old SAT a harder test to tutor overall than it would otherwise have been, and far less vulnerable to the kind of simple tricks and strategies that mid-range students can, to some extent, use on both the new SAT and the ACT.

The reality is that teaching students to skip questions on the old SAT was not always such a straightforward process; in some cases, it was a downright nightmare. It was only really effective when students had a good sense of which questions they were likely to answer incorrectly – that is, when the only questions they consistently got wrong were the ones they had difficulty answering. Unfortunately, this was usually only the case for about the top 10-15% of students.

In contrast, trying to help a student who was consistently both confident and wrong figure out which and how many questions to skip was often an exercise in futility. Because such students often didn’t know what they didn’t know, and had a corresponding tendency to overestimate their knowledge, there was no clear correlation between how they perceived themselves to be doing and how they were actually doing. This was most problematic on the reading section, where easy and hard questions were intermingled; there was no way to tell them, for example, to focus on the first twenty questions. 

When students’ knowledge was really spotty, it was difficult to determine whether they should even be encouraged to skip more than a few questions on the entire test because there was absolutely no guarantee they’d get enough of the questions they did answer right to save their score from being a complete disaster. And it was also necessary to be careful when discussing which question types to avoid because if students came across one such question phrased in an unfamiliar way, they might not recognize it as something to avoid. 

As a tutor, I came to loathe those situations because they forced me to treat the test as a cheap guessing game, particularly if the students were short term. Eventually, I stopped tutoring people in that situation altogether because things were so hit-or-miss. Often, their scores did not improve at all, and sometimes they even declined.

In addition, some students flat-out refused to even try skipping, regardless of how much I begged/pleaded with/cajoled them. I had students who repeatedly promised me they would try skipping some questions on their next practice test and then answered every question anyway, every time. I never even managed to figure out how many questions, if any, they should skip, and so I couldn’t advise them.

At the opposite extreme, I had students who knew – knew – that they could skip at most one or two questions suddenly freak out on the real test and skip seven.

The point here is that no matter how much tutoring they had received, and no matter how many thousands of dollars their parents had paid, the kids were the ones who ultimately had to self-assess in the moment and make the decisions about what they likely could and could not answer. Sometimes they stuck to the plan, and sometimes they panicked or got distracted by the kid sitting in front of them tapping his pencil and spontaneously threw out everything we’d discussed. No one could do it for them. And if their assessments were inaccurate and they messed up, their score inevitably took a real hit. The limits of tutoring were exposed in a very blatant way.

One last point:

On top of everything I’ve discussed so far, there is also the issue of which groups of students get compared in discussions about equity. When it comes to test-prep, there is a foundational level below which strategy-based tutoring is largely ineffective. If we’re talking about the most profoundly disadvantaged students, then it’s unlikely the kind of classes or tutoring that are generally blamed for the score gap would bring these students up to anywhere remotely close to the range of their middle-class peers.

Yes, certain individual students might draw considerable benefit, but on the whole, the results would probably be quite small. The amount of intervention needed to truly close the gap would be staggering, and it would have to start long before eleventh grade. But that’s a deep systemic issue that goes far beyond the SAT, and thus it’s easier to simply make superficial changes to the test.

I suspect – although I do not have any hard evidence to back this up – that the effects of tutoring are felt most strongly somewhere in the middle: between, say, the lower-middle-class student and the upper-middle-class student who attend similarly good schools, take similar classes, and have similar skills and motivation levels – students who stand to benefit more or less equally from tutoring. If the former cannot even afford to take a class while the latter meets with her $150/hr. private tutor twice a week for six months, there’s a pretty good chance the difference will show up in their scores.

This is of course still a problem, but it’s a somewhat different problem than the one that usually gets discussed.

Moreover, the elimination of the wrong-answer penalty will give privileged mid-range students an even larger advantage. Yes, students who do not have access to coaching can now guess randomly without worrying about losing additional points, but students who do have access to coaching can be taught to guess strategically, filling in entire sections with the same letter to guarantee a certain number of points while spending time on the questions they’re most likely to answer correctly.
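The same-letter strategy can be sketched with a quick simulation (the function is mine, and the assumption that answer keys keep the four letters roughly balanced is an illustrative simplification, not a claim about any real answer key):

```python
import random

# Sketch: with no wrong-answer deduction, answering every skipped
# question with the same letter reliably earns about a quarter of
# those raw points. Function name and figures are illustrative only.

random.seed(0)
LETTERS = "ABCD"

def same_letter_points(n_skipped: int, trials: int = 10_000) -> float:
    """Average raw points from answering `n_skipped` questions with 'B'."""
    total = 0
    for _ in range(trials):
        # Simulate a key in which each question's letter is random.
        key = [random.choice(LETTERS) for _ in range(n_skipped)]
        total += key.count("B")
    return total / trials

print(same_letter_points(20))  # close to 5.0, i.e. a quarter of 20
```

A student coached to bank those guaranteed points and spend the saved time on answerable questions gets a structural edge that an uncoached student guessing haphazardly does not.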

This is particularly true on the reading section. Because there are fewer question types, and the passages are not divided up over multiple sections, students on the lower end of average who have modest goals can be more easily taught to identify what to spend time on and what to skip than was the case before.

The result is that the achievement gap is unlikely to disappear anytime soon, regardless of the College Board’s machinations.

The College Board informant returns (and the College Board goes after him)

This past June, Manuel Alfaro, a former Executive Director of Test Design and Development at the College Board, wrote a stunning series of tell-all posts on LinkedIn in which he detailed the numerous problems plaguing the redesigned SAT as well as the College Board’s attempts to alternately ignore and cover up those problems.

For several weeks, Alfaro posted nearly every day, each time revealing more disturbing details about the College Board’s bumbling ineptitude and equally clumsy attempts to hide it. 

Then, after 16 posts, he disappeared. 

I wrote about Alfaro’s major revelations here and here, so I’m not going to repeat them in this post; if you’d like to read the full series for yourself, you can do so via Alfaro’s LinkedIn page.

Given the accusations, it wasn’t hard to speculate about why Alfaro had gone silent so abruptly: presumably, the College Board’s team of legal vultures had either paid him off or threatened to make his life miserable if he didn’t keep his mouth shut. 

More than one commenter who appeared to have some personal familiarity with Alfaro pointed out that he isn’t the type of person to back down easily. 

As it turns out, both sides were right.

Alfaro has indeed returned, with two new posts (see here and here) revealing yet more lurid details about the College Board’s exploits. I strongly encourage you to read them. 

Unsurprisingly, the College Board has also come after him: Alfaro’s home was apparently raided by the FBI as a result of accusations that he was the person who released hundreds of test items to Reuters. But he also deliberately refrained from posting until after the April (school-day) and May SATs were released.

Why?

Because, he asserts, the administered test did not match the specifications laid out for the redesigned SAT, leaving a significant percentage of test-takers unable to finish the exam. Again, he claims, top College Board officials were aware of the problem but took no steps to rectify it prior to administration.

According to Alfaro, the extraordinary delay in releasing the tests was most likely due to the College Board’s need to rewrite the affected items, post-administration, to make them conform to the specifications. (Perhaps this is what one College Board official meant when he stated that the delay was the result of a problem with the “metadata.”)

He therefore waited to begin posting until after the exams had been released in order to see whether — or rather, how and to what extent — the College Board had doctored them.  

So the issue is not only that the College Board has released an insufficient number of full-length exams; it is that even the exams that have been released may not be representative of the real test.  

Alfaro also states that in order to beat out the ACT for the Colorado state testing contract, the College Board spuriously claimed that the new exam tested scientific reasoning by counting every question that referred to a scientific topic — regardless of whether the question actually tested science in any way. 

I’m not sure what’s more disturbing: that the College Board actually argued that rSAT tested science even though it clearly does no such thing, or that Colorado school officials actually bought the College Board’s claims (as did school officials in Illinois, Michigan, and Connecticut). After all, the only thing they needed to do was spend five minutes looking at the test.

I think that the major takeaway from all this is that the College Board is operating on the very cynical but all too often valid assumption that if one proclaims that something is true loudly and often enough, it ceases to be relevant whether that thing is actually true.

Thus, it is not necessary for the new SAT to actually require students to use evidence (the way the old SAT essay did, for example) — it is merely sufficient to call things “evidence-based.” Likewise, the College Board need only indignantly proclaim its commitment to “transparency,” regardless of whether there is evidence to suggest that such a thing exists in any meaningful way.

Most people — even those in charge of education for hundreds of thousands of students — will not bother to question important-sounding executives in suits who come bearing talking points about equity and slick PowerPoint presentations.

Provided that things are spun correctly and the necessary talking points are adhered to strictly enough, almost any absurdity can be made to sound reasonable. (Gee, who’d have thought that bringing in a McKinsey consultant would result in THAT?! Or maybe that was precisely the point.)

Such is the beauty of a post-fact world.

This isn’t exactly news at this point, but it bears repeating. In politics, enough people are clued into reality to spark a good deal of pushback after a certain threshold of ridiculousness is reached. (I was worried for a while that this wouldn’t be the case, but I was proven wrong). In education, however, people tend to be less informed about the details, and thus the issues are considerably easier to obscure.   

What makes the game the College Board is playing particularly dangerous is that it distorts key terms in the lexicon of education itself (critical thinking, evidence, higher-order thinking) so that they come to mean something far different, or even the opposite, of what they are traditionally understood to mean. Words become unmoored from their definitions.

And if any of this is questioned, the response is always along the lines of “it’s complicated.” Obfuscation is thus recast as nuance.

The result is an exercise in doublespeak in which the College Board says one thing and the public understands another. (“Isn’t it wonderful that students have to use evidence – that will really help them develop those higher-order thinking skills!”)

An organization that has a fundamental responsibility to help students learn to use language correctly is instead teaching a far different lesson, namely the importance of jargon and spin. 

As I’ve said before, from a sociological perspective it is utterly fascinating to watch this phenomenon play out in real time, but it is also terrifying to witness the ease with which people can be induced to ignore what is under their noses and to excuse the propagation of blatant falsehoods. (“I mean, everybody knows that guessing penalty doesn’t really mean there’s a penalty for guessing. It’s just called that.”) 

So is this ultimately the goal: to teach students to repeat a series of platitudes and buzz words, without any regard for their underlying meanings? I really am beginning to think this is the case. 

Critical thinking, for example, is often touted as the most important thing for students to develop, but people who exhibit a nuanced understanding of topics are typically derided as “wonks.” Witness the way the media bemoans Donald Trump’s lack of specifics but then turns around and sneers at Hillary Clinton for having the nerve to discuss her policies in detail, of all things. 

From what I’ve observed, the present goal of the education system seems to be to get students to about a seventh- or eighth-grade level very quickly and then more or less leave them there; real advanced work is for nerds. (And real advanced STEM work is for robotic Asian nerds — yes, there is a racially tinged component here.)  

I maintain that most people who extol the virtues of critical thinking would not much like the real thing if they saw it. It just involves too much work and too many facts. And worse, it’s not always fun.

To be sure, this type of anti-intellectualism has been a consistent feature of American life since the nineteenth century; granted, I’m not an expert, but I’m not sure it has ever been embraced to quite this extent by educators themselves.

The question is, have things progressed so far that the people who run the education system are incapable of noticing these things? And when people do point them out, will they have any effect?