I think it’s fair to say that one of progressive education’s central characteristics is its obsession with so-called “active learning” and its abhorrence of student passivity.
The Center for Research on Teaching and Learning at the University of Michigan defines active learning as “a process whereby students engage in activities, such as reading, writing, discussion, or problem solving that promote analysis, synthesis, and evaluation of class content,” which seems like a perfectly reasonable pedagogical prescription.
Obviously, one of the primary goals of teaching is to encourage students to engage with the material; it would be difficult for anyone to seriously argue that students should approach material passively.
The problem, however, is that the definition of active learning has become increasingly literal. Perhaps unsurprisingly given the American obsession with sports, “active” has now come to be interpreted as “physically active.” The assumption is that if students are not moving around, or participating in a debate, or sharing their ideas in a small group, then they cannot possibly be learning.
The Stanford Teaching Commons website provides a typical example:
Whether you’re facing a lecture hall filled with 300 students or a seminar table with 15 students, one of your primary goals for the class should be to actively engage students with the material. Students learn more when they participate in the process of learning, whether it’s through discussion, practice, review, or application (Grunert, 1997). This is in stark contrast to traditional styles of teaching, where students are expected to sit for hours, listening and, theoretically, absorbing information presented by the instructor.
For example, encouraging short partner discussions during lectures (i.e., think-pair-share), adding problem- or case-based research projects to the curriculum, and incorporating time for small-group critical analysis exercises during seminars are all great ways to actively engage students in learning.
Let’s consider these two statements. First, with regard to “traditional styles of teaching,” the reality is that most undergraduate lectures last no more than an hour and are typically supplemented by separate, discussion-based recitation sections of 20 or so students. Furthermore, professors are usually perfectly willing to entertain questions, either during or after their lectures, and to stop and clarify points that a class is clearly having difficulty grasping.
The stereotype of the boring old professor droning on certainly does exist, but I would wager that it’s a far less common phenomenon than it’s usually made out to be – especially at places like Stanford.
This description is thus in many ways a caricature, a straw man argument designed to induce distaste for the traditional. When you consider that Stanford is arguably more of an incubator for future Silicon Valley techies than a university, that is hardly surprising; in this case, however, you’re likely to find identical rhetoric espoused at pretty much every other elementary school, middle school, high school, and university in the United States.
Also, to make what should be an obvious point, it is entirely possible for students to be passive while working in groups – they simply sit back and let their more motivated classmates do the work, regardless of whether the teacher assigns roles.
Partner-based work is no guarantee either. A truly unmotivated student who is assigned to work in pairs may simply spend time distracting his or her partner. (Granted, such students are unlikely to attend Stanford, but still.)
To be very clear about this, I am not arguing for a return to a time when college consisted exclusively of lectures and memorization, nor am I suggesting that professors should not make use of a variety of pedagogical techniques as necessary and appropriate.
Rather, my beef is with current assumptions about just what constitutes passive vs. active learning, and about how those assumptions can cause effective forms of pedagogy to be both misunderstood and dismissed.
Consider, for example, the traditional lecture-note taking model – the version that involves writing notes by hand rather than typing them. Because I attended high school in the pre-ubiquitous laptop days of the 1990s, I have a good deal of experience with that phenomenon.
Now, taking notes by hand as a teacher lectures is typically held up as the epitome of student passivity, but in my experience, it actually demands a type of active engagement that is greatly diminished when students write on a computer.
Because there is no way to write fast enough to transcribe a lecture verbatim, note-taking by hand is an act that requires constant negotiation. It is necessary to decide which points are important enough to be written down and how they should be organized (headers, titles, roman numerals, etc.), and to summarize and condense them clearly while still retaining the essential ideas. These are sophisticated skills, which need to be taught as well, and they require students to consistently and actively apply their individual discretion and judgment.
When I was required to write huge amounts of notes, for example, I developed my own shorthand. I abbreviated constantly, and drew arrows and symbols. Although I thought nothing of these types of shortcuts at the time, having relied on them largely out of necessity, I suspect they are crucial to developing the ability to move easily between concrete and abstract.
A decade later, when I began tutoring SAT reading, I was baffled by the extent to which my students struggled with these skills, as well as by their persistent refusal to write things down. As I compared my own decidedly low-tech high school experience with their technology-flooded one, I slowly began to piece together the reason behind their difficulties. Now, I am increasingly disturbed by the emphasis on rapid group- and technology-based tasks that merely appear sophisticated, at the expense of ones that actually build the type of foundation that ultimately allows for more sophisticated work.
Beyond that, it is shortsighted to assume that the note-taking process automatically precludes engagement with the actual content of a lecture. To argue otherwise is effectively to suggest that it is impossible to listen and think simultaneously! People are not automatons – assuming they have some level of interest in the subject and are competent note-takers, most of them will spontaneously make connections between what they are hearing and things they have learned before; indicate questions and points of confusion; and mark ideas that are particularly interesting or important. This is in fact a type of dialogue; it just happens to occur in writing rather than speech on one end.
Another feature of this type of learning that is often overlooked is the time scale on which it occurs. Students have weeks or even months to review, absorb, ponder, and formulate responses, in a self-directed way. This stands in sharp contrast to the immediate – and often superficial – responses that typical group work tends to encourage.
It also seems to me that there is a performative aspect to the whole idea of “active learning,” one that I find vaguely disturbing. Students are expected to demonstrate – to make a show of – the fact that they are learning, in a very obvious visual way. The overt expression of excitement and happiness is taken as evidence that true learning is occurring. The underlying assumption seems to be that learning only exists if it can be directly and easily observed, and if it corresponds to the correct emotions. I suspect that this is related to the current obsession with measuring and quantifying, and to the value placed on instant feedback; processes that do not provide immediate results are inherently suspect. I also suspect it reflects the relentless American focus on happiness. People, even children, who do not convey outward positivity are suspect.
A student who is merely sitting and thinking is assumed not to be doing much of anything at all. In contrast, one who weighs in vociferously on a subject about which he or she is largely ignorant is more likely than not to draw praise.
Learning, of course, does not always take place in a showy way. Rather, it can be a bumpy, unpredictable, idiosyncratic process. It occurs in fits and starts, sometimes in the company of others and other times in solitude. A student may struggle with a concept for months, then suddenly find that it mysteriously “clicks” later for no apparent reason. A system built around instant feedback completely ignores that fact.
The result of all this emphasis on constantly “proving” that one is learning is a system that prizes superficiality over substance, quantity over quality, and confidence over humility. (Indeed, studies have found that although American students are middling academically compared to their peers internationally, they are consistently tops in confidence.)
There is also a striking obliviousness to the motivations of more reticent students. I recently came across an article on the NPR website that captures this phenomenon in a manner so pitch-perfect it almost lapses into parody. It cites one expert who suggests that to accommodate quieter members of a class, teachers should allow students to “walk around the room, writing ideas on tacked-up pieces of paper. They can respond to each other’s ideas — like a sort of silent dialogue.”
This is active learning reduced to its most absurd extreme. The notion that some students might simply be more interested in listening to a knowledgeable adult explain things, or in puzzling things out on their own, is not even entertained. It is as if any physical activity, no matter how ridiculous, must be posited as an alternative preferable to having teachers talk and students listen.
And then there’s this. Discussing why some students are quiet, Erica Corbin, Director of Community Life and Diversity at Manhattan’s über-elite Chapin School has this to say:
“Personality might be some of it,” she explains, “and we also might have kids who are quiet because they have been shut down. We might have kids that are quiet because they anticipate being shut down whether they have been or not.”
Shutting down for all kinds of reasons, she adds. Stereotypes. Biases. Trouble at home: “When we’re thinking about students who are quiet, how does that also connect with their race … their gender … their sexuality?”
Newsflash: students who do not feel compelled to constantly voice their opinions in class might remain quiet for intellectual rather than emotional reasons. They might, for example, want to sit back and gather the facts before passing judgment. But that possibility is not even acknowledged.
Also overlooked in this oh-so-trendy discussion of victimhood is the possibility that students who are genuinely traumatized, or who come from chaotic home environments, are likely to benefit from having a stable, competent adult present information in a clear and structured manner. The last thing a student in that situation needs is a classroom resembling a three-ring circus. As the product of a not-quite-stable home, I can state that it was a profound relief to be able to just sit in a chair and write, knowing that an adult was in charge and that it was okay for someone else to know more than I did.
Although it may surprise readers of this blog who are accustomed to hearing my unrestrained opinions, I tend to refrain from commenting on a topic until I’ve gathered enough information to weigh in. Before then, I’m more likely to spend some time hanging out in the background, reading and observing, familiarizing myself with the major arguments and players, and parsing the rhetoric of the standard talking points. Only after doing these things do I begin to figure out just where I stand.
I’ve been this way for much of my life. I was not terribly talkative in class during high school, not because I was shy (something I’ve never been) but because I recognized that I didn’t really know enough to say anything particularly insightful. I realize that many people would nowadays interpret this as a sign of low self-esteem, but it was a deliberate decision on my part: I was fully aware that there was a lot I didn’t know, and I wasn’t going to run my mouth off just for the sake of a participation grade. And the truth is that when I was 16, my thoughts were not notably interesting or original.
All the while, though, I was listening intently and absorbing and contemplating. The things I learned have remained in my head for years; I still regularly think about some of the questions my teachers posed (is it better to do a good thing for a bad reason, or a bad thing for a good reason? why are some people compelled to consciously act against their own self-interest?). And when I did finally begin to voice my opinions publicly – after college (where I did start to speak in class); after living in two foreign countries and attending school in one; after working with dozens of students ranging from Florida homeschoolers to Park Avenue penthouse dwellers – I really and truly had something to say.
While looking for models for the little sendup of progressive education that I posted recently, I came across a New York Times op-ed piece by the psychologist Alison Gopnik entitled “What Babies Know About Physics and Foreign Languages.” As the title and tag line (“Our kids don’t need to be taught in order to learn”) suggest, the piece is a pitch-perfect paean to the progressive ethos, touting the benefits of allowing preschoolers to learn “naturally,” through imitation.
While preschoolers can of course acquire many important skills this way, my immediate response to the article was to wonder how quickly Gopnik’s assertions about the benefits of natural learning for four-year-olds would be misappropriated as an endorsement for treating higher levels of education this way.
As it turned out, I got my answer pretty quickly.
It came in the form of an Inside Higher Ed article entitled “Playing, Learning, and the Teaching Problem,” by Barbara Fister, a college librarian in Minnesota, and it pretty much epitomizes the phenomenon I was attempting to satirize.
Given that the article represents an outstanding specimen of its genre, I thought it might be interesting, not to mention instructive, to take a closer look at how the piece functions – that is, to consider the stock features and rhetorical moves it employs.
What makes Fister’s piece such a prime exemplar is the way in which it takes a real issue of concern, namely the difficulty that college freshmen have using and citing sources, and proceeds to draw a series of nonsensical and potentially destructive conclusions.
The article is also of particular interest to me because it gets at the heart of one of my major areas of interest: the high school-college reading and writing gap.
Note: if you’re studying for the SAT essay or planning to take AP Comp, you might want to pay attention to this. In particular, notice my use of concessions such as to be fair…, it is true that…, and this is a valid point. Although I take a clear stance against Fister, I also consider what she gets right. This is a deliberate strategy designed to produce a nuanced analysis, one that shows I’ve considered the issue from multiple angles. I’m not “just saying stuff.”
So, that said, let’s start by considering Fister’s opening:
Allison Gopnik, a psychology professor at UC Berkeley, had an interesting piece in last Sunday’s New York Times that has been sticking to my brain like a burr. She argues that our current obsession with preparing children for success gets in the way of their learning. She describes several recent experiments with small children, who are naturally curious and determined to figure things out – and remarkably good at it. They learn about the world around them by observation, imitation, and play, not by being taught. In fact, she argues, if they are taught, they will imitate accurately, but being told how to do something takes away the opportunity to figure things out. When small children observe and imitate, they are testing the physical world around them and coming up with their own understanding of how things work. Explicit instruction short-circuits that process.
The first thing to notice here is that Fister uses Gopnik in order to establish credibility. Gopnik is a well-known cognitive psychologist who has published extensively about the way young children learn. As Fister immediately goes out of her way to point out, Gopnik is a professor at UC Berkeley (an elite institution) writing in The New York Times (an elite publication). The first sentence thus functions as an appeal to authority, lending weight to the argument that follows: Gopnik is an expert in a scientific field, ergo her ideas – and, by extension, the author’s – have the backing of scientific fact.
The next thing to notice is that the following sentences outline one of the central tropes of progressive education writing, namely the “learning should be natural and teaching destroys it” argument (“She argues that our current obsession with preparing children for success gets in the way of their learning…”). This is an idea that can be traced through a long line of educational theorists, starting with Rousseau in the eighteenth century and proceeding up through Emerson and Dewey, and ultimately ending up as accepted doctrine in pretty much every American school of education.
Now, it is possible, and even probable, that Fister believes she is being daring and innovative by making this connection – that she is pushing back against a rigid and hidebound system that sells students short by constraining their natural love of learning. But from the perspective of someone who has repeatedly encountered this exact argument, expressed in almost exactly the same words, it comes off as clichéd and almost painfully naive.
Also of note here is the fact that Fister is writing in Inside Higher Ed – this is a publication geared toward members of the academic community. As most reasonable people would agree, the pedagogical needs of four-year-olds are quite different from those of 18-year-olds. While preschoolers are acquiring the basic physical, social, and emotional management skills necessary to function independently in everyday life (skills that are indeed more important at that point than filling out worksheets), college freshmen are accruing the more abstract, specialized, intellectual skills necessary to function in the public and professional adult sphere. Most of these skills are not “natural” by any means.
These two situations are not really comparable, yet as Fister moves from Gopnik’s research to her own story, she implicitly conflates them:
Every fall, as new students arrive on campus, I struggle with what to do to help them feel comfortable exploring ideas in the library and beyond. In that first semester, they tend to be stressed and pressed for time. They don’t have models in front of them for how scholars do research, but they’re often asked to find scholarly sources to use in an argument as they are introduced to academic writing. They are intensely curious about what the teacher wants, if not about the topic they’re researching, and often focus on getting that boring task done as efficiently as possible. It’s not just that there’s no time for creativity, or that they think creativity is a violation of the rule that you have to quote other people in this kind of writing. It’s simply too big of a risk.
Although Fister describes the typical freshman’s plight in great detail, she offers no compelling pedagogical justification for equating the needs of 18-year-olds with those of preschoolers, beyond the superficial fact that both are intensely curious and not entirely sure of what the adult world expects of them.
More tellingly, she casts the problem in terms of “creativity,” the primary lens through which education is understood within the progressive universe. It’s a cue suggesting that her argument is about to veer off course. And here we have the next step down the road to perdition, so to speak. Having identified some reasonable problems (students lack models for how to do research, they’re stressed out and lack time), she then edges onto less stable territory (being creative is too risky). The slide from one thought to the other is so subtle that the leap of logic is almost unnoticeable.
When you think about it carefully, though, it does not really make sense to frame the problem as one of creativity. If students lack models for writing that integrates multiple sources, then the solution should involve professors or TAs providing them with models and walking them through how those models work; and if students are stressed out and lack time, then they need help managing their time. It’s possible that some of these students must juggle work as well as family obligations, but it’s also possible that some of them are spending hours drinking beer and playing video games (these are college freshmen, after all). In any case, creativity is not really the issue.
And what about the claim that students “don’t have models [for research] in front of them”? Perhaps professors are not taking the time to explain their research in class, but the idea that university students lack access to models of scholarship is, quite frankly, absurd. Set foot on pretty much any campus, and you’ll find bulletin boards overflowing with announcements about lectures, conferences, and colloquia. Models of genuine scholarship abound. If undergraduates are not witnessing these core activities of academia for themselves, it is most certainly not for lack of access or opportunity.
What makes this type of discourse so slippery, however, is that Fister’s description then takes a solid turn back into reality:
When [freshmen] come to the library for an hour or two to do some guided poking around, they want to know the rules. How many sources do I need? Will these ones do? They’re not particularly interested in how those sources got there. In fact, they’re often not interested in reading them. After all, it’s right there in the rubric: points for citing articles accurately; nothing about reading them. When they patch together quotes lifted out of context, it’s hardly surprising. Since they haven’t seen how scholarship actually happens, and have never seen this way of mapping out the evolution of ideas, they have nothing yet to imitate or puzzle out. The rules are all they have.
This description is in most ways perfectly fair and accurate. College freshmen do in fact frequently arrive on campus not knowing how citations work in college, and not understanding how to integrate all those required points of view into their own writing. High schools rarely prepare students for this type of assignment, and even at very good high schools, the level of work and the expectations simply are not the same as they are in college.
The professors who give these types of assignments to freshmen may also be unaware of just how underprepared their students are – just how innocent they are about how the “game” of academia is played. That too is unsurprising: when people spend their entire professional lives performing a skill at a high level, it is easy for them to forget how mysterious that skill can seem to novices. Things that they would consider self-evident (academic writing integrates multiple points of view, some of which support the author’s argument and some of which contradict it) are often nothing of the sort. Misunderstandings are thus inevitable.
At the same time, though, one has to give students some credit. A rubric may indeed state that students need only “cite” articles, without mentioning that students must also read those articles, but any student – even a genuinely confused one – who simply goes through an article pulling out quotes knows at some level that he or she is gaming the system. There is naiveté, and then there is deliberate misunderstanding. Does Fister seriously think 18-year-olds lack the ability to make the distinction?
Again, the problem is not a lack of “creativity.” It is unlikely that students have been required to engage, in writing, with a range of ideas, including ones they do not agree with, and in any case, it is exceedingly unlikely that they possess the rhetorical tools to do so. It is also an issue of students having learned, in high school, that they will be rewarded for doing the bare minimum spelled out on the rubric.
The second issue points in turn to several larger problems, namely that high school teachers are increasingly required by their administrators to track and report as much data as possible; detailed rubrics are necessary to monitor discrete skills. In addition, teachers are not infrequently faced with contentious students and parents who are ready to complain about poor grades. In that context, a rubric is a defense mechanism that can be used to justify taking off points in a particular area. And the pressure to hand out good grades based on nothing more than adherence to the rubric comes from all sides.
It is hardly surprising that students who are graded this way starting in elementary school naturally grow accustomed to having all aspects of their assignments spelled out for them. As these students arrive in college, schools are forced to adapt to their needs. So when Fister attributes students’ slapdash approach to using sources to professors’ use of rubrics, she is overlooking the fact that professors have likely adopted this particular tool because freshmen would literally have no idea of how to complete assignments otherwise.
It is also likely that professors have learned, through experience, that if they do not spell out the specifics of an assignment, they will receive a stack of papers almost entirely devoid of any sources, quotations, or outside points of view. In this regard, Fister mistakes a result for a cause. Making requirements clear is a way of ensuring that students, who after all are novices at this kind of work, meet a basic minimum standard. It is not in the least incompatible with creative thought.
To be fair, the continued spelling out of specific criteria might reinforce a less-than-motivated student’s passivity; however, it does not prevent a student who is genuinely interested in writing a good paper from doing a stellar job. Students’ approach to the assignment is inevitably colored by their interest level in the subject, their academic backgrounds, what goes on in class, and dozens of external factors over which professors and especially librarians have no control.
The assumption that specific directives are by themselves responsible for stamping out creativity is, however, another one of the central tropes of progressive education. The problem is that concrete, realistic alternatives are almost never proposed in their place, only hazy what-ifs.
Sure enough, Fister again proceeds from a reasonable assumption…
But if you’re hoping students will practice writing formal academic writing using sources in the way academics do so that when they finally have a chance to do actual research they’ll know how to package it, you’re doing them no favors.
…to far more dangerous territory.
Showing new students how to find sources and cite them might actually interfere with the kind of learning we want, the kind babies do when they are doing what comes naturally – figure out through imitation and play.
In other words, when a skill is being taught ineffectively, the solution is not to improve the way it is taught, but rather to stop teaching it entirely in the hopes that students will naturally acquire it on their own.
This is not just a matter of throwing the baby out with the bathwater; it’s dumping both of them in the middle of the Pacific.
Yes, it is obviously unfair to expect students who don’t really know what academic research is all about to suddenly wake up one morning and use sources the same way a 40-year-old Ph.D. would. But what on earth would teaching college students through “imitation and play” actually look like? Would groups of students gather around their professors in their offices as they sit reading academic journals, or as they run statistical models on their computers? Would they dress up in suits and stage mock conferences during class time to present their “research”? Would they play video games teaching them to navigate departmental politics?
Joking aside, I’d really like to know just what Fister means by this.
Again, the implication here is one of the guiding assumptions of progressive education: that having fun automatically translates into mastery. It’s true that students who are enjoying themselves are more likely to pay attention in class, but moving from high school writing to college writing is genuinely challenging for many students. It requires considerable sustained practice, some of which can be very tedious, and it also requires students to think in ways that can be new and uncomfortable. (I tutored one college student who had a habit of dismissing any argument she disagreed with as “propaganda.”) Students who are accustomed to always having things be made “fun” are unlikely to develop the ability to persevere when things aren’t. The focus on managing students’ emotions, a staple of progressive rhetoric, is ultimately a distraction. There is no formula that reliably produces self-motivation, or “grit.”
Besides, aren’t the articles that students are “often not interested in reading” the very models that Fister claims students lack access to? Should professors have to spell out for their students that it is actually necessary to read sources, not just cite them? Should they actually have to sit and demonstrate what reading an article looks like? And are professors truly not talking about their research in class? College professors – even adjuncts – virtually always teach classes in their areas of specialization.
Professors are also pushing back against 18 years of exposure to popular culture that depicts professors as clueless eggheads with British accents and bowties who are barely capable of tying their own shoes. Students may be excited to be in college, and they may be eager to obtain a college degree; however, the vast majority of them are not in school to become professional academics but rather to earn a credential that will make them eligible for higher paying jobs. The kind of research they are being asked to do is often tangential to their reasons for being in college, and there is only so much even the most brilliant and inspiring professor can do to counteract such an entrenched utilitarian mindset.
Furthermore, while copying experts is certainly an important aspect of learning a skill, it is absurd to imagine that novices can become experts through mimicry alone – particularly when those skills are not “natural” but rather formal and often highly technical. Adopting the behaviors of an expert does not, alas, turn a novice into one. If anything, students who are naïve in the ways of the academy need more explicit teaching about how academic research and writing work, not less. Giving students some models to look at and then just turning them loose is a horrible idea.
According to another well-known cognitive scientist, Daniel Willingham (who studies things like reading and critical thinking in older students), it is necessary to understand about 98% of the vocabulary in a given piece of reading in order to comprehend it accurately. Since most college freshmen are not familiar with the specialized terminology used in many academic disciplines, and since academic articles tend to be dense and written for other academics, it is exceedingly likely that even the most advanced college freshmen will have some gaps in their understanding – to say nothing of the average student, coming from an average high school, who may or may not have read an entire book sometime in the past four years. Even if students are motivated to read independently, they may completely misunderstand what they are reading.
Students who are struggling to literally comprehend even a single page of jargon-laden academic prose will not, therefore, automatically “catch” what they need to, and be miraculously inspired to view the library as a place where people can “discover and have fun while they are building their own understanding.” On the contrary, many of them will get bored, and frustrated and, if they don’t have anyone to ask for help, give up.
At the other extreme, students may latch onto the jargon and use it to cover a lack of substantive thought. Even if they adopt the pose of academics, their understanding remains superficial.
Fister also laments that “[t]he way we search now isn’t through connections, the way scholarly conversations work. We have been doing everything we can to flatten those conversations into a Google-like search box that takes terms in and returns a list of things to choose from, trying to make it easier and more familiar.” This is a valid complaint, but the problem is that in order to join those scholarly conversations, it is necessary to spend time learning what has been said, who the major players are, what the major trends are, where the major debates and controversies lie, and so on. Acquiring this knowledge requires a certain amount of patience as well as humility. That can involve a bit of a paradigm shift for students who have for years been praised for proclaiming their own, often shakily supported, ideas. Many of them will not take well to such a change.
Fister is also on shaky ground when pooh-poohing the tendency to view the library as a place for learning to follow “obscure rules that, if broken, carry harsh penalties.” (The denigration of detailed, formal processes as “obscure” is another classic move in the progressive playbook.) While researching a topic in which one is truly invested can be an intellectually thrilling experience, there are rules, very strict ones in fact, and the real-world consequences for breaking them can be harsh. Aside from the fact that real academic publications expect real professors to adhere strictly to particular citation styles, there is the not insignificant issue of plagiarism.
A student who does not understand the importance of citing sources, or who fails to properly attribute ideas that are not his or her own – even inadvertently – can end up on academic probation or worse. It is crucial that entering college students be made aware of the stakes. Universities take matters of intellectual property very, very seriously, and leaving freshmen to their own devices when it comes to matters of citation would be nothing short of disastrous. A librarian of all people should understand this.
Ironically, one of the people Fister cites has actually compiled a rather extraordinary list of things that beginning college students need to be taught about just what it is that academics do, and what it means to be a member of an academic community. (If you click the link, ignore all the cultural studies babble and scroll about halfway down the page.) The list is remarkable for its specificity, as well as for the fact that it takes absolutely nothing for granted. It provides exactly the framework that many college freshmen are lacking. I would argue that this, along with a copy of They Say/I Say, is what beginning college students need.
I realize that I’ve spent a lot of time belaboring some of these points, and that I’m doing so with a level of scrutiny that Fister’s piece might not seem to merit; however, given that the solutions it extols have already wreaked so much havoc on K-12 education, and are creeping into higher ed as well, I think it’s important to consider the implications with a dose of hardheaded realism.
It is all too easy for an outsider to get swept up in the warm fuzziness of it all and not realize that some of these proposals are unworkable and even harmful. And when that outsider is a billionaire donor who has no personal experience with education, save his own, and is convinced that everyone would be successful if all those fussy old professors just stopped all that gosh-darn boring teaching stuff and let students discover the joy of learning, then this type of rhetoric can have very significant effects. It’s bad enough at the high school level, but college students are typically paying thousands of dollars, and may even be incurring significant debt. No matter how innovative the architecture or how smart the technology, it is just plain unfair to leave college students to their own devices (pun intended) and expect them to figure out all the hard stuff themselves.
Just imagine if people talked about sports the same way they talk about education…
In the nineteenth century, when modern sports were invented, athletics served as an extension of the factories in which many of their players worked, reinforcing hierarchies and training athletes to be obedient and “play by the rules.” Today’s sports leagues are heirs to that model. Unsurprisingly, for many athletes, playing a sport has become a source of stress rather than one of joy.
Nothing could be more natural than the desire to run and play, but this inborn tendency is all too frequently destroyed by a system that emphasizes rote drilling of individual skills at the expense of more authentic forms of participation.
A new, more progressive model is clearly required, one that harnesses players’ innate love of games and movement, and that places players rather than sports at the center of the athletic process.
Instead of forcing players to abide by a narrow set of rules, rendering athletes passive and stifling their natural creativity, athletic programs should abandon the traditional one-size-fits-all approach and strive to develop the whole player.
Thus, coaches should act as facilitators, dividing teams into smaller groups so that players can learn from one another and avoiding heavy-handed tactics such as directly instructing players in how to stand, kick, or dribble. And rather than repeatedly drilling low-level skills such as throwing and catching, a surefire way to stifle players’ natural love of games, coaches should create opportunities for players to develop higher-order performance skills. For example, a league could stage a mock Olympics, with each team dressing up in the uniform worn by the athletes of a specific country. Groups of players could research different aspects of their adopted teams and create posters presenting what they have learned.
Just as importantly, coaches should avoid treating teams as a single entity, or talking to players in a harsh or critical manner. Rather, they should adapt their coaching to athletes’ unique playing styles and seek to inspire each team member as an individual. When players commit penalties or other violations, coaches should not impose punishments such as “time outs” but should seek to understand the motivating forces behind players’ behavior.
Today, every baseball team across the entire country is forced to abide by a single set of regulations, as is every basketball team, soccer team, lacrosse team, and so on. What a boring way to play! Wouldn’t it be wonderful if players were instead encouraged to take an active role in constructing their own games?
If every team were responsible for inventing its own rules, for example, different teams could learn from one another, and players’ ability to innovate, think critically, and solve problems creatively would be vastly improved. Football players could learn from basketball players, and field hockey players could learn from sprinters, erasing artificial divisions between sports and facilitating players’ ability to communicate with other types of athletes. And rather than competing against one another, teams could instead collaborate with one another in order to achieve common goals.
Furthermore, the ceaseless ranking of players and teams, as well as the awarding of medals and trophies, creates perverse incentives that are frequently damaging to players’ self-esteem. This model of athletics is not only often developmentally inappropriate, but it normalizes competition, substituting rewards for intrinsic motivation.
If players were no longer ranked or measured according to standardized criteria set out by bodies such as the NBA, the NFL, and the United States Tennis Association, they would be free to develop their true athletic potential. Instead of passively relying on external metrics such as passes, kicks, and goals for validation, they would be inspired to take ownership of their personal athletic development.
Becoming an athlete involves so much more than rigidly adhering to a group of rules laid out by experts who often do not understand the relevance of sports to players’ lives. When players are encouraged to explore their unique passions and acquire a deep sense of themselves as athletes, everyone benefits. It is high time for athletics to be brought into the twenty-first century.
In my previous post, I outlined some of the ways in which the progressive methodologies that pervade much of the American system inadvertently fuel a reliance on the private tutoring industry.
On its surface, the tutoring model would seem to be the holy grail of progressive education. Teachers are encouraged to “personalize” their approach to fit students’ unique learning styles, “empowering” them to “find their passions” and “take ownership of the learning process.” But this perspective is based on both a simplification and a misunderstanding of how teaching and learning actually work.
Oftentimes, tutoring is assumed to be effective simply because it epitomizes personalized learning. But although personalization is a component of what makes tutoring effective, it is far from the only element – nor, I would argue, is it the most important element.
Likewise, the importance of soft factors such as personality “fit” and the ability to inspire is somewhat overblown. Obviously, those factors do count for something. I had students I adored, whom I always looked forward to seeing, and whose families I have remained friendly with for nearly a decade now. I even had one student who genuinely fell in love with English, ended up as editor of his college newspaper, and is now a professional journalist! (To be fair, he was a star in English class before I showed up.)
But the reality is that I also taught students of whom I was not particularly fond; tutoring them was, to be frank, a job. It is unrealistic to expect that any tutor, like any other human being, will get along with every other person with whom they work. The point, though, is that provided those students did their work and showed up diligently, they still improved very significantly.
Conversely, some of the students whom I got along with wonderfully, and who could rhapsodize wide-eyed about their love of learning, never quite seemed to make the kind of improvement they wanted. Almost invariably, these students attended the most progressive schools. Somewhere along the way, they had clearly absorbed the belief that being excited about learning was synonymous with actually learning.
These students were often very enthusiastic, and we had a wonderful time together, but they were somehow unable to put in the necessary practice on their own. I always got the sense that they had never done the kind of work that real improvement would have required – that they literally had no concept of it. Sometimes, they even switched tests in the hopes that their scores would magically rise without their having to put in too much work. Needless to say, that approach did not pay off.
Interestingly, I have several colleagues who regularly find themselves in the position of being “second round” tutors – tutors who are called in after a student has failed to make sufficient progress with another tutor, or even multiple tutors. Like me, they are often stunned at the types of basic information their students’ previous tutors failed to impart, or at least to impart in a way that students were able to absorb. If personalization were truly the issue, these types of scenarios would not occur with such alarming regularity.
I suspect that many, if not most, of these former tutors are well-meaning, but techniques that are ineffective in the classroom are just as ineffective in one-on-one situations. An adult who lets a student flail around for 15 minutes trying to “discover” a concept that could be easily taught in three is doing a major disservice. I’ve witnessed this kind of teaching, and it’s almost painful to observe. (I have to restrain myself from grabbing tutors by the shoulders, crying, “Just teach it to them already!”) One is left with the impression that these tutors have been so thoroughly indoctrinated with the importance of indirectly “guiding” students that they cannot really see what is happening in front of them.
For their part, students do not generally get overtly upset because they want to please their tutors, and tutors can consequently pat themselves on the back for helping students take control of their own learning. But the result is that basics are made out to be inordinately complicated and confusing, preventing students from ever really getting a handle on the subject or progressing to more advanced activities.
Sometimes this state of affairs goes on for months before it becomes apparent that something just isn’t working. Finally, parents start hunting around for yet another tutor, one who can really get the job done. At that point, they’re eager to have someone knowledgeable and competent, with a demonstrated track record, tell their teenager (and possibly them as well) exactly what to do.
I’ve had several recent discussions with fellow “second-round” colleagues about just what it is that makes the most effective tutors so effective, and the overwhelming consensus is always that the best tutors possess a very particular type of efficiency. Not only do they know their subjects phenomenally well and present them in such a way that students can both retain the material and apply it when it counts, but they can also anticipate the problems a student is likely to have and tailor sessions so as to cut off those problems before they even have a chance to occur. As a result, they can sometimes accomplish in only a few sessions what another tutor might not be able to accomplish in months.
Not coincidentally, this type of targeted tutoring is highly traditional in many ways – even if it does contain what are usually thought of as progressive elements. It is student-centered insofar as it is targeted to the student’s particular needs; however, its primary aim is not to develop unique gifts or creativity (although the student may sometimes discover a new gift as a result) but rather to transmit information in as clear, coherent, and systematic a manner as possible, and to ferret out points of weakness so that they can be directly addressed.
Although this type of tutoring must be a conversation in which the student is an involved participant, it is a conversation in which the tutor is unapologetic about knowing more than the student does and is fully willing to embrace responsibility for that fact. It also involves very considerable amounts of repetition.
In that regard, it is the polar opposite of pretty much everything current wisdom about education holds dear.
But because this type of tutoring is so personalized, and often so engaging, no one really notices its more traditional features. (Indeed, “traditional” and “boring” are so thoroughly conflated in the popular imagination that any teaching that is not boring is automatically assumed not to be traditional.) Besides, when college admission is on the line, educational theories are the last thing anyone worries about. And the inescapable fact is that whatever someone happens to think about it, this type of teaching works.
Thus, tutoring largely escapes the kind of criticism that, in another context, would be heaped on the type of pedagogy it employs.
To come at this from another angle, I think it’s fair to say that the lack of regulation is simultaneously the best and the worst aspect of the tutoring industry. Anyone can throw an ad up on Craigslist and advertise their services, and there are a lot of hacks out there. On the other hand, the lack of oversight means that private tutors are not compelled to march in lockstep with pedagogical fads. They remain free to use techniques more common in 1986, or even 1966, without any fear of pushback. Pragmatism is free to trump ideology.
People who wonder why bright college grads don’t want to go into teaching should look no further than the tutoring industry because there are certainly plenty of them there. If schools don’t offer sufficient autonomy – in my experience, successful tutors tend to be somewhat quirky as well as fiercely independent – the private sector certainly allows these individuals free rein, not to mention potentially far higher compensation.
The supply side only exacerbates the issue.
As more students come through a progressive-inf(l)ected system, college included, fewer and fewer graduates have experience with the most effective type of direct instruction. And people can’t normally teach in a format they haven’t experienced themselves. On this subject, a quick anecdote: A colleague, a decorated AP teacher, told me the story of a group of younger teachers sent in to observe him teach. They had heard he was “traditional” and were astonished to discover that he did not simply talk at his students for the entire class but actually allowed them to ask questions. And this was in a highly ranked district in one of the most educated states in the country.
Furthermore, the promotion of STEM and “practical” (read: business) degrees has also led to an ever-declining number of students achieving advanced competencies in the humanities. Despite the popular rhetoric about useless English majors working at Starbucks, the reality is that only 6.1% of all college students received degrees in all areas of the humanities combined in 2014.
In addition, humanities departments at many schools are notorious for their lack of rigor as well as their grade inflation. It is usually safe to assume that even English majors have never had to diagram sentences.
The people who do acquire serious skills in the humanities tend to come out of a small group of elite schools and be fairly privileged to begin with. Within that group, the number of people who can also teach well is quite small indeed. An even smaller group actually wants to teach. And don’t even get me started on “soft” factors like reliability.
Now, basic economic theory of course states that decreased supply of an item or skill will cause prices for that item or skill to rise. And nowhere is this clearer than in the private market for test-prep tutoring, where the ability to effectively teach a certain set of skills usually deemed “irrelevant” is actually in very high demand.
As a tutor, I spent a good deal of time covering material I had been directly taught for free, in public school – material that was very clearly foreign to my students. To put it mildly, it always seemed to me that there was something not quite right about that.
Essentially, the direct instruction of crucial skills that used to be – and that should still be – standard fare in classrooms across the country has now become something accessible to a much smaller fraction of students.
Techniques that would be viewed with distaste when associated with less privileged students have been transformed into a coveted marker of status. I know of one Manhattan tutoring firm, famous for its exorbitant rates, whose tutors reportedly dictate notes while students write them word-for-word, by hand.
I do think that this situation is in large part the result of misplaced good intentions. But in seeking to avoid one extreme, it is possible to go too far in the other direction. Pedagogical strategies that are appropriate for preschoolers are far less suited to high schoolers; and to return to one of my favorite themes, what makes students happy in the short term is not necessarily what will serve them best in the long run.
Looking back on my own high school experience, some of the teachers from whom I learned the most were not the inspirational ones, but rather the merely competent and unremarkable ones who, in their own steady, dull way, taught me what it meant to acquire a rock-solid foundation in a subject. For a girl whose parents could not help her with her homework (not until I was out of college did I realize that some of my classmates had probably received that kind of support), and who was only dimly aware of the concept of professional tutoring (which would have been unaffordable anyway), that was not a small thing.
That foundation took me very far in college, and I literally would not be where I am without it. I suspect that those types of teachers are in much shorter supply today. I am sorry for that, and for all the students whose educations will be shortchanged because of allegiance to a theoretical ideal.
Unfortunately, as education schools increasingly promote the teacher-as-facilitator model, other approaches are largely reduced to a caricature. And as teachers come under increasing administrative pressure to employ progressive pedagogies, teachers who don’t fit the mold are unlikely to remain in the classroom for decades the way their predecessors did. That is a shame for the education system – but for the tutoring industry, it is a boon.
In continuation of my previous post, some thoughts on one of progressive education’s favorite tools: group work.
A good deal of fuss is currently being made about the importance of preparing students to work collaboratively in groups, in preparation for the twenty-first century economy. In the context of these discussions, group work, much like “critical thinking,” is typically presented as a formal skill that can be developed in the absence of any specific context.
On the surface, this is one of those claims that seems eminently reasonable. Because many well-paying jobs in the current economy do in fact require some degree of collaboration among workers, it seems logical that children should be trained to work collaboratively.
But a school is not the workplace, and the embrace of group work as a goal in and of itself overlooks the fact that the real-life conditions under which adult workers collaborate are markedly different from the conditions under which students are expected to do so.
First, when adults are hired for skilled, white-collar jobs (presumably the type of “twenty-first century jobs” schools are currently devoted to preparing students for), they are typically hired because of their expertise in a particular field.
When they collaborate in groups with their colleagues, it is not for the purpose of fulfilling some pedagogical imperative but rather because their particular areas of expertise make the individual group members particularly well-suited to working together toward a specific goal.
In addition, it can normally be taken for granted that while members have a variety of strengths and weaknesses, they all possess a full array of basic competencies; indeed, it is reasonable to assume they would not have been hired otherwise.
In sharp contrast, the primary purpose of school is not (or, at the very least, should not be) to have children share their expertise in the service of a particular goal, but rather to acquire a broad range of fundamental knowledge. With the exception of a minuscule percentage of students who are truly capable of performing at an adult level in a particular area, children are not experts in the same way that adults paid to do a particular job are.
Whereas employees are contracted to serve their employers’ bottom line, schools exist to serve children — not in the sense of waiting on them hand and foot, but rather in the sense of assuming responsibility for equipping them with the skills necessary to become functioning members of society. This is a fundamentally different paradigm from that of the working world, and for that reason, the two cannot truly be equated.
As is the case in so many other areas of education these days, part of the compulsive focus on group work results from the confusion between behaving like experts and actually being experts. The assumption is that if students are taught to display the same behaviors that experienced adult professionals display, then they will actually come to possess the know-how of those adults (the “cargo cult” theory of education, or “rote understanding”). Doing is substituted for knowing.
Furthermore, children are not hired by schools as a result of their meeting specific professional criteria, but rather are placed in them according to a variety of geographic, socio-economic, and academic factors. The range of basic skills exhibited in a given classroom thus tends to be far wider than that exhibited by adults collaborating in a professional setting.
So while groups composed of adults are of course sometimes plagued by slackers and whiners and generally difficult personalities, there is usually a baseline level of competence that can be taken for granted; in contrast, groups of students are considerably more likely to contain members who are genuinely lacking in basic skills.
As a result, stronger students inevitably end up covering for weaker ones (why struggle with something unpleasant when you can palm it off on someone else?), regardless of whether teachers are careful to assign each member a specific role in an attempt to preclude that possibility.
Is that really the lesson that group work is intended to impart — that diligent, knowledgeable students should learn to cover for their less diligent and knowledgeable peers, while the latter should learn to exploit the generosity of the former?
That is undoubtedly how the world actually does work sometimes, but it is highly questionable whether schools should be going out of their way to facilitate those types of interactions in the name of promoting an amorphous ideal of “collaboration.”
Let me cite Hannah Arendt here:
The authority that tells the individual child what to do and what not to do rests with the child group itself–and this produces, among other consequences, a situation in which the adult stands helpless before the individual child and out of contact with him. He can only tell him to do what he likes and then prevent the worst from happening…
As for the child in the group, he is of course rather worse off than before. For the authority of a group, even a child group, is always considerably stronger and more tyrannical than the severest authority of an individual person can ever be. If one looks at it from the standpoint of the individual child, his chances to rebel or to do anything on his own hook are practically nil; he no longer finds himself in a very unequal contest with a person who has, to be sure, absolute superiority over him but in contest with whom he can nevertheless count on the solidarity of other children, that is, of his own kind; rather he is in the position, hopeless by definition, of a minority of one confronted by the absolute majority of all the others…
Therefore by being emancipated from the authority of adults the child has not been freed but has been subjected to a much more terrifying and truly tyrannical authority, the tyranny of the majority.
(But obviously, children jabbering away at each other in their groups look so happy and engaged that this sort of coercion could not possibly be taking place!)
As a matter of strict practicality, there is no way that any teacher can control each individual group in a classroom designed to resemble a three-ring circus. Adult group dynamics can undoubtedly be toxic at times, but at least they take place in the context of people with fully developed prefrontal cortexes. And at any rate, a student whose primary takeaway from years of group work is a finely honed ability to foist responsibility onto others is hardly anyone’s dream employee/colleague. Experience can cut both ways.
Furthermore, the practice of letting students consistently play to their strengths in the service of pretending that they are experts deprives them of the less-pleasant but essential experience of gaining important competencies that do not come easily. Learning to master the basics is a crucial part of what school is for. If students fail to gain fundamental knowledge in school and do not have parents or tutors to fill in the gaps, where, then, will they acquire these skills?
The attempt to prepare students for the working world by having them imitate a common workplace behavior thus has the paradoxical effect of making them less prepared to join the adult world. If employers complain that their younger hires have difficulty collaborating, they are still talking about people who had the skills to be hired in the first place. People who lack the skills to do the job are, for obvious reasons, not part of the equation — and one of employers’ biggest complaints involves recent graduates who lack the skills necessary to be hired in the first place.
It is also highly debatable whether employers and teachers even have the same definition of group work.
In school, it has come to refer to the classroom practice of dividing students into small groups of three to five, in which each student is assigned a specific role, with the expectation that they will teach one another as well as demonstrate that they are learning actively. Meanwhile, the teacher circulates, monitoring interactions, gathering data, and occasionally providing clarification (but not, heaven forbid, direct instruction!) tailored to individual students’ and groups’ unique learning styles. (If you think this is a caricature, you probably don’t have much experience with contemporary pedagogy.)
It seems reasonable, however, to assume that employers would have a somewhat broader definition of what constitutes group work. In a real-world setting, successful participation in group work involves, among other things, the ability to work autonomously for long stretches (projects often stretch for weeks or even months) and to summarize one’s work clearly to others; to interact with people of different ages and experience levels, in person and electronically, and to adjust one’s communication style accordingly; to express one’s ideas clearly and succinctly in writing, complete with proper grammar, spelling, punctuation, etc.; and to complete assignments by their deadlines.
The development of these skills is hardly contingent on the amount of group work one has participated in throughout school. In fact, students whose primary mode of interaction involves chatting with their peers are unlikely to accrue the skills necessary to communicate effectively with anyone over the age of 21. Again, the effect is to keep students in a prolonged state of immaturity.
Finally, the notion that students will fail to acquire the soft skills necessary to flourish in the twenty-first century workplace unless schools reject direct, teacher-led instruction in favor of small, student-led groups runs counter to both history and common sense. Participating in groups is part of participating in life. A family is a group. A class is a group. A sports team is a group. (American children probably spend nearly as much time participating in extracurricular group activities as they do in school.) Adults have successfully worked in groups for thousands of years; modern civilization could not have been built otherwise.
The economic shift toward high-knowledge jobs and away from manual ones should therefore not necessitate the knee-jerk replacement of pedagogical techniques that, after all, have remained pretty effective throughout centuries of upheaval. It is the height of arrogance to assume that the recent proliferation of digital devices suddenly nullifies an entire body of accumulated wisdom — that because years now begin with 20- rather than 19-, we are so unique, so modern, that the past can no longer inform our approach to learning.
And on a practical note, as someone who has now hired a number of people for various positions, I can state that in some regards, things haven’t really changed all that much. A person who possesses the requisite skill-set for a job, presents themselves professionally, is reliable, pays attention to details, writes clearly, has a healthy dose of common sense, and is generally pleasant to be around, will not have an outsized amount of trouble finding or retaining some sort of job in most fields — even in the twenty-first century.
Note: for one other great takedown of group work, see http://www.aft.org/ae/spring2015/bennett.