Some Historical Perspectives on Reference Inflation

Do letters of recommendation mean anything?
By Peter Stearns

One of the revealing changes that has taken place in American life over the past half century is the decisive transformation of letters of reference from candid appraisals to hymns of praise. This development has affected thousands and has spread across a variety of venues—from high schoolers seeking college admission to graduate applicants, candidates for medical residencies, and even tenure-track faculty. And it’s a national distinction: comparative studies consistently suggest that Americans have come to expect a particularly lavish style, compared even with Canadians, who in other respects are usually regarded as being nicer. In a Facebook post, philosopher Jonathan Birch recounts a standard academic joke that captures the difference from his native Britain: the American professor writes of his graduate student, “Jones is, without doubt, the most agile thinker of his generation,” while his British counterpart muses, “His work is quite good, I would even say it compares favorably with the work of some of my other PhD students.”

The process through which this change has occurred has rarely been studied. In one sense, we hardly need a further examination: relevant players—for example, admissions officers—understand the current game and are at least vaguely aware that the situation was different in the past. Many colleges and universities have dropped a reference requirement altogether because of the unreliability of letters of recommendation, or have switched to a largely numerical system that will, at most, pick out a few really bad apples, with most candidates sailing through with inflated scores. As grades count for less and student diversity grows, reliable recommendations might be genuinely helpful to candidates and institutions alike. But, for the moment at least, the process is out of control.

Some people in the admissions domain believe that the inflationary process began relatively recently, pointing to the 1990s as the inflection point. But they also confuse cause and effect, as in frequent claims that legal liability plays a major role in explaining the new behaviors; in fact, such liabilities are largely a recent result of the process of change and are not terribly significant even today. A brief historical exploration will not only set the record straight—on the timing and specific factors involved—but may also encourage a larger assessment of the implications for American academic life.

Historical Background

American references were once more candid than they are today, and arguably more useful as well. Earlier commentary, while noting that many referees did not put as much time into their task as they should (surely an observation that remains relevant), also reported a tone that would be hard to find today. Lloyd N. Morrisett and Frank Evans Berkheimer, in their graduate theses from the mid-1930s, provide a few samples of language from contemporary letters of recommendation: “Some people in this section have questioned her deportment on certain occasions,” one letter states. “His pupils are fairly well interested in their work, but never excel. I believe you could procure his services at his present salary,” reads another. One recommender writes, “She is married but her husband is not with her. . . . If she were not my sister I would like to speak of her in detail.” And then there is the clincher: “Please destroy this letter when you have read it.”

The contemporary contrast was captured in a 2000 Chronicle of Higher Education article by Alison Schneider: “Puffery is rampant. Evasion abounds.” Or, as the philosopher Sissela Bok writes in her study of lying, the current mindset assumes that inflated praise “helps someone, while injuring no one in particular.” The experience of having a well-meaning referee claim, concerning three quite different candidates, “absolutely the best student I have ever had,” is intriguingly commonplace. Inflation has, over time, inevitably promoted intensification: today, the reference that is not unabashedly effusive, that fails to conceal any shortcomings, is not likely to pass muster. As Andrew Flagel, senior vice president for students and enrollment at Brandeis University, puts it, “I can attest that over the course of my career [since the early 1990s] . . . anything less than a superlative and entirely supportive recommendation from a college counselor can be regarded as a message that there are concerns about the student.” Growing percentages of faculty and counselors simply refuse to write letters unless they can be almost entirely positive (in one survey, only 6 percent agreed that they might include negative comments). The American School Counselor Association touts ethical standards but urges that referees “at the same time highlight the students’ strengths.” Another study suggests that only 7 percent of all reference letters for college applicants offer ratings of average or below—a statistical improbability, given that close to half of any applicant pool must by definition fall at or below average. The College Board similarly opined in 2017 that “letters of recommendation work for you when they present you in the best possible light, showcasing your skills and abilities.”

The origins of the inflation trend can be squarely placed in the 1960s. A 1966 study by psychologist George Siskind of references for medical interns found that only 6 percent contained any specific criticism, while a full 87 percent were resoundingly positive. Siskind posited three possibilities for the enthusiasm: “the writers want to see angels, so they write about them”; “the writers do not care what they see, they only want to write about angels”; or “the writers want to deceive us.” References in this case were no longer focused on the capacity to do a good job. And while this particular comment is unusual for the time—it took a while for fuller awareness of change to emerge—scattered reactions in the 1970s and 1980s further suggested a process that was well under way. Thus in 1988, again with reference to medical interns, psychologists Rodney Miller and Gregory Van Rybroek asked, “Where are the other 90%?” as letters began to announce that everyone was now in the top decile. A 1984 study by Paul Kasambira identified an even wider pattern in college and graduate school applications that had developed over the previous fifteen years and had become “shameful at best, disastrous at worst.” Evaluators were increasingly confronted with “the contradiction presented by a volume of glorious recommendation letters and a candidate’s weak academic record.” Clearly, reference inflation began more than fifty years ago and has simply climbed further since.

The change was all the more remarkable in that it coincided with another development, one that might have pointed in the opposite direction. Before the 1950s, most school and college recommendation letters had emphasized moral qualities and social background; by the 1960s, criteria had decisively shifted to intellectual achievements and skills. Comments about a candidate for graduate school being “cultured,” “loyal,” or “poised” dropped away in favor of emphasis on critical-thinking ability, writing skills, or imagination. The transition reflected the growing postwar diversity of the student body, which helps explain the increasing avoidance of commentary on social background. The overall shift, as several observers noted, was meritocratic: let student abilities speak for themselves, regardless of origins or polish. But while the change in focus was real, it did not lead to the kind of assessment rigor that meritocracy might suggest. Rather, referees proved increasingly eager to combine their new attention to student performance with the implicit claim that this same performance was little short of miraculous.

The surge of reference inflation overlaps with the larger phenomenon of grade inflation—it is part of a basic process of change, not a belated addition to it. But the initial timing facilitates analysis of the reasons for change, helping to eliminate or at least minimize some frequently offered explanations.

While student diversity may have played some role in the process, reference inflation did not begin because of any particular desire to protect categories of minority students, for wide interest in educational affirmative action emerged somewhat later. Nor did the growing use of faculty on contingent appointments play a role in launching the new trend (whatever their later role in grade inflation), for here, too, the chronology does not fit. Nor did the increasing business orientation of American colleges and universities, which provides new motivations for student retention, play a part—again, this trend emerged more recently and (unlike grade inflation) may not much affect the reference process. And while concern about protecting students from the Vietnam War coincided with the initial steps toward reference inflation and may have affected grading policies, reference letters mattered little to draft boards.

Even more decisively, there is no real evidence at all that a fear of lawsuits entered into the picture. It is impossible to prove that the broader litigiousness of the national culture did not affect counselors and faculty, but there was no relevant legal action at all until well after the inflation trend was under way. To be sure, the 1974 Family Educational Rights and Privacy Act (FERPA) did allow students to see their recommendations, unless they specifically waived the right, and we know that letters have been more candid when students have waived that right. But FERPA merely added fuel to an existing flame: references had already started to become more laudatory. Finally—though this last commonplace is slightly harder to dismiss—it is probable that reference inflation began before widespread use of student course evaluations emerged in the late 1960s, though by the 1990s, disgruntled faculty were eager to blame this villain for their colleagues’ indulgence. It is worth noting as well that, in contrast to grades, references were usually written well after course ratings had been completed, which casts additional doubt on explicit linkages.

A New Culture

If the origins of reference inflation do not stem primarily from the specific factors discussed above, we must fall back on admittedly mushier but intriguingly sweeping possibilities. Reference inflation (and probably grade inflation as well, though additional factors entered in here) reflects a major change in the ways educational authorities, and probably elements of a wider public, thought about relatively able students—the sort who were most likely to seek recommendations in the first place—even as the emphasis on academic criteria increased. A new kind of culture was developing that sought to protect and encourage a cluster of students, rather than focusing on distinctions among them.

The first factor in the new equation has often been noted: competition for college slots heated up beginning in the 1960s, thanks to baby boomer enrollments and increasingly national recruitment by the high-prestige private colleges and universities. This inevitably placed a higher premium on favorable endorsements. Even here, however, the result could have been greater rigor, not less—which is why a change in referee culture must enter the explanation as well. Also worth considering are increased class sizes at some institutions, which might well have reduced referees’ detailed knowledge of individual candidates.

A second factor, less widely realized, shaded off from the first. National concern about identifying and nurturing the better American students expanded during the 1950s and 1960s, as international competition (Cold War and economic alike) became more obvious. Cultivating students, not winnowing them, gained new priority. Predominantly middle-class high schools—and counselors—increasingly accepted the obligation to place a growing number of graduates in college, making these decades an inflection point in the criteria by which school success was itself determined. As one college counselor, Marguerite Jackson, put it in 1965, “The ‘self-made man’ is a vanishing breed and the term ‘college-bound’ is becoming a stable concept in our vocabulary.” The role of reference letters clearly gained new prominence in this larger transition. Pressures on guidance counselors intensified, encouraging many of them to believe, or at least to claim, “that their superior students are the best in the country”—but also to seek ways to boost the chances of the cluster of students right below the top group.

Even in this context, relevant authorities might have emphasized their role as guardians of the gate rather than promotional collaborators (this article has not systematically considered the gender factor in references, which is a vital subject but is largely separate from reference inflation overall). But faculty at various educational levels, and not just professional advisers, became increasingly invested in their students’ success; one of the earliest explicit comments on reference inflation thus referred to a new level of, in Paul Kasambira’s words, “fear among the faculty that a student might not get into a desired program, because of some mildly critical comment even amid praise.” With few exceptions, instructors began to feel considerably more comfortable as facilitators than as judges. Students themselves, and their parents, increasingly assumed greater ability than they in fact possessed—the belief that “all the children are above average,” closely tied to the growing interest in promoting self-esteem in the schools from the 1960s onward, made it increasingly unacceptable to offer anything but enthusiasm in response. Efforts to establish a friendlier classroom atmosphere similarly constrained critical comment—always remembering that the better students were particularly likely to be seeking recommendations in the first place. Ultimately, within the context of growing competition for college, a broader set of cultural changes launched the pattern of reference inflation—changes that came to embrace students, parents, and most educational authorities alike.

After students had demonstrated a minimal level of competence, encouragement became the key goal; for the faculty, niceness now topped meritocratic precision. Praising students, or even colleagues, within a certain group became an extension of a new kind of faculty self-image. The friendly mentor took over from the careful evaluator.

Once the trend was launched, of course, additional factors did enter in—soon including FERPA disclosure concerns. The US News & World Report ratings frenzy began in 1983, further affecting applicant expectations and institutional responses and ultimately stoking fears of litigation. From the 1990s onward a few court cases spiced the process, though again the actual level of relevant litigation remained modest. More important was the self-escalating quality built into the trend itself. Praise became increasingly essential simply as a matter of fairness to the students involved, since everybody else seemed to be lavishing it so freely. What might still have been an acceptably mild criticism in the 1970s, though already surrounded with considerable hyperbole, might seem a kiss of death two decades later, when everyone else’s referees were presumably adorning the path with roses. Some other arbiter—grades, or Advanced Placement test scores, or the cumulative wisdom of admissions committees—must be left to do the dirty work.

Contemporary trends show few signs of modifying a pattern that now has more than half a century’s worth of precedents. Understanding the phenomenon as the product of considerable change is, at the least, an interesting historical detail. Deciding how significant it is, discussing its ongoing impact on the academic process, wondering if an alternative might be preferable (or even achievable)—these are larger issues that a historical sketch prepares us to contemplate.  

Peter N. Stearns is a University Professor of History at George Mason University and former provost. He works on the history of emotion and other aspects of changing American character in recent decades.
