The True Scholar
In the Age of Money, self-interest dominates the academy. But it may not explain how the world works. Perhaps it’s time to reconnect scholarship to morality.
By Robert N. Bellah
Using "The True Scholar" as a title may sound rather quaint in modern academic discourse. Yet on occasion, and perhaps unreflectingly, we still use the expression. When we say of someone that he or she is a true scholar, or a true scientist, we mean not only that he or she is knowledgeable or skillful, though we do mean that, but that the person has a character, or a stance toward the world, that is clearly normative or ethical, not merely cognitive. In our common use, then, though not in our reigning philosophies,
the true and the good are not two different things, but aspects of one thing. This essay is an effort to make that commonsense perception more conscious and defensible in the argument about what scholarship in its multiple meanings, including teaching, is all about.
Let me turn to that cantankerous but highly intelligent philosopher, Alasdair MacIntyre, to open my argument. In 1991 in "The Mission of a Dominican House of Studies in Contemporary North America," an unpublished manuscript, he wrote:
What contemporary universities have characteristically lost, both in their practice and in their theorizing, is an adequate grasp of the relationship between the intellectual and the moral virtues. . . . For while the university thinks of itself as a place of enquiry, it happily rejects the thought that such enquiry should be envisaged as having any one overall telos or good and that that good might itself be adequately intelligible only as an ordered part of the human good. What goods enquiry is to serve and how they are to be envisaged is instead to depend upon the choices and preferences of the enquirers and of those who supply their material resources. For academic freedom on a liberal view of it requires that rival beliefs about the human good, including denials that there is such a good, should be encouraged to coexist in a university which is itself to be uncommitted. What enquiry needs from those who practice it is not moral character, but verbal, mathematical and problem-solving skills. A few qualities of character are of course highly valued: industriousness, a show of deference to one’s professional superiors and to the academic system, cheerful collegiality, and sufficient minimal honesty to ensure reliability in reporting research findings. For these are qualities of character functionally necessary, if skills are to be successfully put to work. But there is no overall end to be served by those qualities or those skills, no agreed or presupposed ultimate good in view. What is the outcome?
It is fragmentation, so that by and large what goes on in one area of enquiry has little or no connection with what goes on in other areas.
The fragmentation that MacIntyre accurately points out is perhaps the result not so much of the lack of a notion of the human good as of the presence of an idea of the human good that is left undiscussed. I will return to this matter.
Pure Reason Versus Ethics
A major source of our problem is the iron curtain drawn by Immanuel Kant between the cognitive and the ethical, between, in his terms, pure reason and practical reason. According to Kant, an unbridgeable gap separates the two realms. We cannot get to one from the other, but each requires a beginning from scratch on its own terms. As a result, our modern quasi-Kantian university has decided to commit itself to cognitive inquiry and to push ethical inquiry to the margins as a subfield in philosophy or something the professional schools can worry about. The quasi-Kantian university actually carries a much more substantive ethical message than it admits to, but before going into that, I want to explore alternative possibilities.
While for Plato the Good, the True, and the Beautiful had an ultimate unity, Aristotle saw a clear distinction between the intellectual and the moral virtues. And it was Aristotle more than Plato who influenced the subsequent tradition in the West. So, long before Kant, we had a problem with how the two sets of virtues were to be related. But Aristotle, unlike Kant, perceived a relationship. While from one point of view wisdom, sophia, was the highest virtue, from another point of view the governing virtue was phronesis, inadequately translated as prudence or practical reason. Let me interpret phronesis as judgment, remembering that this is judgment only in an elevated sense of the term. One could say, pushing Aristotle just a bit, that judgment is the most intellectual of the practical virtues and the most practical of the intellectual virtues. In other words, it is the place they come together. Judgment in this use of the term involves a sense of proportion, of larger meaning, of what a situation requires, at once cognitively and ethically.
When we say that an action or a person is "truly human" we are using phronesis, judgment. We mean simultaneously that this action or this person is such as humans can be and such as they ought to be. Similarly, when we call something inhuman, like ethnic cleansing, we are saying that it falls below not only the level of what humans ought to do, but what we expect them to do.
We use judgment in this sense all the time and not only in the humanities and the social sciences; we could not conduct the scholarly enterprise without it. Thus we rely not only, as MacIntyre claimed, on the "functional virtues" supportive of a limited view of scholarship, but also on judgment, which, as I am using it, is one of the highest virtues. But MacIntyre’s criticism is correct insofar as we do not take responsibility for what we are doing. We claim devotion to pure cognitive inquiry without any other intent, and we argue that the only normative basis for our inquiry is freedom; we do not take conscious responsibility for the fact that freedom without judgment would lead to self-destruction.
No Higher Purpose
In Three Rival Versions of Moral Enquiry, MacIntyre describes three notions of what the university is today. Adapting his terminology, I will call these notions traditional, positivist, and postmodernist. The traditional is of course where we came from: the tradition of liberal education with its strong ties to the classics and, in America, to theology. Beginning in the last decades of the nineteenth century, the traditional was gradually displaced by the positivist model of untrammeled inquiry, which embraces subjects never included in the older curriculum and throws off the narrow conception of what a classical and Christian education ought to be. But it also, in part inadvertently, dismisses any defensible notion of phronesis or judgment that might have held the enterprise together in the face of positivism’s penchant for fragmentation. Quite recently, postmodernism has arisen partly as a criticism of what its proponents see as the false cognitive neutrality of the positivist university. Postmodernists have argued, not without evidence, that the university exists only to support existing structures of power, particularly in the areas of class, race, and gender. But postmodernism rejects tradition just as readily as it discounts positivism, perceiving tradition as yet another form of power play. In so doing, it fails to bring back any notion of judgment as a governing virtue. Indeed, it rejects the idea of a governing virtue altogether.
But changes in the university, and therefore in scholarship, over the last hundred years have not come about only because of altered intellectual understandings. Changes in the relationship between the university and society have also played a part. The university has never been a place devoted solely to the formation of character or to pure inquiry. The university has always been an avenue of social mobility. One’s life chances are enhanced by attaining a university degree—about that plenty of empirical evidence exists as far back as one can go.
Mobility aspirations have long placed pressures on universities, but for much of that time, they were gentle pressures. By and large, the university’s authority to tell upwardly mobile young men, and later young women, what they needed to know was not basically challenged. And the liberal arts, as a central core of the curriculum, continued to draw students even after the positivist model of the university had gained dominance. But in recent decades, students have begun more and more to tell us what they want to know. The fact that a much higher percentage of the population goes to college accounts for part of this change, as do shifts in our culture. But the phenomenon has drastic consequences for the curriculum, hiring, and scholarship, which I will describe in a moment. In a world of consumers, consumer students now make the decisions, for better or for worse, that were once made by faculty.
But consumer students are not the only source of pressure that universities have faced. Universities, and so scholarship, have been seen as serving external purposes, above all those of the state and the economy. By far the most influential outside purpose deriving from the state has been the pressure to contribute to war efforts. The university was mobilized, if briefly, during World War I; more totally during World War II; and even more thoroughly during the long twilight period of the Cold War lasting until just about a decade ago. Universities grew accustomed to large government research grants, not only in the natural sciences, but in the humanities and the social sciences as well for fields such as area studies. Since the end of the Cold War, the most important external purpose the university is supposed to serve has been the economy, though economic usefulness has been a university purpose to some degree at least since the founding of land-grant colleges in the nineteenth century. I wrote of these pressures in the January–February 1999 issue of Academe, so I won’t elaborate further here.
Age of Money
It might be helpful to look at evidence of changes in the university relative to my theme, namely, that the true scholar requires a true university, or at least something like one. I have suggested that the very notion of a true university depends on the survival of what MacIntyre means by traditional inquiry: inquiry in which the link between the intellectual and the moral virtues is not entirely broken, in which something like judgment has at least a degree of influence. In the current understanding of the university, the humanities, even though they are at the moment rent by civil war, are closest to this understanding.
In the May–June 1998 issue of Harvard Magazine, James Engell and Anthony Dangerfield published a survey of trends in the humanities titled "The Market-Model University: Humanities in the Age of Money." Here are some of the most important findings:
- The humanities represent a sharply declining proportion of all undergraduate degrees.
- Between 1970 and 1994, the number of B.A.’s conferred in the United States rose 39 percent.
- Among all bachelor’s degrees in higher education, three majors increased five- to tenfold: computer and information sciences, protective services, and transportation and material moving.
- Two majors, already large, tripled: the health professions and public administration.
- Business administration, already popular, doubled.
- English, foreign languages, philosophy, religion, and history all suffered absolute declines.
In addition, the authors point out that
Measured by faculty salaries—a clear sign of prestige and clout—the humanities fare dismally. On average, humanists receive the lowest faculty salaries by thousands or tens of thousands of dollars; the gap affects the whole teaching population, regardless of rank.
Humanists’ teaching loads are highest, with the least amount of release and research time, yet they’re now expected, far more than three decades ago, to publish in order to secure professorial posts.
Humanists are also, more than others, increasingly compelled to settle for adjunct, part-time, nontenured appointments that pay less, have little or no job security, and carry reduced benefits or none.
There’s even more, but I don’t want to be too depressing. Perhaps these trends cannot be found everywhere, but the article cites my own alma mater, showing that they have occurred at Harvard. It would seem that few schools have entirely escaped them.
Having observed that "the humanities’ vital signs are poor," the authors seek an explanation and find it in what they call the Age of Money:
When we termed the last thirty years the Age of Money, we were in part referring to the dollar influx of research grants, higher tuitions, and grander capital improvements. But there’s another, more symbolic, aspect to the Age of Money, and one not less powerful for being more symbolic. The mere concept of money turns out to be the secret key to "prestige," influence, and power in the American academic world.
They argue that there are "Three Criteria for the power of money in academia, whose rule is remarkably potent, uniform, and verifiable. Academic fields that offer one (or more) of the Three Criteria thrive; any field lacking all three languishes."
In the Age of Money, they continue, the royal road to success is to offer at least one of the following:
- A Promise of Money. The field is popularly linked (even if erroneously) to improved chances of securing an occupation or profession that promises above-average lifetime earnings.
- A Knowledge of Money. The field itself studies money, whether practically or more theoretically, i.e., fiscal, business, financial, or economic matters and markets.
- A Source of Money. The field receives significant external money, i.e., research contracts, federal grants or funding support, or corporate underwriting.
If this picture of the contemporary university is accurate, and it would be hard to argue that it does not contain at least some truth, then our life together in the university is governed by neither the intellectual nor the moral virtues but by a vice, namely, cupidity, acquisitiveness, or just plain avarice, the same vice that dominates our society as a whole in the Age of Money. To the extent that this is true (and I do not believe it is the whole truth), it has come about more through default than by intention: it is the result of many small decisions made by administrators and faculty concerned to keep their institutions afloat in a changing society. Yet insofar as we are dominated by one of the classic vices rather than the intellectual and moral virtues, we have ceased to be a true university, which makes it increasingly difficult for us to be true scholars.
Rational Choice Theory
In America, and to some degree throughout the world, we seem to have returned in the past thirty years to something from the last decades of the nineteenth century, that is, unconstrained laissez-faire capitalism. And just as the theory of social Darwinism mirrored the strident capitalism of the late nineteenth century, so the rise of rational choice theory reflects the emergence of neo-laissez-faire capitalism in the last thirty years. Rational choice theory is more subtle, more technically sophisticated than social Darwinism, but it is still an offspring of the same lineage, one that ultimately goes back to utilitarianism, the commonsense philosophy of the Anglo-American world since at least the eighteenth century. (Rational choice theory assumes that social life can be explained principally as the outcome of the rational choices of individual actors, who typically base their actions on what they perceive to be the most effective means to their goals.)
Rational choice theory is now taken as a given in economics and has spread out into many neighboring disciplines: political science, sociology, law, even religious studies. If the theory is true, we need to admit not only that acquisitiveness is the fundamental human motive, but also that, as it was put in the 1980s, "greed is good." And we must also concede that we were mistaken all these years, in all the religions and philosophies of mankind, in thinking cupidity a vice instead of our chief virtue. We are only beginning to see the full implications of such thinking in our society and our universities today.
Yet a powerful argument can be mounted against rational choice theory as an adequate explanation of the human condition, one that gives us hope that all is not lost in the defense of the intellectual and moral virtues. Let me briefly outline the history of rational choice theory, based on the as-yet-unpublished work of S. M. Amadae, who has completed a brilliant and illuminating dissertation on its history titled "Rational Choice Theory in Economic, Political, and Policy Science, 1944–1975: A New Chapter in Economic and Political Liberalism." Surprisingly, Amadae’s is the first attempt to write a history of this influential movement.
Do you know the institution primarily responsible for the emergence of rational choice theory after World War II? The Rand Corporation.
Rand began in 1946 with an initial infusion of $10 million from the Army Air Force. It was meant to maintain the collaboration of scientists, scholars, and the military after the end of World War II in a quasi-governmental, quasi-private institution. In the 1950s the corporation became closely associated with the Ford Foundation and engaged with—that is, employed, gave short-term fellowships to, or consulted with—virtually every major contributor, in no matter what field, to the emergence of rational choice theory. To quote Amadae directly: "Locating the development of the conceptual apparatus for rational choice theory within the national security environment counters a basic myth frequently perpetuated about the origin of rational choice theory."
The myth, she says, has two parts: (1) that the idea of the rational actor in the rational choice sense was always at the heart of economics, and (2) that rational choice theory involves the export of economic models to other disciplines. The recognition of the importance of Rand, however, allows for a correct understanding. Amadae writes:
This lineage [that is, the origin of rational choice theory in Rand] reveals two crucial facts which are otherwise hopelessly obscured. The conceptual framework for rational choice theory was developed to solve strategic, military problems and not problems of economic modeling. Furthermore, this idea set was developed to inform policy decisions, not merely retrospectively to analyze behavior as the social sciences often claim of their own methodology. . . . The theory of rational action had interlocking descriptive, normative, and prescriptive components, and was developed to inform action respecting nuclear strategy and complex questions of weapons procurement.
Indeed, the first real classic on rational choice theory in economics was Kenneth Arrow’s Social Choice and Individual Values, published in 1951 but written mostly in 1948, when Arrow was at Rand, where he had been, according to Amadae, "assigned the task of deriving a single mathematical function which would predict the collective political outcomes for the entire Soviet Union."
I don’t dispute that rational choice theory had, by the 1980s, become central in economics, nor that it has had an enormous influence in recent years, particularly through the University of Chicago’s economics department, on many other fields, including my own. But I want to set the record straight on the origin of rational choice theory by showing that it did not originate in disinterested theorizing in some university ivory tower. Instead, it emerged from the very practically oriented Rand Corporation and had, in that context, "interlocking descriptive, normative, and prescriptive components," to use Amadae’s words. Probably the single most important theoretical source of rational choice theory was Von Neumann and Morgenstern’s Theory of Games and Economic Behavior, published in 1944. Mainstream economists regarded the book as unimportant until they finally absorbed Arrow’s work.
Fatal Flaw
Whatever one thinks of game theory, rational choice theory as developed at Rand was prescriptive, and it did indeed determine action. Its first great empirical test came when one of its primary devotees, Robert McNamara, who was not a professor but president of the Ford Motor Company and then secretary of defense, had a chance to use it as the basis for decision making in the Vietnam War. (I won’t develop the chain that links McNamara to Rand, but it is a tight one.) I think it is safe to say that McNamara’s Vietnam test was not a success. And the reason was that the North Vietnamese would not behave as rational actors are supposed to behave, because they had absolute value commitments, or ideological zealotry, or whatever you want to call it, which simply was not explicable in rational-actor terms.
I want to suggest two things from this example. One is that rational choice theory is wrong not because much human action cannot be explained in such terms—much human action can indeed be explained in rational-actor terms—but because all human action cannot be explained in such terms. For a theory that claims to be total, the existence of exceptions is fatal. Exceptions are particularly fatal when the decisions the theory cannot explain turn out to be not minor cases of unexplained variance, but decisions critical to the understanding of human action.
The other is that rational choice theory, born in the intense engagement of the Cold War as a tool for the prosecution of that war, is now ensconced in the university and taught to students as scientific truth. When Gary Becker writes A Treatise on the Family to show that choices about marriage and family stem from each individual’s maximizing his or her competitive, strategic self-interest, is that a treatise about the True or the Good? Or, indeed, is it about virtue or vice? Is there any way of teaching that book as though it had no practical intent? Even a student who says, "Well, I’m not really like that," will conclude that "if other people are, then I had better behave in strategic terms or I will be taken advantage of."
Let me conclude by recounting an exchange I had with one of my ablest recent students. He wrote, quoting a well-known French sociologist, that all human action is motivated by a competitive struggle to increase some form of capital. I said to him, "Is that true of you? Are you just out to increase your capital? How could I ever trust you if that were true?" I don’t say he was instantly converted, but my reaction had a sobering effect on him. He responded, "I never thought of applying this theory to myself."
Well, theories do apply to ourselves, and they have tests that are both empirical and ethical. Often, it is impossible to tell where the cognitive leaves off and the ethical begins. Scholars live in the world, and the world we live in right now is dominated by money. If we believe that the struggle for strategic advantage is the truth about human beings, then we should realize that we are not just teaching a scientific truth; we are preaching a gospel. We have done that before in our intellectual history, and we decided that it was wrong. But a lot of things that we thought had gone away have returned in recent years. So if we don’t think that the struggle for strategic advantage is the whole truth about human beings, then in our scholarship and our teaching we should begin consciously to accept that our work is governed by the virtue of judgment, at least in aspiration. That alone would be an enormous contribution in our present situation.
Robert Bellah is Elliott Professor of Sociology emeritus at the University of California, Berkeley, and coauthor of Habits of the Heart (1985) and The Good Society (1992).