Numbers Are Not Everything

Stop making so many personnel decisions based on quantitative, rather than qualitative, data.
By Milton W. Cole

Numbers—of publications, grant money, PhD students, and invited talks, for example—play too large a role in assessments of faculty. My thirty-five years of experience in higher education have convinced me that overreliance on such numbers is a big problem, especially, but not exclusively, in the sciences.

During my first few years as a faculty member, salary increases in my department were based almost entirely on the number of publications and on whether one had external grant support. Most research articles present minor, incremental contributions to a field, while just a few have significant impact—so why simply count? In my own field, theoretical physics, some ideas surface, are analyzed, and are then written up for publication within just a few weeks or less, so typical publication rates are quite high. In some experimental laboratories, by contrast, the construction of the apparatus can take several years. (In other laboratories, of course, the experiments are accomplished much more quickly.) Hence, there is a natural disparity in publication rates. Every scientist recognizes this counting problem, but administrators, who need to assess the relative merits of faculty, sometimes find it difficult to understand that numbers are not really adequate or “objective” criteria for evaluation.

In many instances, the pressure to publish more articles is actually counterproductive. A junior faculty member striving for high numbers of publications might well be making the wrong investment in the prime of his or her career. In one case I know well, an individual published essentially the same research several times, disguising his misbehavior with different titles.

The expression “least publishable unit” (nicknamed LPU) refers to the practice of breaking an extended research project into small pieces for the sake of a higher publication count. A scientist publishing LPUs will have many more publications than someone publishing more substantial articles, even though the integrated scientific content might be the same. Is this approach what we want to teach our graduate students? Is this situation not a reductio ad absurdum of the supposedly objective meaning of numbers?

What Counts?

The “numbers problem” takes a variety of forms and has unfortunate consequences. One of my colleagues, for example, told me that he rarely publishes in the journal that is most appropriate to his research field because his department’s chair does not “count” publications in that journal. Why not? Because the “impact factor” of that journal, the mean number of citations received per article the journal publishes, is low. To whom is he trying to communicate the results of his research—his department head or scientists in his field?

Some years ago, I served on the university senate’s research committee. The discussions seemed to take for granted that funding level is an absolute measure of research quality. Because I voiced skepticism about this assumption, the committee asked me to compare funding level with research quality. Using the 1994 National Research Council Assessment of Research Graduate Programs (http://sites.nationalacademies.org/pga/Resdoc/index.htm) and other, more recent, surveys, I found that for a diverse group of some twenty-five physics departments, there was no overall correlation between quality and funding level per capita.
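
For readers curious how such a comparison might be carried out, here is a minimal sketch in Python. The department figures below are invented for illustration only (they are not the survey data described above); the point is simply that one can compute a rank correlation between per-capita funding and a program-quality rating.

    # Illustrative only: invented quality ratings and funding figures, not the NRC survey data.
    from scipy.stats import spearmanr

    quality_rating = [4.2, 3.1, 4.8, 2.9, 3.7, 4.0, 3.3]      # survey-style quality scores
    funding_per_capita = [310, 280, 150, 400, 220, 260, 330]   # external funding (thousands of dollars) per faculty member

    rho, p_value = spearmanr(quality_rating, funding_per_capita)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
    # A rho near zero with a large p-value would indicate no clear relationship
    # between funding level and rated quality.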

One of the more perverse consequences of the emphasis on numbers involves collaborative research projects. In many situations, junior faculty members are told that collaboration is not advantageous for their careers because skeptical evaluators might attribute the funding and the output of the collaboration to the senior partners. I find this perspective ironic. The junior partner is often the one doing most of the work! And with the percentage of research money going to groups (rather than individuals) increasing, devaluing such collaborations can discourage faculty members from pursuing an important source of research support. Most significant to me is that collaborations usually involve complementary expertise, so they provide a valuable way to increase the impact of one’s research. Why implicitly denigrate their value by not counting such funding?

I worry that the emphasis on numbers might be getting worse with the advent of online tools like Google Scholar and the Web of Science. While serving many wonderful purposes, these tools can have the unfortunate consequence of allowing anyone to determine a scholar’s citation counts within a few minutes. Because publication rates differ among different kinds of scientists (experimental versus theoretical) and because the number of scientists varies from field to field (few study cosmology, while many study semiconductors), a direct comparison of citation counts makes little sense.

The Solution

What is to be done about this problem with numbers? First, one must affirm forcefully that the numbers have little value per se. “Bean counting” is a lazy person’s way to assess quality, and all of us must work harder to assess present or prospective faculty members. We can solve this problem only when the faculty members closest to the individual’s discipline have established independent credibility in their assessments. This assumes, of course, that such faculty members are conscientious in making a sustained critical effort to understand the content and impact of their junior colleagues’ research.

Some universities, recognizing the exaggerated emphasis on numbers, ask that external reviewers base their assessment of a person on the contribution contained within some small number of publications selected by the candidate. This approach to evaluation could go a long way toward fixing the problem I have outlined.

Academic assessments require a particular humility when carried out by those outside the candidate’s fields. Junior colleagues must be mentored and judged carefully, respectfully, and sympathetically. Doing so is easier said than done, but these investments of time and thought are critical to the health of our colleges and universities.

Milton W. Cole is a distinguished professor of physics at Pennsylvania State University. He serves on the advisory board of Penn State’s Laboratory for Public Scholarship and Democracy and is co-chair of the intercollege minor in civic and community engagement.