Changing Practices in Faculty Evaluation

Can better evaluation make a difference?
By J. Elizabeth Miller and Peter Seldin

Years ago, the process of faculty evaluation carried few, if any, of the sudden-death implications that characterize contemporary evaluation practices. But now, as the few chosen for promotion and tenure become fewer and faculty mobility decreases, the decision to promote or grant tenure can have an enormous impact on a professor’s career. At the same time, academic administrators are under growing pressure to render sound decisions in the face of higher operating costs, funding shortfalls, and the mounting threat posed by giant corporations that have moved into higher education. Worsening economic conditions have focused sharper attention on the evaluation of faculty performance, with the result that faculty members are increasingly assessed through formalized, systematic methods.

This study was undertaken to determine whether contemporary methods of evaluation differ significantly from those previously used. For comparative purposes, base data were derived from our 2000 study, which concluded that meaningful evaluation of faculty performance was rare and that judgments frequently were based on information gathered in haphazard, even chaotic, fashion.

To ensure wide coverage, we sent questionnaires to the academic deans of a random sample of accredited four-year liberal arts colleges. Of 538 academic deans surveyed, 410 (76 percent) responded, an unusually high response rate. Many of the deans added their comments and attached committee reports. We read these materials carefully, and we include our impressions here.

Evaluating Overall Faculty Performance

In considering a professor for promotion in rank, tenure, or retention, academic deans today weigh a wide range of factors. Our questionnaire offered thirteen criteria for consideration, and table 1 summarizes the relative importance given by the deans to “major factors” in 2000 and 2010.

Table 1. Percentage of colleges that consider each criterion a “major factor” in evaluating overall faculty performance

Criterion*                               2000 (n=506)   2010 (n=401)
Classroom teaching                           97.5           99.3
Campus committee work                        58.5           70.5
Student advising                             64.2           69.1
Research                                     40.5           51.8
Publications                                 30.6           39.6
Length of service in rank                    43.8           38.1
Public service                               23.6           23.0
Activities in professional societies         19.9           20.1
Personal attributes                          28.4           18.0
Supervision of graduate study                 3.0            3.6
Supervision of honors program                 3.0            3.6
Consultation                                  2.0            3.6
Competing job offers                          3.0            0.7
*In descending order by 2010 scores

Deans continue to regard classroom teaching as the most important index of overall faculty performance. But significant changes have occurred in other areas. The traditional measures of academic repute, research and publication, have assumed new importance, while activity in professional societies has held nearly steady. The percentage of deans citing research as a major factor in overall faculty evaluation rose from 40.5 percent in 2000 to 51.8 percent in 2010, and the percentage citing publication rose from 30.6 to 39.6 percent over the same period.
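
Readers who want to check whether shifts of this size could plausibly be explained by sampling variation alone can apply a standard two-proportion z-test to the reported figures. The short Python sketch below is our own illustration, not part of the original survey analysis; it assumes the two years’ respondents are independent samples and uses only the percentages and sample sizes reported in table 1.

from math import sqrt, erfc

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a change in a survey proportion.

    p1, p2 are proportions (0.405 means 40.5 percent);
    n1, n2 are the respondent counts in each survey year.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)               # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))   # standard error
    z = (p2 - p1) / se
    return z, erfc(abs(z) / sqrt(2))                       # z and two-sided p-value

# Research as a "major factor": 40.5 percent of 506 deans in 2000
# versus 51.8 percent of 401 deans in 2010
z, p = two_proportion_z(0.405, 506, 0.518, 401)
print(f"z = {z:.2f}, p = {p:.4f}")  # about z = 3.39, p = 0.0007

By this test, the rise in research as a major factor is unlikely to be a sampling artifact, and the same check applied to publications (30.6 to 39.6 percent) also reaches conventional significance.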

Deans prize the visibility of published research and papers presented at professional meetings, partly as a result of the economic stress being experienced by many institutions. A North Carolina dean wrote: “Most of our budget comes directly from the state legislature. They want faculty to publish and present papers at professional meetings so our college stays visible.” A New York dean said, “High visibility is the name of the game today. It’s important to stay in the public eye.”

These remarks lend credence to the oft-heard observation that faculty members are paid to teach but are rewarded for their research and publication.

The importance of “staying in the public eye” is probably also reflected in the consideration some deans give to faculty members’ public service. Approximately one-quarter of the deans (about the same percentage as ten years earlier) view public service as a major factor in evaluating faculty performance. Colleges and universities appear to be encouraging faculty members to get involved in community and civic affairs.

Colleges also expect faculty members to get involved in on-campus activities. The percentage of deans citing campus committee work as a major factor in faculty evaluation rose from 58.5 to 70.5 percent between 2000 and 2010. This increase may reflect a trend toward decentralization and a broader sharing of the institution’s nonteaching load. Similarly, student advising rose from 64.2 to 69.1 percent during the ten-year period. Deans recognize the value of student advising as an outreach effort to keep students content and in school. As a California dean said, “Student advising is expected of every faculty member. It’s part of their teaching responsibilities.”

Length of service in rank still merits high, if somewhat diminished, importance (falling from 43.8 to 38.1 percent). Colleges relying on this factor presumably would argue that a positive correlation exists between the number of years in rank and the faculty member’s overall contribution to the institution. That argument is open to challenge by younger faculty members with fewer years of service but rapidly expanding reputations.

“Personal attributes,” an elusive phrase that for years has allowed some deans and department chairs to ease undesired faculty members out of jobs or to deny them tenure or promotion, has declined in importance. Its use as a major factor in evaluation dropped from 28.4 to 18.0 percent. This change suggests that fewer faculty members are being punished today for the wrong dress, wrong politics, or wrong friends. A Texas dean wrote, “We no longer expect conformity. Today, diversity is more highly valued.”

Other criteria are viewed as relatively less significant by deans in evaluating faculty performance. These include supervision of graduate study, competing job offers, outside consultation, and supervision of an honors program.

In comparing the evaluation practices of overall faculty performance in 2000 with those in 2010, three significant trends emerge: (1) academic deans are almost unanimous in citing classroom performance as the most important index of overall faculty performance; (2) research, publication, campus committee work, and student advising have gained in importance; and (3) length of service in rank is still considered important but has lost ground.

Evaluating Teaching Performance

Liberal arts institutions have long taken pride in the high caliber of teaching offered by their faculties, a fact borne out by the deans’ almost unanimous citation of classroom teaching as the most important factor in evaluating overall faculty performance. But how is teaching itself evaluated? What sources of information are used? 

Table 2 examines the sources of information and their frequency of use by deans in the 2000 and 2010 studies. Several significant changes emerge. Over the ten-year period, the largest shifts in use were nearly all increases: classroom visits rose by more than twenty percentage points, and self-evaluation, the chair’s evaluation, committee evaluation, and systematic student ratings each rose by six points or more. This pattern indicates that the information-gathering process is becoming more structured and systematic and that colleges are reexamining and diversifying their approach to evaluating classroom teaching.

Table 2. Percentage of colleges that “always used” the source of information in evaluating faculty teaching performance

Information Source*                      2000 (n=506)   2010 (n=401)
Systematic student ratings                   88.1           94.2
Chair evaluation                             70.4           79.1
Self-evaluation                              58.7           67.6
Dean evaluation                              64.9           67.6
Classroom visits                             40.3           60.4
Committee evaluation                         46.0           52.5
Course syllabi and exams                     38.6           42.5
Colleagues’ opinions                         44.0           41.0
Scholarly research/publications              26.9           28.0
Grade distribution                            6.7           10.1
Alumni opinions                               9.0           10.1
Informal student opinions                    15.9            9.4
Long-term follow-up of students               6.0            7.2
Student exam performance                      5.0            2.2
Enrollment in elective courses                1.5            0.1
*In descending order by 2010 scores
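
To make the pattern in table 2 concrete, the brief sketch below (again our own illustration, using only the figures printed in the table) computes each source’s 2000-to-2010 change in percentage points and ranks the sources by the size of the shift. The largest movements are dominated by increases in use.

# Table 2 figures: percentage of colleges "always using" each source, 2000 vs. 2010
sources = {
    "Systematic student ratings": (88.1, 94.2),
    "Chair evaluation": (70.4, 79.1),
    "Self-evaluation": (58.7, 67.6),
    "Dean evaluation": (64.9, 67.6),
    "Classroom visits": (40.3, 60.4),
    "Committee evaluation": (46.0, 52.5),
    "Course syllabi and exams": (38.6, 42.5),
    "Colleagues' opinions": (44.0, 41.0),
    "Scholarly research/publications": (26.9, 28.0),
    "Grade distribution": (6.7, 10.1),
    "Alumni opinions": (9.0, 10.1),
    "Informal student opinions": (15.9, 9.4),
    "Long-term follow-up of students": (6.0, 7.2),
    "Student exam performance": (5.0, 2.2),
    "Enrollment in elective courses": (1.5, 0.1),
}

# Rank sources by the absolute size of the 2000-to-2010 shift in percentage points
changes = sorted(
    ((name, y2010 - y2000) for name, (y2000, y2010) in sources.items()),
    key=lambda item: abs(item[1]),
    reverse=True,
)
for name, delta in changes[:5]:
    print(f"{name}: {delta:+.1f} points")
# Expected output: classroom visits (+20.1) leads, followed by
# self-evaluation (+8.9), chair evaluation (+8.7), committee
# evaluation (+6.5), and informal student opinions (-6.5).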

Student ratings continue to be the source of information most widely used to assess teaching. The use of written, formal student ratings increased from 88.1 to 94.2 percent over the ten-year period.

A dean in Texas wrote, “Students are the most accurate judge of teaching effectiveness.” Remarked a California dean, “Student views are given top priority here.” And a Massachusetts dean said, “Student ratings are crucial in evaluating teaching for tenure and promotion decisions.” Although student ratings are enjoying unprecedented popularity, not all deans support their use. Said a dean in South Carolina, “Student ratings have led directly to grade inflation. If you give high grades, students will reward you with high ratings.”

Chairs and deans continue to have a major—and increasing—impact. The number of deans citing the chair’s evaluation as always being used rose from 70.4 to 79.1 percent, and those citing the dean’s evaluation rose from 64.9 to 67.6 percent. Since administrators are such important sources of information, one might ask, how sound are their judgments? On what information do they rely?

One increasingly important source of information is self-evaluation. It has jumped in use from 58.7 to 67.6 percent over the ten-year period. Many academics—faculty members and administrators alike—believe that self-evaluation can provide insights into the values and beliefs that help shape course and instructional objectives and, in turn, contribute to classroom competency.

One supporter of self-evaluations, a Georgia dean, wrote, “Self-evaluation is the keystone of our evaluation system. Rooted in faculty teaching portfolios, it gives us insights that are not available anywhere else.” And an equally ardent dean from Colorado argued, “Self-evaluation is invaluable. It gives us the important values and attitudes that determine why faculty members teach as they do.”

The use of classroom visits rose dramatically between 2000 and 2010. These visits are now “always used” in personnel decisions by 60.4 percent of the colleges surveyed. That is a far cry from the 40.3 percent that always used them in the earlier study. The jump in use was a surprise, considering the commonly expressed faculty view that a classroom is the professor’s private domain.

A preponderance of academic deans take issue with that view. Typical was the comment of a Pennsylvania dean: “How can the evaluation process be done without formal classroom observation?” The dean from a Colorado college argued, “Classroom visitation is the only way to really know what’s going on behind the closed classroom door.”

A key question is this: on what or whom do deans (and chairs) depend for information? It appears that they rely, in part, on faculty committees: 52.5 percent of the deans in the 2010 study said they always use committee evaluation as a teaching evaluation source, compared with 46 percent in the earlier study.

The question persists: how do faculty committees arrive at their decisions? Are decisions based on solid, relevant information? It appears that the committees’ impressions of a professor’s classroom competence are partly based on the professor’s record of research and publication. This record was a source of information always used by 28 percent of the deans in the current study, a slight increase from the nearly 27 percent cited in 2000.

How relevant are research and publication to an assessment of classroom teaching? Arguably, if they provide insights into the professor’s teaching effectiveness, they can be useful measuring rods. However, the number of books, journal articles, and monographs offering such insights is quite modest.

Impressions of teaching competence also are derived, in part, from analysis of a professor’s course syllabi and examinations. Central to this approach is whether such instructional materials are current, relevant, and appropriate to the course outline. This approach has gained importance. It was cited by 42.5 percent of the deans in 2010, compared with 38.6 percent in 2000. The increased use of syllabi and examinations for assessing teaching performance is consistent with the trend toward more structured information gathering. A California dean put it this way: “Much can be learned about a professor’s views on teaching by a careful review of course content and objectives, exams, reading lists, and student-learning experiences.”

The remaining sources of information on teaching performance were infrequently used, but some shifts in emphasis are noteworthy. Deans placed more importance on grade distribution and less on informal student opinion and student exam performance in 2010 than they did in 2000.

In comparing the evaluation of teaching practices in 2000 and 2010, three significant trends emerge: (1) systematic student ratings have increased in use and, perhaps for the first time, provide input for personnel decisions at more than 90 percent of the colleges surveyed; (2) reliance on classroom observation and self-evaluation has sharply increased as part of a multisource system of assessment; and (3) department chairs and deans continue to play leading roles in evaluating teaching, but their domination has been lessened by the wider use of committee evaluation.

While it is clear that evaluation methods themselves are changing, how much these changes reflect improvement remains unresolved. More certain is the growing conviction among administrators and faculty members alike that improved evaluation practices will lead directly to improved teaching performance.

J. Elizabeth Miller is associate professor of family and child studies at Northern Illinois University. She has taught more than fifteen thousand students and mentored more than one hundred faculty in her thirty-year career. Her e-mail address is [email protected]. Peter Seldin is distinguished professor of management emeritus at Pace University. Having previously served as academic dean and department chair, he has been on both sides of the faculty evaluation issue during his long academic career. His e-mail address is [email protected].