The Whistleblower Effect

Faculty evaluations are not always put to good use.
By Mihran Aroian and Raymond Brown

Imagine the following scenario: You are teaching a course and give a writing assignment. With the deadline fast approaching, one student asks a classmate for a copy of her paper, makes a few revisions, and submits the assignment as his own original work. When you confront the student with evidence of plagiarism, he accepts responsibility for cheating. The second student, who provided the original paper, is shocked that she, too, would be accused of cheating. She simply e-mailed her friend a copy of her paper. She claims that it is not her fault that her friend, after changing a few words, submitted what is essentially the same paper.

The first student takes responsibility for his actions but is upset because you have brought his friend into the picture. The second student is upset because she believes she did nothing wrong. Neither of the students seems upset about the cheating; they are upset that they have been caught and face official sanctions. As it turns out, both students are members of the same student organization, as are other students in your class. How will this situation affect your end-of-semester evaluations?

Academic integrity is important in higher education, not only because it ensures the validity of grades and eliminates unfair advantages but also because academic dishonesty in college has been correlated with unethical business practices in the workplace. Business professor Randi L. Sims, echoing the findings of others in the field, concludes in the Journal of Education for Business that “subjects who engaged in behaviors considered severely dishonest during college were more likely to engage in behaviors considered severely dishonest at work. Similarly, subjects who engaged in mildly dishonest behaviors (or no dishonest behavior) during college were more likely to engage in only mildly dishonest behaviors (or no dishonest behavior) at work.” Providing students with a strong foundation in ethical decision making in preparation for entering the workforce has been stressed for many years and is critical in our global economy.

Student Evaluations of Teaching

In 2011 the Office of the Dean of Students at the University of Texas at Austin, which is responsible for the investigation and disposition of student conduct and academic violations, instituted a faculty-in-residence program to study ways of improving academic integrity on campus. As a part of this program, the faculty member in residence met with other faculty members, students, and administrators to gain a broad understanding of academic integrity on a campus with fifty thousand students. A recurring theme emerged: some faculty members were unwilling to report cases of academic dishonesty to the dean of students. They were concerned about the negative effect such action might have on their end-of-semester Course Instructor Survey (CIS), which may be used for promotions, tenure decisions, and merit reviews.

These faculty members believed that a “whistleblower effect” would affect student evaluations of teaching. Many viewed it as professionally advantageous not to report academic violations and believed that reporting students would lead to lower course-evaluation ratings that could impede professional advancement. This hypothesis is supported by the work of computer scientist Aditya Parameswaran, who maintained in an article in Teaching in Higher Education that approximately three times as many faculty members choose not to report cases of academic misconduct as those who do report them through the proper administrative channels.

The CIS is administered by UT’s Center for Teaching and Learning. Toward the end of each semester, students fill out the survey form to provide feedback on their instructors’ teaching. The CIS solicits responses on a number of different dimensions, but when used for evaluation purposes, the item that is typically given most weight asks the student to rate the instructor’s overall performance on a five-point scale that ranges from “very unsatisfactory” to “excellent.” The center’s website describes the CIS as a tool used to “provide faculty members with valuable student feedback about classroom experiences and teaching and learning approaches.” It goes on: “the surveys are also used to provide feedback to administrators about the effectiveness of a faculty member’s teaching and student rapport. This information, along with a teaching portfolio and peer review of course materials, is used in promotion and tenure decisions.”

Numerous researchers have examined the importance of student evaluations of instructors in tenure decisions, promotions, annual performance reviews, the hiring of new faculty, and the dispensation of teaching awards. Several published articles provide qualitative assessments of academic dishonesty and course evaluations. Texas Tech University professors Stacy L. Carter and Narrisra M. Punyanunt-Carter reported in the College Student Journal on a plagiarism study in which 276 college students rated various instructor responses to academic dishonesty. The authors noted that “college instructors may not realize the impact of their actions on their students until after they have imposed the action or after they have received their teaching evaluations from students.” Public policy scholar Laura Langbein, writing in the Economics of Education Review, described a consistent pattern that emerged in her study of 7,686 graduate and undergraduate courses: faculty who provided higher grades achieved higher student evaluation ratings. Langbein looked at grade inflation and teaching performance. The evidence she presented supports the general conclusion that grade inflation exists and increases teacher-evaluation scores. Other researchers have provided evidence that grades have increased while SAT scores have decreased. One can surmise that faculty members who report cheating will generally assign lower overall grades, since any academic sanction reduces course grades.

CIS Study

Although student evaluations of instructors in higher education have been widely studied, to our knowledge, no published work quantitatively examines how reporting cheating and academic misconduct affects a professor’s student evaluations. We set out to study this topic at UT. Our original hypothesis was that a thorough analysis of the data would demonstrate no correlation between reporting cheating and earning lower teacher ratings. After a series of internal discussions, we decided to commit the time and resources necessary to test this hypothesis.

The process for gathering the data started with Student Judicial Services (SJS). Faculty members who officially reported academic misconduct to SJS spanned a wide range of disciplines and included individuals from the McCombs School of Business, the Cockrell School of Engineering, the College of Natural Sciences, the College of Liberal Arts, the School of Nursing, and the College of Communications. In order to be included in our study, the student had to have met one-on-one either with the faculty member or with an SJS administrator to discuss and resolve a case of academic dishonesty. In either case, the student, the faculty member, and SJS had documentation of the official sanctions.

After considering several research designs, we decided to use a statistical technique known as hierarchical linear modeling to compare instructors’ evaluations in a semester when they reported incidents of cheating to their evaluations in a semester when they taught the same class and did not report any cheating. This approach made intuitive sense: it is not unreasonable to assume that, all other things being equal, very popular instructors who are engaging will receive higher evaluations than other instructors. Conversely, professors who are demanding and unapproachable tend to receive lower-than-average ratings. Failing to account for this clustering of ratings within instructors could distort the results.
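The need to account for clustering can be illustrated with a toy example (the instructors and ratings below are invented for illustration, not data from the study): if the two conditions contain different mixes of instructors, a pooled comparison conflates instructor popularity with the reporting effect, whereas a within-instructor comparison, the intuition behind a random-intercept model, removes each instructor’s baseline first.

```python
from collections import defaultdict
from statistics import mean

# Invented ratings: (instructor, reported_cheating_that_semester, mean CIS rating).
# The popular instructor happens never to report; the demanding one reports once.
ratings = [
    ("popular",   False, 4.6),
    ("popular",   False, 4.5),
    ("demanding", False, 3.2),
    ("demanding", True,  3.0),
]

# Naive pooled comparison: mean of all non-reporting semesters minus mean of
# all reporting semesters. Instructor popularity leaks into the estimate.
not_rep = [r for _, rep, r in ratings if not rep]
rep = [r for _, rep, r in ratings if rep]
pooled_gap = mean(not_rep) - mean(rep)   # ~1.1, mostly reflects popularity

# Within-instructor comparison: difference each instructor's own semesters,
# then average across instructors who appear in both conditions.
by_instr = defaultdict(lambda: {True: [], False: []})
for instr, was_reported, rating in ratings:
    by_instr[instr][was_reported].append(rating)

within_gaps = [mean(d[False]) - mean(d[True])
               for d in by_instr.values() if d[True] and d[False]]
within_gap = mean(within_gaps)           # ~0.2, the reporting effect alone
```

With invented numbers like these, the pooled gap is several times the within-instructor gap, which is why a model with instructor-level grouping is preferable to a simple pooled comparison.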

We also included one hypothesized predictor—reporting students for cheating. Any faculty member who reported three or more cases of academic dishonesty in a given semester would be included. We did not count any cases submitted after the course evaluations were completed; if the instructor filed a case after the end of the semester, it was unlikely that the student would have been informed of an investigation prior to completing the CIS. If an instructor reported students for cheating in more than one semester, we used the most recent semester in which students were reported. Summer semesters were not considered.

The guideline for choosing the comparative sample was the same course taught by the same instructor in the same term in the closest year in which no students were reported for academic dishonesty. If the criterion for semester comparison could not be met, the semester closest in temporal proximity was used, excluding summer terms. For example, if instructor “Dr. X” of Chemistry 101 reported five students for academic dishonesty in spring 2011, then that course was suitable for analysis if an appropriate matching group could be identified.
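The matching rule can be sketched as a small selection routine (the function name, semester encoding, and example values are illustrative, not drawn from the study):

```python
def pick_comparison(reported, candidates):
    """Prefer the same term in the closest year; otherwise fall back to the
    candidate semester closest in time. Semesters are (year, term) pairs with
    term in {"spring", "fall"}; summer terms are assumed excluded upstream."""
    def when(sem):
        year, term = sem
        # Encode fall as half a year later than spring for distance ties.
        return year + (0.5 if term == "fall" else 0.0)

    same_term = [c for c in candidates if c[1] == reported[1]]
    pool = same_term if same_term else candidates
    return min(pool, key=lambda c: abs(when(c) - when(reported)))

# Dr. X reported cheating in Chemistry 101 in spring 2011; spring 2009 is
# chosen over fall 2010 because it matches the term.
match = pick_comparison((2011, "spring"), [(2009, "spring"), (2010, "fall")])
```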

Applying these criteria resulted in a sample of thirty-two instructors with a total of 8,940 students. A total of 4,832 students (54 percent) were classified as “reported” and 4,108 (46 percent) were classified as “not reported” for cheating. Students in the “reported” condition were not necessarily reported for cheating themselves but were enrolled in a class where at least three students were reported. Despite the fact that only classes in which academic dishonesty was reported qualified for this study, the sample covers a fairly broad section of the university. Courses in accounting and chemistry made up just over 40 percent of the total, but the remainder consisted of courses in a wide range of disciplines.

The results indicate a statistically significant negative effect for reporting students. The estimated mean rating for classes in which students were not reported for academic dishonesty was 3.95. The estimated mean for classes in which students were reported was 3.75, which is 0.20 points, or one-fifth of a standard deviation, lower. This finding suggests that reporting students for academic dishonesty before course-instructor surveys are filled out has a negative impact on overall course-instructor survey ratings. If these results can be generalized, a review of policy on the use of course surveys in the evaluation of instructors is needed.
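The size of this gap can be expressed as a standardized mean difference. A minimal sketch (the pooled standard deviation of roughly 1.0 is inferred from the statement that 0.20 points equals one-fifth of a standard deviation; it is not reported directly):

```python
def standardized_gap(mean_unreported, mean_reported, pooled_sd):
    """Mean rating difference in standard-deviation units (Cohen's d)."""
    return (mean_unreported - mean_reported) / pooled_sd

# Estimated means from the study: 3.95 (no students reported) vs. 3.75
# (students reported), with an inferred pooled SD of about 1.0.
d = standardized_gap(3.95, 3.75, 1.0)   # 0.20, one-fifth of an SD
```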

The quandary is clear. On the one hand, instructors are expected to enforce academic integrity in the classroom. Institutions are interested in accurately measuring what a student has learned. Cheating obfuscates that process. Additionally, by not enforcing academic integrity, instructors implicitly condone cheating and undermine the objective of instilling in their students ethical values that they can carry with them into their lives beyond college. No institution wants students to continue cheating once they begin their careers, where the consequences of unethical behavior are often much more severe. On the other hand, however, course surveys often play a large role in the evaluation of faculty, and an instructor interested in advancing professionally would be well advised to ensure that his or her course ratings are as high as possible. Assistant professors are potentially harming their chances at gaining tenure if they enforce academic integrity in the classroom.

We believe that the lower faculty ratings result in part from student interactions. If a student is caught cheating, he or she is more likely to criticize the instructor in the company of peers. Comments such as “the professor failed me on the homework” (rather than “the professor gave me a zero for cheating”) can convince other students that somehow the professor is not treating students fairly. The end result—the grade—is the same, but students caught cheating will generally not want to tell their peers that they have done so. Placing the blame on the professor or some other external factor is easier than taking responsibility for their own academic misconduct.

Students who are sanctioned still tend not to accept full responsibility. During the 2010–11 academic year, Student Judicial Services participated in the National Assessment of Student Conduct Adjudication Process Project, which includes a web-based survey that measures student perceptions related to the adjudication process. UT was one of twenty-one higher education institutions to participate in this process. The participants in this survey were students who had completed the official adjudication process; on a self-reporting basis, 5.5 percent of UT students responded to the question “What was the outcome of your case?” with “My case was dismissed.” In reality, only 0.15 percent of the cases were dismissed. During the 2010–11 academic year, only two of the 1,382 cases adjudicated by SJS were dismissed. During the 2009–10 academic year, only one of 1,314 cases was dismissed, yet students self-reported that 6.9 percent of the cases were dismissed.

Recommendations

An obvious, and unfortunate, conclusion that can be reached is that instructors who demand academic integrity in the classroom should not report cheating if they are concerned with the extrinsic rewards associated with higher student evaluations. Alternatively, instructors could wait until after course evaluations have been completed before confronting students regarding academic misconduct. Neither of these options is acceptable. If confrontation is delayed, a student may continue to cheat over the course of the semester without an opportunity to take corrective action, which could lead to additional incidents of academic dishonesty.

A system should be developed that encourages instructors to uphold academic integrity in the classroom while not punishing them professionally for doing so. We make the following recommendations:

1. Encourage all faculty members to communicate expectations of academic integrity to their students. Unless all members of the faculty take academic integrity seriously, changing behavior and culture campuswide will be difficult. Academic integrity can be improved only when all campus constituents work together.

2. Communicate regularly with faculty about how students cheat, how to confront and deal with students who cheat, what the best classroom practices are, and how to reduce cheating.

3. Require students to complete an online academic integrity tutorial each academic year. This can be incorporated into learning management systems such as Moodle, Canvas, and Blackboard.

4. Encourage faculty members to invest a little time in a lot of different places to promote academic integrity in the classroom. Communicating expectations before a major graded assignment can reinforce the message.

5. Designate a point person in each department to deal with issues of academic integrity.

6. Consider regularly updating assignments and exams. If past exams and assignments have been uploaded to popular websites that students use, they may be tempted to copy the material.

7. Consider including a question on student evaluation forms about the instructor’s expectations and practices to ensure academic integrity in the classroom.

8. Ensure that every syllabus includes language that addresses academic integrity and the consequences of cheating.

9. Include academic-integrity training as a component of new faculty orientation.

10. Consider the effect on instructors who report academic misconduct when using student evaluations for merit review and promotions.

Our results are based on evaluations of thirty-two instructors at one university. While our findings are provocative, they are hardly definitive. If future research finds further evidence of this “whistleblower effect,” the need for a review of policy regarding the use of course-instructor surveys at institutions of higher learning will become more pressing.

Mihran Aroian is a lecturer in the Management Department of the McCombs School of Business at the University of Texas at Austin and was the first faculty member in residence in the Office of the Dean of Students, Student Judicial Services. His e-mail address is mihran.aroian@utexas.edu. Raymond Brown is a graduate student in the quantitative methods program at the University of Texas at Austin. His e-mail address is raymond.brown@utexas.edu.

Comments

More confidence can be placed in the results of an analysis when multiple methods yield the same results. In this case, another approach that could be applied is a meta-analysis of effect sizes. A meta-analysis statistically combines the results of several studies that address a shared research hypothesis. We applied this approach to the data associated with this analysis. First, we treated each instructor as a “study” and calculated an effect size estimate d (Cohen, 1988) for the difference between the mean CIS rating for the class where no student was reported and the class where students were reported. This yielded 32 effect size estimates, one for each of the 32 instructors included in the study. We then applied formulas put forth by Hedges and Olkin (1985) for a fixed-effect meta-analysis to obtain the weighted effect size and its associated standard deviation. Finally, this value was tested against zero using a basic Z test. The results are presented below:

Weighted effect size: 0.21
Standard deviation of weighted effect size: 0.02
Z statistic: 9.81***

These results are consistent with those obtained using HLM: the weighted effect size of 0.21 is significantly different from zero (Z = 9.81), indicating an overall reporting effect of approximately one-fifth of a standard deviation.
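The fixed-effect combination described above can be sketched in a few lines (the per-instructor effect sizes and variances below are invented placeholders, not the study’s data):

```python
import math

def fixed_effect_meta(effects, variances):
    """Hedges-Olkin fixed-effect meta-analysis: weight each study's effect
    size by the inverse of its sampling variance, combine into a weighted
    mean, and test that mean against zero with a Z statistic."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    d_bar = sum(w * d for w, d in zip(weights, effects)) / total
    se = math.sqrt(1.0 / total)   # standard error of the weighted mean
    return d_bar, se, d_bar / se

# Invented effect sizes for four "studies" (instructors) and their variances.
d_bar, se, z = fixed_effect_meta([0.25, 0.15, 0.30, 0.10],
                                 [0.04, 0.05, 0.03, 0.06])
```

Precise instructors (small variance, large weight) pull the weighted mean toward their estimates, which is the sense in which this differs from a simple unweighted average of the 32 effect sizes.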

References
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hedges, L.V., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.
