January-February 2008

Investigating Your Own Teaching


Over the last decade or so, faculty in a wide range of disciplines have become more interested in and open to the scholarship of teaching and learning. That scholarship has taken many forms, including personal accounts of experimentation and change, descriptions of recommended practices, and quantitative research on students’ gains. In this article, we describe a particular type of pedagogical scholarship, “practitioner research,” in which faculty research their own teaching.

With support from a grant from the National Science Foundation’s Course, Curriculum, and Laboratory Improvement program, we recently led a practitioner-research project in which a group of ecologists who teach introductory courses in environmental science and ecology conducted research on their own teaching.1 The rapid evolution of these faculty members’ thinking about how they teach and whether their students are learning impressed us. We share information about our project here in the hope that others will consider this type of scholarship.

Faculty Teams

We selected fifteen faculty members from a range of schools—research universities, four-year colleges, and community colleges—to participate in the practitioner-research project. We wanted teachers with different backgrounds and experiences who were willing to “do research on their own teaching,” even if they were not quite sure what that meant. Some had taught for a while and others were new teachers, but all had some experience using nontraditional approaches to teaching in lecture or lab (such as having students develop experiments based on their own observations in a lab session or having them “turn to their neighbor” to discuss open-ended questions in lecture).

We brought this disparate and energetic group together for a workshop during the summer 2005 annual meeting of the Ecological Society of America (ESA). Our goal was to introduce the idea of practitioner research, get the faculty talking in groups about the types of questions they might be interested in addressing in their courses, and provide examples of research designs that others had used. We hoped that teams focusing on similar questions would emerge and that these teams would continue to work together during the academic year. Although the participants seemed to be terrific teachers, none had done anything like this before, so we were nervous about the workshop. Were our expectations too high?

The outcome was better than we could have imagined. During the workshop, groups of faculty who wanted to study similar aspects of their teaching naturally emerged. One team wanted to assess whether students’ attitudes about environmental issues changed as a result of environmental science courses. Another focused on students’ abilities to interpret and make graphs with ecological data.

The faculty developed different instruments to gauge student learning. Many gave identical tests at the beginning and the end of the semester. (These pretests and posttests were anonymous and not part of students’ grades.) The test developed by the first team included essay questions that focused on students’ environmental attitudes—for example, “Do humans have the right to modify the natural environment to suit their needs?” Teachers in the second team developed tests that asked students to interpret different types of graphs and make their own graphs with raw data.

During the academic year following the summer workshop, the faculty carried out classroom research that they developed in consultation with each other. They collected and analyzed various types of data on student performance, pondered the meanings of their findings, and revised their research approaches. The following summer, at the 2006 ESA meeting, they presented posters describing their research and how it had affected their teaching. At that time, we also interviewed them about what they had learned from their experiences (these interviews are the source of the faculty comments quoted in this article). The following year, they repeated their studies and wrote research papers that were published in Teaching Issues and Experiments in Ecology, a peer-reviewed electronic publication of the ESA (online at http://tiee.ecoed.net).

Environmental Attitudes

Two of the faculty members in the project, Robert Humston and Elena Ortiz-Barney, examined how their students’ environmental attitudes and values changed after taking an environmental science course. Although they were teaching very different groups of students—Humston was working with cadets at the Virginia Military Institute, and Ortiz-Barney was teaching community college students at Phoenix College—they shared the hope that students in their classes would become more thoughtful about environmental issues. In particular, the two faculty members assumed that teaching approaches that used case studies and required students to take an active role in the class would help students understand different sides of environmental debates.

To find out if and why students' beliefs about environmental topics were different after taking their courses, Humston and Ortiz-Barney compared pre- and post-course responses to two surveys. The first of these, an existing survey called “The New Ecological Paradigm,” had previously been used in numerous comparative studies, but many of its questions seemed outdated and simplistic. Therefore, Humston and Ortiz-Barney also designed their own survey, which focused on “stakeholder conflicts” in environmental issues. In this survey, students responded to statements like the following: “The cost of preventing species introductions—or removing invasive species—is too high to consider it.” A student who “strongly agrees” with such a statement would be seen as viewing the management of invasive species from a predominantly economic perspective and would score a 5 in the “economic priority” category (likewise, “strongly disagree” would score 1).
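For readers who want to try a similar scoring scheme, the short sketch below shows one way to convert Likert-style answers into category scores such as “economic priority.” It is illustrative only: the item names and responses are hypothetical, not the actual survey instrument or analysis used in the project.

```python
# Minimal sketch of scoring a stakeholder-conflict survey by category.
# Item names, categories, and responses are hypothetical examples.

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

# Each survey item is assigned to a perspective category.
ITEM_CATEGORY = {
    "invasive_species_cost": "economic priority",    # e.g., "the cost ... is too high"
    "habitat_protection": "conservation priority",   # hypothetical second item
}

def score_responses(responses):
    """Average the 1-5 Likert scores within each category for one student."""
    totals, counts = {}, {}
    for item, answer in responses.items():
        category = ITEM_CATEGORY[item]
        totals[category] = totals.get(category, 0) + LIKERT[answer]
        counts[category] = counts.get(category, 0) + 1
    return {category: totals[category] / counts[category] for category in totals}

# A student who strongly agrees that control costs are "too high" scores 5
# on the economic-priority scale, as described above.
print(score_responses({"invasive_species_cost": "strongly agree",
                       "habitat_protection": "agree"}))
```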

Results of the two surveys indicated statistically significant shifts in environmental perspectives among students at both institutions. By the end of the semester, students better appreciated the complexities of environmental issues such as overfishing, deforestation, and species reintroduction. Humston and Ortiz-Barney concluded that this shift reflected a decrease in anthropocentrism among students and an increase in awareness that natural resources can serve purposes other than those that require their exploitation. They felt confident in the value of the survey they designed because its findings compared well with those of the established, well-researched survey.
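The article does not report which statistical test produced these results. Because the surveys were anonymous, individual students’ pre- and post-course answers could not be paired, so an unpaired comparison such as the Mann-Whitney U test is one reasonable choice. The sketch below uses invented scores purely to illustrate that kind of calculation.

```python
# Hypothetical pre- and post-course "economic priority" scores; the real data
# and the authors' actual analysis are not shown in the article.
from scipy.stats import mannwhitneyu

pre_scores = [4.2, 3.8, 4.5, 4.0, 3.6, 4.4, 4.1, 3.9]
post_scores = [3.1, 3.4, 2.9, 3.5, 3.0, 3.3, 3.6, 2.8]

# Anonymous surveys cannot be paired by student, so an unpaired rank test is used here.
statistic, p_value = mannwhitneyu(pre_scores, post_scores, alternative="two-sided")
print(f"U = {statistic:.1f}, p = {p_value:.4f}")  # p < 0.05 would indicate a shift
```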

Finding out why students’ views changed turned out to be much more difficult, though. Since both Humston and Ortiz-Barney actively engaged students through case studies, guided discussion, group work, and debates, they could not connect student gains to particular aspects of their teaching as they had hoped. Nonetheless, both learned from comments they elicited from their students. For example, Humston said that his students reported that “the effectiveness of the discussions was compromised when they got too unstructured.” As a result, he now “devote[s] more time [to] preparing” and is “quicker to redirect discussions away from unproductive tangents.” Ortiz-Barney also received more frank comments from students than in previous years, which made her think more deeply about dealing with controversial topics in class: “I’m actually more careful now in what I say, and I explicitly discuss bias, critical thinking, and the importance of evidence in my course. . . . I’ve also started surveying students in other courses on their opinions and attitudes toward science and hot-button issues.”

These two faculty members found that the classroom research confirmed their assumptions about changes in how students think about controversial environmental subjects. They did not identify a specific aspect of their courses that most influenced the students, which is actually a very challenging research question. Perhaps most important, their efforts resulted in more thoughtful feedback from students—especially in student responses to discussions about contentious subject matter, which in turn changed the faculty members’ behavior in the classroom.

Understanding Graphs

A second group of four faculty—Chris Picone of Fitchburg State University, Jen Rhode of Georgia College and State University, Laura Hyatt of Rider University, and Tim Parshall of Westfield State College—focused on students’ analytical skills. These faculty members were especially interested in students’ abilities to understand graphs. Graphing skills are fundamental not only to scientific understanding, but also to understanding everyday issues discussed in the media. As Picone put it, “If we want to train students to enter the world and evaluate empirical evidence correctly, and to see how [such evidence] can be abused, we need to train them to assess data in our courses. However, too many students are ‘math-phobic’ and freeze up at the sight of a graph.”

Although the four faculty members’ approaches varied somewhat, each tested students’ abilities before, during, and at the end of introductory-level environmental science and ecology courses. The faculty also changed their teaching methods to emphasize graphing skills: they had students work in small groups on interpretation of real data in the lecture component of their classes, asked students to make their own graphs with spreadsheets in the laboratory, and talked in class about the use of graphs in specialized scientific literature as well as newspapers. The results of these changes were eye-opening for the faculty who participated in the project. As Hyatt, one of the faculty participants, said,

I used to teach graphs at lightning speed, assuming that my science students knew what was going on with graphing or could pick it up quickly. I now know that this is absolutely not the case, and I’ve slowed way down. I ask students a lot more questions about what’s going on in their heads, and what the pictures tell them in words.

Picone also recognized that his assumptions about his students’ abilities were wrong: “Now I give more time to cover the basics of analytical and graphing skills, and I see the need to repeat, repeat, repeat some concepts. Scientists take for granted the ‘glasses’ through which we see the world, and our students often lack those glasses.”

The pretests and posttests reassured the teachers at all four institutions that students’ abilities to interpret graphs improved over the semester. For example, open-ended short-answer and multiple-choice questions were used to gauge students’ understanding of the patterns among variables on graphs. Prior to taking the courses, 25 to 60 percent of students could correctly describe the patterns among variables in a simple bar graph, but by the end, 75 to 90 percent could do so. With more complicated data, however, the findings were quite different. When asked to interpret graphs in which patterns were highly variable and less obvious, student responses at the beginning and end of the semester were essentially the same. In a paper published in volume 5 of Teaching Issues and Experiments in Ecology, the faculty comment on the implications of this finding for those who teach environmental subjects:

The ability to find patterns amid variable data is a difficult skill that deserves special attention in our courses, particularly because variation is the norm in ecological data. In introductory courses and textbooks, students get little exposure to noisy, highly variable data in graphs. . . . Ecological data, by contrast, are typically noisy because phenomena are influenced by multiple (and often unpredictable) independent factors such as climate. . . . Ecology and environmental science courses, perhaps more than other areas of biology or the physical sciences, provide valuable opportunities to practice working with these kinds of data sets.
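One way to give students that practice is to have them plot data in which a real relationship is partly hidden by variation. The sketch below generates a hypothetical example of such a data set; the numbers are invented and are not data from the project.

```python
# Hypothetical noisy ecological data: plant biomass rises with rainfall, but the
# trend is partly obscured by variation from unmeasured factors.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
rainfall = rng.uniform(200, 1200, size=40)               # annual rainfall, mm (invented)
biomass = 0.5 * rainfall + rng.normal(0, 150, size=40)   # underlying trend plus "noise"

plt.scatter(rainfall, biomass)
plt.xlabel("Annual rainfall (mm)")
plt.ylabel("Plant biomass (g per square meter)")
plt.title("Practice data: can students find the pattern amid the noise?")
plt.show()
```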

Despite their different teaching situations, these faculty learned similar lessons. They all found that rather than give students “canned” data or “cookbook” activities, they could give students real data and ask them to figure out how to analyze and display those data. The faculty also discovered the importance of being clear about what students need to learn. This is particularly important when faculty work together as a team and need both to accurately measure student learning and to compare results with others.

Focus on Teaching

A core goal of a practitioner-research project like ours is for teachers to develop a much clearer understanding of what they really want their students to learn—and whether their teaching practice accomplishes those objectives. For example, professors often identify “better critical thinking” as a course goal. But when pressed for evidence of such learning, most will admit that they have none and are not quite sure how they would even get it. In the process of developing measures that provide feedback about students’ skills and knowledge, the teachers we worked with had to think carefully and precisely about what they most wanted their students to learn and how their teaching would stimulate that learning.

In the interviews, the faculty in our project repeatedly described earlier conceptions of their courses as fuzzy and unfocused. Now they were more aware of their own thinking about the course design. We believe that this awareness, or “metacognition,” is a key indicator of the success of this project. As Alan Griffith, one of our research practitioners, said, “Now that I can verbalize these ideas, I can apply them and communicate them to students as well as colleagues.”

When we describe this project to colleagues, we are often asked how researching one’s teaching is different from more traditional assessment. Although evaluation and practitioner research clearly overlap, there are some important differences between them. Most crucial is the fact that in practitioner research, the faculty member is the source of the questions addressed and of the particular investigative approaches used. The teachers in our project selected specific topics that greatly interested them. This allowed them to explore these questions in depth and obtain information directly related to their concerns.

A second difference between practitioner research and evaluation is the public nature of the former process. Our faculty worked in teams, meeting in several workshops and remaining in contact during the semester through conference calls and e-mail. As they verbalized their findings and frustrations, they provided invaluable support and help to each other. As with traditional scientific research, the faculty also presented their findings in a poster session at a meeting and submitted papers to journals.

Without a doubt, doing this type of investigation is very challenging for a faculty member. For practitioner research to be successful, teachers must work together on common questions, issues, and goals. If you are interested in such research, you should find a colleague or two with whom you can work. Trust is essential, because colleagues need to admit to failures and frustrations. In addition, we suggest that teachers start small with specific questions that they can address with the assessments they develop. Many instruments are available in the published pedagogical literature, as are good, practical research models. As in science, making new, more systematic observations can lead us to new questions and give us something concrete to talk about as our investigations move forward.

Note

1. Teaching Issues and Experiments in Ecology is supported through several grants from the National Science Foundation (DUE 0127388, DUE 0443714, and DUE 9952347).

Charlene D’Avanzo is professor of ecology in the School of Natural Science at Hampshire College and lead principal investigator on the Teaching Issues and Experiments in Ecology project. Her e-mail address is cdavanzo@hampshire.edu. Deborah Morris is director of program development at the Educator Preparation Institute at Florida Community College at Jacksonville. Her e-mail address is damorris@fccj.edu.