Not long ago, I was in one of those now ubiquitous committee meetings on institutional assessment. As I listened to the speaker earnestly describe objectives and outcomes and a vast three-dimensional matrix filled with yet-to-be-obtained data, I cringed and sank lower in my seat. I hoped that my wincing would be mistaken for what it was fast becoming: a throbbing headache.
Let me be clear that, the title of this column notwithstanding, I am not against assessment per se. Indeed, in disciplines in which specific skills are necessary for success, those skills should be and usually are assessed directly within the discipline. Practical examinations in medical schools do this, for example. The bar exam does it for lawyers, the CPA exam for accountants. In disciplines that require a certain habit of mind rather than a specific set of skills, students’ abilities are also assessed within their disciplines, albeit less directly. These assessments are useful and necessary.
Furthermore, I am not against institutional goals. The idea that an institution has goals independent of the goals within the disciplines seems to me rather charming. These goals express the very essence of higher education, the idea that the whole is greater than the sum of its parts. When you go to college, you not only learn a particular discipline, you also get that most coveted of intangibles: an education. Institutions usually set themselves quite lofty goals, like “to produce engaged citizens” or “to create productive members of society.” These goals are often codified in a mission statement and as such intentionally made a bit vague. They are, in any case, certainly admirable.
No, what troubles me is the combination of assessment and institutional goals: institutional assessment. The idea that institutional goals of this type are even assessable is clearly absurd; in assessment parlance, they lack measurable outcomes. Thus the attempt to assess them invariably results in reducing lofty aspirations to quantifiable outcomes. Unsurprisingly, the quantifiable outcomes usually miss the point of the lofty aspirations. Worse, they distract us from the fact that we once had lofty aspirations, and eventually we believe that the watered-down outcomes were our goals all along. We waste time and resources dumbing down formerly admirable goals to produce data that, ultimately, tell us nothing.
A friend and former colleague of mine believes that the current obsession with quantitative measures can be traced back to nineteenth-century empiricism. The once revolutionary idea that anything can be subjected to the scientific method and demonstrated quantitatively is indeed seductive. Most people find it more difficult to argue with numbers than with words. But while scientific inquiry has led to many great advances, we as a culture have simply taken our obsession with numbers too far.
Don’t get me wrong. Checking regularly to be sure that curriculum and pedagogy are up to current standards is a great idea. Keeping track of student success rates and working to improve them should be priorities at every institution. But I am here to tell you, and I say this as a mathematician: not everything is quantifiable. It’s time to stop this nonsense and get on with the business of educating.
Kira Hamman is a mathematics instructor at the Mont Alto campus of Pennsylvania State University. Academe accepts submissions to this column. Write to firstname.lastname@example.org for guidelines. The opinions expressed in this column are those of the author and do not necessarily represent the policies of the AAUP.