From The General Secretary: No Undergraduate Left Behind?
By Ernst Benjamin
Washington-based higher education associations have derailed efforts by the Department of Education to require standardized outcomes assessment of undergraduate education similar to that imposed on K–12 schools and have almost persuaded Congress to protect institutional autonomy. That’s the good news. The bad news is that two associations representing public colleges and universities, in seeking to fend off mandates, have adopted a “Voluntary System of Accountability” (VSA), while other national associations are adopting other outcomes measures. Moreover, as I write, the regional accrediting associations have encouraged Congress to allow accreditors to mandate specific assessment measures.
Externally mandated assessment measures are unlikely to benefit from faculty collaboration or review. But an even greater danger arises from the extent to which assessment is based on a common set of numerical indicators. Institutions and accreditors had minimized the damage of outcomes measures by allowing each college to define its own approach. The new “voluntary” schemes allow for greater variation than the Department of Education proposal, but publication of standardized scores will still foster standardized teaching and learning.
The current system of class-by-class, instructor-by-instructor grading provides an outcomes measure consistent with educational diversity. Critics complain that grading is varied and inflated and so fails to provide accountability. But we know that—despite the diversity of schools, students, and teachers—high school grades are better predictors of college success than the SAT or ACT. Moreover, Department of Education research has shown that college grade inflation is a myth. What is the evidence that standardized measures of college performance will be more accurate than grades at measuring student learning?
Similarly, regional accreditation provides intensive case studies of qualitative institutional performance. What it lacks is transparency. Publication of accreditation reports would better educate the public than outcomes measures. Publication would also permit the informed scrutiny that would gradually improve the reliability and validity of accreditation. Conversely, secret reports have discredited accreditation and fostered the demand for outcomes measures.
In designing their voluntary proposals, associations are also dropping some useful “input” measures. For example, the VSA data would include student demographics and funding but not faculty mix (full-time versus part-time faculty and graduate assistants), indicators of which types of faculty teach which levels of instruction, or instructional expenditures. The widely maligned U.S. News & World Report rankings actually rest on better faculty data, including faculty mix and salaries.
The associations’ outcomes assessment proposals often include value-added measures based on “critical thinking,” essay writing, or a “performance task.” It is possible to improve student scores on such tests through training. Training to the test, rather than education, may be what many colleges and universities will be forced to do, just as many K–12 schools are now doing.
Outcomes measures rarely make sense as a basis for institutional funding. Cutting funding to “underperforming” schools simply worsens their performance, and few policy makers have been willing to “reward” underperforming institutions. Nor would outcomes scores assist students and parents trying to find a fit between a prospective student and a specific institution. Some of the proposed process data and surveys could help, especially measures of student engagement. Other proposed measures—such as capstone programs including senior essays, seminars, and community or vocational projects—do contribute to education but do not lend themselves to reliable comparative scores, and, like measures of engagement, speak more to process than to outcomes.
The bottom line is that the most widely promoted measure is graduation rates. These rates vary so broadly with student demographics, student employment, and institutional characteristics that only a complex regression model could validly interpret them. Journalists and legislators are not going to wade through alternative regression models. They are going to observe of colleges and universities, as they do of K–12 schools, that one scored 62 percent and another 66 percent. Moreover, the easiest way to increase graduation rate scores is to select students who are more able, better prepared, and so economically advantaged as not to be required to work or stop out. So, with the new voluntary outcomes measures, will institutions really continue to admit and then better educate at-risk undergraduates, or will tens of thousands of these at-risk students be left behind by a system obsessed with outcomes scores?