The Academic Dilemma of Data-Driven Decision-Making

Colleges and universities face rising demands for data.
By Patricia McGuire

A recent story in Inside Higher Ed about a proposed reorganization of faculty positions at Gettysburg College quoted an adjunct professor who said, “The new administration enjoys looking at the ‘analytics’ without taking into account the human side of running a college.”

That quotation echoed concerns I heard from faculty members thirty-five years ago during my own first encounter with academic data as a newly minted president at Trinity College (now Trinity Washington University). When I took office in 1989, I quickly learned that despite higher education’s centuries-long devotion to scientific methods and rational analysis, the idea of using data for planning and assessment was not high on anyone’s list of essential management practices. At that time, Trinity was a case study in the consequences of the “data-free” environment that was the reality for too many small private colleges struggling to keep up with the massive changes sweeping through higher education at the end of the twentieth century.

By the end of the 1980s, as a consequence of the rapid transition to coeducation at previously male colleges and the large shift away from private colleges and toward state universities, Trinity’s enrollment had dwindled to about three hundred full-time traditional-aged students in our historic undergraduate women’s college, the program that most faculty members and alumnae saw as the “real” Trinity. I had been a board member prior to my appointment as president, but in my very first week on the job I was stunned to learn many things I should have known as a trustee, including the fact that Trinity’s adult studies and graduate programs were keeping this venerable institution afloat. Why didn’t the trustees know this? The few reports the board received were not disaggregated by program and were muddled by disagreements among various administrators about whether the data were accurate. At that time, most personnel at Trinity did not have access to computers, and most data were collected and analyzed manually.

The board mandated a strategic planning process, starting with developing a response to an accreditation concern that the spread of academic majors (about fifty majors and minors) was too broad for the small size of the student population and available faculty; a large number of underenrolled major courses required spending more on overloads and adjuncts, contributing to a growing deficit. The accreditors were urging Trinity to reorganize extensively to avoid potential closure.

I started with what I thought was a very simple issue: the number of students enrolled in each major program and how those data compared with national trend data for majors. Report in hand, I asked to meet with the Curriculum Committee, which was chaired by a senior historian who was also the academic dean; other senior faculty from history, English, and art were also on the committee. All had been my teachers when I was a student at Trinity barely fifteen years previously.

In that age before PowerPoint, I had some hand-drawn charts showing that, over a period of years, the single-largest major program at Trinity had become business administration, supplanting the dominance of history and English from earlier periods by a very large margin. As I presented the data, one of my beloved former teachers on the committee grew increasingly agitated and asked, bitterly, “Why are you telling us this? We thought you were appointed to restore the liberal arts, not ruin them!”

I replied that I was merely presenting the data on major enrollments that had occurred prior to my appointment. Stepping onto thinner ice, I pointed out in another chart that Trinity’s trend lines for major programs were consistent with trends cited in data from the National Center for Education Statistics (NCES) that showed that majors like history and English were declining while business, health care, and education were all growing nationally.

“Enough!” hissed the academic dean, giving me a cold stare full of disappointment in her former student. “We are not interested in trends. Trinity is not ‘trendy’!”

Stymied by the Curriculum Committee and pressured by the board to adopt a policy requiring course enrollment of at least ten students, I took the data directly to the full faculty—including financial data showing the cost per student per program. Controversy ensued. Finally, three years after starting the fraught discussion, the faculty voted to reduce the spread of majors from fifty to sixteen. The list of majors that faculty members chose to keep in 1992 was virtually the same as a list I made privately in 1989, but they had to come to the same conclusions on their own time.

Academic versus Managerial Culture

Tensions between the managerial culture of university administration and the intellectual culture of academic life have long been a part of the enterprise of higher education, often leading to conflicts over goals and objectives as well as over methods and measurements for pursuing our work together and assessing outcomes. While everyone may agree that we all want what’s best for our students and the institution, the question of what’s best often devolves into struggles over how we measure success and who has the right to declare victory. A faculty member may claim success in seeing the five students in her major program graduate, while a president may see failure in a major that has gone on for years with only five students enrolled.

Data often define the battleground when academic and administrative interests clash, and not just enrollment data. The real struggle is between investment-return calculations on the business side and insistence on the academic side that qualitative and humanistic considerations should be present, if not paramount. Regardless of enrollment numbers, faculty members will argue, a true liberal arts core requires robust institutional support for disciplines like languages, humanities, arts, and sciences, even if major enrollments are small.

Faculty members sometimes also claim that more students would enroll “if only” the administration focused its admissions efforts on advertising their program more robustly, while the enrollment chief will reply that market data indicate that no amount of advertising will improve enrollment in the program. Meanwhile, the federal government wants to know if those five students are employed in jobs that will guarantee their ability to pay off their student loans, and the accreditors want to know if the students learned what they need to know to succeed in their chosen professions.

The Rise of Data

In the last two decades, the data battles have grown exponentially as a result of changing expectations and data-collection practices at the federal and state levels, in accreditation, and even among private agencies, including major foundations.

“Data-driven decision-making” and its twin, “evidence-based practices,” became prominent phrases in educational circles in the early 2000s. While higher education had varying approaches to the development and use of internal collegiate data as early as the nineteenth century, circumstances at the start of the twenty-first century fueled the rapid expansion of data collection and analysis, with far-reaching consequences for colleges and universities. (The Association for Institutional Research: The First 50 Years provides an excellent historical record of the development of the field of institutional research.) Spurred in part by educational reformers’ push to improve K–12 outcomes, which led to the 2001 No Child Left Behind Act, the concepts quickly moved into federal oversight of higher education as lawmakers and the public demanded greater accountability for collegiate outcomes, particularly in light of the sharply rising cost of a college education. This movement, in turn, drove accreditation agencies to adopt more aggressive oversight practices largely summarized in detailed evidentiary expectations for assessment data on academic programs, learning outcomes, student services, and institutional effectiveness.

Major foundations, notably the Bill & Melinda Gates Foundation and similar funders that are deeply engaged with educational reform and accountability movements, also helped to drive the collegiate educational data movement. Decades earlier, in response to growing pressure from state and federal agencies, accreditors, and other foundations, the Carnegie Foundation for the Advancement of Teaching worked with elite institutions to develop their institutional research potential and even awarded a grant to the American Council on Education in 1956 to expand the emphasis on data collection in colleges and universities.

Governing boards increasingly expect presidents to adopt data-driven management practices with measurable results that are to be reported to the board at regular intervals. Some of the data requirements are self-imposed measurements for strategic planning, budget and personnel analysis, facilities utilization, and similar operational functions. But boards also hold presidents accountable for compliance, and the most onerous data requirements are in response to the expansive data demands of federal and state agencies, accreditation agencies both comprehensive and specialized, foundations and other funders, banks and credit-rating agencies, media sources like U.S. News & World Report, and even the National Collegiate Athletic Association.

The Behemoths of Data Demands

While data demands emanate from many sources, institutions spend the most time responding to the extensive data demands of three behemoths: the federal government, accreditors, and lenders. The following brief discussion touches on just a few of the data requirements that faculty members may find particularly burdensome.

US Department of Education: IPEDS

The Integrated Postsecondary Education Data System (IPEDS) is the mother of all higher education data systems, and the size and complexity of the data collected grow with each new regulatory interpretation. Compliance with the myriad data reports for IPEDS is required for a college or university to participate in the Title IV federal financial aid programs. The NCES oversees the IPEDS program.

IPEDS collects annual data on enrollment and demographics, financial aid, cost of attendance, finances, faculty and administrative personnel, retention, and graduation rates, among other categories of data. The data are publicly available through the NCES system and on the federal College Scorecard. Many researchers and media sources present the higher education datasets in different reports. For example, the Georgetown University Center on Education and the Workforce publishes many reports that rely on IPEDS data, including a large report on the return on investment across time for every institution, and, more recently, a report with the scathing title Buyer Beware: First-Year Earnings and Debt for 37,000 College Majors at 4,400 Institutions. And the AAUP, of course, draws on IPEDS data in its Annual Report on the Economic Status of the Profession and other research reports.
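For readers who want to look at these public data directly, a minimal Python sketch of a College Scorecard query follows; the endpoint, parameter names, and field names reflect my reading of the publicly documented College Scorecard API and should be treated as assumptions rather than a verified recipe, and a free api.data.gov key is required.

```python
import requests  # third-party: pip install requests

# Query the public College Scorecard API for IPEDS-derived data.
# The endpoint, parameters, and field names below are assumptions
# based on the published API documentation; "YOUR_API_KEY" is a
# placeholder for a free key from api.data.gov.
BASE = "https://api.data.gov/ed/collegescorecard/v1/schools"
params = {
    "api_key": "YOUR_API_KEY",
    "school.name": "Trinity Washington University",
    "fields": "id,school.name,latest.student.size",
}

resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()
for school in resp.json().get("results", []):
    print(school)
```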

While most of the IPEDS data issues are left to institutional research officers, several new regulatory requirements have the potential to cause a good deal of concern for deans, program chairs, and faculty members. The new rules on “Financial Value Transparency and Gainful Employment” create financial measurements to assess and report debt levels of graduates and earnings for every academic program at every institution of higher education starting in 2024–25. Much of this information is already available through the NCES database and on the College Scorecard, and it is the basis for the Buyer Beware report. Future data will go even further, using a “debt-to-earnings ratio” (a formula already used in existing gainful employment regulations that assess degrees in for-profit institutions and certificate programs in all institutions based on an analysis of whether the annual federal loan repayment burden is equal to or less than 8 percent of annual earnings) and an “earnings premium test” (which measures whether at least half of the graduates of an academic program have higher earnings than a typical high school graduate in the state labor force who never attended college) to promote “financial value transparency” for every academic program.
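To make these two tests concrete, here is a minimal sketch in Python of the formulas as just described; the function names and figures are hypothetical, and the actual regulations define the measures over median cohort data with additional provisions not shown here.

```python
def debt_to_earnings_passes(annual_loan_payment: float,
                            annual_earnings: float) -> bool:
    """Debt-to-earnings ratio as described above: the annual federal
    loan repayment burden must be no more than 8 percent of annual
    earnings. (The full regulation adds provisions omitted here.)"""
    return annual_loan_payment <= 0.08 * annual_earnings


def earnings_premium_passes(graduate_earnings: list[float],
                            hs_benchmark: float) -> bool:
    """Earnings premium test as described above: at least half of a
    program's graduates must earn more than a typical high school
    graduate in the state labor force who never attended college."""
    above = sum(1 for e in graduate_earnings if e > hs_benchmark)
    return above >= len(graduate_earnings) / 2


# Hypothetical program: $1,800 annual loan payment against $40,000 earnings
print(debt_to_earnings_passes(1800, 40000))   # True: 1800 <= 3200
# Three of four hypothetical graduates out-earn a $36,000 benchmark
print(earnings_premium_passes([38000, 41000, 52000, 29000], 36000))  # True
```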

In the September 2023 release of the regulations, the US Department of Education made it clear that higher education provides many social and economic benefits, including higher tax revenues: “The improvements in productivity and earnings lead to increases in tax revenues from higher earnings and lower rates of reliance on social safety net programs. These downstream increases in net revenue to the Government can be so large that public investments in higher education, including those that Congress established in Title IV [of the Higher Education Act] . . . more than pay for themselves.”

Having established its premise that the federal government expects a high financial return on its investment in student financial aid, the document goes on to say, “Too many programs fail to increase graduates’ wages, having little or even negative effects on graduates’ earnings. At the same time, too many programs charge much higher tuition than similar programs with comparable outcomes, leading students to borrow much more than they would have needed had they chosen a more affordable program.” Then the document states the real reason why the rule is necessary: “Financing the costs of postsecondary education and training with Federal student loans creates significant risk for borrowers and the Federal Government (as well as taxpayers). In particular, if students’ earnings after college are low, then they are likely to face difficulty in repaying their loans and will be more likely to default.”

In short, since the federal government is investing billions of taxpayer dollars to subsidize institutions of higher education and our programs, it wants to make sure that it will recoup this investment both in loan repayments and in higher tax revenues. And, influenced heavily by a decade of “data-driven decision-making” and “evidence-based practices,” the government has determined that the data at a broad institutional level are insufficient to protect its investment interests. The real data, in the view of the NCES data experts, occur at the level of academic programs where student choices influence their lifetime earnings—at least according to this regulatory structure.

These rules raise significant concerns for institutions and faculty members, particularly those that carry a large share of the educational mission to prepare students for careers that may not be lucrative (teaching, social work, counseling, fine arts) but that are essential to a healthy, functional society. Colleges and universities cannot control the economics of wages in the larger society—we would cheer lustily if teachers were paid at least as much as petroleum engineers!—but the new regulations seem to imply that, somehow, the institutions may be at fault for the low wages of graduates of some programs. Moreover, the data do not take into account the real effects of gender-based and racial discrimination in employment, equity wage gaps, and the fact that minority-serving institutions enroll students who are more severely affected by discrimination in employment than the graduates of elite universities, who fare quite well on most wage surveys.

Consequently, the new regulations raise the specter of program closures for several reasons, including the fact that the data can discourage students from enrolling, as well as the danger that some institutions may choose to focus only on programs that produce higher wage returns.

What can faculty members do about this kind of federal data collection and dissemination? First, they should know the data about their own programs and their graduates. Faculty members should not hesitate to provide a response to federal data that establishes a more complete picture of how the program serves the community, how the graduates choose their career pathways, and why the discipline is essential for a healthy society. The faculty should work with the institutional research office, career services, the alumni office, admissions, and other relevant departments to be sure that the whole story of the program is accurately portrayed, not just a few data points.

Accreditation

Accreditation, both institutional and specialized, imposes an enormous burden of data collection and analysis to provide evidence of fulfillment of standards that are intended to support public accountability for higher education. Faculty members are deeply involved in accreditation work, particularly for program reviews and learning outcomes assessment.

Over the three decades that I have been involved in higher education administration, in addition to overseeing many accreditation reports and visits at Trinity, I have been an accreditation team member or chair for about twenty-five institutions. I have seen the increasing focus on evidence-based assessment practices, sometimes to the chagrin of faculty members who feel that the somewhat mechanical demands of accreditation assessment practices ignore the humanistic nature of teaching and may even infringe on their freedom to teach and assess students as they deem appropriate. Faculty members in the liberal arts seem to struggle with the perceived constraints of some assessment expectations more than those in some of the professional programs where specialized accreditation imposes even more stringent requirements on course syllabi, data collection, and analysis.

Originally, accreditation was a largely voluntary peer-review process in which visitors from similar institutions worked with institutions under review to facilitate continuous quality improvement. The experience, while academically rigorous and thorough, depended heavily on the expertise of the reviewers, who had some flexibility in making qualitative judgments and offering constructive feedback to institutions.

The nature of accreditation began to change when the federal government linked access to Title IV financial aid to institutional accreditation; with that linkage, the accrediting agencies themselves had to gain federal approval. Across the last two decades, with the rise of the accountability movement in education and the demand for greater application of data-driven assessment, the accreditors faced increasing criticism for not being tough enough on some institutions. Consequently, the Department of Education imposed more stringent requirements on the accreditors, who, in turn, passed the federal demands along to institutions through new accreditation standards and increasingly complex evidentiary expectations. The federal demand for greater accountability, discussed above in relation to IPEDS and financial transparency, also drives accreditation today. Gone are the days when the institutional report could offer lovely qualitative descriptions of programs and outcomes without presenting hard data to support the claims of teaching and service effectiveness.

Faculty members complain about the growth of administrative bureaucracy, but some of that “bloat” is a consequence of the significant demands of compliance. The faculty must participate in the assessment work of accreditation, but at most colleges and universities, data collection and analysis occur in institutional research offices, where staff work with deans and program chairs, steering committees, and accreditation work groups. With tens of millions of federal dollars at stake for most colleges and universities, doing the assessment work of accreditation well is not optional. The best approach for faculty members is to see accreditation work as an opportunity to strengthen their programs, to tell a great story about their effectiveness in teaching and innovation, and to make sure that their institutional leaders also have the data and analyses necessary to tell those stories with confidence.

Banks and Credit Rating Agencies

Most colleges and universities today carry some debt to finance construction or projects of significance for the institutions. Securing and managing debt requires a great deal of data collection and analysis in ways that can mirror the stringency of accreditation. Whether the institution is taking out a private loan from a bank or seeking a credit rating from Moody’s or Standard & Poor’s for a public issue, the lender will deeply scrutinize all institutional operations, academic as well as nonacademic, and will take a hard look at the revenue streams that will underwrite the loans or bonds.

Some institutions with already slim margins are able to secure loans only by agreeing to covenants that may require maintaining a certain level of cash reserves and meeting debt-service ratios, with compliance reported annually. Managing debt is not a one-time process; the reports are due regularly across the lifetime of the loan, which could span many decades.
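As an illustration of what such covenant mathematics can look like, the sketch below checks a hypothetical loan agreement against two common covenant forms, a minimum debt-service coverage ratio and a minimum days-cash-on-hand level; the thresholds and figures are invented for the example, since actual covenants are negotiated loan by loan.

```python
def debt_service_coverage(net_operating_income: float,
                          annual_debt_service: float) -> float:
    """Debt-service coverage ratio: operating income available to
    cover one year of principal and interest payments."""
    return net_operating_income / annual_debt_service


def days_cash_on_hand(unrestricted_cash: float,
                      annual_operating_expenses: float) -> float:
    """Days of operating expenses the institution could cover
    from unrestricted cash and reserves."""
    return unrestricted_cash / (annual_operating_expenses / 365)


# Hypothetical covenant: DSCR of at least 1.2 and at least 90 days cash
dscr_ok = debt_service_coverage(3_600_000, 2_500_000) >= 1.2   # 1.44 -> True
cash_ok = days_cash_on_hand(12_000_000, 40_000_000) >= 90      # ~109.5 -> True
print(dscr_ok and cash_ok)
```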

Enrollments in 2023 are shrinking, and lenders are asking even more questions about the ability of institutions to manage their existing debt; new debt may have even more requirements. At the same time, colleges and universities are pressed to keep modernizing facilities and the technology infrastructure to stay attractive to shrinking or changing student markets. Taking on debt is essential to keeping the university’s infrastructure current, but it also imposes constraints that can affect academic budgets if revenues fall short of expectations.

Faculty members may feel somewhat removed from the analytics of debt management, but given the financial challenges that many institutions face today, they need to be informed about the key elements of debt that put pressure on the operating budget. Presidents and chief financial officers should take the time to educate faculty members about the overall financial picture of the institution, including debt obligations; sharing such information should be routine, not just something that happens during a fiscal crisis. While each case may be different, when institutions start looking to cut programs and personnel, there could well be a debt-management issue underneath the “strategic reorganization.”

A Data-Driven Institution

Today, after some reorganization in the 1990s, Trinity has become an immensely data-driven institution. Our largest major programs are in nursing and other health professions, psychology, business, education, and biology. The liberal arts disciplines remain central to the strong general education program that is required of all students in all majors. Trinity in 2023 is both a predominantly Black institution and a Hispanic-serving institution, a major transformation from the days when the majority of our students were white Catholic middle-class women. Our students depend heavily on Pell Grants, federal and state support, and generous scholarships provided by many benefactors.

Across the three decades since the academic dean warned me against being “trendy,” we have learned that telling our story through clear and compelling data helps to keep the institution strong and focused. Because Trinity serves a majority population of low-income students of color, most of whom are first-generation college students, we know we must pay close and careful attention to those key data points that are particularly challenging for this population: persistence and completion, net tuition, cost of attendance, debt levels, and postgraduate earnings.

At the same time, we have also learned that random factoids alone are a real disservice to our students and faculty members, so we try to provide context to help our public constituencies understand how our mission makes interpreting our data quite different from data use at larger, wealthier institutions. As we anticipate a new era of artificial intelligence driving even larger and more complex datasets, we need to be advocates for justice in the use and interpretation of data whose measures might be rooted in deeply biased historic patterns and practices—for example, the IPEDS graduation rate that assumes that today’s students attend college in the same way that students attended in the 1960s, or the current approach to earnings data that ignores gender-based and racial discrimination in the workplace.

Faculty members are right to insist on a human-centered approach to data and their use in decision-making. This does not mean rejecting the use of data wholesale but rather understanding the people and populations that are the subjects of the data collection and making sure that their stories, interests, and needs are central considerations in our institutional data-based decision-making.

Patricia McGuire has been the president of Trinity Washington University in Washington, DC, since 1989. Her email address is [email protected].