Bad Data Are Not Better Than No Data

A guide for radically inserting ourselves into decisions about educational technologies.
By Martha Fay Burtis and Jesse Stommel

Educators shouldn’t sit back and passively embrace every new technology. The purpose of education is precisely to ask hard questions. This is especially true of technologies that become ubiquitous at our institutions, as when colleges and universities purchase tools that outsource basic functions to external, for-profit companies. In a 2021 piece for Hybrid Pedagogy, “Counter-friction to Stop the Machine: The Endgame for Instructional Design,” we wrote about “critical instructional design,” thinking specifically about the models and tools for course design that so many colleges and universities have tacitly accepted as good and necessary. We argued, “Our investment in technology must be critical, careful, cautious; the code inscribed in our tools must be in conversation with our mission.” But the project of critical instructional design needs to work within an even larger frame than the course or the assignment; we need to be asking these hard questions about the very structures of our educational institutions.

Ultimately, educational institutions are collections (ideally, communities) of people. But increasingly, technology companies are treating educational institutions as conglomerations of data, reducing the human teachers, staff, and students to bits and binary. Too many of these companies are more interested in selling solutions to problems of data than they are in genuinely supporting the people represented by those data.

The companies' (and university administrations') presumptions seem to be that extensive data collection is necessarily good (or benign), that “data-informed decision-making” is the best kind of decision-making, that bad data are better than no data, and that outside firms are better at collecting and parsing institutional data than the institutions themselves.

The Example of EAB

Several years ago, we found ourselves working in a division of teaching and learning technologies at a regional public university in the United States. In our roles, we were responsible for helping to develop faculty and student digital literacies. This work took many shapes, including faculty development in the form of workshops and colloquia, one-on-one support for instructional design and curricular development, and tutoring and consultation for students working with and through technology. In all of our partnerships, we emphasized the need to develop a critical approach to the adoption and use of technology. Our goal was to build confidence among faculty members and students that they could make deliberate, intentional choices about which technologies to use, how to implement them, and, ultimately, what to share about themselves through this work.

One of the flagship programs we administered, Domain of One’s Own, was premised on the notion that every member of our learning community needed and deserved the opportunity to understand the web and the networked world in which we live as a complicated but emergent space that they could actively contribute to and build, rather than as a technological juggernaut owned by faceless corporations, publishers, and social media platforms and over which they had no control.

Toward the start of one semester, we were invited to participate in a number of meetings with a new vendor, EAB (formerly the Education Advisory Board), to review a student-success platform that included a mobile app for students and a customer relationship management (CRM) tool for faculty. We entered those conversations believing we were there to offer our own insight and expertise regarding the use of technologies at our university, not just in the classroom but throughout the institution, and in particular to ensure that the values we were espousing as part of our students’ curricular experiences were echoed in their experiences with the tools the university provided to help ensure their success.

Instead, we discovered that the choice about adopting this platform had already been made, and there was little opportunity to engage meaningfully with EAB’s representatives about the misalignments we observed. When we explained that a tool for gathering student data through mechanisms that felt opaque and obscure sat in direct opposition to our pedagogical approach to technology, representatives assured us that students at other institutions never raised questions about the use of their data, so we needn’t worry. When we objected to the fact that students’ profile pages in the CRM were dominated by a data dashboard that displayed, in extra-large font, the number of Ds or Fs a student had received, the number of courses they had repeated, the number of times they had withdrawn from a course, and their overall GPA, we were met with silence. We saw the design of this system as dehumanizing students by reducing them to (failure-focused) data points and grade-point averages; EAB saw this feature as a way of making clear that these data were the most useful things we could or should know about a student.

Questions to Ask

What we found most disconcerting about those meetings with EAB representatives was that core questions about the philosophies underlying the products and the data they were collecting were considered to have been asked and answered before we even showed up in the room. Most of the conversation instrumentalized our work as technologists and teachers by focusing on the how of tracking and retention rather than the why. Driven by both careful skepticism and genuine curiosity, we were interested in deeper questions, but the vendor never took our concerns seriously enough to explore them.

There are questions we wish we could have asked and ones we think anyone should ask when a tool like this is being considered for adoption. While our story thus far has been about EAB, our hope is that faculty, staff, and students can (and will) ask the following questions about any technology their institution might purchase, adopt, or compel staff and faculty to use.

Why are we outsourcing this function to an external for-profit company? What gap are we trying to fill that can’t be filled by people at our own institution?

The problem with outsourcing is not just the often massive amount of money institutions spend when they enter into these contracts but also the way these deals are made at the expense of the institution’s own experts and practitioners. Turnitin, for example, has been adopted by “over 16,000 academic institutions, publishers, and corporations,” according to its own website, in spite of the fact that many experts in composition (and entire composition programs) have pushed back on the narrative that plagiarism is on the rise, have pointed out that plagiarism is usually unintentional, and have critiqued the way policing student writing with plagiarism-detection tools short-circuits more meaningful approaches to writing pedagogy. When we adopt a tool, especially one used across an entire institution, do we adequately consult the folks at our own institution who are best positioned to offer perspective on the tool or its approach? Too often, the answer is “no.” Too many times, we ourselves have been consulted in these situations only after a purchase has been made or a partnership has been forged.

When a technology has become ubiquitously adopted, how does (or can) an institution productively resist it?

One of the most notable ironies about the adoption of EAB by colleges and universities to address issues of student success is how pervasive its presence is throughout the higher education sector. Thousands of institutions contract with EAB to provide advice on strategy, marketing, and enrollment; student success; data analysis; and diversity, equity, and inclusion. As these colleges and universities seek solutions to boost enrollment, increase student success and persistence, and close the graduation gap, they are each turning to the same company for answers. What does it mean that a single company has become the default provider of these solutions? What does it mean that EAB’s research is built upon the data that each of our institutions is contributing through our partnership with the company? It becomes extraordinarily difficult, if not impossible, to resist a tool after its adoption has become this widespread.

What data are worth collecting? What data should we never collect?

As we mentioned, we were especially concerned in our conversations with EAB about how GPAs and grade data in general were framed in relation to student success, retention, and equity. The company’s representatives, and even other people in the room, didn’t seem willing to thoroughly inspect the assumption that grades are a proxy for success or the ways that our culture of grades and assessment in education might have a direct (and directly negative) effect on retention. Additionally, algorithmic retention software commonly undermines student privacy by collecting what we would consider personal data: what events students are “tapping” into, when they purchase their books for a course, and even how they move around campus. Some data, when collected and retained, pose a direct security risk to students; as we saw with students protected under the Deferred Action for Childhood Arrivals policy, data collected with good intentions at one point can later be weaponized.

Beyond beautiful charts and graphs, what is the actual origin story of the data that are being collected and analyzed? What alternative stories could be told?

Anyone who has sat in on a presentation from EAB knows that the company has no shortage of well-designed charts and graphs to tell us what is happening in our sector of higher education. These graphs might warn of the impending demographic cliff, chart the return on investment of a college degree, or map the regional outlook for higher education across the country. Anyone who has worked closely with datasets knows that the stories told by companies like EAB depend upon both how the data have been collected and how they have been analyzed. In fact, each of our institutions offers classes that teach our students this perspective and the skills necessary to analyze data ethically and appropriately. However, we often find that audiences at an EAB presentation are willing to accept what is being shared as gospel on the strength of these slick graphs and charts. Are we exercising due diligence and developing our own in-house expertise and understanding concerning how EAB and other businesses collect and analyze the data that support the stories they tell?

What are we assuming about the problems that this company is helping us solve? What potential role does the company play in constructing that problem or narrating it into existence? Was this a problem before the company told us it was?

EAB’s narratives are used to sell us services and tools. It isn’t surprising that a company would make a case for why a customer should adopt its products, but EAB’s research arm is huge: its online directory of research experts lists almost four hundred individuals. On its face, this suggests that EAB has invested significant resources in building a hugely persuasive research narrative that will drive sales of its products. It is worth taking a step back, asking more questions, and conducting independent analysis of the problems that EAB identifies through its research and purportedly solves with its products. Are these the actual problems our colleges and universities are facing? Are there other ways of interpreting the data that would lead us down a different path? Is it easier to simply accept EAB’s narrative and purchase its solutions than to build our own in-house capacity to understand our needs and find or build solutions?

Who is in the room for decisions around technology adoption?

Far too often, students are entirely left out of conversations about data collection and about the processes by which technologies are considered for adoption. And that exclusion frequently extends to faculty members and academic staff as well. At an institution where one of us teaches, the registrar adopted and paid for a restrictive tool for building and maintaining syllabi without fully considering the ways the tool might be in direct conflict with the pedagogies of the faculty members designing those syllabi. At an educational institution, even seemingly bureaucratic tools are necessarily pedagogical tools: they can restructure or actively harm the relationships between teachers and students, or they can exacerbate competitive relationships between students.

How do we resolve the friction that emerges when our institutions adopt tools that operate in opposition to our pedagogical values?

Within a single institution, there are bound to be differences of opinion about how we do our work and how we embody our values. There may be a diversity of pedagogical approaches in the classroom and a diversity of strategic approaches within university leadership. But at the heart of our institutions, we should have a mission and vision with which all of our work, including the adoption of technologies and tools, aligns. If we aren’t careful and considerate when working with ed-tech vendors and outside consultants, we run the risk of creating internal friction between our mission and the tools we use to enact that mission.

What are the deeper issues that need to be addressed? Are there ways the technology might actually be distracting us from dealing directly with those issues?

Issues around enrollment and retention are often connected to much deeper concerns that companies like EAB are ill-equipped to address. EAB is a software company. It sells “solutions,” including a product called “Moon Shot for Equity,” which flattens the complexity of the questions and research around equity and inclusion, presenting equity as a set of “best practices” that can be conveniently ordered off a corporate menu. When an institution outsources something as critical as equity to EAB or Quality Matters, another ed-tech company that markets products in this area, it divests itself of the much more difficult work necessary to build and maintain its own culture of equity and accessibility.

Critical Pedagogy

While we believe the questions above can serve as a framework for interrogating the tools adopted to administer the work of colleges and universities, we also believe these questions should be at the heart of our engagement with any technology at our institutions. Any technologies our institutions adopt, from learning management systems to video meeting spaces to student information systems to customer relationship management platforms, are “educational technologies.” When we adopt these tools at any level, we are making pedagogical decisions, not just technological ones. Instead of considering the tools and technologies we explicitly use in the classroom as “academic” and regarding systems like EAB as “administrative,” we need to acknowledge that our students are learning about the world and their place in it, and about how technology mediates this relationship, whenever they interact with a digital tool on our campuses. When we flip the script in this way, it becomes clear that we can’t afford to take a critical approach to technology in one context (the classroom) and not in the other (everywhere else). All of the technologies we adopt make up a digital landscape that our students must learn to navigate for both practical and pedagogical purposes.

In Education for Critical Consciousness, Paulo Freire writes, “It is not possible to teach methods without problematizing the whole structure in which these methods will be used.” Just as we believe critical pedagogy can powerfully frame our relationship with technology in the classroom, we also believe it can help us more deeply understand how we use technology for admissions, student success, faculty evaluations, and student information. At the heart of critical pedagogy is a recognition and interrogation of power. As faculty members, we engage with power through our pedagogy, in particular when we actively foster relationships with students in the classroom. Power also lives within our colleges and universities at a larger scale, embedded within the relationships that shape and structure our institutions. Critical pedagogy calls on us to identify, interrogate, and deconstruct the power at work in all of these relationships. Power matters to our pedagogy because it both mediates and disrupts the human activities of teaching and learning. Particularly when unobserved, it meddles in the work of education; it acts as curriculum, sitting alongside the work that we believe we are doing in the classroom. Without a critical approach, power is an unspoken and unchallenged disruptor.

Applying a critical pedagogical approach to our adoption of technology across colleges and universities means recognizing and challenging how these tools also inscribe power relationships. How does a student-success platform’s (literal) “centering” of particular student data reinforce traditional power dynamics between faculty members and students? How does collecting personal data and information from students without making them fully aware of what’s being collected and how the data are being used affect our relationships with students? When we adopt systems that reduce our diversity efforts to checkboxes and “best practices,” what message are we sending to our students about how we value them and their complex identities? We must resist the push to give these technologies a “pass” when it comes to these difficult and critical questions just because they are administrative, managerial, or supposedly necessary.

Tenets

A critical approach to technology (and the data it traffics in) requires that we reject the tacit presumption that a tool is necessary, and it requires that we interrogate all of the technologies at our institutions that have become ubiquitous to the point of being invisible or ostensibly beyond reproach. Just because we decide to support diversity and equity, or to increase access or retention, doesn’t mean we have to accept the narrative about these things sold to us by a company like EAB. A desire to make courses easier to navigate and to better support student learning doesn’t mean we have to accept that learning management systems are an appropriate solution (or even a solution at all). The fact that most educational institutions already use tools that actively rank students against one another doesn’t mean they should.

Instead, we must leave our assumptions at the door in order to better align what we say we do as institutions (our values) with what we actually do (our practices). We end here with a few tenets and points of entry, a primer for the work of critical pedagogy:

  1. We need to consider and teach all of our students, no matter their backgrounds or challenges, with approaches that are antiracist, feminist, queer, and antiableist. As we approach technology and data collection, we need to ask directly, “How will this tool, how will the collection of these data, affect our most marginalized students?”
  2. We must recognize that all technologies have pedagogies hard coded into them. No tool is agnostic or neutral in its use. Educators need to be directly involved in the adoption of any technology used at our institutions.
  3. Data collection should be disabled, not enabled, by default. If we can’t answer why or what for, we shouldn’t be considering whether or how. (A minimal sketch of what this might look like in software follows this list.)
  4. Just because technological tools like the learning management system, Turnitin, or EAB are becoming increasingly ubiquitous at our institutions doesn’t mean they’re needed or inevitable. Whatever the result, a critical approach to educational technology starts with our asking hard questions of new tools, and especially of the tools we’ve taken for granted.
  5. We need to center our local contexts within our pedagogies and decision-making around technology. Who are our students? What do they need to be successful? What challenges do they face? What potential dangers to them does a technology or its data collection present?
  6. Student support staff are teachers. Librarians are teachers. Advisers are teachers. All of us, including academic staff, tenured and tenure-track faculty members, faculty members on contingent appointments, and especially students, should be integral to any decision-making around technology at our institutions. Collaboration across divides needs to be encouraged and actively supported.
  7. Schools are not prisons. Policing students is not teaching. Surveillance technologies, ones that strip us of our humanity, have no place at our educational institutions. For example, someone at the institution might think they have a good reason for purchasing a remote proctoring tool, but the reason isn’t good enough, and we need checks and balances in place that stop the adoption of abusive technology.
  8. Human beings aren’t data points, students are humans first, the work of teaching can’t be done by an algorithm, we can’t do more with less, equity isn’t a checkbox, building local community isn’t something we can outsource, supporting struggling students isn’t a product, and the humans at our institutions are the best possible investment.
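
To make the third tenet concrete, here is a minimal sketch, in Python, of what “disabled by default” might look like in software. Everything in it, from the class names to the collector categories, is our own hypothetical illustration, not any vendor’s actual API: every collector starts off, and turning one on requires a documented, human-readable answer to “why” and “what for.”

    from dataclasses import dataclass, field

    @dataclass
    class Collector:
        """One category of student data an analytics tool might gather (hypothetical)."""
        name: str
        enabled: bool = False              # tenet 3: disabled by default
        purpose: str | None = None         # the "why": required before enabling
        retention_days: int | None = None  # how long the "what for" justifies keeping it

    @dataclass
    class AnalyticsConfig:
        """A hypothetical institution-side configuration for a vendor integration."""
        collectors: dict[str, Collector] = field(default_factory=dict)

        def register(self, name: str) -> None:
            # New collectors always start disabled; registration is not consent.
            self.collectors[name] = Collector(name=name)

        def enable(self, name: str, purpose: str, retention_days: int) -> None:
            # Enabling is an explicit, documented act: no stated purpose, no data.
            if not purpose.strip():
                raise ValueError(f"Refusing to enable {name!r}: no stated purpose.")
            collector = self.collectors[name]
            collector.enabled = True
            collector.purpose = purpose
            collector.retention_days = retention_days

    config = AnalyticsConfig()
    config.register("grade_history")
    config.register("campus_movement")

    # Only grade_history gets a documented purpose; campus_movement stays off.
    config.enable("grade_history",
                  purpose="advising outreach after midterm grades post",
                  retention_days=180)

    assert config.collectors["grade_history"].enabled
    assert not config.collectors["campus_movement"].enabled

The code is not the point; the posture it encodes is. Collection is opt-in, every “on” switch carries a justification that can be audited, and a collector without a stated purpose simply never runs.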

Martha Fay Burtis is associate director of the Open Learning and Teaching Collaboratory at Plymouth State University and has worked for over twenty years at the intersection of higher education, digital technology, and pedagogical innovation. She can be found online at http://www.marthaburtis.net, and her email is [email protected]. Jesse Stommel is currently a faculty member in the writing program at the University of Denver. He is also cofounder of Hybrid Pedagogy: The Journal of Critical Digital Pedagogy. He can be found online at http://www.jessestommel.com, and his email address is [email protected].
