Every Exam They Take, We’ll Be Watching Them
Software tools can help reduce cheating in online learning—but at what cost to student privacy and equity?
By Aditya Johri
For the first few years I taught online, I continued to hold final exams in person: Students were required to come to campus on a designated day at a certain time. I checked their IDs and monitored the exam to ensure there was no cheating. In-person administration also helped prevent the exam from ending up online. Although an honor code exists at my institution, infringements are common and the burden of proof is too high to pursue anything but the most egregious violations.
However, over time I realized that students needed flexibility in the exam schedule. The timing didn’t work for many, and nontraditional students who worked full-time or had childcare responsibilities suffered disproportionately.
As a solution, my colleagues suggested using software that “locks” the students’ computers—literally takes over their machines—so that the only screen they can work on is the exam. The students are also required to turn on their video and show their ID and a view of their environment, then take the exam while staying on camera. Faculty who had used the system told me this technology reduced cheating drastically. After some training on the system and a bit of trial and error, I was able to deploy it successfully. The interface for watching the recordings was easy to use, and the system did what it purported to do: provide a quick way to record and then review videos of online exam-taking.
When I first used the system, I was in awe of the technology and how well it worked. Even though I felt uncomfortable watching students in their private spaces, like a voyeur, I justified it as a strategy I had deployed for their convenience. But my attitude toward using the system changed once I went over the recordings, especially ones that were earmarked by the system as problematic. Almost every such instance was of a male student with a darker skin color. A student coughing, a poster in the background, or a lag due to lower bandwidth—simple deviations from some standard metric—also caused the algorithm to designate the exam as problematic.
These issues continued over the semesters. To me, they signaled a systemic issue with the algorithm and the data used to develop it. My search for information about the software proved fruitless—the company offered no transparency. Also lacking was information about what happens to the data, the limits on its use and reuse, and when, if ever, it is deleted.
On Reddit, I found that students were equally concerned about how the software worked, their privacy, the potential long-term use of their data, and faculty members’ uncritical reliance on the software for making decisions about cheating. Some students had also posted potential workarounds for circumventing the system, underscoring that those who want to cheat and have the ingenuity to devise new solutions will do so. Petitions to stop the use of monitoring software had also begun to appear.
A recent push toward unbundling services in academia has increasingly outsourced solutions, especially technology, to external firms. It is unclear, though, who makes the decisions to purchase and deploy these systems. Is there a committee that looks at the ethics of the technology we use? Shouldn’t there be?
As higher education institutions, we increasingly preach diversity, equity, inclusion, and anti-racism, but then we fail to consider whether those values are ingrained across all functions. Why are we so willing to give up our data and privacy and start surveilling our students for convenience? I recognize that decisions about technology services are difficult—especially in fast-changing times such as the COVID-19 pandemic—but that’s even more reason to create guidelines that ensure products meet a minimum standard beyond ease of use, cost, and security.
I have been told it is legal for higher education institutions to work with vendors, as handing over the data to them does not violate any laws and is protected by contracts. But this is about more than legality. Unless we build a comprehensive culture of ethical technology adoption, a piecemeal approach will benefit only a few, and it won’t be equitable.
It is in our actions that students see what we value most. Our institutions need to develop ways to examine technology contracts with an eye toward algorithmic transparency. We have the expertise on our faculty and among our students to assess these solutions, so why are we so hesitant to use it? In engineering and computing, we discuss and write about technology ethics—why, then, are we so reluctant to practice them?
Aditya Johri is a professor of information sciences and technology at George Mason University.