Authors: Alan Fask, Fred Englander, Zhaobo Wang
First author’s institution: Fairleigh Dickinson University
Journal: Journal of Academic Ethics 12:101-112 (2014)
For most of my college experience, my instructors gave in-class exams. However, one instructor didn’t want to devote any class time to exams and instead decided to make all the exams online. For each exam, we had to log in to the course management system and open the test; once the test was opened, we had one hour and fifteen minutes to complete it. The timer couldn’t be paused, and there was no way to come back and try again. While we could use our textbook and notes, finding somewhere I wouldn’t be interrupted and where the internet wouldn’t randomly disconnect for that time period was a challenge. Needless to say, the exams were more stressful than normal in-class exams.
Given the current pandemic and the suspension of in-person classes at most universities, many students will now find themselves in similar predicaments. With universities urging students to leave campus, many students may not have access to stable internet or even a place to avoid distraction for a few hours. Reasonably, they may be concerned about how taking an online exam may negatively affect their course grade.
On the other hand, instructors may be worried about students cheating on the exams. After all, how easy would it be for students to use outside resources and gain an unfair advantage?
Today’s paper tries to address both of these perspectives, revealing that both are valid. Students score worse on online exams but also appear more likely to use outside resources to perform better.
To see if students were more likely to cheat or do worse on online exams, the authors selected two sections of an introductory statistics course at a private university in the northeastern United States. Both sections had 22 students enrolled, covered the same material, and used the same lectures, homework assignments, and exams. By comparing the enrolled students’ GPAs, class attendance, homework grades, midterm grades, and SAT scores, the authors determined that the students in the two sections were very similar.
During the final week of the course, instructors told the students that some of them would be taking their exam online instead of in-person. In addition, all students would be required to take a practice test similar to the final exam, in the same format in which they would take the final. The practice test occurred three days before the actual final and counted only for participation credit; it was not graded for accuracy.
Since the practice test grades were not based on accuracy, the authors assumed any cheating would be minimal. Therefore, comparing practice test scores would allow the authors to estimate the effect of taking an exam online instead of in-person and comparing the final exam scores would allow the authors to estimate any additional differences in scores as benefits from cheating.
To compare the practice test scores and control for differences in prior preparation (which would likely be reflected in students’ GPAs), the authors ran a linear regression model to predict the practice test scores for all students. The coefficient of the variable signifying whether the student took the practice test online or in-class was around -14, meaning that students who took the practice exam online did 14 points worse than students who took the practice exam in class. That is, taking the exam online did actually hurt students’ performance.
The authors then ran a linear regression on the final exam scores, but found the opposite result. This time, students taking the exam online scored 10 points higher on average than students who took the exam in-person.
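To make the regression approach concrete, here is a minimal sketch of how a dummy-variable regression like this works. The data below are synthetic and purely illustrative (the authors’ actual dataset and full set of covariates are not reproduced here); the point is just how the coefficient on the online indicator captures the score gap after controlling for GPA.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 44  # two sections of 22 students, as in the study

# Synthetic, illustrative data -- NOT the authors' dataset
gpa = rng.uniform(2.0, 4.0, n)
online = np.repeat([0, 1], n // 2)  # 1 = took the test online

# Simulate practice-test scores with a built-in -14 point online effect
practice = 20 * gpa - 14 * online + rng.normal(0, 5, n)

# OLS via least squares: score ~ intercept + GPA + online dummy
X = np.column_stack([np.ones(n), gpa, online])
beta, *_ = np.linalg.lstsq(X, practice, rcond=None)

# beta[2] is the estimated online effect, close to -14 by construction
print(f"estimated online coefficient: {beta[2]:.1f}")
```

Running the same regression on final exam scores instead of practice scores, and comparing the two online coefficients, is essentially the authors’ strategy for separating the format effect from any cheating benefit.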
Since the authors didn’t monitor the students taking the online exams or use any self-reporting measure, there is no way to know if the students were actually cheating or if the difference could be explained any other way. For example, perhaps the students who took the online tests studied more for the final after doing worse on the practice test or students put different levels of effort into the practice exam based on whether they were in the classroom or not. In addition, study habits of the students may play a role. The authors assumed that students were in “finals-mode” a few days before the final exam which may not have been the case.
Returning to the original point, this study suggests students are rightfully concerned about taking online exams and the negative effects on their grades. Additionally, instructors are rightfully concerned that students are more likely to cheat on online exams. Overall, the argument seems to lean in the instructors’ favor, with students more likely to gain an unfair advantage from taking online exams.
How these results would generalize to proctored online exams is still an open question. For example, some instructors are using Zoom to monitor their students during exams or using lockdown browsers to prevent students from using Google to search for answers. While these measures may limit students unfairly earning higher grades, they do not combat students earning lower grades due to the format. Given everything happening in the world now, the disadvantage of taking exams online is likely even larger than in the original study due to increased disruptions and stress depending on where the student is residing. Thus, while instructors should be concerned about students unfairly earning higher grades due to cheating, they also need to be concerned about students unfairly earning lower grades due to circumstances outside their control.
I am a postdoc in education data science at the University of Michigan and the founder of PERbites. I’m interested in applying data science techniques to analyze educational datasets and improve higher education for all students.