Title: Will my student evaluations decrease if I adopt an active learning instructional strategy?
Authors: Charles Henderson, Raquib Khan, Melissa Dancy
First authors’ institution: Western Michigan University
Journal: American Journal of Physics 86, 934 (2018) [Closed Access]
Active learning has been shown to result in better learning outcomes than traditional instructional techniques like lecturing. Yet many STEM courses remain traditionally taught, even though many instructors know about active learning strategies and are interested in implementing them more often in their courses. Prior work has established that faculty are reluctant to use active learning strategies due to concerns about the value of active learning, the time required to learn about active learning and redesign their course, the amount of ongoing preparation required, the ability to still cover all the necessary content, and student resistance to active learning. Because student evaluations are often the main or only way instructors are evaluated, performance on student evaluations matters to instructors, and student resistance may lead to decreased performance on those evaluations. However, the claim that active learning actually results in decreased performance on student evaluations has never been tested. The goal of today’s paper is to see if there is any evidence in support of this claim.
Of course, before determining how using active learning in the classroom affects student evaluations, it is important to understand some of the problems with student evaluations. Multiple studies have found that student evaluations do not actually measure learning. In fact, student evaluations are correlated with characteristics unassociated with teaching, such as the race, gender, and attractiveness of the instructor. One study even found that providing cookies to students during class resulted in better evaluations. Yet, despite the many problems with student evaluations, many institutions rely on them for making tenure and promotion decisions, meaning instructors need to be concerned with their performance on student evaluations.
To investigate any possible effects of active learning on student evaluations, the researchers collected survey data from participants in the Physics and Astronomy New Faculty Workshop, which introduces first and second year faculty to active learning strategies and promotes replacing lectures with active learning and student-student interactions. The researchers sent out a web form to anyone who had participated in the New Faculty Workshop between 1996 and 2013, corresponding to 1306 faculty. Of these, 547 faculty responded to the survey and 431 of their responses were usable for this study. Most of the respondents were male (71%) and taught an introductory, calculus-based physics course.
So what did the researchers find? First, almost all of the respondents (over 90%) indicated that they used some type of non-lecturing, active learning strategy. The most common response was that the instructor used a “mostly traditional approach with some alternative features” (~60%), followed by a “mostly alternative approach with some traditional features” (~30%). When asked to estimate the percentage of class time spent on lecturing, the faculty in this study reported an average of 56%, compared to the 75% found in a previous study of more typical STEM faculty (that is, faculty who had not received training on active learning). This suggests that faculty who participated in the New Faculty Workshop do use active learning at a higher rate than a typical faculty member.
When looking at the self-reported change in student evaluations, 48% of the faculty reported increases in scores on their student evaluations, 20% reported decreases, and 32% reported no change, meaning most instructors did not receive worse student evaluations as a result of implementing active learning in their courses. However, the fact that 20% of the instructors did receive lower scores on their evaluations may still be of concern to faculty. The researchers then looked into the degree of active learning used in the course, as measured by the percentage of class time devoted to lecturing. They found that when the percentage of class time devoted to lecturing fell between 20% and 60%, the most likely outcome was an increase in scores on student evaluations. However, if the amount of lecturing was over 60%, the most likely outcome was no change in evaluations, likely because a course with more than 60% of class time devoted to lecturing does not seem very different from a course with 100% of class time devoted to lecturing, and hence there would be no reason to expect a change in student evaluations.
Next, the researchers looked into how much students complained about active learning courses. If the amount of complaining about the teaching style was related to the amount of lecturing, maybe that could explain why 20–60% of class time spent lecturing seemed to be the ideal range for getting better student evaluations. When asked to rate their agreement with the prompt that many students complained when they integrated active learning techniques into their course, most faculty (77%) disagreed with the claim and only 23% agreed. The researchers found that complaints were more likely when the amount of lecturing dropped below 20%.
To further understand changes in student evaluations, the researchers asked the faculty participants to respond to the prompt “explain briefly why you feel that your ratings on end-of-semester student evaluations have ever been impacted by your use of interactive instructional techniques.” Of the faculty who reported increased scores on their student evaluations, the most common explanations (based on comments on their evaluations) were: students believe active learning helps them learn better (30%), students find active learning classes enjoyable (29%), students like to interact with other students (24%), and students like to use technology (21%). Conversely, of the faculty who reported decreased scores on student evaluations, the most common reasons for the lower ratings were that students do not feel like they are being “taught” when instructors use active learning (reported by 43% of faculty with decreased scores), students do not want to work actively during class time (37%), students do not know what to expect in an active learning class (17%), and students do not like to interact with other students and/or with the instructor (11%). Finally, these results were not controlled for departmental norms. A single active learning course in an otherwise traditionally taught department may be less welcomed by students than an active learning course in a department that has integrated active learning into multiple courses.
So what can we take away from this paper? First, the most common result of implementing some type of active learning was improved student evaluations, and that increase was most likely to occur when the amount of lecturing was between 20% and 60% of class time. However, some instructors who lectured for less than 20% of class time still reported increased student evaluations, meaning instructors don’t need to spend at least 20% of class time lecturing to see improvements in their student evaluations. In fact, based on the reasons why some instructors experienced a decrease in their evaluations, it appears that making sure students understand the benefits of active learning and how the class will function matters more for getting better evaluations. Overall, the claim that using active learning instead of lecturing will result in worse student evaluations seems to be unfounded, and explaining the benefits and expectations of an active learning classroom appears essential to avoiding decreased scores on student evaluations.
I am a physics and computational mathematics, science, and engineering PhD student at Michigan State University and the founder of PERbites. I’m interested in applying machine learning to analyze educational datasets and am currently studying the physics graduate school admissions process.