Paper or Pla– Electronic For Conceptual Inventories

Title: Participation and performance on paper- and computer-based low-stakes assessments
Authors: Jayson M. Nissen, Manher Jariwala, Eleanor W. Close, and Ben Van Dusen
First author’s institution: California State University, Chico
Journal: International Journal of STEM Education 5:21 (2018)


Conceptual inventories have become standard tools that physics educators and researchers use to evaluate what students learned during a course. However, instructors face several barriers to using conceptual inventories in their classrooms, such as determining which assessment is the best fit for their class, finding class time to administer and grade the assessment, and interpreting the results. For these reasons, platforms such as PhysPort and LASSO have been developed to help faculty administer assessments and analyze the results online. However, conducting the assessments outside of class may lead to lower participation rates and scores than if they were conducted in class.

Previous work has been mixed on whether administering the assessments in class or online makes a difference. One study found that participation rates were between 8% and 27% lower on online assessments than on in-class assessments, while preliminary work for this paper found that some courses had comparable participation rates regardless of how the assessment was administered and other courses showed significant differences. These studies also found that when the instructor offered course credit or more reminders for the online assessment, the difference in participation rates between online and in-class administration decreased.

In terms of performance, the results are again mixed. Prior work found that students perform equally well on high-stakes exams (think the GRE or course exams) whether they are administered online or in person, but that work did not look at low-stakes assessments like conceptual inventories. Finally, a study of students in an astronomy class showed that students actually did better when a research-based assessment was administered online rather than in class. Given these studies, the authors of today’s paper were interested in whether participation differed when the assessment was administered online versus in class, whether instructors’ practices could reduce any difference, and whether there were differences in performance between the two administration methods.

To determine if there were any differences between the two administration methods, the researchers collected conceptual inventory and attitudinal survey data from about 1,300 students in 25 sections of three introductory courses (algebra- and calculus-based) over two semesters at a large regional public university in the United States. The researchers used stratified random sampling to assign the students to one of two groups. The first group completed the pre- and post-tests for the conceptual inventory online through the LASSO portal and the pre- and post-tests for the attitudinal survey in class, while the second group did the opposite. The experimental design is shown in figure 1.

Figure 1: Experiment design. CBT means computer-based test and represents the online administration. PPT is paper and pencil test and represents the in-class administration. (Fig 1 in paper)
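To give a rough sense of how stratified random assignment like this can be carried out, here is a minimal sketch in Python using pandas. The roster, section labels, and column names are hypothetical illustrations, not data or code from the paper.

```python
import numpy as np
import pandas as pd

# Hypothetical roster: one row per student with the course section they belong to.
roster = pd.DataFrame({
    "student_id": range(1, 13),
    "section": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
})

# Stratified random assignment: within each section, randomly send half of the
# students to condition 1 (conceptual inventory online, attitudinal survey in
# class) and the rest to condition 2 (the reverse).
condition1_ids = (
    roster.groupby("section")
          .sample(frac=0.5, random_state=42)["student_id"]
)
roster["condition"] = np.where(roster["student_id"].isin(condition1_ids), 1, 2)
print(roster)
```

Stratifying by section ensures that each course section contributes students to both conditions, so differences between sections do not get confounded with the administration method.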

The conceptual inventory was then scored on a scale of 0 to 100% correct, while the attitudinal survey was scored from 0 to 100% aligned with expert views.

Next, the researchers rated instructors on a scale of 0 to 4 based on the number of practices they used that have been shown to increase student participation in online assessments: giving multiple in-class reminders, giving multiple email reminders, offering credit for completing the pre-test, and offering credit for completing the post-test.

Finally, the researchers used hierarchical linear modeling to model student participation and performance on the assessments. Hierarchical linear modeling is similar to ordinary regression, except that the model can account for nested structures: here, experimental conditions are nested within students, which are in turn nested within course sections.
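To make the idea concrete, here is a minimal sketch of a two-level model of assessment scores in Python using the statsmodels library. The data file and column names (score, condition, practices, section) are hypothetical, and the authors’ actual models are more elaborate than this single-equation example.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student with their post-test score, the
# administration condition (CBT vs. PPT), the instructor's 0-4 practices
# rating, and the course section the student belongs to.
data = pd.read_csv("assessment_scores.csv")

# A two-level hierarchical (mixed-effects) linear model: fixed effects for
# administration condition and instructor practices, plus a random intercept
# for each course section to capture section-to-section variation.
model = smf.mixedlm("score ~ condition + practices", data, groups=data["section"])
result = model.fit()
print(result.summary())
```

The random intercept is what distinguishes this from ordinary regression: students in the same section share a common offset, so the model does not treat them as fully independent observations.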

So what did they find? First, the researchers found that participation rates were higher for assessments administered in class than for those administered online, and the rates were consistent with previous results in the literature. The researchers observed small differences in participation rates based on gender, with women more likely to participate than men, but the results were not statistically significant, as had also been observed in previous work. The researchers also found that students who earned higher grades were more likely to participate than students who earned lower grades.

The researchers then built hierarchical linear models to describe the data and found that, perhaps unsurprisingly, as instructors used more of the recommended practices, the participation rate increased. When the instructor used all four recommended practices, the participation rate on assessments administered online was similar to that on assessments administered in class. These models are shown graphically in figure 2.

Figure 2: Predicted participation rate on conceptual inventory based on the number of recommended practices the course instructor used. CBT is online administration, PPT is in-class administration. (Fig 3 in paper)

Next, the researchers added final course grade to the model and found that students who earned an A through a D were more likely to complete the post-tests than students who earned an F in the course. This trend became stronger as the instructor used more of the recommended practices and is shown in figure 3.

Figure 3: Predicted participation rate based on final course grade and the number of recommended practices the course instructor used. CBT is online administration, PPT is in-class administration. The PPT pre-test participation rate was between 96% and 100%, so it is not shown on the graph. (Fig 4 in paper)

Finally, the researchers found no reliable or consistent differences in scores between the conceptual inventories administered online and those administered in class; these results are shown in figure 4. The differences in scores ranged from -2.1% to 2.2%.

Figure 4: Scores on the conceptual inventories and attitudinal survey in three courses that were administered either online (CBT) or in class (PPT). There is no consistent difference in scores based on how the assessment was administered. (Fig 5 in paper)

So what can we take away from this paper? First, how the assessment is administered (online or in class) does not appear to affect performance or participation, provided the instructor offers sufficient credit and reminders to complete the assessment. However, the researchers’ models of participation suggest that regardless of how the assessments are administered, students with higher grades are more likely to complete the assessment than students with lower grades, meaning the results may not accurately represent all of the students in the course. Nevertheless, instructors can be confident that using an online platform to simplify the administration of research-based assessments in their course will produce results consistent with what they could have obtained by administering the assessments on paper in class.

Figures used under Creative Commons Attribution 4.0 International License.
