
Does prior training in a VR environment help students learn better from VR-based instruction?

Title: Using virtual reality in electrostatics instruction: The impact of training

Authors: C. D. Porter, J. R. H. Smith, E. M. Stagar, A. Simmons, M. Nieberding, C. M. Orban, J. Brown, and A. Ayers

First Author’s Institution: The Ohio State University, Columbus, Ohio, USA

Journal: Phys. Rev. Phys. Educ. Res. 16, 020119 (2020)


Physics routinely deals with situations rooted in a 3D environment, such as the electric fields of charge distributions. But most instruction relies on 2D media such as paper and 2D graphics on a computer screen. An interesting physics education research question is whether instructional materials that use 3D media for inherently 3D situations lead to better learning on specific learning goals than 2D media do. One 3D medium that has gained popularity recently is virtual reality (VR).

A previous study on using VR to aid the learning of electrostatics found that VR didn't produce results different from other media. At the same time, the study also found that students who reported playing video games frequently learned better from VR than from other media. One way to interpret this effect is that students who play video games frequently have prior experience with active 3D environments, and this leads to better learning from a VR environment. In the paper we discuss here, the authors explore this possible explanation. Specifically, they ask whether preliminary training in VR on a topic unrelated to electrostatics helps students, frequent gamers or not, perform better on an electrostatics assessment.


The researchers conducted the experiment with students enrolled in a large introductory calculus-based course on electromagnetism. Students could either participate in the experiment or do another activity to earn points that counted towards their grade. Of the 281 students who signed up, 279 participated.

The VR environment used in this experiment was created as an Android application used with Google Cardboard. This kept costs low, which allowed the researchers to include a large number of participants.

The researchers randomly assigned students to one of two groups. One group did not receive any training in VR (the untrained group), whereas the other group received a short training in performing cognitive tasks in a VR environment (the pre-trained group). The training consisted of rotating a house in 3D and doing some counting on the rotated image, and of identifying the direction of the angular momentum of a toy propeller aircraft in different orientations. The students completed these tasks at their own pace in about four minutes.

Both groups then took a pre-test on a computer consisting of questions on electrostatics. Next, both groups received instruction on the 3D electric fields of different charge distributions in a VR environment. During the lessons the students answered questions that the authors designed to keep them engaged with the material.

After the lessons in VR, both groups took a post-test on a computer consisting of 11 multiple-choice questions and a post-test in the VR environment consisting of 10 questions. Together these form the post-test for the experiment.

The students were also asked whether they frequently played video games and, if so, whether they played 2D or 3D games. The authors wanted to look at results for 2D and 3D games separately, but the analysis showed that this distinction didn't matter, so the results only present the differences between the self-reported classes of "gamers" (having frequently played video games) and "non-gamers" (not having frequently played video games). 179 students classified themselves as gamers and the remaining 100 as non-gamers.


The differences between the pre-trained and untrained groups on the pre-test and on the post-test (the computer and in-VR post-tests combined) are shown in Figure 1 (figure 4 from the paper).

Figure 1: Pre-test and post-test scores for both the pre-trained and untrained groups (figure 4 from the paper).

We can see that the differences are small. We can also see that on the post-test the pre-trained group performed slightly better than the untrained group. The difference is small (Cohen's d = 0.24; d = 0.4 is considered a substantial difference) but statistically significant (p = 0.014). It is interesting to note that the untrained group's post-test scores dropped below its pre-test scores. In fact, the authors add, if we take all students together, the results suggest the students didn't improve at all!
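For readers who haven't met the statistic: Cohen's d expresses the difference between two group means in units of their pooled standard deviation. In its standard form (the notation here is mine, not the paper's),

\[ d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}, \]

where \(\bar{x}_i\), \(s_i\), and \(n_i\) are the mean, standard deviation, and size of group \(i\). So d = 0.24 means the two groups' mean scores differ by about a quarter of a pooled standard deviation.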

Thus this study supports the view that, on average, VR doesn’t help learning.

Are there differences between gamers and non-gamers? Figure 2 below (figure 5 from the paper) shows the results in terms of gain on the post-test compared to the pre-test, with the results further divided based on whether students were classified as gamers or non-gamers.

Figure 2: Gain between pre-test and post-test scores for the pre-trained and untrained groups, further divided into gamers and non-gamers (figure 5 from the paper).
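A quick note on "gain": in its simplest (raw) form, a student's gain is just the difference between their post-test and pre-test scores,

\[ g = S_{\mathrm{post}} - S_{\mathrm{pre}}, \]

(the notation is mine), so a negative gain means students scored lower after instruction than before. Education research sometimes uses a normalized gain, \(g_{\mathrm{norm}} = (S_{\mathrm{post}} - S_{\mathrm{pre}})/(S_{\mathrm{max}} - S_{\mathrm{pre}})\), instead; the sign carries the same interpretation either way.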

The first thing to note is that the gains are very small, so we can't draw strong conclusions. Three groups, the pre-trained gamers, pre-trained non-gamers, and untrained non-gamers, have a small positive gain, whereas the fourth group, the untrained gamers, has a small negative gain. We would have expected untrained gamers to learn more than untrained non-gamers due to their familiarity with virtual environments, yet that wasn't the case. The authors note that, as we discuss below in connection with figure 4, this may be due to ceiling effects.

One issue with the above two analyses is that they combine scores from different types of assessments. Investigating the performance of the different groups on each of the four assessments (pre-test, mid-VR questions, computer post-test, and in-VR post-test) might provide a better picture.

First we compare the pre-trained and untrained groups. Figure 3 (figure 6 from the paper) shows the average scores of the two groups on each of the four assessments.

Figure 3: Performance difference between the pre-trained and untrained groups across the four assessments (figure 6 from the paper).

The pre-test scores of the pre-trained and untrained groups are similar, and so are their scores on both the computer and in-VR post-tests. But the untrained group has a much lower score on the questions asked during the lessons (the mid-VR assessment). This seems to suggest that VR training has an effect insofar as it familiarizes students with the VR environment but doesn't provide much additional learning. That is, the pre-trained students' mid-VR scores didn't suffer because they had already been exposed to VR during the training phase. The untrained group struggled mid-VR because they were encountering VR for the first time, but their scores recovered on the in-VR post-test because by then the lessons themselves had given them exposure to VR. The authors note that the pre-test and post-test questions are similar except for simple changes such as rotations and translations, so a change in the difficulty of the questions is not the cause of this effect.

The figure also shows that students performed better on the in-VR post-test questions than on the computer post-test questions. The authors note that we can't read much into this, since the questions are not of comparable difficulty and the different media result in different styles of questions and different ways of posing them.

Now we do the same for gamers and non-gamers. Figure 4 below (figure 7 from the paper) shows the performance difference between the pre-trained and untrained groups across the four assessments, further divided into gamers and non-gamers.

Figure 4: Performance difference between the pre-trained and untrained groups across the four assessments, further divided into gamers and non-gamers (figure 7 from the paper).

Here we see that, in general, gamers (blue) tend to perform better than non-gamers (red) on the post-test. Thus the paper confirms the correlation reported in the earlier study.

Also note that gamers have a significantly higher pre-test score than non-gamers. That is, in this sample of students, the gamers performed better on the electrostatics assessment than the non-gamers before receiving any instruction. This can lead to ceiling effects: gamers already score near the top of the scale, leaving them little room to improve, so their measured gains after instruction are likely to be small or even negative. This might explain the strange result we saw in figure 2, where the untrained gamers had a small negative gain and the pre-trained gamers had a smaller positive gain than the pre-trained non-gamers.

We also see that the only significant difference in performance between the pre-trained (solid) and untrained (dashed) groups is on the mid-VR assessment, as we already saw in figure 3. And, as previously noted, the fact that all groups perform similarly on the in-VR post-test can't be taken too seriously. The authors note that a pool of independently validated assessment questions could help investigate this observation.


We can summarize the main findings from the paper as below.

- Pre-training in VR led to a small but statistically significant improvement in post-test scores.
- Averaged over all students, the VR lessons did not appear to improve learning at all.
- The clearest effect of pre-training was on the mid-VR assessment, which suggests that training mainly familiarizes students with the VR environment rather than improving learning of the content.
- Gamers scored higher than non-gamers even on the pre-test, and the resulting ceiling effect may explain why untrained gamers showed a negative gain.

As the authors note, an independently validated assessment tool would make this study more robust. Such a tool could also enable us to investigate questions such as whether the higher scores on the in-VR post-test compared to the computer post-test are meaningful. Moreover, the effects are small, and replication of this study is needed to check whether the observed effects, for example the large difference in mid-VR scores between the pre-trained and untrained groups, hold up.

Figures used under the Creative Commons Attribution 4.0 International license. Header image used under the Attribution 2.0 Generic (CC BY 2.0) license, from Flickr user K. W. Barrett.
