The Role of Multiple Choice Tests in the Online Classroom
If we know that multiple choice exams aren't the best gauge of deep learning, then why use them at all? Before we throw that baby out with the bathwater, let's consider some other reasons why you might continue to use them.
Brooke Shriner
AdjunctWorld.com
This week, my online abnormal psychology summer class is finishing up its midterm exam: a 50-question, multiple-choice, open-book test that students have had a week to complete. It is worth 5% of their final grade. That’s right – the big “midterm exam” is only worth 5% of their final grade. Why?
I am of the opinion that multiple-choice exams aren’t the best assessment of student learning and are too often compromised by test-taking anxiety. Instead, 54% of the grade is based on weekly forum discussions and responses to classmates, 36% on three written assignments/essays in which students actively and creatively apply the material to a specific situation or to something in their own lives, and a mere 10% on multiple-choice exams (5% each for the midterm and final).
In my view, active, engaged discussion-forum participation (inspired by a learning-objective-based prompt I pose each week that asks students to define and apply the concept) and essay writing that emphasizes synthesis and application are much better assessments of what a student is learning in my class (and those forms of assessment are much less vulnerable to cheating!).
In the same vein, Inside Higher Ed recently published an article titled Online Education and Authentic Assessment. The author was responding to a commonly posed question: "How do you prevent cheating on exams in the online environment?" The author's response urged a shift in thinking about traditional college assessment - don't use exams, use authentic assessments. Authentic assessments, per Wiggins (1999, as cited in Harrison, 2020), are "Engaging and worthy problems or questions of importance, in which students must use knowledge to fashion performances effectively and creatively. The tasks are either replicas of or analogous to the kinds of problems faced by adult citizens and consumers or professionals in the field" (para. 8).
An example of an authentic assessment I use in my Personality Psychology course is to have students plot themselves on each of the five factors in the Five Factor Model of personality. They are then asked to plot, on the same five factors, the personality of someone they tend to butt heads with. With this information laid out visually before them, they are asked to explain why, based on this theory, they and this person butt heads. They are then asked to consider how this person may have developed this constellation of personality traits according to two other theories of personality. I, as the instructor, can thus see that a student "gets" these three theories based on their real-world application of them. I find this form of assessment much more telling than answers to multiple-choice questions.
The downside for the instructor is obvious: essays and longer authentic assignments are much more time-consuming to evaluate and "grade" than exams, which (unless they include short essays) are usually graded automatically by the LMS. To help with this, I suggest having one authentic assignment cover multiple learning objectives (in the example above, one assignment covered three major theories, comprising nearly half of the course's content).
I also encourage online instructors to see feedback on these assignments as teaching. As online instructors, many of us do not formally lecture. Our teaching happens in the context of discussion. Authentic assignment feedback is another avenue for discussion, except this time it is 1:1 with the student. It's not extra work, then; it is the work.

Why use tests at all?
The reasons I still use the midterm and final are twofold. First, I want to make sure all students comb through the entire textbook (even if it’s just to look for answers) so that the more nuanced information we may not have gotten to in the discussion is covered. Second, the multiple-choice test is a good way for students to gauge their own learning – a self-check, so to speak. So I set them up to succeed: all questions are taken from the textbook, students have a week to complete the test, they can take it over multiple days, and if an otherwise good, active, engaged, critically thinking student doesn’t do as well “at test-taking” – well, their grade is not unfairly penalized.
Mueller (2018) said it well:
...if I had to choose a chauffeur from between someone who passed the driving portion of the driver's license test but failed the written portion or someone who failed the driving portion and passed the written portion, I would choose the driver who most directly demonstrated the ability to drive, that is, the one who passed the driving portion of the test. However, I would prefer a driver who passed both portions. I would feel more comfortable knowing that my chauffeur had a good knowledge base about driving (which might best be assessed in a traditional manner) and was able to apply that knowledge in a real context (which could be demonstrated through an authentic assessment) (para. 14).
I’m certain many of you feel the same way, just as I am certain this philosophy doesn’t necessarily apply to all disciplines. I can see multiple-choice test-taking being an important component of other classes.