A defence of multiple choice questions

The cognitive levels of Bloom's taxonomy

Multiple choice questions (MCQs) often get a bad rap: they’re seen as merely a testing tool, and not a very good one at that, because they’re easily gamed and cater to lower-order thinking. Unfortunately, this is often true, but in this post I’ll argue that it is not the fault of the question type.

Sure, we can’t climb the ‘Create’ rung of Bloom’s pyramid, but well-written MCQs are an effective way to test students at all other cognitive levels (remember, understand, apply, analyse and evaluate). They also provide access to real-time analytics, auto-marking and automated feedback.

It’s all to do with instructional design: how the questions are worded, what they test and at which point of the learning process the question and feedback are presented. A basic foundation in instructional design is rapidly becoming a required skill for 21st century teachers, but fear not — it’s not as scary as it sounds and we’re here to help!

In the first part of this post I'll cover writing multiple choice questions in general, and then I'll show some examples that test and teach higher-order thinking.

But first, let's take a look at the parts that make up MCQs.

Anatomy of an MCQ

A multiple choice question consists of two parts: the stem and the responses. The stem is the question text, and the responses are the answer options students can choose from. Responses are further broken down into the correct answer and the distractors: plausible but incorrect options built around logical misconceptions of the best answer.

In Stile, MCQs now let you explain the misconception behind each distractor when writing the question; students automatically receive these explanations as immediate feedback upon submission. To find out more about automated feedback, check out last week’s blog post.

The six most common problems with MCQ responses

Students will try to answer multiple choice questions without actually knowing the answer, relying purely on other cues, such as:

  1. clearly implausible distractors: these often include extreme words like ‘always’ or ‘never’
  2. three short distractors and one long response: the long response is usually the best answer
  3. ‘all of the above’: students only need to recognise that more than one option is correct, without understanding the rest
  4. ‘none of the above’: this doesn’t test what students know; they’re only able to show that the correct response is missing
  5. overlap between responses: options that aren’t mutually exclusive let students eliminate or combine answers, so keep all responses unambiguous
  6. finding the password: if one response contains a key term (e.g. ‘because of Newton’s second law’), students don’t need to understand its meaning to recognise its probable correctness

A good example of how not to do it! Vague language in the stem, non-parallel responses, extreme language and all/none of the above!

I try to avoid these wherever possible to ensure I'm testing students’ subject knowledge rather than their ability to 'game' poorly written questions. 

Getting good distractors

The hardest part of writing good MCQs is making all distractors plausible, and preferably of equal length. The correct answer is right in front of our students’ eyes; they should be able to find it, but it shouldn’t leap out at them immediately.

One way to find plausible distractors is by going through written answer questions set in the past and analysing students’ answers. If we see a wrong answer that consistently pops up in students’ submissions, we’ve got a great candidate for a convincing distractor. 

MCQs should ideally meet these four criteria:

  • clarity: avoid vague words like ‘usually’ or ‘may be’ and ensure the difference between responses is obvious
  • consistency: keep responses at roughly equal length
  • accuracy: stick to a single, clearly defined problem per question and outline it in the stem
  • organisation: write questions in a logical order

Also for higher-order thinking

While it's more challenging to write questions that develop and test higher-order thinking, it's certainly possible, as long as there's sufficient context. Here are some examples of higher-order thinking questions in Stile:

An example of how you can combine two multiple choice questions to test analytical thinking, complete with automated feedback

Here's an example for 'evaluate', together with automated feedback.

Here's an English literature example:

Insights

MCQs have the added benefit of giving us real-time analytics in Class insights: a fantastic visual breakdown of our students’ understanding of the concepts tested. This also helps us refine our own question writing: if my questions are consistently all green, I'm probably not challenging my students enough.

A fairly good balance of red and green: students are getting the majority right but are still challenged overall

Conversely, if I see a lot of red (well over half in a question bar), it’s a good indicator that I’ve either not covered the material well enough or, more likely, that the question needs to be rewritten because it's ambiguous or unclear.

Wrap-up

That's it for the basics. I hope this helps you write better multiple choice questions, or inspires you to try writing some if you've previously shied away from them.

Stile Tip

As a general rule, using only one question type over and over again is a surefire way to disengage students.

Mix it up, season with rich media and keep them interested!

Sources

I used this slide deck by Carol A. Kominski at UNT Health Science Center, this fantastic toolkit by the Johns Hopkins Bloomberg School of Public Health and these sample questions by Kimberley Green of Washington State University as sources for the material presented in this post. This guide by Vanderbilt University's Center for Teaching is great too.