
General Guidelines for Developing Multiple Choice Items

Tomorrow's Teaching and Learning



The posting below gives excellent advice on how to construct good multiple-choice questions. It is an excerpt from a longer article by Mary E. Piontek, "Best Practices for Designing and Grading Exams," part of the Occasional Paper series of the Center for Research on Learning and Teaching (CRLT) at the University of Michigan. Piontek is an Evaluation Researcher at the Center and holds a Ph.D. in Measurement, Research, and Evaluation. The full article can be found on the CRLT website. ©Copyright 2008 The University of Michigan. Reprinted with permission.


Rick Reis

UP NEXT: A Different Way to Think About Professional Development



 ---------------------------------------1,131 words --------------------------------------

 General Guidelines for Developing Multiple-Choice Items


Multiple-choice items have a number of advantages. First, multiple-choice items can measure various kinds of knowledge, including students' understanding of terminology, facts, principles, methods, and procedures, as well as their ability to apply, interpret, and justify. When carefully designed, multiple-choice items can assess higher-order thinking skills, as shown in Example 1 (below), in which students are required to generalize, analyze, and make inferences about data in a medical patient case.

Multiple-choice items are less ambiguous than short-answer items, thereby providing a more focused assessment of student knowledge. Multiple-choice items are superior to true-false items in several ways: on true-false items, students can receive credit for knowing that a statement is incorrect, without knowing what is correct. Multiple-choice items offer greater reliability than true-false items as the opportunity for guessing is reduced with the larger number of options. Finally, an instructor can diagnose misunderstanding by analyzing the incorrect options chosen by students.

A disadvantage of multiple-choice items is that they require plausible yet incorrect options, which can be difficult for the instructor to create. In addition, multiple-choice questions do not allow instructors to measure students' ability to organize and present ideas. Finally, because it is much easier to create multiple-choice items that test recall and recognition than ones that test higher-order thinking, multiple-choice exams run the risk of not assessing the deep learning that many instructors consider important (Gronlund & Linn, 1990; McMillan, 2001).


Example 1: A Series of Multiple-Choice Items That Assess Higher Order Thinking:

Patient WC was admitted for 3rd-degree burns over 75% of his body. The attending physician asks you to start this patient on antibiotic therapy. Which one of the following is the best reason why WC would need antibiotic prophylaxis?

a. His burn injuries have broken down the innate immunity that prevents microbial invasion.

b. His injuries have inhibited his cellular immunity.

c. His injuries have impaired antibody production.

d. His injuries have induced the bone marrow, thus activating the immune system.

Two days later, WC's labs showed: WBC 18,000 cells/mm3; 75% neutrophils (20% band cells); 15% lymphocytes; 6% monocytes; 2% eosinophils; and 2% basophils. Which one of the following best describes WC's lab results?

a. Leukocytosis with left shift

b. Normal neutrophil count with left shift

c. High eosinophil count in response to allergic reactions

d. High lymphocyte count due to activation of adaptive immunity

(Jeong Park, U-M College of Pharmacy, personal communication, February 4, 2008)


Guidelines for developing multiple-choice items

There are nine primary guidelines for developing multiple-choice items (Gronlund & Linn, 1990; McMillan, 2001). Following these guidelines increases the validity and reliability of multiple-choice items that one might use for quizzes, homework assignments, and/or examinations.

The first four guidelines concern the item "stem," which poses the problem or question to which the choices refer.

1. Write the stem as a clearly described question, problem, or task.

2. Provide the information in the stem and keep the options as short as possible.

3. Include in the stem only the information needed to make the problem clear and specific.

The stem of the question should communicate the nature of the task to the students and present a clear problem or concept. The stem of the question should provide only information that is relevant to the problem or concept, and the options (distractors) should be succinct.

4. Avoid the use of negatives in the stem (use only when you are measuring whether the respondent knows the exception to a rule or can detect errors).

You can word most concepts in positive terms and thus avoid the possibility that students will overlook terms such as "no," "not," or "least" and choose an incorrect option, not because they lack knowledge of the concept but because they have misread the question. Italicizing, capitalizing, bold-facing, or underlining the negative term makes it less likely to be overlooked.

The remaining five guidelines concern the choices from which students select their answer.

5. Have ONLY one correct answer.

Make certain that the item has one correct answer.  Multiple-choice items usually have at least three incorrect options (distractors).

6. Write the correct response with no irrelevant clues.

A common mistake when designing multiple-choice questions is to write the correct option with more elaboration or detail, using more words, or using general rather than technical terminology.

7. Write the distractors to be plausible yet clearly wrong. An important, and sometimes difficult to achieve, aspect of multiple-choice items is ensuring that the incorrect choices (distractors) appear to be possibly correct. Distractors are best created using common errors or misunderstandings about the concept being assessed, and making them homogeneous in content and parallel in form and grammar.

8. Avoid using "all of the above," "none of the above," or other special distractors (use only when an answer can be classified as unequivocally correct or incorrect).

"All of the above" and "none of the above" are often added as answer options to multiple-choice items. This technique requires the student to read all of the options and might increase the difficulty of the items, but too often the use of these phrases is inappropriate. "None of the above" should be restricted to items of factual knowledge with absolute standards of correctness; it is inappropriate for questions in which students are asked to select "the best" answer. "All of the above" is awkward in that many students will choose it if they can identify at least one of the other options as correct and therefore assume all of the choices are correct, thereby obtaining a correct answer based on partial knowledge of the concept/content (Gronlund & Linn, 1990).

9. Use each alternative as the correct answer about the same number of times.

Check to see whether option "a" is correct about the same number of times as option "b" or "c" or "d" across the instrument. It can be surprising to find that one has created an exam in which the choice "a" is correct 90% of the time. Students quickly find such patterns and increase their chances of "correct guessing" by selecting that answer option by default.
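The balance check described above is easy to run mechanically. The sketch below (a hypothetical helper, not part of the original article) tallies how often each option letter serves as the correct answer across an exam's answer key, so an instructor can spot a skewed pattern before students do:

```python
from collections import Counter

def key_balance(answer_key, options="abcd"):
    """Return the fraction of items for which each option letter is the key."""
    counts = Counter(answer_key)
    n = len(answer_key)
    return {opt: counts.get(opt, 0) / n for opt in options}

# Hypothetical 20-item answer key in which "a" is correct far too often.
key = list("aaabacadaaabaaacaada")
for opt, frac in key_balance(key).items():
    print(f"{opt}: {frac:.0%}")
```

For a four-option exam, each letter should come out near 25%; a run like the one above, where "a" dominates, signals that correct answers (or the order of options within items) should be redistributed.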


Checklist for Writing Multiple-Choice Items

* Is the stem stated as clearly, directly, and simply as possible?

* Is the problem self-contained in the stem?

* Is the stem stated positively?

* Is there only one correct answer?

* Are all the alternatives parallel with respect to grammatical structure, length, and complexity?

* Are irrelevant clues avoided?

* Are the options short?

* Are complex options avoided?

* Are options placed in logical order?

* Are the distractors plausible to students who do not know the correct answer?

* Are correct answers spread equally among all the choices?

(McMillan, 2001, p. 150)


References (for full article)

Brown, F. G. (1983). Principles of educational and psychological testing (3rd ed.). New York: Holt, Rinehart and Winston.

Cashin, W. E. (1987). Improving essay tests. Idea Paper, No. 17. Manhattan, KS: Center for Faculty Evaluation and Development, Kansas State University.

Critical thinking rubric. (2008). Dobson, NC: Surry Community College.

Grading systems. (1991, April). For Your Consideration, No. 10. Chapel Hill, NC: Center for Teaching and Learning, University of North Carolina at Chapel Hill.

Gronlund, N. E., & Linn, R. L. (1990). Measurement and evaluation in teaching (6th ed.). New York: Macmillan Publishing Company.

Halpern, D. H., & Hakel, M. D. (2003). Applying the science of learning to the university and beyond. Change, 35(4), 37-41.

Isaac, S., & Michael, W. B. (1990). Handbook in research and evaluation. San Diego, CA: EdITS Publishers.

McKeachie, W. J., & Svinicki, M. D. (2006). Assessing, testing, and evaluating: Grading is not the most important function. In McKeachie's teaching tips: Strategies, research, and theory for college and university teachers (12th ed., pp. 74-86). Boston: Houghton Mifflin Company.

McMillan, J. H. (2001). Classroom assessment: Principles and practice for effective instruction. Boston: Allyn and Bacon.

Seymour, E., & Hewitt, N. M. (1997). Talking about leaving: Why undergraduates leave the sciences. Boulder, CO: Westview Press.

Svinicki, M. D. (1998). Helping students understand grades. College Teaching, 46(3), 101-105.

Svinicki, M. D. (1999a). Evaluating and grading students. In Teachers and students: A sourcebook for UT-Austin faculty (pp. 1-14). Austin, TX: Center for Teaching Effectiveness, University of Texas at Austin.

Svinicki, M. D. (1999b). Some pertinent questions about grading. In Teachers and students: A sourcebook for UT-Austin faculty (pp. 1-2). Austin, TX: Center for Teaching Effectiveness, University of Texas at Austin.

Thorndike, R. M. (1997). Measurement and evaluation in psychology and education. Upper Saddle River, NJ: Prentice-Hall, Inc.

Wiggins, G. P. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass Publishers.

Worthen, B. R., Borg, W. R., & White, K. R. (1993). Measurement and evaluation in the schools. New York: Longman.

Writing and grading essay questions. (1990, September). For Your Consideration, No. 7. Chapel Hill, NC: Center for Teaching and Learning, University of North Carolina at Chapel Hill.