Assessment is a crucial part of education, regardless of subject or level. We usually think of assessments as tools for measuring how much a student has learned, but with the right post-exam data, they can be much more, even serving as learning tools themselves. But not all assessments are created equal; a poorly written exam or exam item can skew results, giving instructors a false sense of student learning.

Effective exam items accurately demonstrate what students know while supporting fair and equitable testing.

To get the most out of your assessments, write well-constructed exam items with every student in mind, then test each item’s efficacy. Here are a few tips for writing exam items that give you a true picture of how well your students are learning the material and how well your teaching strategies are working.

What Item Types Fit Your Objectives?

There are two general categories of exam items: objective items and subjective items. Objective test items have a clear correct answer; item types include multiple choice, true/false, short answer, and fill-in-the-blank. Subjective items, on the other hand, may have a range of correct answers. Answers to subjective questions often involve a persuasive, defensible argument or weigh several options in depth. Test items like these usually come in the form of long answers, essays, or performance-based evaluations.

According to the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University, “There is no single best type of exam question: the important thing is that the questions reflect your learning objectives.” It is up to the educator to determine whether subjective or objective test items better align with their learning objectives.

If you want students to explain the symbolism in a literary text, subjective items like short answers and essays are usually best. Objective test items are great for checking that students can recall facts or choose the best argument to support a thesis. Want your students to match medical terms to their definitions? A matching task, an objective item type, may be your best bet. No matter the subject, make sure the question types serve the intended learning objectives.

Create Assessment Items with Bloom’s Taxonomy in Mind

As you consider whether to use objective or subjective exam items, it’s important to keep cognitive complexity in mind. Bloom’s Taxonomy can help with this. Bloom’s consists of six levels of cognitive understanding. From the lowest to the highest order, these are:

• Knowledge
• Comprehension
• Application
• Analysis
• Synthesis
• Evaluation

As you move up the ladder from “Knowledge” to “Evaluation,” there is a gradual shift from objective to subjective exam items. If students are new to the concepts you’re teaching, it’s often best to focus on the first three levels with objective items to establish an appropriate knowledge foundation. As students progress through a course or program, you can begin assessing the top three levels of cognition with subjective exam items to gauge higher-order thinking.

You might assess students’ grasp of the “knowledge” level with a multiple-choice question about the date of a significant historical event. Testing “evaluation,” by contrast, might look like a persuasive essay that asks students to argue and support a stance on a topic with no single correct position, such as the interpretation of metaphors in a literary work.

Consider Word Choice and Cultural Bias

As exam creators, we may sometimes write an item that is difficult for students to understand. After writing an item, ask yourself whether the question or statement could be clearer. Are there double negatives? Passive voice constructions? Are you attempting to teach the concept in the question stem itself? Usually, the more concise the item, the better. Avoid absolutes such as “never” and “always.” We’re writing questions, not riddles; best practice is to test students’ knowledge, not how well they read. Focus on what students have learned, and make the point of each question easy to grasp.

Avoid idioms and colloquialisms that may not be clear to international students. Questions built around regional references can introduce bias. Also watch for references that may exclude certain racial groups or members of lower socio-economic groups. For instance, an item that refers to a regional sport may not be as clear to these students as one about a sport with international reach. Of course, there are exceptions, but a good exam item is not meant to find exceptions.

Make Sure Your Exam Items Reliably Assess Concept Mastery

Using psychometrics, specific and widely accepted statistical measures of exam data, you can test the reliability of your exam and its items. One such measure is the item Difficulty Index, or p-value: simply put, what percentage of exam-takers answered a given question correctly? If the p-value is low, the item may be too difficult; if it is high, the item may be too easy. This data point alone, however, is not a strong measure of reliability and should be read in context with other psychometric measures. If a difficult question has a high Discrimination Index and a high Point Biserial value, you can more confidently say that the strongest performers answered it correctly while the weakest did not. A high Point Biserial value also tells you that students who got this item right, difficult as it was, generally performed well on the exam overall. Used together (see the sketch after the list below), these measures give you a solid, holistic picture of item performance and whether your question was well written.

Psychometric analysis measures include:

• Difficulty (p-value)
• Discrimination Index
• Upper and Lower Difficulty Indexes
• Point Biserial Correlation Coefficient
• Kuder-Richardson Formula 20
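To make these measures concrete, here is a minimal sketch of how they might be computed from raw exam responses. It is written in Python under illustrative assumptions: the tiny dataset is made up, and the 27% upper/lower split used for the Discrimination Index is one common convention, not a universal rule.

```python
# Minimal sketch of common item statistics, assuming a hypothetical dataset:
# responses[s][i] is 1 if student s answered item i correctly, 0 otherwise.
import statistics

responses = [
    [1, 1, 0, 1],  # each row: one student's item scores (1 = correct)
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
]
totals = [sum(row) for row in responses]          # total score per student
n_students, n_items = len(responses), len(responses[0])

def difficulty(item):
    """p-value: proportion of students who answered the item correctly."""
    return sum(row[item] for row in responses) / n_students

def discrimination(item, frac=0.27):
    """Difference in p-value between the top- and bottom-scoring groups
    (frac=0.27 is a common, but illustrative, group-size convention)."""
    k = max(1, round(frac * n_students))
    ranked = sorted(range(n_students), key=lambda s: totals[s])
    low, high = ranked[:k], ranked[-k:]
    p_high = sum(responses[s][item] for s in high) / k
    p_low = sum(responses[s][item] for s in low) / k
    return p_high - p_low

def point_biserial(item):
    """Correlation between getting this item right and the total score."""
    p = difficulty(item)
    right = [totals[s] for s in range(n_students) if responses[s][item]]
    wrong = [totals[s] for s in range(n_students) if not responses[s][item]]
    sd = statistics.pstdev(totals)
    return (statistics.mean(right) - statistics.mean(wrong)) / sd * (p * (1 - p)) ** 0.5

def kr20():
    """Kuder-Richardson Formula 20: internal-consistency reliability."""
    var_total = statistics.pvariance(totals)
    sum_pq = sum(difficulty(i) * (1 - difficulty(i)) for i in range(n_items))
    return (n_items / (n_items - 1)) * (1 - sum_pq / var_total)

for i in range(n_items):
    print(f"item {i}: p={difficulty(i):.2f}, "
          f"D={discrimination(i):+.2f}, r_pb={point_biserial(i):+.2f}")
print(f"KR-20 reliability: {kr20():.2f}")
```

As a rough rule of thumb, a moderate p-value paired with clearly positive Discrimination and Point Biserial values suggests a healthy item, while values near zero or negative flag questions worth a second look.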

Create Effective Exam Items with Digital Assessment

The above strategies for writing and optimizing exam items are by no means exhaustive, but considering them as you create your exams will improve your questions immensely. By delivering assessments with a data-driven, digital exam platform, instructors, exam creators, and programs can use the results of carefully created exams to improve learning outcomes, teaching strategies, retention rates, and more.

Posted Nov 29, 2022 in Exam Science
