The Modified Angoff method is the most basic form of criterion-referenced standard setting, perhaps because of its relatively simple process for determining cut-off points. Judges review each test item, and the passing score is computed from estimates of the probability that a borderline candidate would answer each item correctly. It is a very straightforward procedure that requires only simple calculations.
Note that the Modified Angoff method is the cut-score standard endorsed and implemented by the ExamDeveloper software.
After discussing and reaching consensus on the characteristics of a borderline candidate, each judge makes an independent assessment, for each item, of the probability that a borderline candidate will answer it correctly. The judges' assessments for an item are averaged to determine the probability of a correct response for that item. Then the probabilities assigned to the items on the exam form are averaged to obtain the pass point. The benefits of the Angoff method are that it has held up in court, is relatively straightforward, and does not require exam data.
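As a minimal sketch of this two-stage averaging, not tied to any particular software, the following uses a hypothetical panel of three judges rating three items:

```python
# Hypothetical Modified Angoff ratings: each inner list holds one item's
# probability estimates (0-100), one per judge on the panel.
ratings = [
    [70, 80, 75],  # item 1
    [60, 65, 70],  # item 2
    [90, 85, 95],  # item 3
]

# Step 1: average the judges' estimates for each item.
item_means = [sum(item) / len(item) for item in ratings]

# Step 2: average the item means to obtain the pass point.
pass_point = sum(item_means) / len(item_means)

print(item_means)  # [75.0, 65.0, 90.0]
print(pass_point)  # about 76.7
```

In practice the panel is larger and the form longer, but the calculation is exactly this mean of item means.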
Before any judges assign ratings, it is very important to discuss the concept of minimum competence. Because this method bases item difficulty on "borderline" candidates, the committee must identify the minimum level of skill required to be considered competent on the job. Borderline candidates are those who would barely pass the examination.
This characterization of minimally competent candidates is then applied to a rater's judgment. For each item on an exam form, judges must ask themselves a simple question: out of one hundred minimally competent people, how many would answer the item correctly? It is important to be realistic in this regard. Some judges may have an unrealistic view of how many candidates should answer a particular item correctly.
How to Assign Angoff Ratings
- Read the question and answer it using your own knowledge and experience.
- Check your answer and evaluate if your answer is correct or incorrect.
- Think about the logic that you used to answer the question.
- Will the minimally competent candidate employ the same logic?
- Consider if the wording or structure of the question provides clues to candidates who are not knowledgeable.
- Estimate how many people considered minimally competent would get the question correct.
On a typical scale, raters' scores range from 0 to 100. The higher the number, the more candidates they estimate would answer the item correctly. The lower the number, the fewer candidates they expect would get it right, and thus the more difficult the item.
Raters should be encouraged to restrict the scale to the range 25-90. There are several reasons for this. First, if the item uses a four-option multiple-choice format, the most common number of options, a candidate who simply guesses has a 25 percent chance of choosing the correct answer. Second, if more than 90 percent of candidates are expected to answer the item correctly, it has no real use on an exam because it will not differentiate candidates who know the material from those who do not. So, if your raters suggest a number higher than 90 for an item, that item should be replaced with a more meaningful one.
We have also listed the suggested ranges for items with three to six response options:
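The lower bound of each suggested range follows from the guessing floor described above: with n answer options, a blind guesser succeeds 100/n percent of the time, while 90 remains the upper bound in every case. A small sketch of that derivation (the whole-number rounding is an assumption for illustration, not a published rule):

```python
# Suggested Angoff rating range for an n-option multiple-choice item.
# The floor is the chance level a pure guesser would achieve.
def suggested_range(num_options: int) -> tuple[int, int]:
    floor = round(100 / num_options)  # e.g. 4 options -> 25 percent
    return floor, 90                  # 90 is the fixed upper bound

for n in range(3, 7):
    print(n, suggested_range(n))
# 3 -> (33, 90), 4 -> (25, 90), 5 -> (20, 90), 6 -> (17, 90)
```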
Here is a simple but effective visual aid showing how the procedure works. Five judges (columns) each rate ten items on a test form (rows). The far-right column tabulates the mean score for each item. Then, in the bottom-right corner cell, the mean of those item means is computed; in this example it works out to 75.
Drawbacks of the Modified Angoff Method
The disadvantages of the Angoff method involve the panel of subject matter experts using it. If the SMEs do not have a sound familiarity with the statistics involved, error can be introduced. However, this can be true for any method. Further, because the method begins by rating individual items, SMEs may get sidetracked by these individual ratings rather than considering the overall performance of candidates on the exam.
The Traditional Angoff Method
Instead of rating each item on a 0-100 scale, judges review each item to answer the question: "Would a borderline candidate be able to answer this item correctly?" Items the borderline candidate should answer correctly are assigned a 1 (yes), and items they should not be able to answer correctly are assigned a 0 (no). The pass point is then calculated by averaging the scores. Some regard the traditional Angoff as much easier than estimating the proportion correct, as in the Modified Angoff.
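As an illustrative sketch with hypothetical yes/no judgments, the pass point is obtained the same way as in the Modified Angoff, just with binary ratings:

```python
# Hypothetical Yes/No (traditional Angoff) judgments: 1 means a borderline
# candidate should answer the item correctly, 0 means they should not.
# Rows are items, columns are judges.
judgments = [
    [1, 1, 1],  # item 1: all judges say yes
    [0, 1, 0],  # item 2
    [1, 1, 0],  # item 3
    [0, 0, 0],  # item 4: all judges say no
]

# Average every judgment; expressed as a percentage this is the pass point.
total = sum(sum(row) for row in judgments)
count = sum(len(row) for row in judgments)
pass_point = 100 * total / count

print(pass_point)  # 50.0
```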
Here are the characteristics of the traditional Angoff method, also known as the "Yes/No" method: