Hello! I am creating e-learning modules for a fire-safety audit and training organization. The learners are safety inspectors. Which question types are suitable for Level 1 assessment and which ones for Level 2? The learners will be rated on a scale of 1-5. After clearing Level 2, they are eligible for a certificate. Looking forward to your valuable inputs!
Hey Violet. Jumping into this a bit late. Hope your query is resolved by now. Scenario-based question types work well for Level 2. As it’s a fire-safety audit firm, real-world scenario-based assessments should be used to gauge learners’ understanding. Hope this helps!
Thanks for the detailed explanation, Paul!
Level 1 question types are mainly meant to check understanding and recall of concepts. MCQs work well here.
For Level 2, questions based on real-world situations and case studies help assess the application of the concepts learned. This is the approach I have been following.
I remember one of my clients requested all his knowledge checks as MCQs, and the learners were found to be clicking answers randomly without even reading them. So it’s important to have a proper mix of question types, with concepts assessed in multiple question formats.
As for your query:
Lower-order thinking skills can be assessed using multiple-choice questions or matching questions.
Scenario-based questions and gamification-type assessments can be used for higher-order thinking skills.
Hope this helps!
The poster refers to the Kirkpatrick training evaluation model.
Level 1 is the reaction to the course, where you ask learners what they thought of the training, what they liked, what they didn’t like, etc. Sometimes referred to in classroom training as a smile sheet, the feedback from this level is captured to improve the delivery of training. It can capture things like the environment, the refreshments served, and so on. Not much stock is put into this level, as you can imagine. I usually check whether Level 1 evaluations are available from the LMS, and I seldom build them into training.
Level 2 evaluation checks for immediate retention of what was learned in the lesson. Simply put, this is your quiz at the end of a learning module. The course is considered relatively successful if the learner achieves a passing grade. Any question slide in Adobe Captivate is suitable for this type of evaluation except the rating scale (Likert). On a personal note, Level 2 evaluations are the only Kirkpatrick evaluations I include in eLearning. Everything else happens outside of Adobe Captivate (for me).
Level 3 evaluations are not typically done in an eLearning module and can be performed manually or automatically by an LMS. They check whether the learner performs the procedures learned once they return to the job site. For example, you might observe your students 30 days after training to see whether the training has influenced their on-the-job behaviour. Adobe Learning Manager LMS can send a Level 3 evaluation to students’ managers 30 days after training to ask whether the students’ behaviours have changed.
Level 4 evaluations measure results for the organization. These could be financial, or they could be safety records and so on. Level 4, especially when the results are financial, is difficult to prove because success can be attributed to many factors, not just training.
Sorry, but I don’t know the meaning of ‘Level 1’ and ‘Level 2’, although I have a lot of experience with assessments. I never consider MCQs or the other question types suitable for a full assessment on their own; they can be part of the assessment.
I would opt for scenario-based assessments if everything needs to be online and blended assessments are impossible. Other possibilities: questions in hotspots on 360° video, or short video clips followed by questions.
You can also have Knowledge Check slides as overlays in an interactive video, but those are not scored by default.