Every training effort follows the same basic framework:

  1. Demonstrate the skill (via description, photos, videos, or other media)
  2. Have the learner complete sufficient guided practice (on a physical, virtual, or hybrid simulator) to acquire the skill
  3. Observe the guided practice and give targeted, coaching-style feedback with meaningful metrics, both to drive improvement and to short-circuit the practice of incorrect skills
  4. Use the self-assessment to define a clear “threshold” beyond which the learner may graduate to their first clinical cases. This should include defining a process for transitioning to clinical practice (e.g., simply begin and continue the practice clinically, start with certain types of cases, start under proctored observation, or start with the same assessment metrics applied to the clinical environment). One way such a threshold might be encoded is sketched below.
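
As an illustration of step 4, here is a minimal sketch of how a graduation threshold could be encoded. The metric names and cutoff values are hypothetical; in practice they would be chosen per procedure and validated against clinical outcomes.

```python
from dataclasses import dataclass

# Hypothetical per-metric requirement: a learner must meet every cutoff
# on several consecutive practice sessions before graduating.
@dataclass
class MetricCutoff:
    name: str
    minimum: float  # lowest acceptable score, on the metric's own scale

# Example cutoffs -- illustrative values only, not validated standards.
CUTOFFS = [
    MetricCutoff("economy_of_motion", 0.75),
    MetricCutoff("task_completion", 0.90),
    MetricCutoff("error_rate_inverted", 0.80),  # 1 - normalized error count
]

def session_passes(scores: dict[str, float], cutoffs=CUTOFFS) -> bool:
    """A session passes only if every required metric meets its cutoff."""
    return all(scores.get(c.name, 0.0) >= c.minimum for c in cutoffs)

def ready_to_graduate(history: list[dict[str, float]], streak: int = 3) -> bool:
    """Require `streak` consecutive passing sessions rather than one lucky run."""
    run = 0
    for scores in history:
        run = run + 1 if session_passes(scores) else 0
        if run >= streak:
            return True
    return False
```

Demanding several consecutive passing sessions, rather than a single pass, is one common way to distinguish stable proficiency from chance.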

Types of self-assessment include:

  • Pure knowledge-based assessment. Self-administered exam questions that students can use to self-quiz and study. Doing this well hinges on the intricacies of question design: questions should promote understanding that transfers to strong clinical decision-making in the environments relevant to the learner, rather than rote memorization of one set of answers that is only true under one set of circumstances. (A sketch of one such question structure follows this list.)
  • Virtual reality assessment and feedback. The student performs a task in a virtual world, and because every input the student makes is captured as data, that data can be analyzed and feedback given to improve performance on the virtual task. This is a minefield for feedback: if the virtual task does not translate well into real-world clinical performance, there is a serious danger of teaching “anti-skills” (skills that must be unlearned when the learner transitions to clinical application). (A motion-metric sketch follows this list.)
  • Hybrid assessment. The learner practices on a physical model that produces a data stream. This could be video that is analyzed by software to give guidance, or sensors embedded in the simulator itself whose output is processed into feedback. In addition to the challenge of giving good, actionable feedback, capturing and analyzing the data can itself be difficult. (Many of the toolboxes we have created target this mode of assessment; a sensor-processing sketch follows this list.)
  • Pure physical assessment. The learner practices on a physical model, and all of the assessment/feedback is produced by observable, characterizable physical changes in the model itself. This puts high stress on the fidelity and repeatability of the models, because there are very few opportunities to “close the loop” and verify that the feedback is consistent from learner to learner. (A repeatability check is sketched below.)
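
For the knowledge-based mode, a minimal sketch of the question-design idea: rather than a fixed answer key, each item is parameterized by clinical context so the correct choice depends on the circumstances. All item content here is invented purely for illustration.

```python
import random

# Hypothetical parameterized question: the correct answer depends on the
# clinical context, so rote memorization of a single answer key fails.
CONTEXTS = [
    {"setting": "well-resourced hospital", "best": "immediate CT imaging"},
    {"setting": "rural clinic without imaging", "best": "stabilize and transfer"},
]

def build_question(rng: random.Random) -> dict:
    ctx = rng.choice(CONTEXTS)
    options = ["immediate CT imaging", "stabilize and transfer",
               "observe for 24 hours"]
    rng.shuffle(options)
    return {
        "prompt": f"A patient presents with a suspected head injury at a "
                  f"{ctx['setting']}. What is the most appropriate next step?",
        "options": options,
        "answer": ctx["best"],  # varies with the scenario, not fixed
    }

q = build_question(random.Random(0))
print(q["prompt"])
```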
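
For the virtual reality mode, the raw input stream is typically a time series of tool poses. A minimal sketch, assuming tool-tip positions sampled at a fixed rate, of two common motion metrics (path length and economy of motion); the ideal-path value is a hypothetical per-task calibration input.

```python
import math

def path_length(positions: list[tuple[float, float, float]]) -> float:
    """Total distance traveled by the tool tip, in the input's units."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def economy_of_motion(positions, ideal_length: float) -> float:
    """Ratio of ideal path length to actual path length (1.0 = perfect).
    `ideal_length` is a hypothetical per-task calibration value."""
    actual = path_length(positions)
    return min(1.0, ideal_length / actual) if actual > 0 else 0.0

# Example: a slightly wandering approach to a target 10 cm away.
track = [(0, 0, 0), (3, 1, 0), (6, -1, 0), (10, 0, 0)]
print(f"path length: {path_length(track):.2f} cm")
print(f"economy of motion: {economy_of_motion(track, ideal_length=10.0):.2f}")
```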
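
For the hybrid mode, a toy sketch of processing an embedded-sensor stream: a moving-average filter over force readings followed by a threshold alert. The sensor, the sampling assumptions, and the 5 N limit are all hypothetical.

```python
from collections import deque

def smoothed(stream, window: int = 5):
    """Moving-average filter to suppress sensor noise before thresholding."""
    buf = deque(maxlen=window)
    for sample in stream:
        buf.append(sample)
        yield sum(buf) / len(buf)

def force_feedback(force_stream, limit_newtons: float = 5.0):
    """Yield (force, message) pairs; flag excessive applied force."""
    for force in smoothed(force_stream):
        if force > limit_newtons:
            yield force, "Reduce pressure: excessive force"
        else:
            yield force, "ok"

# Simulated readings from a hypothetical force sensor in the model (newtons).
readings = [1.2, 1.5, 2.0, 4.8, 6.1, 6.4, 5.9, 3.0, 1.1]
for f, msg in force_feedback(readings):
    print(f"{f:4.1f} N  {msg}")
```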
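
For the pure physical mode, the closest substitute for closing the loop is periodic quality control on the models themselves. A sketch, assuming each batch of models is exercised with a standardized reference test and the resulting measurement recorded; the 10% tolerance is a hypothetical value.

```python
import statistics

def coefficient_of_variation(measurements: list[float]) -> float:
    """Relative spread (stdev / mean) of a reference-test measurement
    taken across multiple copies of the same physical model."""
    return statistics.stdev(measurements) / statistics.mean(measurements)

def batch_is_repeatable(measurements, tolerance: float = 0.10) -> bool:
    """Accept the batch only if model-to-model variation stays under a
    hypothetical 10% tolerance, so feedback remains comparable across
    learners using different copies of the model."""
    return coefficient_of_variation(measurements) <= tolerance

# Example: puncture force (N) measured on five copies from one batch.
batch = [12.1, 11.8, 12.5, 12.0, 11.6]
print(batch_is_repeatable(batch))  # True -> consistent enough to deploy
```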