
Unpacking ‘programmatic assessment’


A programmatic model of assessment can provide a more complete understanding of trainees’ knowledge and skills, but key aspects of the approach are often misunderstood.

The role of exams in specialist medical training is changing as colleges embrace more holistic approaches to assessment.

Traditionally, high-stakes examinations have dominated postgraduate medical education. But examinations are like studio portraits in a family photo album: high quality and clear but unable to tell the whole story of family life on their own.

‘Programmatic assessment’, which has been described as a mixture of training, assessment and supporting activities, offers a way to fill in the rest of the album to gain a more complete picture of a trainee’s knowledge and skills.

Under a programmatic approach, all assessment should provide meaningful information that can be read in conjunction with a multitude of high- and low-stakes data points. High-stakes decisions about trainee progression are based on the accumulation of multiple lower-stakes data points, such as work-based assessments, logbooks, supervisor feedback and other relevant tasks, alongside exams.

While medical colleges already gather this type of information on trainee learning, too often it is aggregated into a simplistic outcome such as pass or fail, or reduced to a single number which is then used to make decisions on progression. Rather than just determining whether trainees have passed or failed specific assessments, progression discussions should focus on overall performance against different domains or proficiencies.

For example, performance in a pathology objective structured clinical examination should be read in conjunction with the demonstration of pathology knowledge from written examinations and work-based assessments. A careful review of data will then determine areas of strength and weakness and result in the formulation of plans for future learning and assessment.

This diagnostic feedback is a crucial part of programmatic assessment, which shifts the emphasis from assessment of learning – a traditional test of how much someone has learned – to assessment for learning, where the nature of the assessment tasks promotes learning in itself. For this to occur, feedback to those being assessed is essential.

Good quality feedback is crucial to trainee learning and progression, and hinges on rich information that has been triangulated across all assessment formats. Many of these elements are already in place within postgraduate medical education, which emphasises learning in the workplace. For many specialist medical colleges, achieving a programmatic assessment approach simply requires the connection of these elements, rather than a major overhaul of curriculum and assessment.

To successfully implement a programmatic model of assessment, attention needs to be given to how each assessment is used and weighted in the triangulation of data across formats; how it is used to provide feedback to trainees; how it promotes learning; and ultimately how it contributes to progression decisions.

Find out more:
ACER has been providing reviews and guidance on assessment reform for specialist medical colleges that deliver postgraduate training in Australia and New Zealand. ACER has also worked with the Medical Deans of Australia and New Zealand and members of the UK’s Medical Schools Council – Assessment Alliance to implement common assessments across medical schools. For further information about ACER’s capabilities in medical assessment, contact highereducation@acer.org

Read the full article:
‘When I say … programmatic assessment in postgraduate medical education’ by Jacob Pearce and David Prideaux was published in Medical Education, the journal of the Association for the Study of Medical Education, in August 2019.
