Architects model best practice in assessment

Research | 3-minute read

A high-stakes registration exam for architects will be more transparent, consistent and fair, following a review led by ACER.

ACER has completed an end-to-end review of all processes associated with the design, compilation, delivery and reporting of the Architects Accreditation Council of Australia (AACA) National Examination Paper.

Held twice each year, the exam is the second component of a three-part process called the Architectural Practice Examination (APE). The APE assesses candidates against the National Standard of Competency for Architects, and is used by architect registration boards around Australia to determine eligibility for registration as an architect.

AACA CEO Kate Doyle said the review was instigated by the architect registration boards, who were ‘conscious of the need to assure themselves, the profession and the general public that the primary assessment prior to registration as an architect meets best practice.’

The review involved benchmarking AACA’s current exam processes against ACER’s own test development process, which has proven effective for high-stakes testing in school, tertiary education and professional contexts.

In conducting the review, ACER researchers were able to draw on their experience working with a number of specialist colleges in medicine to evaluate the quality of assessment and data collection processes, and develop fit-for-purpose assessment practices.

Ultimately, the results of high-stakes assessments must be transparent, consistent, fair and defensible. Important factors in the design and delivery of assessments used for professional accreditation are to:

  • Define the knowledge and skills expected of a minimally competent candidate.
  • Ensure that the pass mark for the exam is set at a threshold that identifies a minimally competent candidate.
  • Monitor the difficulty of exams over time and adjust pass marks accordingly to ensure consistency.
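The third point can be illustrated with a small sketch. Assuming a simple mean-equating approach (an illustration only; the article does not specify which equating method is used), a pass mark is shifted by the difference in average difficulty between a new exam form and a reference form:

```python
def adjusted_pass_mark(base_pass_mark, base_mean, new_mean):
    """Shift a cut score by the change in mean performance between
    exam forms, so a harder form (lower mean) gets a lower pass mark.

    Illustrative mean-equating sketch only; all values are hypothetical.
    """
    return base_pass_mark + (new_mean - base_mean)

# If the reference form averaged 70 and the new, harder form averages 65,
# a pass mark of 60 would be lowered to 55.
print(adjusted_pass_mark(60, 70, 65))
```

In practice, operational equating uses more sophisticated psychometric models, but the principle is the same: the standard stays fixed while the numeric cut score moves with exam difficulty.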

ACER’s psychometric analysis of AACA’s 2016 exam papers indicated that the number of questions in each exam was small in comparison with similar assessments, and that the practice of deducting one score point for an incorrect answer invited undesired test-taking behaviour in candidates, such as omitting questions they were unsure of.
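The effect of such a penalty can be shown with a little expected-value arithmetic (an illustrative sketch; the mark values and probabilities here are assumptions, not AACA’s actual exam parameters):

```python
def expected_mark(p_correct, penalty):
    """Expected mark on one item for a candidate who attempts it,
    given probability p_correct of answering correctly, +1 for a
    correct answer and -penalty for an incorrect one.

    Hypothetical illustration of negative marking, not AACA's scheme.
    """
    return p_correct * 1 + (1 - p_correct) * (-penalty)

# A candidate with partial knowledge (say a 40% chance of being right):
print(expected_mark(0.4, 1))  # approx -0.2: omitting (scoring 0) beats answering
print(expected_mark(0.4, 0))  # 0.4: with no penalty, attempting is always worthwhile
```

Under a one-point penalty, partially knowledgeable candidates are rewarded for leaving items blank, so scores reflect risk appetite as well as competence; removing the penalty makes attempting every item the rational strategy.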

Recommendations for the 2018 exam as a result of ACER’s review included:

  • Increase the number of exam questions in order to assess more domains of the National Standard of Competency for Architects.
  • Revise the scoring model so that incorrect answers are no longer penalised.
  • Provide comprehensive feedback to all candidates showing how they performed on each question and in relation to the exam cohort.
  • Deliver the test online rather than on paper in order to provide timely feedback and increase test security.

ACER then assisted AACA in developing the revised exam before trialling it in late 2017.

Ms Doyle said that ACER’s work had led to improved consistency and transparency, and an exam in line with best practice assessment of professional knowledge in other vocations.

Find out more:
To learn more about ACER's assessment review and improvement services contact Alistair Macleod, Manager of Vocational, Adult and Workplace Education Assessment Services. For further information about ACER's psychometric research and data analysis capabilities contact Clare Ozolins, Research Fellow, Assessment and Psychometric Research.
