Horses for courses: Choice in adult LLN assessments
Image © Shutterstock/Cheryl Ann Quigley


Research | 10 minute read

Educators working with adult learners need good assessment tools if they are to identify the most appropriate courses for those undertaking training, as Jim Spithill and Philippa McLean explain.


Once upon a time, pre-training Language, Literacy and Numeracy (LLN) assessment involved an informal chat with the prospective learner, who was then placed in a two- or four-hour-per-week class; the choice may have been made on the basis of available childcare, or an evening class for those busy during the day. Assessment has since moved on. Considerable resources are now invested in developing and choosing contextualised pre-training assessment tools: building knowledge of each learner around accepted frameworks that specify levels of performance, having the learner undertake a range of tasks to provide sufficient evidence, and then rating them against a framework or scale. So what are some of the key features of pre-training assessments, and how do you decide on an appropriate LLN assessment tool and mode of delivery?

Types of pre-training assessment

There are two main types of adult LLN pre-training assessment: online computer adaptive assessment and one-to-one, face-to-face assessment.

Online computer adaptive testing (CAT) typically begins by presenting the learner with a small number of items of medium difficulty. The test algorithm reviews each response and adapts the difficulty of the next item accordingly: a correct response to a medium-difficulty item means the learner is then presented with a harder item, and so on. The process results in the learner being presented with items that are a good match for their level of ability and for which they have about a 50 per cent chance of giving the correct response. This means the overall test results give good information about what the learner can and cannot do, which is more informative than a learner getting all the test items correct (because the test is too easy) or incorrect (because it is too hard). Typically a CAT assessment can locate the learner on the relevant scale after they've answered 20 or 30 items, so it is time-efficient, but it requires a very large number of items to be written and trialled, which increases the up-front development costs.
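The adaptive loop described above can be sketched in a few lines of Python. This is an illustrative simulation, not the algorithm behind any particular product: it assumes a Rasch (one-parameter IRT) response model, a hypothetical bank of pre-calibrated item difficulties, and a simple shrinking-step ability update rather than the maximum-likelihood estimation a production CAT would use.

```python
import math
import random

def rasch_p(ability, difficulty):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def run_cat(true_ability, item_bank, n_items=25, seed=0):
    """Minimal CAT loop: present the unused item whose difficulty is
    closest to the current ability estimate, score the (simulated)
    response, then nudge the estimate up or down by a shrinking step."""
    rng = random.Random(seed)
    estimate, step = 0.0, 1.0
    used = set()
    for _ in range(n_items):
        # Choose the closest-matching unused item -- the learner should
        # have roughly a 50 per cent chance of answering it correctly.
        item = min((i for i in range(len(item_bank)) if i not in used),
                   key=lambda i: abs(item_bank[i] - estimate))
        used.add(item)
        correct = rng.random() < rasch_p(true_ability, item_bank[item])
        estimate += step if correct else -step
        step = max(0.1, step * 0.8)  # smaller adjustments as evidence accumulates
    return estimate

bank = [d / 10 for d in range(-30, 31)]  # difficulties from -3.0 to +3.0 logits
print(round(run_cat(true_ability=1.2, item_bank=bank), 2))
```

After 25 items the estimate settles near the simulated learner's true ability, which is why a well-calibrated CAT can stop after 20 or 30 items rather than administering the whole bank.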

Recently developed online assessments like the Core Skills Profile for Adults (CSPA) and the Adult Online Writing Assessment (OWA) provide an efficient, valid and reliable method for assessing the stages of development of adult learners. The CSPA has been developed to align to the five performance levels of the Australian Core Skills Framework (ACSF), addressing reading, numeracy, writing, abstract reasoning and mechanical reasoning. The OWA can be delivered in conjunction with the CSPA to assess the capability of adult learners to cope with study and workplace demands.

One-to-one, face-to-face assessment has proven popular because it allows the assessor and learner to develop some rapport and to discuss strengths, weaknesses and future goals, typically taking about an hour. The assessor and learner are able to choose appropriate tasks from an assessment tool that will provide pertinent information relating to the aim of the assessment. In this type of assessment, core skills like reading and numeracy can be assessed together by choosing relevant, authentic stimuli and questions. This form of assessment also allows the assessor to observe how the learner goes about a task and how well they are able to follow instructions. Support can be provided to ensure the learner is not overwhelmed by the assessment process, for example where reading difficulties interfere with numeracy performance. An initial conversation in which the learner is encouraged to reflect on their LLN skills provides some informal information about the learner's skill levels and assists with choosing appropriate tasks that will elicit more formal evidence of skill level. The assessment concludes with a discussion of future steps to address any gaps and a pathway to meeting the learner's goals.

Both types of pre-training assessments provide a wider choice in selecting what is appropriate for the particular learner and context, by identifying the strengths and knowledge or skill gaps of the learner and providing important information to develop an individual learning plan. Both provide the opportunity to focus LLN development on skills that are specifically needed by learners to build their LLN performance for personal, workplace or further study needs.  This is a great improvement on the scatter-gun approach of simply placing a learner in a general LLN class where all learners undertake the same LLN material whether or not it is relevant to their needs.

A pre-training assessment of reading

To show how pre-training assessments can vary, let’s consider the following stimulus and the types of questions that could be developed.

Slips, trips and falls – Health and Aged Care Services

Each year, thousands of Australians suffer an injury as a result of a slip, trip or fall. In Queensland alone 13,000 workers suffer from this type of injury, costing Queensland businesses more than 256,000 lost work days and over $60 million in workers compensation payments. In addition to these costs, there are financial, physical and emotional costs for the injured worker and their family. A workplace injury can affect a worker’s wellbeing by restricting their usual home and leisure activities. A basic understanding of what causes a slip, trip or fall can help prevent these incidents occurring.

Slips, trips and falls can happen in any workplace. They may occur in a kitchen, cold rooms, loading docks, factories and hospitals. More serious slips or trips, together with the resulting falls, may result in:

  • sprains or strains
  • broken bones when trying to ‘break the fall’
  • a back injury due to the sudden and forceful impact during a fall
  • burns if it occurs near hot surfaces or if the person is handling hot fluids
  • cuts if it occurs near sharp objects.

There are various factors that contribute to the risk of a slip, trip or fall. Slips usually occur when there is a loss of grip between the shoe and the floor. This commonly occurs when there is a contaminant between the shoe and the floor. Trips occur when a person’s foot hits a low obstacle in their path, causing a loss of balance. Often, the obstacle is not easily visible or noticed. Special attention needs to be paid to the following aspects of a workplace:

  • floor surfaces and floor cleaning
  • lighting
  • footwear
  • the layout
  • attitudes to safety.

© Commonwealth of Australia 2013 Developed by Precision Consultancy

An online assessment could ask a range of questions at around level 3 of the ACSF. These might address:

  • the key message of the text
  • the main causes of slips, trips and falls
  • a vocabulary question around the meaning of ‘contaminant’.

These items would be rigorously examined and trialled using Item Response Theory methodology so that ACSF levels can be established that do not rely on the subjective decisions of an individual assessor.
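As a rough illustration of what that calibration involves, under the Rasch (one-parameter) IRT model an item's difficulty can be read off trial data: if the trial cohort's abilities are centred at zero on the logit scale, an item's difficulty is approximately the negative logit of the proportion of trial learners who answered it correctly. The item names and proportions below are invented for illustration, not real CSPA trial data.

```python
import math

def logit_difficulty(prop_correct):
    """Crude Rasch calibration: with trial learners centred at ability 0,
    an item's difficulty is the negative logit of its proportion correct."""
    return -math.log(prop_correct / (1 - prop_correct))

# Illustrative trial results: proportion of the trial cohort answering each
# hypothetical item correctly (not real data).
trial_results = {"key_message": 0.73, "main_causes": 0.50, "contaminant": 0.27}
for item, p in trial_results.items():
    print(item, round(logit_difficulty(p), 2))  # -0.99, 0.0, 0.99
```

Once every trialled item carries a difficulty on this common scale, cut-points along the scale can be mapped to ACSF levels without relying on an individual assessor's judgement.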

One-to-one and face-to-face assessments could use the stimulus to ask a range of questions such as ‘Explain in your own words what you think this text is about’.  Paragraph one lends itself to the co-assessment of reading and some numeracy thus saving time in the assessment.  Co-assessment of core skills is not possible with online testing.
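To make the co-assessment point concrete, here are the kinds of numeracy questions an assessor might improvise from the figures in paragraph one of the stimulus (a hypothetical example, not taken from any published tool):

```python
# Figures from paragraph one of the 'Slips, trips and falls' stimulus.
injured_workers = 13_000       # Queensland workers injured per year
lost_work_days = 256_000       # lost work days across Queensland businesses
compensation = 60_000_000      # workers compensation payments, in dollars

# 'On average, how many work days does each injured worker lose?'
days_per_worker = lost_work_days / injured_workers

# 'Roughly how much compensation is paid per injured worker?'
dollars_per_worker = compensation / injured_workers

print(f"Average lost days per injured worker: {days_per_worker:.1f}")         # 19.7
print(f"Average compensation per injured worker: ${dollars_per_worker:,.0f}")  # $4,615
```

Working through calculations like these while the learner also reads and interprets the text is exactly the kind of co-assessment of reading and numeracy the face-to-face mode allows.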

The two assessment modes can also address different focus areas in the ACSF. Text navigation and comprehension strategies lend themselves to online assessment, while an assessor in a face-to-face interview might ask the learner to ‘Read me the paragraph that outlines the extent of the problem in Queensland’. This would allow the assessor to elicit evidence of decoding and fluency, and of syntax and language patterns. Providing access to different types of assessment enables a wide range of focus areas to be assessed and enables assessors to elicit specific information about a learner’s strengths and knowledge or skill gaps. To guide assessors, it is common practice to provide a rubric or marking guide, or to direct them to the ACSF.

Assessment decisions – is that ACSF level 2 or level 3?

The ACSF provides a comprehensive set of benchmarks to guide an assessor in making decisions about the learner’s level of performance, but this requires some knowledge of the ACSF and an understanding of how levels of support, familiarity with the context, and the interplay of text and task complexity all affect assessment decisions. There is no getting away from the fact that a degree of subjective judgement has to be made, and this is central to one-to-one, face-to-face assessments.

Kaye Bailey, Compliance and Reporting Administrator at Wyndham Community and Education Centre (CEC) in Melbourne’s west, says Wyndham CEC uses online assessments as part of the enrolment process. ‘An online assessment like the CSPA gives us an objective assessment of an applicant’s literacy and numeracy. The results are available immediately, and these can be discussed with the applicant, and their parents, if applicable,’ Bailey says. ‘We also find the results of the CSPA test are helpful in evaluating suitability when students are considering applying for other courses.’

Pros and cons of face-to-face and online assessments at a glance

There are several pros and cons to consider when you’re deciding whether to use one-to-one face-to-face assessments or online computer adaptive assessments.


| Aspect | One-to-one face-to-face assessment | Online computer adaptive assessment |
| --- | --- | --- |
| Validity | trainer/institution can assess skills that they value highly for the course or purpose | carefully and objectively aligned with relevant framework; learner is presented with items that are a good match with their level of ability; non-valid items are discarded via trialling and psychometric review processes |
| Consistency | subject to some variation in interpretations by different assessors, or by the same assessor at different times or with different learners | all items sit on the same scale, so learner location on the scale or its sub-scales is determined consistently |
| Learner experience | allows assessor to treat candidates as individuals, such as in the level of support provided by the assessor; the supportive environment can be a particular advantage for low-level, reluctant learners | consistent format for all users; timing can be flexible; relies on a level of digital literacy skill |
| Evidence | the assessor sees authentic evidence of the learner’s performance | objective standards for all stakeholders |
| Efficiency | delivery time for the learner is quicker as co-assessment of core skills is possible | shorter time and work demands for assessor and institution; handles high volumes of users |
| Breadth of coverage | allows assessment of all core skills; enables learner self-reflection and discussion of ‘where to next’ | particularly suited to certain core skills like reading and numeracy |
| Cost | low materials cost; high per-user cost (delivery, marking, reporting, moderation and validation) | high development cost; high equipment cost (but generally available platforms are used); low per-user cost |
| Assessor role | personal engagement with the learner in a respectful dialogue encourages a more holistic assessment | selects appropriate online tools at an appropriate starting level |

About the authors

Jim Spithill is a Research Fellow in assessment and reporting at the Australian Council for Educational Research (ACER). Philippa McLean is a Director at Escalier McLean Consulting. This article draws on the paper they presented at ACER’s National Adult Language, Literacy and Numeracy Assessment Conference on 1-2 May in Melbourne.
