Finding the right AI for the right job – it’s still about the evidence
ACER news, 9 Jan 2024, 6 minute read
Australian schools have a new national framework for using artificial intelligence (AI). With the AI ‘goldrush’ in full swing, we look at what educators need to know before adopting a new program to support teaching.
When eWrite – an online student assessment – was first offered more than a decade ago, explorations of how artificial intelligence (AI) could advance education involved swarm intelligence and clarifying misunderstandings.
Since 2012, AI development has snowballed, but Jarrod Hingston, Director of School and Early Childhood Education Services at the Australian Council for Educational Research, is keenly aware that its use in education is still controversial.
Dr Hingston oversees ACER’s eWrite program, developed to provide high-quality feedback on student writing in a timely way that supports skill progression.
Writing assessment usually relies on a cycle of evaluating, teaching for improvement, and then assessing whether a student has improved before they can move on.
The eWrite program draws on the automated IntelliMetric system to score the performance of 9–15-year-olds in narrative, descriptive, reporting and persuasive writing, providing instant feedback.
Acknowledging concerns that AI devalues human interaction, Dr Hingston says the algorithms used in eWrite are based on the responses of thousands of human assessors.
‘The efficacy of the AI application is paramount in us using it,’ he says, ‘and the best feedback I could point to is that we’re now seeing 170,000 or more online sittings annually.’
Dr Hingston believes there’s a growing confidence in, and acceptance of, automated essay scoring, as well as awareness of its benefits and limitations.
He believes that understanding what training an AI application has received to perform a specific task is, generally, a critical first step in gaining confidence in its use.
‘Educators are quickly learning about the pitfalls of using general AI applications to create assessments or evaluate students’ responses,’ Dr Hingston says.
‘While AI can be used for these purposes, the effectiveness can only ever be as good as the training that the AI algorithms have received.’
It's never been more important to be in the know
At the start of this term, Australian schools will be expected to adhere to the Australian Framework for Generative Artificial Intelligence in Schools.
Developed by a taskforce of jurisdictions and agencies, its introduction describes it as an ‘aspirational’ guide to understanding, using and responding to AI in ways that benefit students, schools and society.
While it will help school leaders position their thinking across 6 areas – teaching and learning; wellbeing; transparency; fairness; accountability; and privacy and safety – direction on what constitutes a quality AI program exists only within these parameters.
With a survey of Australian teachers showing that 38% feel nervous about AI in the classroom, and the proliferation of bots and AI use by tertiary students leading to this era being described as the AI goldrush, the need for detailed information has never been greater.
UNESCO’s Guidance for generative AI in education and research provides another reference for educators, with a look at ‘potential but unproven uses’ of AI – including the co-design of curriculum or courses, chatbot teaching assistance, facilitating project-based learning and supporting learners with special needs.
It covers appropriate domains of knowledge, requirements for users, required human methods and example prompts. It also recommends human verification and supervision in a number of areas to ‘check appropriateness’ and stay ‘alert to the risk of misinformation’.
A case of user beware
Dr Hingston warns: ‘There’s a risk in using AI without knowing how it has been developed and where the AI is drawing its assertions from.
‘Often AI programs will claim to provide assessment materials or evaluate responses; however they don’t necessarily provide evidence about the validity of materials or judgements.
‘In fact, educators might actually be misled by incorrect assertions produced by the AI, thereby having a negative impact on teaching and learning.’
Dr Hingston advises educators to seek information on how a program has been created. When developing eWrite, for example, ACER worked with an expert partner, Vantage Labs, and ensured that substantial human experience was used in training the AI application.
‘The dedicated, automated essay-scoring application for eWrite is based on thousands of high-quality examples of scoring by human examiners of the exact writing task that is being assessed,’ Dr Hingston says.
‘We therefore know what is being evaluated and how. It also provides a clear framework for ongoing quality and validation checks.’
It’s also important to know what happens when something comes out of left field, Dr Hingston says. In the case of eWrite, a written exercise that appears ‘off-topic’ will be returned as ‘unscored’ and then assessed by a human marker, giving teachers an opportunity to re-examine the work or talk to the student about the response.
Where to now?
As the range of uses for AI in education grows, so does the number of case studies available to those looking to go deeper.
For example, this podcast looks at how and why a Victorian school introduced an AI learning coach, while in Italy, the AI4Citizen pilot supported a program addressing gaps between education and employment.
Dr Hingston notes that while AI developments may seem hard to keep up with, principles in education, such as establishing an evidence base, are more enduring and can anchor decisions about its use.
‘The way we learn is complex and that’s why ACER exists – to continually assess the value of new approaches and gather evidence to see where change can transform learning for the better.
‘In terms of where AI fits into education, I’d say it’s just another tool, but one that’s likely to stay on the radar for some time.’