The CSPA writing assessment contains two short writing tasks. The combined results from the two tasks provide a more reliable picture of candidates' writing.
Candidates complete their responses within an allocated time limit, typing their essays directly into the web-based system. The scripts are then marked using the latest Intellimetric® essay scoring technology.
The assessment uses:
- Advanced analysis techniques that provide instant diagnostic feedback on candidates' writing skills
- Reports that pinpoint writing strengths and areas of weakness
- A common reporting scale for both writing tasks
Assessment criteria align directly to the Australian Core Skills Framework’s (ACSF) two writing indicators and their focus areas:
- Audience, purpose and meaning-making strategies
- Vocabulary, grammatical structure, and the conventions of writing
Performance features for the writing assessment include:
- Purpose and audience
- Quality of ideas
- Text cohesion
- Language choices
- Sentence structure
The following individual student report is available for CSPA Writing:
CSPA Student Writing Report
More about the assessment scale and scoring method
The CSPA online writing assessment scale describes the development of writing by adult writers in a vocational setting, in response to the two writing tasks in this assessment. The scale provides a common metric for both tasks, and allows performance and growth to be described and monitored.
Scale scores are estimates of candidate writing ability, as measured by this assessment. Scores are located on the measurement scale and can be validly compared across candidates. Two features of the CSPA writing scale's development make this comparison possible: common marking criteria (the marking guide) and a common group of candidates completing both writing tasks. In all, approximately 300 scripts were used to develop the scale and to 'train' the online marking system.
During the development phase of the assessment, each category of the marking guide was illustrated and described by work samples and commentaries, exemplifying the qualities of writing expected for each category. Because the categories are expressed as a scale score and mapped to the measurement scale, writing quality at points along the scale can be described. It is these descriptions that are mapped to the ACSF and in this way, level boundaries can be drawn on the measurement scale and candidate performance expressed in terms of the ACSF levels.
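The mapping described above can be illustrated as a simple banding lookup: each ACSF level occupies a region of the measurement scale, and a candidate's scale score is classified by the band it falls into. The sketch below is a minimal, hypothetical illustration only; the boundary values and the `acsf_level` function are invented for demonstration and are not the actual CSPA or ACSF cut scores.

```python
# Hypothetical ACSF level boundaries on the measurement scale.
# These numbers are invented for illustration; the real cut scores
# are defined during the assessment's development phase.
ACSF_LEVEL_BOUNDARIES = [
    (1, 0),    # scale scores of 0 and above fall into ACSF level 1
    (2, 300),  # scores of 300 and above fall into level 2
    (3, 450),  # scores of 450 and above fall into level 3
    (4, 600),  # scores of 600 and above fall into level 4
]

def acsf_level(scale_score: float) -> int:
    """Return the ACSF level whose band contains the given scale score."""
    level = ACSF_LEVEL_BOUNDARIES[0][0]
    for lvl, lower_bound in ACSF_LEVEL_BOUNDARIES:
        if scale_score >= lower_bound:
            level = lvl
    return level

# With these illustrative boundaries, a scale score of 475 would be
# reported as ACSF level 3, since it falls between 450 and 600.
print(acsf_level(475))
```

Because the level boundaries are drawn on the same measurement scale as the scale scores, reporting a candidate's performance in ACSF terms reduces to exactly this kind of band lookup.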
The reliability of automated essay scoring
Extensive research has been undertaken into the computer-based marking of student writing. Computer-based marking has been shown to be as consistent as, if not more consistent than, traditional hand scoring. View the article 'An Overview of Automated Scoring of Essays' (external link), recently published in the Journal of Technology, Learning and Assessment.