FAQs


General

It is designed specifically for international school populations

This means that it is a test that is not targeted to any one national or cultural group: the test material is eclectic, drawing on many cultural and national sources; it has been designed with the knowledge that the students who will sit the test come from many cultural, social and linguistic groups, and that the curricula they have been exposed to are diverse. A related feature is that the language in the assessments is chosen with an international population in mind, in the knowledge that at least half of the test-takers have first languages other than English. Although the reading and writing tests inevitably depend on language proficiency (inseparable from those domains), the mathematical literacy and scientific literacy tests attempt to use language that will be accessible to students from non-English-speaking backgrounds as well as those from English-speaking backgrounds.

It is based on an internationally endorsed concept and assessment frameworks

The ISA is based on the construct and frameworks of the Organisation for Economic Co-operation and Development's Programme for International Student Assessment (OECD PISA). PISA was developed as a measure of 15-year-old student performance in reading, mathematical literacy and scientific literacy. The PISA test concepts were created by teams of international experts in these fields. The ISA has taken those concepts and applied them to parallel and younger age groups. Some PISA tasks have been used in the ISA so that the ISA can be linked statistically to the PISA results. Thus ISA results are comparable with results from the PISA study, and we are able to provide schools with data that is comparable to the PISA country data. However, it is not correct to describe ISA results as 'PISA scores'. Like PISA, the ISA assesses more complex, higher-order thinking skills because it includes open questions in mathematical literacy, reading and scientific literacy that require students to construct a response: for example, to explain their reasoning, to find evidence or to justify their opinion.

It measures growth over time for both individuals and school populations

Many standardised tests provide normative information and may also offer criterion-based reports. Because of the methodology used to analyse the assessment data for the ISA, we are also able to construct scales that are stable over time, and across year levels. This means that the results reported for students in grade 3 are comparable with the results reported for the same students when they reach grade 5 and thus the learning growth of the cohort can be monitored. Similarly, one year's grade 3 results are comparable with next year's grade 3 results, so that a school can monitor program development at a particular grade level and across the school.

We have conducted an audit of the PYP and MYP and in general our finding was that many educational principles are common to the IB programs and the ISA, notably the emphases on empowering and encouraging students to become life-long learners, on reflective thinking and on responsibility towards oneself, the community and the wider world. More narrowly, the basic principles of assessment enunciated by the primary and middle years IB programs and by the ISA are also in accord, with priority given to meaningful reporting to parents, the provision of useful information about the needs of individual students, and the provision of feedback to improve teaching and learning.

There is no formal ISA curriculum, but all ISA questions are written by former or current teachers with an understanding of and a ‘feel’ for grade-appropriateness. When thinking about appropriate content, they refer to a number of sources. In mathematical literacy, for example, these include the following key documents: the IB PYP and MYP programmes, the TIMSS Assessment Framework and the NCTM Curriculum Focal Points. The ISA is not a tightly curriculum-based test. The two writing tests are the same for all the primary grades. The reading skills assessed are generic skills. The mathematics content for each grade level should be well known: the ISA is testing students' ability to apply their understanding of mathematics rather than their knowledge of recently taught curriculum. The tasks are designed to draw on knowledge, skills and understandings that students at the target grade/year level would typically have been exposed to. Maturity and the ability to synthesise knowledge and understanding allow students to perform well on the ISA.

The ISA testing cohort is based on the student's grade, rather than on the student's age. In any survey across countries, comparing student cohorts is a complex issue. Some studies choose an age-based criterion and others a grade-based criterion, and still others a 'number of years at school' criterion. There are advantages and disadvantages of using each method. For the ISA we have chosen a grade-based criterion. We are using UNESCO's International Standard Classification of Education (ISCED-97) classification of grades. What we call grade 3 / third grade / Year 3 is the third year of ISCED level 1; what we call grade 7 / seventh grade / Year 7 is the seventh year, counting from the beginning of ISCED level 1. Please note that the grade levels are not based on Australian year levels or the Australian school year: indeed, the great majority of schools participating in the ISA follow the northern hemisphere academic year.

We ask schools to submit the 'named grade' for the assessments: that is, children in the year or grade called 'three' should do the grade 3 test, regardless of their age; children in the year or grade called 'five' should do the grade 5 test. The exception is children in British-style schools. The AGE of children in British-style schools is approximately the same, on average, as that of children in the grade 'below' them. In light of this, we recommend that children in British-style schools be administered the assessment of the grade name BELOW their Year level: that is, Year 4 children should sit the grade 3 tests, Year 5 children the grade 4 tests, and so on.

Our analyses show that the age of students is correlated with performance at the youngest grade level, but that age decreases in relevance as students become older. What matters more is the number of years at school. By grade 7, differences in student age make a negligible difference to performance.

In the paper ISA, we assume that students will take all four parts (mathematical literacy, reading and two writing tasks), and we charge on that basis: a school may elect to do one, two, three or four parts of the test, but the charge is the same regardless. In the online ISA, students may take science separately, mathematical literacy and reading only, or the two writing tasks only.

Schools have the option of administering the ISA in September OR February OR May. Because the test material is identical on each occasion, our policy is not to allow testing at the same school on more than one occasion: this protects test security and preserves the ability to compare results over time.

For schools on a northern hemisphere academic calendar (around 95 per cent of ISA schools), if your school's students are continuing into the following year, then the results from the May assessment can be used to inform a student's new teacher about his or her stage of development at the beginning of the new academic year. The September administration provides teachers with feedback about their students within the first half of the same academic year (it is thus strongly formative in its intentions); the February administration does so in the second half of the same academic year.

The paper and online ISA tests are essentially the same. They include the same writing tasks and many of the same multiple-choice and open questions in reading, mathematical literacy and scientific literacy. The online tests contain a small number of questions rendered differently to take advantage of the interactivity made possible by online delivery.

The web-based test delivery interface has an in-built monitoring system, whereby students are alerted if they leave the test interface or attempt certain actions (such as copying and pasting from outside the interface into the answer space). Students should not open other tabs or browsers during the assessment, and teachers should supervise students closely to prevent this. It is, however, worth noting that the ISA assesses understanding rather than any kind of rote learning or recall. The assessment is designed so that any information needed to answer a question is given in the material: what we are assessing is students' ability to think critically about, and display their understanding of, the material they are given. Because this is the case, an external web page is likely to be of no help to students, even if they manage to access one.

  • Minimum screen resolution of 1024x768
  • DSL or cable internet connection minimum 56 kbps
  • Compatible computers and devices
    • PC / laptop
    • Apple Mac
    • iPad (external keyboard recommended)
    • Other tablet devices (external keyboard recommended)
  • Compatible browsers
    • Google Chrome
    • Mozilla Firefox
    • Safari
    • Internet Explorer Version 11 and above

To see if your computer meets the minimum technical requirements, take the browser exam here: it will run a diagnostic for you, confirm your download speed and recommend how many students you can test at a given time with your technical set-up.


Reporting

All reports use quantitative data; some (the individual student reports) include qualitative data as well. View sample reports here. The ISA Interactive Diagnostic Report gives more detailed analyses in quantitative and descriptive form.

The kinds of quantitative comparisons we provide may be different to those you have seen on other standardised tests: we report in terms of a 'scale score' (a standardised score) that is applicable across grade levels and from one year to the next. We provide comparisons of the individual student, the individual class and the individual school with the ISA reference norm (more than 64 000 participating ISA students in 78 countries), with other 'like' schools (based on the proportion of students of English-speaking background in the (aggregated) tested group in each school), and with boys, girls and English- and non-English-speaking subgroups.

The advantage of using scale scores rather than raw scores or percentage reporting is that the scale makes it possible to compare results from different tests, as long as they are measuring the same variable (that is, the same collection of skills, knowledge and understanding). Using scale scores, we can compare the performance of students in different grades in the same year, of a particular grade from one year to the next, or of the same students from one year to the next.
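Because all ISA results sit on the one scale, each of the comparisons just described reduces to a simple difference of mean scale scores. The following is a minimal sketch using entirely hypothetical values (none of these numbers are real ISA results):

```python
# Hypothetical scale-score comparisons; every value below is illustrative only.

grade3_mean_this_year = 420.0   # this year's Grade 3 cohort mean (hypothetical)
grade3_mean_next_year = 432.0   # next year's Grade 3 cohort mean (hypothetical)
grade5_mean_two_years = 495.0   # the same students' mean at Grade 5, two years on (hypothetical)

# Monitoring program development at one grade level, year on year:
grade3_change = grade3_mean_next_year - grade3_mean_this_year

# Monitoring the learning growth of one cohort as it moves up the grades:
cohort_growth = grade5_mean_two_years - grade3_mean_this_year

print(f"Grade 3 year-on-year change: {grade3_change:+.1f} scale score points")
print(f"Cohort growth, Grade 3 to Grade 5: {cohort_growth:+.1f} scale score points")
```

Raw scores or percentages from two different tests could not be subtracted like this, because the tests differ in difficulty; the common scale is what makes the differences meaningful.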

The ISA Reference Norm is a large cohort of more than 64 000 ISA participants in 78 countries. Each school’s aggregated results are compared on the school report with the results of all schools in the reference norm (and with sub-groups from the same large group). In addition, our methodology allows us to provide standardised scale scores for individuals, classes and schools, which are stable over time and across year levels.

In the ISA reporting suite we also provide the National Comparisons Report to schools with Grade 8, 9 or 10 participants. Here, the reference group is a random sample of 4500 15-year-old students from each of the OECD Programme for International Student Assessment (PISA)-participating countries. The samples in each country are very carefully drawn to represent the entire school population of 15-year-olds in that country, in all school sectors (except international schools). Grade 8, 9 and 10 cohorts from schools taking the ISA are compared in their performance in reading, mathematical literacy and scientific literacy with these large national cohorts. PISA is administered every three years. The National Comparisons Report is currently based on the PISA 2012 administration. Sixty-five countries participated in the 2012 PISA data collection, with a total of around 510 000 students, from countries that together make up close to 90 per cent of the world economy.

Unfortunately (from the ISA's point of view) only 15-year-olds are assessed in PISA. While we know that the Grade 9s and 10s (and possibly 8s) doing the ISA are roughly comparable in age to this group, it is not valid to make comparisons with the other grades that take the ISA (Grades 3, 4, 5, 6 and 7). There is more detail about the National Comparisons Report in the Guide to Reports that schools receive with their results.

The ISA is not conceived of as a high-stakes assessment for students: major decisions about students' futures will not be made on the basis of the results. Nor is the program 'high stakes' for schools. No league tables will ever be published by ACER, and ACER will not release the test results to any central agency that could use them to hold schools accountable in any way, far less to threaten or punish schools, in the way that standardised testing programs are used in some jurisdictions.

The ISA is an achievement test for students in international schools. It is designed to serve a number of purposes for schools, enabling them to:

  • measure individual students' achievement in order to reflect on and address strengths and weaknesses;
  • monitor an individual's or a cohort's progress over time;
  • evaluate instructional programs against objective evidence of student performance, to identify gaps in student understanding, and to measure growth in learning, between grade levels and from year to year within one grade level;
  • compare subgroup performance (for example, girls and boys; students from different language backgrounds) to see where there may be unexpected results and try to understand them; and
  • provide normative data in relation to selected populations to 'see how we are doing'.

It is important to interpret the results of this assessment together with other information about students' achievement provided by on-going classroom assessment and perhaps other external assessments. It is not the purpose of the ISA to provide a final grade for a student at the conclusion of their academic year. Student results are confidential and are released to the school only.

The short answer is yes, the ISA provides data that can be used to assess the value a school adds relative to other schools. The ISA scale scores allow comparison of students' scores over time regardless of which ISA test the students have been administered. Significant differences in the changes to the mean scale scores of a group (with a constant or similar membership) over time, relative to the changes for the whole ISA population, are likely to indicate the effect of school input or value-adding. For example, if the change in ISA mean scale score from Grade 5 to Grade 7 is 80 points for all ISA students, and the mean score of your current Grade 7 students has increased from their previous Grade 5 mean by 110 points, this is likely to suggest your school has significantly added value in this aspect of the curriculum between Grade 5 and Grade 7. Value-added interpretations need to be made with caution for individual students, and need to be qualified where the membership of a group being tracked over time has changed substantially.
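The worked example above is simple arithmetic, and can be sketched as follows. The school means below are hypothetical figures chosen only to reproduce the 80-point and 110-point gains in the example; they are not real ISA data:

```python
# Hypothetical value-added sketch; all figures are illustrative assumptions.

ALL_ISA_GROWTH = 80          # mean scale-score gain, Grade 5 -> Grade 7, all ISA students (from the example)

school_grade5_mean = 470     # hypothetical school mean for a cohort at Grade 5
school_grade7_mean = 580     # hypothetical mean for the same cohort at Grade 7

school_growth = school_grade7_mean - school_grade5_mean   # the cohort's own gain
value_added = school_growth - ALL_ISA_GROWTH              # gain relative to all ISA students

print(f"Cohort growth: {school_growth} points "
      f"({value_added:+d} points relative to the whole ISA population)")
```

A positive difference of this size, for a cohort whose membership has stayed reasonably constant, is the kind of evidence the answer above describes as suggesting school value-adding.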

We do not publish marking rubrics as such. ISA markers work with a highly detailed rubric and accompanying sample scripts, but these scoring guides are not available to individual schools. The report forms carry summary descriptors that relate exactly to the scoring rubric. The results are reported with descriptive information about what the figures mean, both in terms of a 'described proficiency scale' (a general description of the level of achievement for each of mathematical literacy, reading and writing), and a detailed description of the skills tested and achieved for each question in mathematical literacy and reading, and for each of the component scores in writing.

The marking guides/rubrics/rating scales developed for the ISA writing are the result of more than 20 years of theoretical and empirical research into writing assessment at ACER and elsewhere. They are based on the principles of developmental assessment, which conceives of the student as developing in each different area of learning along an underlying continuum: assessments, in this model, are conceived of as occasions to gather evidence about the point (or more likely the region) on a continuum that the student has reached.

Regarding the choice of criteria for the ISA, there were several considerations, including coverage of the key features of writing (a balance between technical and ideational features) and providing some diagnostic information. Our aim was to give a reasonably broad yet diagnostically useful picture of a student's achievement.

The rubrics and guides used in operational marking are significantly more detailed than those published on the class reports. In a domain as complex as writing, each score is inevitably an on-balance judgment, and the descriptors are deliberately designed NOT to give a sense of 'checking off' features of writing. Every student receives scores from two markers: one for the narrative/reflective task and one for the exposition/argument task.

The best way to inform parents and others about the content of the ISA tests is to show sample items that give a flavour of the testing focus and format. Another means of informing parents is to use the descriptors attached to the class report, which describe the skills and understandings assessed in each item. We realise that this is not as immediate as seeing the actual tasks; on the other hand, the focus is less on the individual task than on the kind of skills and understandings it represents. What we are interested in measuring is students' proficiency in a domain, represented by KINDS of questions, rather than the particular question that happened to be on this test.