The use of technology in assessment has become widespread. Initially, the efficiency gains from automating scoring and reporting were seen as the primary advantage of this change of mode. It is increasingly recognised, however, that an innovation of even greater importance is the potential of computer-delivered assessments to give previously inaccessible insights into learning processes.
Where assessments are computer-delivered, it is a simple matter to capture human-computer interactions comprehensively: mouse clicks, the dragging and dropping of an item from one place to another, or the selection of an item in a drop-down menu. These data make it possible to examine not just whether a problem was successfully completed, but how it was solved. Problem-solving processes can now be observed directly and unobtrusively, rather than inferred.
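To make the notion of process data concrete, the following minimal sketch (in Python, with entirely hypothetical field names, event types, and values, not drawn from any particular assessment platform) shows how a log of timestamped interaction events might be reduced to simple process indicators of the kind described above:

```python
from dataclasses import dataclass

# Hypothetical interaction-log record; the fields are illustrative only.
@dataclass
class Event:
    t_ms: int      # time since item onset, in milliseconds
    action: str    # e.g. "click", "drag_drop", "menu_select"
    target: str    # identifier of the on-screen element acted upon

def process_indicators(log):
    """Derive simple process indicators from one examinee's event log."""
    times = [e.t_ms for e in log]
    return {
        "n_actions": len(log),                  # amount of activity
        "time_to_first_action_ms": min(times),  # reading/planning time
        "total_time_ms": max(times),            # time on task
        "n_drag_drop": sum(e.action == "drag_drop" for e in log),
    }

# An illustrative (invented) log for a single item attempt.
log = [
    Event(4200, "click", "instructions"),
    Event(9800, "drag_drop", "tile_3"),
    Event(15100, "menu_select", "unit_dropdown"),
    Event(21500, "drag_drop", "tile_1"),
]
print(process_indicators(log))
```

Indicators of this kind describe how a response was produced, independently of whether it was correct; any such indicator would of course need to be validated before being given an interpretation in terms of examinee behaviour.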
However, interpretations of data from computer-delivered assessments must be consistent with the principles of valid and reliable measurement and grounded in theoretical frameworks for describing developing competence. A danger in the face of these opportunities is that new kinds of data will accumulate faster than researchers can learn to interpret them. While the burgeoning fields of educational data mining and learning analytics offer some techniques for interpreting process data, these fields are relatively young, and much remains to be done in exploring how such techniques might support measurement and thereby inform reporting to students and teachers on how to optimise learning.