
Why don’t we just put our high stakes exams on screen?


At Cambridge International – and throughout Cambridge University Press & Assessment – we have an archive of thousands of assessments which have been developed using research evidence and quality-assured through our principled approaches. So, given all the work which has gone into creating these assessments, why don’t we just put our existing assessments on screen? The pandemic has accelerated the take-up of technology in education, so wouldn’t this be the obvious next step?

Migration of existing assessments to screen has value: our experience of developing Cambridge IGCSE Progression Tests[1] (and the GCSE Topic tests developed by our UK exam board, OCR) shows that migration can improve accessibility, release teacher time, provide rich data and recognise the digital literacy of learners, as well as build our organisational capabilities.

But simply migrating paper tests to screen – without changing the assessment models, the curricula or the teaching and learning – can miss opportunities and comes with risks.

Missed opportunities

While migration has its merits, this approach does not use technology to its full potential. By designing digital assessments from scratch – going back to the beginning and taking a transformational approach to assessment – we can:

  • reflect effective teaching and learning in the internet age
  • design curricula and assessments which develop skills and behaviours which better reflect real life and work
  • use technology to assess constructs in ways not possible with paper assessments
  • embed technology in teaching, learning and assessment to add value to the educational experience.

Risks of migrating paper tests to screen

Migrating paper tests to screen seems like a quick win for digital assessment, but it comes with three main risks:

1. On-screen tools designed to replicate pen and paper may be invalid

Migrated tests often include on-screen tools which allow learners to respond in ways as close as possible to how they respond on paper. For example, an equation editor may let them write out equations, or a digital protractor may be dragged across the screen to measure an angle. In some sense, this is like using a smartphone to send Morse code: it’s possible, but why would you do it? What value does it add, and how meaningful is it for learners? The purpose of Morse code was to communicate immediately and securely – there are new and more effective ways of doing that.

Importantly, let’s think about the quality criteria we apply to our assessments: validity, reliability, and fairness. Is using technology to replicate how a learner would work on paper fair, reliable, and valid? Does it have a positive impact on teaching and learning? The risk is that using tools to replicate pen and paper adds unintended demands and so threatens validity.

Another threat to validity is that migrated questions sometimes don’t work because the way learners interact with them on screen is different from how they interact with them on paper. For example, candidates who work on screen are less likely to use jotting paper, and this changes the mental processes they use to reach their answers[2]. The result can be that migrated assessments don’t cover the same range of content, item types or skills as the paper exam, or that what is assessed is subtly different. If on-screen and paper assessments don’t assess the same things when they are expected to, this can lead to wrong and unfair decisions.

2. Potential mismatch between learning and assessment

High stakes assessment is known to drive teaching and learning, and moving to digital assessment can have implications for both. A mismatch between the teaching and the assessment, including response types and the skills and behaviours assessed, puts validity and fairness at risk. For example, if learners are taught to construct graphs using paper methods and the assessment is on-screen then this could be unfair. But also, if learners are taught using an on-screen graphing tool, we need to understand how using the tool impacts on learners’ conceptual understanding.

3. Limitations to achieving comparability

We are interested in achieving comparability in educational assessment (verging on obsessed by it), yet comparability between on-screen and paper-based tests can be challenging, and requiring them to be comparable can hinder innovation. However, thinking digitally about assessment could help us reconceptualise comparability. For example, we could choose to think about it in a different way: in terms of the currency an assessment gives learners to progress (rather than the demand of the questions), or in terms of the transferable skills learners develop (rather than the content coverage). At Cambridge, we are thinking about what comparability means in the digital context.

What the research says

Paper assessment has limitations which have influenced what high stakes exams assess, and how, for decades. On-screen assessments also have limitations. But if we try to replicate paper exams by moving them on screen, we unnecessarily take on the limitations of both paper AND screen, and so fail to deliver the potential opportunities of using technology.

When we try to migrate paper-based assessments to screen, our focus becomes what technology can’t do. In contrast, a transformational approach embraces what the technology can do and how this can transform curricula, teaching and learning, and, by extension, assessment.

We don’t think that the long-term future of digital assessment is lifting and shifting paper exams. We could migrate all of our existing high stakes assessments, but we know we would miss opportunities and create issues. That’s why at Cambridge University Press & Assessment we are taking two approaches to digital assessment – migratory and transformational. And we are working incrementally with schools in the UK and worldwide to make sure that what we develop works for them.

Our transformational strategy will require further development of our organisational capabilities, but the opportunities presented by digital are too great to miss. The assessments we are developing now, in line with our strategy, aim to transform how we assess students in the future.

To find out more about our strategy, read my blog “What do we mean by ‘digital’ and how does that impact assessment?”

References

[1] Cambridge IGCSE Progression Tests are in development and undergoing customer trials. They are a set of on-screen end-of-topic assessments for Cambridge IGCSE science subjects. Each test is made up of past paper content, adapted to suit on-screen item types and tools.
[2] Johnson, M. & Green, S. (2006). On-Line Mathematics Assessment: The Impact of Mode on Performance and Question Answering Strategies. Journal of Technology, Learning, and Assessment, 4(5).
[3] Threlfall, J., Pool, P., Homer, M., & Swinnerton, B. (2007). Implicit aspects of paper and pencil mathematics assessment that come to light through the use of the computer. Educational Studies in Mathematics, 66(3), 335–348.
[4] Hughes, S., Green, C., & Greene, V. (2011). Report on current state of the art in formative and summative assessment in IBE in STM – Part II.
[5] Ofqual (2020). Online and on-screen assessment in high stakes, sessional qualifications. A review of the barriers to greater adoption and how these might be overcome. Ofqual/20/6723/1

A version of this article was first published on the Cambridge University Press & Assessment blog in January 2022.
