In this post, Professor Sian Bayne and Professor Tim Drysdale discuss the opportunities, as well as the risks, of assessment-related technologies. Sian is Professor of Digital Education, Moray House School of Education and Sport, and Tim is the Chair of Technology Enhanced Science Education, School of Engineering. This post is part of the Learning and Teaching Enhancement Theme: Assessment and Feedback Principles and Priorities.
Chosen well, learning technologies can support creativity, innovation and experimentation in assessment and feedback. However, the wider landscape of assessment-related technologies can be a rocky one, and it’s important to be aware of some of the risks and compromises it presents.
In this blog post, we’ll start by outlining some of the exciting opportunities that digital environments open up for assessment. Then we will give an overview of some of the trends shaping the future of assessment, and outline the risks associated with aspects of practice in this area. We provide links throughout to help you rapidly identify technologies you may want to investigate further, and to access some of the research that can help you ask the right questions of them.
Digital opportunity
Innovation
There is almost unlimited scope for bringing pedagogy and technology together to enable new forms of assessment and feedback. Doing so can help open up new, creative ways of representing academic knowledge that move students beyond the conventional modalities many disciplines often rely upon (for example, essays and reports). Here are a few examples:
- ECA, EFI, Moray House, and many others use the Miro collaborative drawing platform for supervision of projects, interactive group work, and student co-design of creative approaches to complex problems.
- The MSc Digital Education offers students the opportunity to develop assignments in multiple modes including video, audio, image, websites, digital exhibitions, visualisations and more. Some examples of student work are showcased on the MSc’s website.
- The School of Mathematics are developing robust peer assessment and coherently organised quizzes that embed all the material for a course and provide a consistent, well-scaffolded experience for first years on a large-scale course.
- Reflective learning for credit with SLICCs uses the Padlet collaborative platform to help students structure their work, while Jupyter notebooks give students a consistent environment for creating mathematical work.
There are also more emergent technologies that can help guide students as they approach their assessment task. For example, bots can be designed to give students on-demand interaction with an automated presence (see the Teacherbot for a playful example), supplementing the teacher and sharing hints, while student interactions with online exercises can be analysed to keep the teacher in the loop.
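To give a flavour of how such a bot might work, here is a minimal, purely illustrative Python sketch of a rule-based hint bot. The keywords and hints are hypothetical, and a real bot (the Teacherbot included) would be considerably more sophisticated:

```python
# A minimal sketch of a rule-based hint bot. The keywords and hints below
# are hypothetical examples, not taken from any real course or from Teacherbot.

HINTS = {
    "deadline": "Submission dates are listed in the course handbook.",
    "rubric": "The marking rubric is linked from the assignment page.",
    "referencing": "See the library's guide to citation and referencing.",
}

def respond(message: str) -> str:
    """Return the first hint whose keyword appears in the student's message."""
    text = message.lower()
    for keyword, hint in HINTS.items():
        if keyword in text:
            return hint
    # Unmatched questions could be logged for review, keeping the teacher in the loop.
    return "I don't have a hint for that yet, so I'll flag it for your tutor."

print(respond("Where can I find the rubric for this essay?"))
```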
Efficiency
There are also operational benefits associated with digital technology for assessment: automation can speed up existing tasks such as exam marking (in appropriate circumstances), and some convenience functions are only possible in the digital domain. For example, creating randomised questions for each student can increase academic integrity. Quality assurance usually benefits too, through support for custom auditing processes, criteria marking, and malleable rubrics. Incremental assessment is thought to help address over-assessment, but it can be difficult to apply in practice and benefits from digital automation. A recent spin-out from Imperial offers tailored feedback generated by AI (see below for a bit more on AI, though).
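As a concrete illustration of the randomised-questions point, the sketch below derives a stable seed from hypothetical student and exam identifiers, so each student sees a different but reproducible variant of the same question. Real quiz platforms implement this idea far more fully; this is just the core mechanism:

```python
# A minimal sketch of per-student question randomisation. The student and
# exam identifiers are hypothetical; only the seeding technique is the point.
import hashlib
import random

def make_question(student_id: str, exam_id: str) -> dict:
    """Generate a deterministic, student-specific arithmetic question."""
    # A stable seed means the same student always sees the same variant,
    # which helps with auditing and with handling queries after the exam.
    seed = int.from_bytes(
        hashlib.sha256(f"{exam_id}:{student_id}".encode()).digest()[:8], "big"
    )
    rng = random.Random(seed)
    a, b = rng.randint(2, 20), rng.randint(2, 20)
    return {"prompt": f"Compute {a} x {b}.", "answer": a * b}

for sid in ["s1234567", "s7654321"]:  # hypothetical student IDs
    q = make_question(sid, "maths-101-2022")
    print(sid, q["prompt"], "answer:", q["answer"])
```

Because each variant is derived from the student’s identity rather than stored at random, no per-student state needs to be kept, and a marker can regenerate any student’s paper on demand.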
Trends and risks
The opportunities described above can be contextualised by a series of disruptive trends in higher education assessment that are currently framing the debate on the future of digital assessment:
- Online providers of ‘contract cheating’ services (essay mills and other forms) continue to proliferate globally and remain widely available. These services focus almost entirely on ‘conventional’ assessment modalities (i.e., essays and other forms of written text). A Frontiers article provides an interesting review of the evidence on ‘essay mill’ proliferation.
- There is a widespread moral panic discourse around the issue in public and media forums, despite there being very little empirical evidence of the extent to which contract cheating services are actually used by students. Digital technologies and the pandemic ‘pivot’ are perceived to have amplified opportunities for ‘cheating’, but we do not yet have research evidence to either confirm or rebut this perception. The Scottish Centre for Crime and Justice Research provides a helpful report on contract cheating.
- Partly as a result of the above, dependence on data-extractive platforms (such as Turnitin) for policing plagiarism is culturally normalised; 98% of UK universities use Turnitin. These platforms directly profit from the panic discourse outlined above.
- Rising student numbers and often unmanageable academic workloads leave academics little time to gain the depth of familiarity with students’ writing styles needed to help them understand and avoid misconduct, or to provide one-to-one support for good academic practice.
- New forms of AI/neural network technology, such as OpenAI’s Generative Pretrained Transformer 3 (GPT-3), are now able to generate text that is indistinguishable from human writing (the short sketch after this list gives a flavour). This will require all sectors of education to rethink their dependence on written assessment practice. Mike Sharples provides an example and overview of the implications of these AI tools in an LSE blog post.
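GPT-3 itself sits behind a commercial API, but smaller, openly available models from the same family give a sense of what these systems do. Here is a brief sketch using the Hugging Face transformers library and GPT-2 (an earlier, smaller model); the prompt is an invented example:

```python
# A sketch of automatic text generation using GPT-2, a smaller, openly
# available predecessor of GPT-3. Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The main causes of the First World War were"  # invented essay-style prompt
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```

Even this modest model produces fluent continuations of an essay-style prompt, which is precisely what makes machine-written submissions hard to detect.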
By understanding these trends and risks, it is possible to begin to develop approaches that mitigate some of the more worrying aspects of digital assessment, and to start to frame ideas for plagiarism- and automation-resistant methods. These, in turn, can help make assessment more engaging for students, and more relevant. For example, you might ‘design out’ plagiarism by developing creative approaches that require students to use novel forms of representation, such as image, video, presentation, or the creation of digital artefacts. Or, you might use peer and tutor feedback and feedforward (perhaps enabled by the digital technologies described earlier in the post) to scaffold students’ writing and thinking skills toward their assessment. You might also work to create a teaching and peer-support context in which students are engaged in, and motivated by, the assessment, rather than seeing it as a hurdle.
Plagiarism detection systems like Turnitin tend to be a default port of call when it comes to identifying and preventing academic misconduct. However, in the context of the wider education technology landscape, they present significant risks. They do not work particularly well (this Nature World View article describes why), they tend to ‘cool’ innovation in assessment by assuming that all assessment is text-based (Canzonetta & Kannan’s 2016 case study of Turnitin provides more insight on this point), and their routine use builds distrust into the teacher-student relationship (the Manifesto for Teaching Online offers some further insight).
The business models adopted by the most commonly used plagiarism detection systems are also problematic: by requiring students to upload their work into the company’s database (often without authentic, informed consent), they turn student intellectual property into profit for private companies which, in turn, are not accountable to universities. Turnitin, for example, currently has a ‘non-exclusive, royalty-free, perpetual, worldwide, irrevocable license’ to 1.4 billion student papers, and was sold in 2019 to a US media company for US$1.75 billion.
Perhaps even more worryingly, research has shown that plagiarism detection services favour native speakers, and perpetuate bias that can actively label some students, particularly international students, as plagiarists even when they are not. The algorithms driving these systems are often proprietary and not open to scrutiny by their users, so it can be difficult for researchers to unpick and challenge this aspect of the way they operate.
One approach to mitigating these issues could be to adopt open-source, academic-led alternatives to commercial plagiarism detection. These would not need to turn a profit by monetising student data, and could be adapted by the academic community to improve their fit with institutional and cultural values, helping to ensure inclusivity, accessibility, privacy and fairness, as with other academic-led developments. However, development capacity in the academic sector is limited (an issue noted by the UN), so greater reward is likely to be found in digital developments that reduce the need for plagiarism detection in the first place.
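To give a flavour of what a transparent, auditable check could look like, here is a minimal sketch that scores the overlap between two texts using word n-gram Jaccard similarity. It illustrates an inspectable approach whose behaviour anyone can verify, not a production plagiarism detector:

```python
# A minimal, auditable text-similarity sketch (word 5-gram Jaccard overlap).
# Illustrative only: real matching tools handle paraphrase, stemming,
# quotation, and large reference corpora.

def ngrams(text, n=5):
    """Return the set of word n-grams in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=5):
    """Jaccard similarity of the two texts' word n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

print(similarity(
    "the quick brown fox jumps over the lazy dog every single day",
    "a quick brown fox jumps over the lazy dog every single day",
))
```

Because the whole method fits on a page, its limitations and biases are open to scrutiny in a way that proprietary algorithms are not.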
Conclusion
It is hard to argue against adopting technologies that open up new and exciting prospects for innovation in assessment, and also offer efficiencies in the way we work. By being aware of both the immense opportunities and the significant risks of using digital methods, we can ensure that the future of our assessment practice is creative, fit-for-purpose, engaging and ethical.
Sian Bayne
Sian is Professor of Digital Education at The University of Edinburgh, Director of Education at the Edinburgh Futures Institute, and Assistant Principal for Digital Education. She is the director of the Centre for Research in Digital Education, and teaches on the MSc in Digital Education at Edinburgh.
Tim Drysdale
Professor Timothy Drysdale is the Chair of Technology Enhanced Science Education in the School of Engineering, having joined The University of Edinburgh in August 2018. Immediately prior to that he was a Senior Lecturer in Engineering at the Open University, where he was the founding director and lead developer of the £3M openEngineering Laboratory. The openEngineering Laboratory is a large-scale online laboratory offering real-time interaction with teaching equipment via the web for undergraduate engineering students, and has attracted educational awards from Times Higher Education (Outstanding Digital Innovation, 2017), The Guardian (Teaching Excellence, 2018), the Global Online Labs Consortium (Remote Experiment Award, 2018), and National Instruments (Engineering Impact Award for Education in the Europe, Middle East and Asia Region, 2018). He is now developing an entirely new approach to online laboratories to support a mixture of non-traditional online practical work activities across multiple campuses. His discipline background is in electronics and electromagnetics.
Staff and students at the University of Edinburgh can use Jupyter notebooks via our Noteable service (https://www.ed.ac.uk/information-services/learning-technology/noteable), which is also offered to institutions across the sector via the University’s commercial edtech arm, EDINA: https://noteable.edina.ac.uk/