Spotlight on Alternative Assessment Methods: Alternatives to exams

In this ‘Spotlight on Alternative Assessment Methods’ post, Tim Fawns, Deputy Programme Director of the MSc in Clinical Education and part-time tutor on the MSc in Digital Education, and Jen Ross, co-director of the Centre for Research in Digital Education, discuss several design characteristics that can provide challenging and meaningful ways for students to demonstrate their knowledge and understanding at a distance.   

Care, trust and high-stakes online assessment

It may or may not be ok to turn an exam hall into a place of anxiety and distress for students. That is not a debate we need to have right now – or possibly for a long time to come. However, we urgently need to consider the impact of different online assessment approaches on students in their homes, where many must currently live, study, work, and care for others under very difficult circumstances. There may be technological means that allow universities to proceed with ‘business as usual’ in the form of closed-book, invigilated, time-limited examinations, but the moral and pedagogical justification for doing so needs much more scrutiny.

What’s so great about exams, anyway?

Exams have a place. Being able to recall information without referring to external sources is an important building block in the development of more sophisticated forms of knowledge and expertise. The forms of scenario-based problem solving that feature in well-written multiple choice questions constitute worthwhile learning. However, the requirement that students use no resources other than their own memory to demonstrate this knowledge can be an obstacle to other important kinds of knowledge and learning. Exams favour cramming rather than the careful building of durable knowledge over time. It is through interacting with other people and materials that students learn, and their ability to make good use of these resources in learning new things is, arguably, more relevant to their higher education than their ability to rehearse previously learned content, since it is this that they will rely on most heavily as they move through different and evolving settings and contexts after graduation (Fawns & O’Shea, 2019). Have a look at your programme- or institution-level outcomes (e.g. “graduate attributes”, “graduate capabilities”, “transferable skills”): it is likely that these are much more oriented towards social, communicative, ethical, collaborative and creative capacities than towards correctly giving pre-determined answers to abstract questions.

For teachers, exams also save time in marking by providing a clear structure for making judgements about students’ knowledge. This is counterbalanced by the time investment in designing and writing good exam questions, and standard-setting (though these tasks can sometimes be distributed amongst a group of colleagues, making each person’s time investment less onerous). In addition to this, there is a significant burden of time and resource associated with ensuring good academic practice in relation to exams – for example in preventing misconduct by building up large question banks for multiple choice questions, and closely monitoring the examination process. Again, these tasks can be distributed in a way that tends to obscure the time they take. But it is the previous point, about invigilation of exams, that we want to focus on here.

Remote invigilation: the worst of all worlds

In the move to remote teaching and assessment, institutions are increasingly turning to third-party technological solutions for monitoring students during exams. Calls for remote or online invigilation are understandable. The fear is that, without some kind of monitoring, some students might cheat. This is the same justification given for many forms of more or less invasive monitoring of student activity and work (see also: plagiarism detection software).

Remote invigilation typically works by permitting third-party companies to access students’ computers, microphones and webcams in an attempt to ensure that the rules of exam-taking are followed. Students have no choice but to consent, they are unable to choose a different place or way to be examined, and they may have little opportunity to ask questions or to understand how the system works or what happens to the data collected. Further, one cannot invigilate an exam taken remotely in a student’s home without being draconian (see this article in the Washington Post, where a student is forced to vomit at the desk where she is taking the exam because she is not allowed a toilet break).

So: on top of the stress of taking a high-stakes exam (currently under extremely difficult conditions for many), students are surveilled in their own homes, by strangers and/or software, whose sole purpose is to catch them cheating.

Assessment, care and trust

Invigilation, and other measures to prevent cheating, start from a default position of lack of trust in students. Their use erodes the potential for building trusting relationships between students and staff (Ross and Macleod, 2018). This is not good for feedback, dialogue, and many of the elements educational scholars have highlighted as crucial for good quality education (Ajjawi et al., 2017; Carless, 2013).

We should also be careful of “coronawashing” the principles of privacy and data security that underpin the university’s approaches to digital education. Once we have given our students’ data to third-party companies, we have lost control of it, and they will almost certainly use it to further their profits at the expense of students (see Turnitin for an example of this). As some critics have pointed out (Morris & Stommel, 2018), we should trust our students more than we trust these educational technology companies.

Going further, it is worth noting that, where the stress and stakes are high and acutely felt, particularly where outcomes are perceived to have a significant impact on career, cheating is a rational course of action. In fact, we might ask whether a particular kind of cheating constitutes a fundamentally bad practice, or whether it is only bad because it breaks a rule that would not apply in other contexts. For example, looking up information or talking to other people to try to answer a question would be fine or, indeed, encouraged, in most professional contexts.

Is it possible, then, to design online assessments that allow or even encourage the demonstration of professionally and academically valuable skills, and that start from a place of trust and support?

Alternative assessments at a distance

Assessment design with the following characteristics can support trusting relationships and care for student wellbeing, while providing challenging and meaningful ways for students to demonstrate their knowledge and understanding:

  • Non-anonymous, open book, conducted over an extended period of time, and potentially collaborative (students are allowed to use any available resources).
  • Requires significant intellectual input from every student.
  • Shows the learning process and provides rich opportunities for feedback (from peers and/or tutors).
  • Provides opportunities for creativity, personalisation, and contextualisation.
  • Covers the key aims / knowledge of the assessed course.
  • Is manageable for staff and students.

Most importantly, designing assessment tasks and questions that draw on students’ personal experiences, local environments and specific contexts gives them opportunities to help build an atmosphere of trust. They might be asked to apply concepts to something in their house, garden or local setting. They might be asked to select from a range of questions or options to suit their particular conditions, or to help design the assessments (which can itself lead to rich learning). They might accompany textual responses with a photograph or video that contextualises the work and lets them connect it to their own experiences. They might record a video presentation in response to a particular challenge. You might assess via a live conversation (this could even be done with groups of students) in which students are encouraged to articulate the reasons behind their understanding of particular concepts and how these apply to their context. The aim with all of these approaches is to adopt a trusting mindset and to help students develop a sense of ownership and personal relevance in their work.

For courses or programmes where there are important facts and concepts that underpin future practice and must be understood, we should recognise that knowledge acquisition is not important for its own sake but to provide a basis for active meaning-making (i.e. the skilful and developmental use of that knowledge). Having students articulate complex understandings, demonstrate how they think about a concept or problem, and relate it to a context that is meaningful for them, allows them to demonstrate their knowledge in much more nuanced ways, and also requires familiarity with their particular context (meaning that they will need to be engaged in the work).

Marking and giving feedback on this kind of work requires teachers to have a framework for making complex judgements about performance. Large courses, in particular, might require help from experienced online educators to plan the design and the ways in which students and staff need to be supported and new forms of assessment made workable in the context of time constraints. Supportive networks, where teachers can openly and honestly discuss potential approaches and concerns, will also be important.

References:

Ajjawi, R., Molloy, E., Bearman, M., & Rees, C. E. (2017). Scaling up assessment for learning in higher education. In D. Carless (Ed.), The Enabling Power of Assessment (Vol. 5, pp. 129–143). Singapore: Springer Nature.

Carless, D. (2013). Trust and its role in facilitating dialogic feedback. In D. Boud & E. Molloy (Eds.), Feedback in higher and professional education (pp. 90–103). London: Routledge.

Fawns, T., & O’Shea, C. (2019). Evaluative judgement of working practices: reconfiguring assessment to support student adaptability and agency across complex settings. Italian Journal of Educational Technology, 27(1).

Morris, S. M., & Stommel, J. (2018). A guide for resisting edtech: the case against Turnitin. In An Urgency of Teachers. Hybrid Pedagogy Inc.

Ross, J., & Macleod, H. (2018). Surveillance, (dis)trust and teaching with plagiarism detection technology. In Networked Learning 2018, Zagreb.

Tim Fawns

Dr Tim Fawns is Deputy Programme Director of the MSc in Clinical Education and part-time tutor on the MSc in Digital Education. He is also the director of the international Edinburgh Summer School in Clinical Education. His main academic interests are in education, technology and memory.

Jen Ross

Dr Jen Ross is co-director of the Centre for Research in Digital Education, and deputy director of Research and Knowledge Exchange in the School of Education. Her online distance teaching includes the MSc in Digital Education (which she directed from 2012 to 2015) and the E-learning and Digital Cultures MOOC. She currently leads the AHRC-funded ‘Artcasting’ project about digital engagement with art galleries. Her research interests include online distance education, digital cultural heritage learning, open education including Massive Open Online Courses (MOOCs), digital cultures and futures, and online reflective practices. Read more about her research and teaching at http://jenrossity.net.
