Collegiate Commentary: Five Inquiries into Assessment and Feedback

Credit: Pixabay, 3844328, CC0

In this extra post, we share with you the Collegiate Commentary from the latest Teaching Matters newsletter: Five Inquiries into Assessment and Feedback. In the Collegiate Commentary feature, we ask colleagues from other universities and institutions to provide a commentary on ‘Five things…’, and share their own learning and teaching reflections, resources or outputs on the same topic. In this newsletter, we welcome a joint commentary from Donna Hurford, University of Southern Denmark, and Andrew Read, Independent Educational Consultant.


Inquiry 1 [in the Newsletter] raises the questions ‘what is assessment?’ and ‘where is assessment?’ but perhaps we should start with ‘why is assessment?’ When we attempt to untangle assessment from learning, assessment becomes a ‘thing’, subject to institutional mechanisms, and, perhaps inevitably, detached from its relationship with learning. Internal quality assurance procedures can emphasise this disconnection by linking the provision of assessment to administrative (or bureaucratic) requirements: ‘has a range of assessment types been employed across the programme?’ and ‘does the course specification take account of formative assessment?’, etc. Institution-wide targets will have a similar impact: ‘has the percentage of students graduating with a good degree increased in the last two years?’ etc. Assessment treated like this acquires functional fixedness (Hurford and Read, 2022): the ‘why’ is about box-ticking.

Programme-level assessment could disrupt this. Dilly Fung (2017) advocates practical approaches to connected programme design. We would suggest that the questions that Fung poses to departments and programme teams (2017, p. 60) could be usefully adopted by university quality assurance teams when considering course and programme validation. But it is interesting that, within Fung’s arguably radical way of approaching programme design, the ultimate ‘why’ of formative feedback is to serve summative grading.

Inquiry 3 [“Do you really know a First when you see one?, or, The question of transparency”] voices concerns about transparency and our gut responses to students’ work. Without greater transparency, we risk gaps between teacher and student expectations. Such gaps are fertile ground for misunderstandings about course assessment and other learning activities. And with misunderstandings come biases, such as expectation bias (“a weak presentation, just as I expected”) and the flat-packed IKEA bias (“they should have understood the assessment; I explained it in the course and it’s in the course handbook”). Benson’s (2016) ‘Cognitive bias cheat sheet’ and the ‘Cognitive Bias Codex’ provide useful insights into the range of cognitive biases which may affect our perceptions and judgements.

The conversations Phil Marston recommends between teachers and students, about course assessments and what a good one looks like, can help reduce these expectation gaps. At the University of Southern Denmark (SDU), we offer a course for university teachers on helping students understand assessment, which draws on Sadler’s (1989) legacy contribution to developing shared understandings of assessment. During the course, teachers are offered different approaches to co-developing assessment checklists or rubrics with their students, such as offering students a partially completed rubric and asking student groups to fill in the gaps. And if there isn’t the time or the will to co-create, the teacher, having developed the course assessment rubric, can mix up the criteria descriptors and invite the students to solve the rubric jigsaw.

By working collaboratively on re-organising the rubric’s contents, the students engage with the criteria descriptors’ syntax and query the meaning of ambiguous descriptors such as ‘reasonable’ or ‘solid’. To help negotiate ambiguous assessment language, students can then benefit from peer reviewing exemplars of course assessment using the course assessment rubric. By applying the rubric to authentic examples, students gain insights into standards and quality: “oh, that’s what a good one can look like”. And it isn’t only the students who can benefit from this revelation. Whilst designing a rubric, the teacher reflects on and articulates their own understanding of standards and quality. These processes all take time, but by actively discussing and working with assessment throughout a course, there’s a better chance of reducing the gap between the teacher’s and the students’ expectations of a course assessment.

As discussed in Inquiry 5 [What are the digital possibilities of (and for) assessment and feedback?], the lockdown required a sudden shift to online teaching and learning, which brought its own opportunities and challenges. Oral exams are a common form of assessment in the Danish education system: students often submit an individual or group written assignment, followed by individual oral exams. However, student visibility in oral exams can trigger examiners’ confirmation biases. A well-known orchestra audition study reveals how non-anonymised recruitment led to gender stereotyping, and how blind auditions resulted in criteria-informed assessments and fairer recruitment (Goldin and Rouse, 2000).

Even if the examiner doesn’t recognise the student, there is the risk of first-impression bias or anchoring bias (Myers, 2022), where, for example, the examinee’s first response significantly influences the examiner’s expectations of their overall performance in the exam. Strategies for managing fair oral exams and reducing student anxiety include oral exam role plays, which give students the opportunity to experience and prepare for the exam format, and checklists for bias-aware oral exams (Hurford, 2020). Shifting oral exams online during the lockdown didn’t reduce the risk of anchoring bias, but it was noticeable at SDU how many more teachers sought advice about managing online oral exams fairly and allaying student anxieties.

So, when thinking about what assessment and feedback could look like, why not picture a model without grades, a degree without classified honours? The ‘class system’, after all, is uniquely British and has only existed in this form since 1918 (Alderman, 14.10.2003). Shouldn’t universities devise less crude mechanisms for recognising the attainment of knowledge, understanding, or whatever the particular institution values?

One of the key challenges of acting on any piece of blue-sky thinking in higher education, if we put to one side the obstacles raised by institutional mechanics, is getting student buy-in. In this context, ‘buy-in’ is inevitably bound up with ‘satisfaction’. Student-led feedback [Inquiry 2: What should feedback look like?] and assessment co-creation [Inquiry 4: What does it mean to centre students?] look great on paper – we would be 100% behind innovations such as these – but students’ expectations need to be carefully managed. How do you respond when a student tells you, ‘I don’t want to design the assessment method – that’s what I’m paying the university to do’? This isn’t just a case of the student as consumer wanting their money’s worth. This is about the student having had an educational lifetime of being done to, not done with.

‘Buy-in’ in this context is also about authenticity. In order to equip students to provide effective feedback or to co-create assessment activities, how do we avoid simply training students to duplicate the models of feedback and assessment that we already have in place? Perhaps we could consider embedding a thread of assessment and feedback design within programmes, building critically and creatively, to support students to reach thoughtful, genuinely learner-centred conclusions about what they can do.


Donna Hurford

Donna Hurford is an Academic Developer at the University of Southern Denmark where she leads on the Lecturer Training programme, teaches about collaborative learning, addressing bias, integrating sustainable development goals, assessment, and questioning. She has a background in school teaching and pre-service education at the University of Cumbria.


Andrew Read

Andrew Read is an Independent Educational Consultant. He was Head of the Education Division at London South Bank University and, before that, Head of Teacher Education at the University of East London. Before working in higher education, he was a primary school teacher.
