Reflections on academic standards from the marking and assessment boycott

Student sitting exam alone
Image credit: Unsplash, Jeswin Thomas, CC0

In this post, Dr Charlotte Desvages and Dr Itamar Kastner reflect on the notion of academic standards and its relationship with assessment and feedback, drawing on events from the 2023 Marking and Assessment Boycott. Charlotte is a teaching Lecturer in Mathematical Computing, and Itamar is a Senior Lecturer in Linguistics and English Language in the School of Philosophy, Psychology and Language Sciences. This post belongs to the Hot Topic theme: Critical insights into contemporary issues in Higher Education.


On 20 April 2023, in the context of a years-long dispute on pensions, pay, and working conditions, academic staff across the UK started a Marking and Assessment Boycott (MAB) called by the University and College Union (UCU). The MAB lasted until September 2023, directly impacting assessment culminating in the May and August exam diets. As is often the way when systems ‘break down’ or suddenly require large-scale workarounds, a huge amount of learning and reflection emerges. In this post, we draw on our experience of, and reflections on, the MAB as an entry point to unpick how regulatory ‘academic standards’ influence our assessment and feedback practices. We argue that a focus on pedagogically-informed course and programme design, where marks are dissociated from feedback, would be more beneficial to students’ learning than a focus on an aggregated numerical grade.

Background to the MAB and academic standards in relation to assessment and feedback

In our University, assessment is regulated by each School’s Boards of Examiners according to the Taught Assessment Regulations, which in turn are decided by the Senate’s Academic Policy and Regulations Committee. Edinburgh saw strong participation in the MAB, prompting Senate to change the regulations temporarily in order to “mitigate against the impact of significant disruption to students, without compromising academic standards” [1]. Boards of Examiners were asked to make an academic judgement on whether they had sufficient information on students’ performance to decide on progression or degree awards: to what extent was the Board confident that a student had achieved the learning outcomes and was ready to progress or graduate?

Our proxy for this decision is usually an average of numerical marks. With many of these unavailable due to the MAB, many Boards of Examiners decided that they were not competent to make an informed decision, leaving a large number of students affected by delayed results despite the relaxed regulations – including the 30% of students [2] graduating in July either without an award or with an unclassified degree. Although the immediate impact on students was regrettable, this cautious approach likely contributed to the very small number of misclassifications [3]. Even under a narrow understanding of “academic standards”, defined by graduates obtaining the appropriate degree classification, it is difficult to see how Boards applying the full extent of the available mitigations would have maintained academic standards.

Perspectives on academic standards

Often, and certainly in the context of the MAB, the institutional understanding of “academic standards” is about the regulatory framework of Quality Assurance, an interlocking system of regulations set by Scottish ministers (through the Quality Assurance Agency), the University’s regulations, and each School’s Board of Examiners. The “quality” in question relates to the quantitative outcomes produced by Boards of Examiners (a course mark, a progression decision, a degree classification). Quality Assurance is how institutions demonstrate robustness and reliability of those outcomes to external stakeholders. In the process, all the learning and growth a student has experienced are distilled into one final grade.

This is an administrative process, happening after all learning and assessment are complete; by necessity, it’s based on shortcuts. For example, the 40% grade boundary provides a standardised threshold above which a student is deemed to have passed a course. This saves Boards of Examiners from having to wrestle with its implications: are we passing students who have achieved 2 out of 5 learning outcomes? Are we contenting ourselves with “achievement” being defined as 40% of the best possible performance?

In contrast, from an educator’s perspective, one that is perhaps more internal-facing, “academic standards” are understood as starting with rigorous, pedagogically-informed course and assessment design, and as including everything that happens before the final grade. In this sense, academic standards are standards we set on students’ engagement, learning, and growth, and on ourselves as instructors. We design assessment to give students the opportunity to demonstrate their progress, and to give us confidence that they are ready for the next stage.

Regulatory mitigations only introduced more possible shortcuts for Boards of Examiners to produce aggregated outcomes with limited information from courses. Clearly, this cannot fit with educators’ understanding of “maintaining academic standards” – by definition, we did not have sufficient information to assess whether they were maintained. These different understandings of “academic standards” explain why academic staff who raised concerns about the mitigations were left unsatisfied by the University’s response, and particularly concerned about continuing students [4]. Indeed, UCEA’s [5] main concern was the disruption to the administrative function of exam boards, rather than, say, the delay of feedback on coursework or the impact on student learning [6].

Returning to the Feedback and Assessment Principles and Priorities

Perhaps we can better understand the relevant assumptions about marks and feedback by considering the University’s Feedback and Assessment Principles and Priorities. None of these refer to marks or grades, although an educator’s perspective might be that these all contribute to upholding robust academic standards:

Infographic of Assessment & Feedback Principles and Priorities, from Prof Tina Harrison’s Teaching Matters post, 7.07.22.

In our experience, the dissociation of marks from feedback is not only supported by research, as demonstrated at recent university events (such as the talks Ungrading: What it is and why should we do it? by Rashne Limki, and To Grade or to Ungrade, that is the question! by Dave Laurenson, James Hopgood and Itamar Kastner); its benefits are also clear to students. While they would be lost without feedback – in that learning, by definition, does not happen without feedback loops – most students clearly understood (and supported) the withholding of marks as a tool.

We argue that the MAB laid bare how our administrative processes serve a purpose distinct from that of teaching and learning. Institutional requirements for an aggregated numerical grade can pose an inherent barrier to educators seeking alternative feedback and assessment methods which could be more beneficial for learning, and which could in fact contribute to maintaining robust academic standards.

So, where do these reflections leave us? We would encourage individuals, Schools, and managers to consider learning and feedback as a priority in the first stages of designing curriculum and assessment. We should be ensuring that the pedagogy is in place, and only later thinking about satisfying the administrative requirements of numerical evaluation (if at all). We must continue to challenge the model that defines “academic standards” as both requiring, and being limited to, a quantitative interface with external stakeholders.

Here’s one example to end on: in the National Student Survey (NSS), “feedback” has consistently been a sore point for our University. One of management’s responses has been to implement stricter deadlines for returning marks and feedback to students. But how does this move engage with the pedagogical grounding of feedback? Where does the magic number in the “15 working days” turnaround come from? And how does any of this support educators to improve their practice?

[1] APRC 22/23 8 – Minutes of the 2/05/2024 meeting of APRC. Accessed 23/07/2024.

[2] SQAC 23/24 5B – Appendix C, Degrees Awarded Outcomes. Paper presented at the 16/05/2024 meeting of SQAC. Accessed 23/07/2024.

[3] See footnote 2, Appendix B. Accessed 23/07/2024.

[4] e-S 23/24 3F – Appendix 1 (Maintaining Academic Standards), Report of Motions and Items not included on Senate Billet from 2022 to April 2024. Paper presented at the April/May 2024 e-Senate. Accessed 23/07/2024.

[5] Universities and Colleges Employers Association; the body representing universities as employers in the dispute which led to the MAB.

[6] See e.g. UCEA news release, 28th July 2023, describing the “sector-wide impact” of the MAB exclusively as the percentage of students who were (un)able to graduate in July.


Itamar Kastner

Itamar is a Senior Lecturer in Linguistics and English Language in the School of Philosophy, Psychology and Language Sciences. His research investigates the structure of words from theoretical, experimental and computational perspectives, alongside evidence-based approaches to pedagogy. He has been a member of the university Senate since August 2023.


Charlotte Desvages

Charlotte is a teaching Lecturer in Mathematical Computing in the School of Mathematics, teaching computing courses focused on Python programming skills and introductory numerical methods. Her interests include peer learning for programming (via pair programming, code review, live coding); the development of computational thinking linked to mathematical thinking; and accessible and inclusive teaching practices. She is also currently the director of Equality, Diversity, and Inclusion in the School, and has been a member of Senate since August 2022.
