In this blog post, Sharron Ogle from Biomedical Sciences reflects on what assessment means for her and her students…
What does a numerical mark mean?
As Programme Director of the online MSc in Biodiversity, Wildlife and Ecosystem Health for more than 7 years, I’ve asked myself this question more than once. I recognise the need for benchmarking, standardising and quality assurance, but I also recognise that it is very difficult to assign a numerical value to an individual piece of work when there are so many variables. What does a numerical mark even mean in an environment where there is no single correct answer and there are very few ‘facts’? The focus within our programme is much more on understanding conflicting viewpoints and applying current theories to practice in the real world. It is incredibly frustrating that a UK student would be delighted with a mark of 65 while a US student would be devastated, feeling they have failed, even though both will have demonstrated an equivalent understanding of the course content and proficiency in research, critical thinking, reasoning and communication. With a diverse international student cohort dedicated to self-improvement rather than competing with each other, simply allocating them a numerical mark just doesn’t make sense to me.
Feedback first!
Feedback, however, is a different story. Providing tailored comments that recognise good practice and suggest improvements in an individual piece of work is the only means by which students can develop and grow. This, in my mind, must be the true purpose of assessment. We now mark written work almost exclusively within Grademark, which allows us to signpost the smallest detail in every assignment clearly and accurately, and so helps us deliver tailored feedback. And if we are to develop knowledge and understanding across a broad curriculum base, as well as develop our students’ graduate attributes, then we must provide a wide range of assessment opportunities. This diverse range should allow students not only to learn what they want to learn, but also to try out different research methods, different ways of communicating and, above all, different ways of thinking.
Innovation and variety
I remember very vividly attending a PGT event in 2012 when the then-VP Learning and Teaching, Sue Rigby, suggested that there is more room for creativity in assessment than we might think. The take-home message was that the PGT regulations provide only a very broad framework for assessment, and shouldn’t be seen as a barrier to implementing new and innovative methods. That single event really stayed with me, and has led us to widen our assessment portfolio, developing innovative, bespoke assessments for our courses. We now offer students a wide range of highly relevant written and online assessment opportunities, which means that a student could study with us for three years without ever doing the same type of assessment twice.
The student experience
Our students come to us with an immense amount of experience and knowledge, but also with great uncertainty about their ability to perform at PG level. They are vulnerable, entering new and unknown territory, and have only limited time to master the many complex subjects and skills we ask of them. Assessment should not be daunting; it should be an opportunity for all students to tailor their learning to their individual interests and needs, to share their expertise for collective learning, and to be individually supported in that endeavour. Above all, assessment should be meaningful: an integral part of the student experience, not separate from or unrelated to the learning and community they share as part of a PG degree at the University of Edinburgh, but central to it.
Thanks Sharron. I have wondered about the same thing, particularly where marks from different assessments and courses are merged together, as if judgements of quality made by different people on different tasks can easily be combined through mathematical operations. I remember the event with Sue Rigby that you mention and how refreshing it was. It seems difficult to get around the issue of marks, however, and I wonder how much energy this takes up that could instead be spent discussing what was learned, and what could still be learned, from the assessment activity.
Hi Tim, and thanks for your comments. I would love to try going marks-free as an experiment, but clearly that’s not possible within the current system, and I think some students would actually find it too difficult to accept. But what about students setting their own assessment goals and benchmarks for success? The SLICC-style courses take that on board to some extent, but could it form the basis of a whole new assessment strategy for PG learners?
Does standards-based grading, or one of its variants, offer a way out here? See, for example, “Specifications Grading” by Linda Nilson.
Crudely, the idea is to set tasks that are pass/fail, with very good work needed for a “pass”, and some facility to resubmit initially failing work in the light of feedback.
The sort of set-up one might be thinking about would have a number of lower-level tasks and some higher-level ones. Grades emerge by counting the number of completed tasks; it might be, for instance, that you have to complete all but one of the lower-level tasks for a pass, and complete all of them plus three higher-level tasks for an A.
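For anyone curious how such a counting rule might look in practice, here is a minimal sketch in Python. The function name and the thresholds (all but one lower-level task for a pass, everything plus three higher-level tasks for an A) simply mirror the illustrative numbers in the comment above; they are assumptions, not anything prescribed by Nilson’s book.

```python
# A minimal sketch of the counting rule described above.
# The thresholds are illustrative, taken from the example in the comment.

def specifications_grade(lower_completed: int, lower_total: int,
                         higher_completed: int) -> str:
    """Return a crude grade from counts of completed pass/fail tasks."""
    if lower_completed == lower_total and higher_completed >= 3:
        return "A"
    if lower_completed >= lower_total - 1:
        return "Pass"
    return "Fail"

# Example: 5 of 6 lower-level tasks and 1 higher-level task completed.
print(specifications_grade(5, 6, 1))  # -> "Pass"
```

The point of the sketch is that the “mark” is just a count of specifications met, so the conversation with the student can stay focused on which tasks still need work rather than on a number.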