Is your assessment really necessary?

Photo Credit: Unsplash, Patrick Hodskins, CC0

In this post, Professor Richard Blythe, Personal Chair of Complex Systems in the School of Physics and Astronomy, shares his reasons for radically changing the way he assesses students…

When I started teaching my first major course, I heard rumblings from the students that I was overburdening them with continuous assessment. One of the few things on which university physics teachers agree is that mastery of physical principles comes mainly from solving problems. Therefore, it is typical for each lecture course to be accompanied by (usually weekly) sheets stuffed full of problems for students to attempt. Multiply this by at least three concurrent courses, and you start to see the problem.

Round like a circle in a spiral

In the olden days (that is, when I was a student), these problem sets were intended primarily for formative feedback. You’d work on them, hand them in, and get some comments to help you sharpen your skills. Over time, students expressed a desire for their attempts to contribute summatively to their course grade, partly to reflect the effort expended, and partly to take the edge off the exam.

This has led to a vicious circle whereby students direct large amounts of effort to coursework – sometimes to the exclusion of other, potentially more valuable, forms of learning. In turn, this means that to really get students to do something, you have to attach a mark to it, and round and round it goes. Worse, research (Kember 2004) suggests that if students feel overloaded, they fall back on strategic learning approaches; these leave them feeling less secure about the subject, which in turn makes the problems take longer to solve than we ever intended.

Following the Radical Road

My knee-jerk reaction to student grumbling about workload can be summarised as “Well, they would say that, wouldn’t they?” However, for unrelated reasons, I had become sensitised to students’ concerns and dissatisfactions more generally, and questioned whether all this coursework was actually achieving anything. I decided that radical action was needed, and immediately halved the amount I was asking students to hand in. I then spent the rest of the semester fretting about the impact this might have on learning and particularly exam results.

These fears proved unfounded. When I received the exam board spreadsheets, I had to double-check that I hadn’t been sent the ones for the previous year by accident: the extra workload endured by students in previous years had apparently achieved nothing (or, at least, nothing that can be measured by an examination; whether anything useful falls in this space is not, I think, a discussion for now).

The trickle becomes a flood

Emboldened by this experience, we have – in the School – become more critical of adopting multiple rapid-fire problem-solving hand-ins as our go-to form of continuous assessment. Over time, we have seen these partially replaced by online multiple-choice quizzes (Top Hat is great for this) and class tests, or by making compulsory hand-ins partly or fully optional.

There has been a nagging worry that we are missing something – perhaps opportunities to engage more deeply with the material, written feedback or practice with exam-style questions – by ditching traditional hand-ins. I was particularly concerned that an online quiz, which I had introduced into one course (as a replacement for a hand-in), was essentially measuring only engagement rather than any learning. I therefore looked at how well marks for the remaining hand-ins and the quiz correlated with the exam mark. To my surprise, the correlation was about the same – that is to say, equally low (R-squared of about 0.35 to 0.4). Even more surprising was the fact that while a student’s hand-in mark tended to systematically exceed their exam mark, the quiz mark fluctuated on either side of it.
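For anyone tempted to run a similar check on their own course, the calculation boils down to two numbers per assessment: an R-squared against the exam mark, and the mean of the coursework-minus-exam difference (whose sign reveals any systematic over-marking). Below is a minimal sketch in Python; the marks are synthetic placeholders with invented spreads and offsets, not my cohort’s actual data, and in practice you would read the columns from the exam-board spreadsheet instead.

```python
# Minimal sketch of the correlation-and-bias check described above.
# All mark data here is synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical percentage marks for a cohort of 100 students.
exam = rng.normal(60, 12, size=100).clip(0, 100)
hand_in = (exam + rng.normal(12, 10, size=100)).clip(0, 100)  # tends to sit above the exam mark
quiz = (exam + rng.normal(0, 12, size=100)).clip(0, 100)      # scatters on either side of it

def r_squared(x, y):
    """Square of the Pearson correlation coefficient between x and y."""
    return float(np.corrcoef(x, y)[0, 1]) ** 2

for name, marks in [("hand-in", hand_in), ("quiz", quiz)]:
    print(f"{name:8s} R^2 vs exam: {r_squared(marks, exam):.2f}   "
          f"mean (coursework - exam): {np.mean(marks - exam):+5.1f}")
```

With numbers like these, both assessments show similarly modest R-squared values, but only the hand-in shows a large positive mean difference – the pattern described above.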

Although I am cautious about viewing exam performance as the final word on a student’s competence in a course, I think it is entirely reasonable for students to use coursework marks as an indicator of how their learning is progressing, and the smaller the difference between these and the exam marks, the better. So by this measure, the short quiz questions were taking less time and providing a less biased measure of progress. Perhaps more worthwhile is the fact that, by spending less time obsessing over the hand-ins (not to mention all the marking they entail), we are finding more time for face-to-face interactions with students about their work.

Reference

Kember, D. (2004). “Interpreting student workload and the factors which shape students’ perception of their workload.” Studies in Higher Education, 29, 165–184. https://doi.org/10.1080/0307507042000190778

Richard Blythe

Professor Richard Blythe holds a Personal Chair of Complex Systems in the School of Physics and Astronomy. In his research he aims to understand the statistical properties of complex interacting systems that are driven out of equilibrium. Applications include the clustering of swimming bacteria and the spread of social behaviour through a population. Having experimented with a variety of teaching and assessment methodologies in his undergraduate classes, Richard co-founded the Experienced Teacher Network to exchange these experiences (good and bad) and generate new thinking in university teaching.
