In this post, Dr Jenny Scoles, an Academic Developer at the Institute for Academic Development, shares some lessons learned through failing to successfully implement a really simple exercise aimed at improving student and staff feedback dialogue…
Given the evidence of widespread student dissatisfaction with feedback, and the education research suggesting that tutors’ feedback is often not understood or seen as useful, a few years ago two colleagues (Professor Mark Huxham and Dr Jan McArthur) and I set out to develop an intervention that would allow students to identify their own feedback needs and instil a sense of control in the feedback process.
Our initiative involved a simple, easy to implement, and (we thought) fool-proof exercise that we hoped would encourage dialogue between students and lecturers: Students were given the opportunity to request specific topics on which they would like to receive focused feedback when submitting their coursework assignments. Other educational researchers have recommended such approaches. For example, Hyland (2001, p. 245) suggests ‘providing students with a cover sheet with their first assignment in which they could write some of their feedback requirements’. Bloxham and Campbell (2010) studied the use of interactive cover sheets as an initiative to generate dialogue in assessment feedback. Their small-scale study found limited student uptake, and suggested improvements for further research: ‘further activity and evaluation is required to test the findings with larger year groups, in other discipline areas and in higher levels of study than Year 1 undergraduates’ (p. 299).
Focused feedback initiative
We decided to build on Bloxham and Campbell’s suggestion and implement a larger-scale study across more year groups, exploring the effects of a slight modification to an assignment cover sheet. Thirteen module leaders in a university science department participated in the project, and their modules spanned a range of undergraduate and masters-level courses. In total, 710 students were enrolled on these modules. At the beginning of the trimester, the module leaders were simply asked to add the following sentence to the bottom of the coursework assessment brief before distributing it to students:
Would you like feedback on any specific aspects of this piece of work? If so, please indicate what you want feedback on at the end of your script.
At the respective deadline dates, lecturers collected the students’ completed coursework and sent us any scripts that had requested focused feedback. We used both quantitative and qualitative methods (a mixed-methods approach) to help answer the following research questions:
- What proportion of students chose to request focused feedback on their essays?
- If students chose to request feedback, what kinds of topics did they choose to highlight?
- Did the focused feedback option require more work for tutors marking a student essay (as measured by words written in response)?
- Did students who requested focused feedback achieve significantly higher grades than those who did not?
The quantitative analyses quickly showed that most students did not participate in the initiative: only 6% chose to request focused feedback (that is, only 42 students out of 710). And of those who did ask, most requested help on relatively superficial topics, such as referencing and citations, as opposed to ‘deeper’ issues like logical argument or comprehension. There was no evidence that providing focused feedback saved tutor time; staff responded to focused feedback requests by providing more feedback, not less. And there was no difference in the mean mark awarded to students who did, and who did not, request focused feedback.
We were genuinely surprised that more students had not taken up this simple exercise to help them get more personal and specific feedback.
The qualitative responses helped us find out why this initiative had failed…
Lack of trust
Our findings suggested that the students were so concerned with the final summative mark that they avoided engaging with focused feedback in case it influenced their mark negatively. There was a fundamental lack of trust in the initiative’s intention, as one student noted, “what if it’s wrong and I would draw their [the tutor’s] attention?” Students were strategically keeping quiet about their perceived weaknesses in case they lost marks by highlighting areas of work that the tutor may otherwise have glossed over.
Lake Wobegon effect
Some students appeared complacent about their need for feedback since they ‘did alright’. There was a perception that feedback was instead for students who were struggling:
I’ve always believed that the really personal more specific feedback is for the people that really went wrong.
Students with this view often achieved only average marks: there was a sense of inflated self-confidence even though there was considerable room to improve. We termed this the ‘Lake Wobegon’ effect. The term was coined by Justin Kruger (1999) and taken from Keillor’s (1985) book Lake Wobegon Days, in which all the children believe they are above average. Kruger found that it is quite common for people to see themselves as above average in their skills and abilities when comparing themselves with their peers. One possibility was that tutors were inadvertently reinforcing the Lake Wobegon effect by (understandably) focusing their efforts on students who were struggling. Indeed, our quantitative findings showed that tutors tended to give less feedback to students who performed better in the coursework.
This research project was a fascinating example of how different the world can look to two groups of people involved in the same process. Whilst we and the staff considered assessment and feedback an opportunity to learn, the students approached assessment as a stressful game they needed to play in order to ‘win’ the highest mark possible. In our failure, we had discovered something important: to make focused feedback work, we needed to look closely at these hidden rules of the ‘assessment game’, and recognise the different interests of the players involved.
This blog post is adapted from a book chapter: Scoles, J., Huxham, M., & McArthur, J. (2014). Mixed-Methods Research in Education: Exploring Students’ Response to a Focused Feedback Initiative. SAGE Research Methods Cases, Sage.
Bloxham, S., & Campbell, L. (2010). Generating dialogue in assessment feedback: Exploring the use of interactive cover sheets. Assessment & Evaluation in Higher Education, 35, 291–300.
Hyland, F. (2001). Providing effective support: Investigating feedback to distance language learners. Open Learning, 16, 233–247.
Keillor, G. (1985). Lake Wobegon Days. New York, NY: The Viking Press.
Kruger, J. (1999). Lake Wobegon be gone! The ‘below-average effect’ and the egocentric nature of comparative ability judgments. Journal of Personality and Social Psychology, 77, 221–232.