How do we do effective feedback?: A practical example

Image credit: Patrick Perkins, Unsplash, CC0

In this extra post, Jane Hislop and Tim Fawns from The University of Edinburgh’s Medical School continue the conversation on effective feedback by spotlighting the success of their online MSc Clinical Education course. Drawing on students’ natural tendency to compare their work with that of their peers, Jane and Tim demonstrate how formally introducing peer review elements into assessments can help transform feedback into a more active process. This post responds to a recent Teaching Matters series on Assessment and Feedback.


The Teaching Matters blog post from Neil Lent and Tina Harrison, “Feedback: From one-way information to be an active dialogic process”, considered the question, how do we do effective feedback? The importance of moving away from viewing feedback as transmission of information, and a product passively received from teachers, to an active process that students engage with (Winstone et al., 2022) is well recognised in the literature (Boud and Molloy, 2013; Carless and Boud, 2018; Winstone and Carless, 2019). In this post, we share our experiences of introducing a peer review process within formative assessments of our online MSc Clinical Education.

Nicol (2020) argues that through comparing their work to that of others, students engage in “self and co-regulation of learning” (p. 1) and generate their own internal feedback. For Nicol, comparison is a natural process inherent within self-regulation and this process can be exploited by making it explicit.

We designed a formative assessment approach that would make the natural comparison process between peers explicit and help students to generate their own internal feedback. We were also pragmatic about how we could continue to manage the needs of our growing number of students (> 160 students in our first year). Finally, we were keen to model feedback practices with our learners who are both clinicians and educators from a diverse range of global and professional backgrounds.

For their formative assessments, we asked students to post a writing task, focused on the course topic, to a shared group discussion board (each board shared with approximately 20 peers); this task would ‘feed forward’ into their summative assignment. Students were then asked to comment on two of their peers’ work via the discussion boards. Finally, as recommended by Nicol (2020), students posted reflections on what they might change about their own work, having gone through the comparison process. Engaging in writing at this stage allows the learner to generate feedback around metacognition (Tanner, 2012; Nicol, 2013). Once students had had time to undertake these tasks (a week or so later), a tutor summarised their thoughts about the posts and responses. Nicol (2020) discusses the benefits of teacher feedback coming after students have generated their own internal feedback, so that students are more likely to make sense of, and therefore act on, the teacher’s feedback.

As a first step, we piloted this approach within one of our first-year courses. Responses to a questionnaire suggested that students were not entirely on board with this approach to feedback and wanted feedback from an authoritative other, such as a tutor, to ensure their work was ‘correct’. Winstone and Boud (2022) discuss the conflict that can arise between assessment and feedback, and the need to clarify the purpose of formative feedback in relation to supporting learning. As a team, we reflected on this and realised that students had not sufficiently understood the purpose and rationale of this formative assessment task.

The following year, we introduced the peer review approach within all Year 1 formative assessments. Our introduction to the task included a recorded podcast of a team discussion of the task’s purpose and pedagogical rationale, and students were directed to Nicol’s (2020) paper on the power of internal feedback. This time, we evaluated the formative approach by interviewing students at three different stages (the start, middle, and end of their first year). While we have further data analysis to do, preliminary findings suggest that students saw value in learning through this approach.

The key learning from our experience of introducing a new approach to formative assessment has been the importance of laying the groundwork to prepare learners for its purpose and pedagogical underpinnings. Now, in our third year of running this approach, we are increasingly convinced of its value to students. The marking burden of formative assessment has also been greatly reduced. We would encourage others to consider whether this approach might be helpful within their own context.

Acknowledgments: Thanks to Brian Carlin, Year 1 programme lead in Clinical Education, and to Kirstin Stuart James and Janette Jamieson for helping with the evaluation.

References

Boud, D. and Molloy, E. (2013) ‘Rethinking models of feedback for learning: The challenge of design’, Assessment and Evaluation in Higher Education, 38(6), pp. 698–712. doi: 10.1080/02602938.2012.691462.

Carless, D. and Boud, D. (2018) ‘The development of student feedback literacy: enabling uptake of feedback’, Assessment and Evaluation in Higher Education, 43(8), pp. 1315–1325. doi: 10.1080/02602938.2018.1463354.

Nicol, D. (2013) ‘Resituating Feedback from the Reactive to the Proactive’, in Boud, D. and Molloy, E. (eds) Feedback in Higher and Professional Education: Understanding It and Doing It Well. Oxon: Routledge, pp. 34–49.

Nicol, D. (2020) ‘The power of internal feedback: exploiting natural comparison processes’, Assessment and Evaluation in Higher Education, 46(5), pp. 756–778. doi: 10.1080/02602938.2020.1823314.

Tanner, K. D. (2012) ‘Promoting student metacognition’, CBE Life Sciences Education, 11(2), pp. 113–120. doi: 10.1187/cbe.12-03-0033.

Winstone, N. and Carless, D. (2019) Designing Effective Feedback Processes in Higher Education. 1st edn. London: Routledge Taylor & Francis Group.

Winstone, N. E. et al. (2022) ‘Measuring what matters: the positioning of students in feedback processes within national student satisfaction surveys’, Studies in Higher Education, 47(7), pp. 1524–1536. doi: 10.1080/03075079.2021.1916909.

Winstone, N. E. and Boud, D. (2022) ‘The need to disentangle assessment and feedback in higher education’, Studies in Higher Education, 47(3), pp. 656–667. doi: 10.1080/03075079.2020.1779687.


Jane Hislop

Dr Jane Hislop is Director of the PG Certificate in Simulation Based Clinical Education and lecturer in medical education in the Medical School. Jane teaches on online distance-learning programmes for those involved in undergraduate and post-graduate education of health professionals (including qualified doctors, nurses, dentists, pharmacists, and allied health professionals). In addition, Jane works part-time as a Clinical Education lead in Musculoskeletal Physiotherapy within NHS Fife Scotland. Jane has a particular interest in peer assessment and feedback, as well as Simulation methodology. You can find Jane on Twitter at @hijanehislop.


Tim Fawns

Dr Tim Fawns is Co-Director of the MSc in Clinical Education and leads a course on ‘Postdigital Society’ for the Education Futures programme in the Edinburgh Futures Institute. His main academic interests are in education, technology, and memory. Recent publications include “An Entangled Pedagogy: Looking Beyond the Pedagogy–Technology Dichotomy” (2022) in Postdigital Science and Education and “Online Postgraduate Education in a Postdigital World” (2021) in Beyond Technology.
