Assessment and feedback in conversation with students: Towards co-creating assessment and feedback

Graffiti sign saying 'Together, we create!' (Image credit: Pixabay, StockSnap, CC0)

In this post, Dr Catherine Bovill explores how to effectively engage staff and students in the co-creation of assessment practices. Catherine is Senior Lecturer in Student Engagement at the Institute for Academic Development. This post is part of the Learning and Teaching Enhancement Theme: Assessment and Feedback Principles and Priorities.


The University’s new assessment and feedback principles include ‘Our assessment and feedback practices will involve conversation with students’ – and this is considered a baseline expectation across the University. Linked to this, there is a priority aspiration to have ‘Students as partners/co-creators in assessment and feedback’, which sets out a strategic direction for ongoing enhancement. As Tina Harrison says in a recent blog post, ‘Ongoing conversation with students about assessment can develop a shared understanding of the purpose of assessment, assessment expectations and marking criteria’. In co-creation of assessment and feedback, staff create opportunities for students to share decisions about, and responsibility for, assessment design, assessment methods, marking criteria, and/or feedback.

Research demonstrates many benefits of co-created assessment and feedback. Students gain a better understanding of the assessment process (assessment literacy); tend to adopt deeper approaches to learning (focusing more on learning than on grades); develop their skills; and improve their assessment (including exam) performance (Deeley, 2014; Deeley & Bovill, 2017; Delpish et al., 2010; Hardy et al., 2014; Sambell & Graham, 2011). There are many different ways in which staff and students are co-creating assessment and feedback, and I include some examples here to demonstrate what is possible.

  1. Students designing their own assessment method in the form of an asset

At the University of Edinburgh, Andy Cross and Liz Grant run an interdisciplinary course at the Edinburgh Futures Institute, called ‘Currents: Understanding and Addressing Global Challenges’. They ask students to ‘create an asset’ in order to address the learning outcomes. Students have been highly creative, developing, for example, a board game; a poem; a fictional journal entry; a proposal for a light exhibition; and many other types of asset. (Watch a recording of Andy Cross presenting this work at an Engage network event for more information.)

  2. Students designing marking rubrics combined with peer and self-assessment

At the University of Cumbria, Meer & Chapman (2015) involved students in co-designing marking rubrics/criteria in their Human Resource Development course. They included three iterations of designing marking criteria: the first designed by staff alone, the second by students alone, and the third in collaboration between staff and students. Alongside designing marking criteria, students were invited to undertake peer and self-assessment using the criteria. Students reported gaining a much better understanding of what staff are looking for in assessments.

  3. Students mark sample essays

Dan Bernstein teaches a Psychology class at the University of Kansas in the US. A week before the exam, he brings to class a rubric that articulates his assessment expectations, along with three unmarked essays written by previous students. The essays address questions not included in the exam. Students use the rubric to mark the essays, followed by a class discussion in which Dan Bernstein points out particular strengths and weaknesses in the essays and clearly articulates his expectations (see Cook-Sather et al., 2014 for more information).

  4. Students design essay questions

Peter Kruschwitz teaches Classics at the University of Vienna and, for the last 10 years, has invited students to design their own essay questions. He gives the students six to eight keywords to ensure they stay focused on the broad topic area, but students are encouraged to write essay titles that allow them to place emphasis on areas of particular interest. Over the years, Peter Kruschwitz has seen consistently higher performance from students, linked to greater engagement, than when he set the essay question himself (see Cook-Sather et al., 2014 for more information).

  5. Choice between two assessments

At University College Dublin (UCD), Geraldine O’Neill led a project inviting staff from different disciplines across UCD to offer two different assessments in their course, which students could choose between. Staff provided information sheets about the two assessments in each course so that students could make an informed choice about which assessment they would complete. Although attention was paid to ensuring the assessments were equivalent, the outcomes challenged staff perceptions that students would be motivated to choose whichever assessment seemed easier (for more information see O’Neill, 2011).

  6. Designing multiple choice questions

Paul Denny at the University of Auckland designed a piece of software called ‘PeerWise’, which enables students to design multiple choice questions (MCQs), answer others’ MCQs, and rate the quality of the MCQs designed by others. Students have to write the correct answer with a rationale for why it is correct, as well as several plausible incorrect answers and the rationale for why these are incorrect. Staff who use PeerWise across different subjects and universities have been known to include good quality student questions in their MCQ question banks for summative exams (see Denny et al., 2008).

  7. Co-assessing a presentation

At the University of Glasgow, Susan Deeley teaches Public Policy. In her Service Learning course, she invites students to co-assess a presentation: students award themselves a grade, and she also awards them a grade. She then meets each student individually to agree upon a final grade, encouraging them to negotiate and articulate a rationale for their performance. If they cannot reach agreement on the grade, Susan Deeley retains responsibility for awarding the final grade, but she is transparent and open with students about her rationale for decisions.

Opening up conversations about assessment and offering students choice within assessment processes can be helpful first steps towards co-creation. With the growing evidence of the benefits of co-created assessment and feedback, and supported by the University’s new Assessment and Feedback Principles and Priorities, perhaps you could consider adapting one of these examples in your own practice. Or maybe you have other examples you could share on the Teaching Matters blog (teachingmatters@ed.ac.uk).

If you would like to find out more, the Advance HE resources on Student Partnerships in Assessment may be of interest: https://www.advance-he.ac.uk/membership/advance-he-membership-benefits/student-partnerships-assessment

References

Cook-Sather, A., Bovill, C. & Felten, P. (2014) Engaging students as partners in learning and teaching: a guide for faculty. San Francisco: Jossey-Bass.

Deeley, S. J. (2014) Summative Co-Assessment: A Deep Learning Approach to Enhancing Employability Skills and Attributes. Active Learning in Higher Education 15 (1) 39–51.

Deeley, S. J., & Bovill, C. (2017) Staff-student partnership in assessment: enhancing assessment literacy through democratic practices. Assessment and Evaluation in Higher Education 42(3) 463–477.

Delpish, A., Holmes, A., Knight-McKenna, M., Mihans, R., Darby, A., King, K. & Felten, P. (2010) Equalizing Voices: Student-Faculty Partnership in Course Design, in Werder, C. & Otis, M. (eds) Engaging Student Voices in the Study of Teaching and Learning. Sterling: Stylus.

Denny, P., Luxton-Reilly, A. & Hamer, J. (2008) The PeerWise system of student contributed assessment questions. Paper presented at the Australasian Computing Education Conference (ACE2008), Wollongong, Australia, January. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 78, Simon & Hamilton, M. (eds).

Hardy, J., Bates, S.P., Casey, M.M., Galloway, K.W., Galloway, R.K., Kay, A.E., Kirsop, P. & McQueen, H. (2014) Student-Generated Content: Enhancing Learning through Sharing Multiple Choice Questions. International Journal of Science Education 36 (13) 2180–2194.

Meer, N. & Chapman, A. (2015) Co-creation of Marking Criteria: Students as Partners in the Assessment Process, Business & Management Education in HE. Online: https://www.tandfonline.com/doi/full/10.11120/bmhe.2014.00008

O’Neill, G. (Ed) (2011) A Practitioner’s Guide to Choice of Assessment Methods within a Module. Dublin: UCD Teaching and Learning. https://www.ucd.ie/teaching/t4media/choice_of_assessment.pdf

Sambell, K. & Graham, L. (2011) Towards an Assessment Partnership Model? Students’ Experiences of being Engaged as Partners in Assessment for Learning (AfL) Enhancement Activity. In S. Little (Ed) Staff–Student Partnerships in Higher Education. London: Continuum.


Catherine Bovill

Dr Catherine Bovill is Senior Lecturer in Student Engagement at the Institute for Academic Development (IAD), University of Edinburgh. She is also Visiting Fellow at the University of Bergen, Norway, Visiting Fellow at the University of Winchester, UK, and previously a Fulbright Scholar. She is a Principal Fellow of the Higher Education Academy and Fellow of the Staff and Educational Development Association. Cathy is co-chair of the Curriculum Transformation Programme Student Engagement Strategy Group, leads the IAD programme and course design team, and leads the UoE Student Partnership Agreement and funding scheme. Her research focuses on co-created curriculum, student-staff partnership and student engagement.
