Formative feedback through tutorial participation reviews

iStock [Warchi]
Krysten Blackstone, a PhD student and tutor in the School of History, Classics and Archaeology, shares her experiences developing formative feedback for students.

I teach history, and as in many humanities subjects, finding a balance between formative and summative feedback can be challenging because of the nature of our assessment practices. For the course I teach, assessment comprises non-written skills, an essay, and an exam. This semester I felt some of my students needed more formative feedback – feedback that isn't graded, but is meant to highlight areas for improvement before work is officially assessed. My solution was a formal mid-semester participation review.

Setting the stage throughout the semester is key. By telling students about the review from week one, I give them a tangible deadline to work towards, rather than the seemingly distant end of semester. Similarly, I make it very clear from the beginning that I expect weekly contributions from every student: non-participation is not an option in my class. This greatly aids the review process because, thanks to this 'forced' participation, I always have assessable contributions from every student.

In my experience, the participation review needs to be scheduled before the natural half-way point of the semester – just before students become 'set in their ways', while change is still possible. My review week falls around the fourth or fifth week of an eleven-week term. This timing works particularly well because students still have time to adjust, while giving the tutor long enough to gather constructive feedback.

To make the review work, I commit to consistent weekly preparation. Each week, I keep a rough note of individual participation – the amount and the quality of contributions – including specific examples. Although I do not use strict numerical grades, these notes let me keep track of each student throughout the semester.

Personally, I like to set office hours for the specific purpose of offering this feedback on non-written skills. I have found that meeting in person allows the review to be more of a conversation than a critique, though these meetings can easily be adapted to email. While the main intention is to give students feedback, I also encourage them to give me feedback; if they found a particular tutorial exercise unhelpful, these meetings are a good place to discuss it.

My feedback to students focuses on two things: what they do well, and what could be improved. I avoid negatives completely. Of course, if a student is performing poorly, it is not my intention to give them a different impression. However, these brief meetings are meant to encourage students to contribute more, and to change how they think about their contributions, not to discourage them. I find most of my students are very capable; they are just sometimes shy, or lack confidence. By focusing on ways to improve instead of what has gone wrong, it is easier to bring about positive change. These meetings are also a chance for me to demonstrate to students that I pay attention to and value their contributions.

The one-on-one nature of the review allows very specific feedback to be given, even to the best students. The timing early in the semester almost guarantees that students' non-written skills grades will have improved by the end of term. When I put this into practice, I had immense success: of the 24 students I taught, 23 participated. Feedback from students has been positive; one commented that the mid-semester assessment had given them "fair and helpful notes on [..] participation". Another said:

“They were very helpful, they cleared up my concerns about where I stood for my grade and gave me specific things to improve on”

Students in my tutorials were certainly more encouraged and engaged during the second half of the semester, but most exciting for me was that the quality of their contributions also increased significantly.

Krysten Blackstone

Krysten Blackstone is a History PhD student and a tutor within the School of History, Classics and Archaeology. Her research focuses on morale and identity in the Continental Army during the American Revolution. She is also a committee member and contributions editor of Pubs and Publications, where you can find more of her writing.

2 comments

  1. I really like the idea of doing this in a tutorial format. I organise essay preparation tutorials with my students, and seminar participation is part of one of their assessments, so I can see how this would work. My problem is that I have a lot of students (80 on one module), so I struggle to remember names and keep track of their participation. Do you have any advice in that respect?

  2. That certainly makes it a bit trickier. I have students in classes of 12, so it's slightly easier to keep track, though it does require me to take notes in each class. (Our online attendance system conveniently has pictures with the names, which helps enormously.) Personally, I have a massive Excel spreadsheet with their names, weekly participation scores (A, A+, B-, etc.) and then a brief comment about what they did well or what they didn't do well, which I fill out at the end of each class. That may not be overly helpful if you have 80 students and struggle with names, though. I also do rotational source presentations, which may work. Each week two of my students have to bring in a primary source and briefly (3 minutes on average) present on it. They get to choose the source, and as long as they can connect it to that week's topic I don't really have many restrictions. If you could manage that over the course of a couple of weeks, you could identify the student based on the source rather than their name, and it would mean you have something small to comment on for each individual. I suppose, actually, this could be done on a larger scale: instead of giving individual participation feedback, you could assess the group as a whole – noting particularly good practices and ones that aren't so good. Then you could address those things in class (and turn it into a discussion/teaching moment), or even via email – highlighting some really good interactions/participation you witnessed and also a few things to avoid or work on, without calling any one person out. While not as personalised, this would still give students a solid basis of what exactly you like and don't, and they can decide how it informs their own contributions.
