In this post, Dr Claudia Rosenhan, a teaching fellow at Moray House School of Education, shares her findings from a PTAS-funded project on enhancing assessment literacy amongst postgraduate taught (PGT) students and scorer reliability amongst staff teaching on PGT programmes…
Assessment literacy – a grasp of assessment principles and practices – is not knowledge that academic staff and students automatically possess. Students, however, are asked to respond to assessment feedback and use it in their learning without any particular instruction. Academics set assessments and evaluate the results in a way that is, more often than not, intuitive rather than principled.
Our recent project therefore aimed to strengthen assessment literacy by focusing on level descriptors: the definitions of criteria at determined levels in a marking scheme that inform both academics’ judgements and students’ understanding of the feedback they receive on a particular task. The literature on assessment has highlighted the issues that surround these descriptors, which can be summarised as fuzziness and inconsistency (see e.g. Grainger et al 2008; Adie et al 2013). In response, we rewrote the descriptors on our MSc programme in an attempt to make them more comprehensible and more relevant to the assessments they are used for.
Our project then asked whether staff and students on a master’s programme use these level descriptors to create a community of practice. Etienne Wenger defines such communities as groups of people who share a particular concern and deepen their understanding of it through regular interaction (Wenger 1999). The concern, in this case, is learning and teaching, and the assessment task is where assessors and students interact.
Our project revealed that, rather than converging on a common understanding of assessment, students’ and academics’ understandings diverged. Students hinted that they believed academics assessed using tacit knowledge rather than by referring to the explicit standards written in the descriptors. This finding was supported by a focus group, which revealed that academics frequently resorted to ‘common sense’ in their evaluations. Assessment literacy cannot be attained if knowledge is not shared between the two groups.
Our workshop at the University’s Learning and Teaching Conference aimed, therefore, to work out how this common understanding can be achieved:
Motion 1: Assessment literacy depends on clear criteria and a shared understanding of these criteria. This activity revealed assessment to be highly context-dependent: participants realised that a shared understanding must draw on the individual aims and purposes of the assessment in question.
Motion 2: Clear and comprehensible feedback allows the building of assessment literacy. In this activity we asked participants to evaluate a critical incident. The question we asked was: what constitutes good feedback, such that students can share in the assessor’s knowledge?
Motion 3: Students need to understand the ‘grammar’ and vocabulary of assessment and get inside the assessor’s mind. In this activity we showed that assessors are often unaware of how they write feedback. Working from a corpus of around 45,000 words of dissertation feedback, participants were encouraged to predict what kind of language they would expect to find in it.
In sum, both our empirical project and the workshop pointed to the need for constant, active engagement with assessment practices to foster assessment literacy. Future work in this field must therefore explore how such engagement can be achieved against increasing demands on academics to assess and provide feedback with ever-decreasing resources.
You can read the final PTAS report here.