Welcome to the March-April Hot Topic: Moving forward with ChatGPT.
With Twitter feeds awash with opinion pieces, blog posts, videos and podcast episodes discussing the impact of ChatGPT in Higher Education, this Teaching Matters series jumps feet first into the conversations and debates that this Artificial Intelligence (AI) technology has stirred up in the world of university learning and teaching. Amid sensationalist headlines about universities going back to pen-and-paper exams and ChatGPT acing medical school exams, the series asks critical questions about the relationship between AI software and the future of our University.
This post, written by Jenny Scoles, Joséphine Foucher and Tina Harrison, starts the series by signposting some useful materials recently published online that have come to our attention. Hopefully, reading through these will give you a broad understanding of some of the issues we are facing as educators, as well as highlighting the challenging ethical dilemmas we must now start to untangle.
At the end of November last year, the San Francisco tech company OpenAI launched ChatGPT, software that uses “large language models to generate text responses to natural language prompts” (see conversation between Tim Fawns and Dave Cormier). In other words, it is an extremely smart AI text generator that can offer – in a matter of seconds – reasoned responses to questions. The tool can write essays, produce computer code, and suggest medical diagnoses, among many other possibilities.
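For colleagues curious about what sits behind the chat window, the same family of models is also available to programmers through a public interface. The minimal Python sketch below, using OpenAI’s official client library, shows how a single question-and-answer exchange might look in code (the model name and prompt are purely illustrative, and an API key is assumed to be configured in the environment):

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Ask the model a natural language question, much as one would in the chat window.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any available chat model could be used
    messages=[{"role": "user", "content": "Summarise photosynthesis in two sentences."}],
)
print(response.choices[0].message.content)
```

This is essentially all the chat interface does: it packages your prompt, sends it to a large language model, and displays the generated text.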
Such software is not new; many have forewarned us of its growing inevitability. But the launch of ChatGPT seems to have been the watershed moment that propelled AI software into the mainstream. In part, this is because it is the first time that AI language software has been made available to the public through a free and intuitive web interface.
ChatGPT is provoking strong emotional responses in the academic community, from fear and overwhelm to excitement and positivity. As Kate Lindsay states, “[ChatGPT] doesn’t so much present a threat to the university experience, but rather directly into the heart of the purpose of a university education – its ability to ‘teach you how to think’.” ChatGPT’s apparent ability to complete complex cognitive tasks, producing work that can be indistinguishable from human-generated content, invites us – as a University – to reflect on the existential challenge this poses to the very process of learning and academic culture.
Peter Bryant offers some useful points about why we should not panic, and Alexandra Mihai advises us to get off ‘the fear carousel’. Bron Eager has created a free “Beginners guide to ChatGPT, including practical examples for using AI technology in your teaching practice” to help those overwhelmed with all the chat about Chat. Matt Bower and Mark Alfano (Macquarie University) have produced a short video, which “provides teachers with a brief introduction to ChatGPT, including an analysis of how well ChatGPT could complete different types of assessment tasks in an undergraduate unit”.
Key to the debates around ChatGPT are our relationships with assessment, student-staff trust, and cheating. Some colleagues have argued that ChatGPT will be detrimental to the integrity of online exams. Others (Rudolph et al., 2023) contend that the software’s use will become quotidian and must therefore push educators to devise innovative assessment methods beyond the traditional essay, or even to rethink assessment altogether. ChatGPT can thus be an opportunity for educators to improve their skillset and to continue fostering trusting relationships with students. Indeed, it could be an opportunity to engage in more horizontal relationships with students and to actively seek creative ways of working with them to use AI at university.
Then there is the issue of diversity and inclusion – will ChatGPT help or hinder? Amanda Kirby believes ChatGPT can provide positive opportunities for individuals who are neurodivergent. Dean Fido and Craig Harper argue that such AI writing software has the potential to address attainment gaps “through the provision of ‘plain language’ descriptions and comparisons of essay structures”. Conversely, others, such as Sam Illingworth, argue that AI could actually undermine diversity in the curriculum.
In the midst of these debates, universities are starting to publicise their institutional responses to ChatGPT (e.g., Deakin University), guiding their staff and students on working with (and not necessarily against) AI software. This series aims to start a wider conversation at The University of Edinburgh about how we can move forward to work ethically, positively and critically with AI software and similar technologies.
In this spirit, our position at The University of Edinburgh is not to impose a blanket restriction on the use of AI, but rather to educate our students (and support our staff) to enable its responsible use. In a University as broad as ours, there is likely to be huge variation in the extent to which colleagues may want to engage with AI. Some colleagues may want to make explicit use of AI (e.g. asking students to analyse and critique the content it generates), while others may specify that AI should not be used, or only used in specific ways, and that is appropriate. The reality is that AI is out there, students are already using it, and they will be entering workplaces where AI is also likely to be used.
University of Edinburgh guidance for students on the use of AI is now available on the Academic Services webpage. This emphasises the importance of students’ own original work, makes them aware of what AI currently can and cannot do well, and advises them on how they might appropriately engage with it. It also explains how to acknowledge the use of AI, while allowing for innovative use of AI and for taking advantage of the positives it can offer.
Upcoming related events
- Beyond the Chat: Understanding the Impact of ChatGPT in Higher Education, Thursday 30th March, 9:45am – 12:30pm (University of Portsmouth Events Hub – online).
- Academic Practice and Technology Conference – 30th June 2023 – Call for Proposals: “Implications and Ethical Dimensions of using Artificial Intelligence in Higher Education Teaching, Learning and Assessment”. Deadline for submission is 5th May 2023.
Joséphine Foucher
Joséphine is doing a PhD in Sociology at The University of Edinburgh. Her research looks at the intersection between art and politics in contemporary Cuba. She supports Jenny Scoles as the Teaching Matters Co-Editor and Student Engagement Officer through the PhD Intern scheme at the Institute for Academic Development.
Tina Harrison
Tina is Assistant Principal Academic Standards and Quality Assurance and Professor of Financial Services Marketing and Consumption. Tina joined the University in 1993 and continues to maintain an active academic role in the Business School. She has had overall responsibility for the University’s quality assurance framework as Assistant Principal since 2009. She plays a key role in the Scottish HE quality landscape as a member of QAA Scotland’s Advisory Board, chair of the sparqs University Advisory Group, and member of the Quality Arrangements for Scottish Higher Education (QASHE) group.
Jenny Scoles
Dr Jenny Scoles is the lead editor of Teaching Matters. She is an Academic Developer (Learning and Teaching Enhancement), and a Senior Fellow HEA, in the Institute for Academic Development. She provides pedagogical support for University course and programme design. Her interests include student engagement, sharing practice, professional learning, and sociomaterial methodologies.
On the assessment front, there is ongoing work on making output from AI models detectable, for example through watermarking:
https://arxiv.org/pdf/2301.10226.pdf
The method requires cooperation from AI service providers, but learning institutions might collectively be able to persuade them to adopt it.
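The core idea in that paper (Kirchenbauer et al., “A Watermark for Large Language Models”) can be sketched briefly. Before each token is generated, the previous token seeds a pseudorandom “green list” covering part of the vocabulary, and the model is softly biased towards green tokens; a detector who knows the seeding scheme then counts how many tokens landed on their green list. The toy, word-level Python sketch below illustrates only the partitioning and detection steps (the actual method biases the model’s logits during generation, which is omitted here, and the real scheme operates on subword tokens):

```python
import hashlib
import math
import random

GREEN_FRACTION = 0.5  # share of the vocabulary placed on each "green list"

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    # The previous token deterministically seeds a pseudorandom partition of
    # the vocabulary, so a detector can recompute it without access to the model.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])

def watermark_z_score(tokens: list[str], vocab: list[str]) -> float:
    # Count how many tokens fall on the green list seeded by their predecessor.
    # Unwatermarked text hovers near z = 0; watermarked text scores far higher.
    t = len(tokens) - 1  # number of scored positions
    hits = sum(
        tokens[i] in green_list(tokens[i - 1], vocab)
        for i in range(1, len(tokens))
    )
    expected = GREEN_FRACTION * t
    return (hits - expected) / math.sqrt(t * GREEN_FRACTION * (1 - GREEN_FRACTION))
```

Because detection needs only the text and the shared seeding scheme, a university could in principle check submissions without querying the provider at all – but the watermark has to be embedded at generation time, which is why provider cooperation is essential.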