In this post, Vassilis Galanos proposes a counter-Turing Test that can assist teaching in non-ChatGPT-able ways. This is part 2 of Vassilis’ post from last week, in which he situated the challenges posed by ChatGPT within a bureaucratic and capitalistic infrastructural culture that supports its existence. This post belongs to the Hot topic series: Moving forward with ChatGPT.
III. “~~She’s~~ It’s a model and ~~she’s~~ it’s looking good” – if it has a model card
Second Kraftwerk reference (The Model)*, but I will not write an analysis of that one here. In 2019, Margaret Mitchell and colleagues recommended that AI models (such as large language models, LLMs) should be accompanied by “model cards”: short documents evaluating, among other things, a model’s datasets, risks, limitations, and biases, in order to promote transparency. While thinking over the Winter break about how to incorporate-yet-deconstruct ChatGPT in class, I decided to ask students to use it actively as a helper, but also to write a short “model card” assessing the algorithmic output in terms of originality of content, biases, and quality of references. I conducted my own experiments in advance, using some of my course’s discussion questions as prompts, and encountered multiple glitches. My favourite was the case where, asked for some standard references on a core theory, the software attributed the title of an existing article to the wrong author, whose name has also been associated with that theory: namely, Langdon Winner, who has written extensively about technological determinism – including, judging from his latest Facebook posts, about GPT. According to ChatGPT, however, he has also written the article “Technological Determinism is Dead, Long Live Technological Determinism,” which was in reality authored by Sally Wyatt. To me, that glitch was a good case for re-examining technological determinism! I made some live demonstrations of it in class to spark students’ interest.
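For readers curious what such a student-facing model card might look like, here is one possible sketch, loosely adapted from the headings of Mitchell et al.’s template to the three assessment criteria above; the specific fields are my illustration, not a fixed form:

```markdown
## Model card: ChatGPT-assisted coursework

**Model and version:** ChatGPT (date of access)
**Prompt(s) used:** [paste the exact prompts]

**Originality of content:**
Which claims are generic rephrasings of course material,
and which (if any) are genuinely novel?

**Biases observed:**
Which perspectives, regions, or authors are over- or
under-represented in the output?

**Quality of references:**
For each reference generated: does it exist, and is it
attributed to the correct author?

**Post-editing performed:**
What had to be rephrased, fact-checked, or restructured
before submission?
```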
Due to my course’s focus on the political and economic structures of the internet, I showed both sides of the coin: indeed, ChatGPT’s services appear to be much cheaper than legal human editing or illegal essay-writing services. That coincided with the week when Billy Perrigo’s article “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic” was published in Time, while several online commentators urged users (students, teaching staff) to stop feeding more data to OpenAI’s colossal machinery. I left the decision up to students.
What happened? Most students didn’t even bother – only two (out of 104) used ChatGPT, and only one wrote a model card. Both reported poor results and the immense amount of optimisation they had to perform: making the essay more personalised, checking facts and references, connecting arguments. Many students emailed me to apologise that they would not use ChatGPT precisely because of these burdens – correcting ChatGPT and ensuring that the text is credible. The one student mentioned in my previous post was a creatively erroneous exception.
Now let me assume that some students used ChatGPT but did not disclose it, and that my course’s marking team was unable to detect it because of good polishing work. Conducting my own experiments with journal-article writing using ChatGPT’s assistance, I usually ended up with a mere 10% of the sentences in the final draft having been produced by the machine (indeed, one does not want to spend brain energy constructing those mechanistic sentences). The rest involved rephrasing prototyped text, conducting additional research on top of pre-existing arguments, creating meticulous puns and punchlines, and organising the structure. Recent requests by journals to include declarations of ChatGPT usage are as worthless as my initial recommendation to use model cards, beyond their playful value. A more interesting and important requirement would be to provide a carbon-emission calculation with every LLM query, to incentivise more well-thought-out uses of the model by rendering transparent its environmental costs and the labour in its data pipeline.
IV. ChatGPT’s philosophy reminds us that appreciating hidden gems is not elitist.
Book reviews are a typical type of assessment in Humanities courses. Typically, a course organiser will offer a list of book recommendations they have read and ask students to summarise them, earning extra marks if they connect the book with other course readings. (Sometimes course organisers have not read the books and seek good summaries by students so that they can pretend to have read them – that is another story, yet related to turbo-academic speed rates.) As a researcher, but also an avant-garde music collector, I have developed an interest in digging up underrated and obscure decades-old bibliographic references, partly in a long-term attempt to criticise the hype of the “new.” With that in mind, for the purposes of my course I included in my list of book recommendations three (excellent) books on internet history: a well-known canonical text from the early 2000s with thousands of citations, one written from a non-Western perspective in the early 2010s with a few dozen citations, and one that is Western-oriented but was published in 2022 and looks at a niche aspect of the topic. The first book can easily be found online in its entirety (don’t ask me if this is legal), the second can only be found in print, and the third can be accessed online only in part, via Google Books, unless a library provides access. I tried to construct book reviews using ChatGPT, knowing that the model draws on existing online reviews. As expected, the review of the first book was excellent and could definitely earn an average mark, even higher if the model was asked to connect the theme with some course material. Reviews of the other two books were, again as expected, very poor – essentially rephrasings of existing book descriptions from online bookstores.
There is, unfortunately, excessive tokenism around the value of digging up niche and obscure references. It is an old debate: the mainstream, being more accessible, is deemed rightfully successful, while lovers of the obscure stand accused of a passion rooted in intellectual elitism. However, supporting obscure references can also be viewed as an act of political and artistic resistance to the data accumulation and metrics that reinforce the “average.” There is virtue in giving older publications second chances, or in supporting authors who have only recently published. Admittedly, this might come at the expense of a senseless chase after obscurity for obscurity’s sake, or after constantly new material; there is a reason some books remain obscure, and indeed many new books reinvent the wheel. But in light of ChatGPT as a social mirror, it falls to the teacher’s selection skills to recommend some new or overshadowed books for students to review. (The same computational problem of producing reviews for obscure books becomes more crucial when contemporary AI is used to analyse rare diseases or classify gender non-conforming individuals.)
V. From robots-against-robots to a fight against the probable.
The probabilistic mechanics of ChatGPT have led to the development of software that estimates the likelihood of a text having been produced by an LLM by testing it against similar or overlapping databases. Such plugins are now introduced as part of plagiarism-detection software, creating an additional divide across educational communities in terms of ideology (whether these should be used at all, cf. Turnitin as a prior instance) but also economy (can universities afford such software?). The situation is homologous to de/encryption software used against en/decryption software, the employment of facial recognition data to detect deepfakes, or the old proposal to use autonomous robots in specially designated warfare battlefields without the physical participation of any soldier (an anecdotal recommendation by Prof Aaron Sloman in the 1980s; see also the Plawres Sanshiro and Medabots cartoons). The latter example might sound laughable to some – precisely because of the unspoken truth that war cannot be “just” if its casualties involve no human harm. Possibly, that is what justifies, in similarly unspoken ways, contemporary normalisations of robots against robots, including ChatGPT-detection plugins: probabilistic machines that detect the probability of a text having been written by a probabilistic machine. Who normalises what more quickly is an expression of power.
For those who remember working with fractions in elementary algebra, you might share the inner fulfilment of eliminating a common denominator to solve an equation. My suggestion, from all the above, is that ChatGPT teaches us about the probabilistic redundancy in our everyday writing. The reason so many groups are fascinated or intimidated by it has to do with a deeper consideration of what our own bureaucracy has done to us, paired with the realisation that bureaucracy is redundant if it can be operated by a machine that does not understand its cause.
To sum up, ChatGPT in education offers a good opportunity to discuss what has become robotic and redundant in our bureaucratic environments, what is valued as creative or replicable, and what our criteria are for perceiving something as “average” or acceptable, mainstream or obscure. Humanity managed to achieve (in part) the 1940s cybernetic vision of a life calculable in probabilities and predictabilities. But now that this routinisable version of a life (including teaching) turns out not to be so attractive, we probably have to think of living/teaching modes that are closer to a place imagined by James Joyce, “where the possible was the improbable and the improbable the inevitable” (Joyce 1939: 110).
I want to thank colleagues James Stewart, Akshata Singh, and Richard Baxstrom for initiating some of the discussions that led me to the above arguments, and Joséphine Foucher for kindly inviting me to assemble them here.
*For the first Kraftwerk reference, visit last week’s post!
- Adiguzel, T., Kaya, M. H., & Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology, 15(3), ep429.
- Baudrillard, J. (1981). Simulacra and Simulation. Ann Arbor: The University of Michigan Press.
- Galanos, V. (2023). Expectations and expertise in artificial intelligence: specialist views and historical perspectives on conceptualisation, promise, and funding. Doctoral Thesis. Edinburgh: University of Edinburgh.
- Joyce, J. (1939). Finnegans Wake. London: Penguin Books.
- McCarthy, C. (2023). Stella Maris. Gyldendal A/S.
- Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., … & Gebru, T. (2019, January). Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 220-229). Association for Computing Machinery.
- Schütte, U. (2020). Kraftwerk: Future Music from Germany. Great Britain: Penguin Books.
- Sørensen, K. H., & Traweek, S. (2022). Questing Excellence in Academia: A Tale of Two Universities (p. 236). Taylor & Francis.
- Turkle, S. (1980). Computer as Rorschach. Society, 17(2), 15-24.
- Virilio, P. (2012). The Great Accelerator. Translated by Julie Rose. Cambridge and Malden: Polity.
Vassilis Galanos (it/ve/vem) is a Teaching Fellow in Science, Technology and Innovation Studies at the School of Social and Political Science, University of Edinburgh and associate editor of the journal Technology Analysis and Strategic Management. Vassilis researches and publishes on the interplay of expectations and expertise in the development of AI, robotics, and internet technologies, with further interests in cybernetics, media theory, invented religions, oriental and continental philosophy, community-led initiatives, and art. Vassilis is also a book, vinyl, beer cap, and mouth harp collector – using the latter instrument to invite students back from class break.
Twitter handle: @fractaloidconvo