Podcast: Beyond the algorithm: Digital divide, biases and hidden labour – Episode 3 (18 mins)

Teaching Matters Generative AI podcast Episode 3: Beyond the Algorithm

The third episode of the Generative AI podcast series↗️ features Lara Dal Molin, a second-year PhD student in Science, Technology and Innovation Studies↗️ at The University of Edinburgh. Discussing some of the core issues associated with the use of generative AI technologies, Lara leaves us with a compelling invitation to ponder how we can guide our students to cultivate a responsible relationship with these technologies.

In this episode, Irene asks Lara the following questions:

1. What is the impact of ChatGPT on the digital divide?

Lara thoughtfully articulates what AI technology could do to the existing digital divide. Could it reduce the divide or widen it?

I think on one hand, generative AI provides a considerably wide section of the population with the opportunity to interact with AI for the first time in history. Really, this is unprecedented and it presents probably a large potential for AI literacy and outreach in general. So this potentially reduces the digital divide…

On the other hand, we are kind of witnessing the largest round of technology testing without consent in the history of humanity. Really, in so many countries, GPTs and generative AI technologies were just thrown on the web without use cases, without guidelines, without regulations for use, and most importantly, without a transparent way of operating.

She considers both arguments and invites us to think about the kind of future we want to build. Highlighting the knowledge discrepancies between the companies, the people and the government with regards to AI technology usage, Lara encourages us to ask the following questions: Who does this technology benefit? Who does this knowledge benefit? How does this work? How much data is saved and lost? What are the implications of these interactions?

2. What are the skills required for the use of ChatGPT? How do you think these skills will impact the digital divide?

In this section, Lara talks about prompt engineering – the practice of crafting the most appropriate input, or prompt, so that a GPT technology can solve a given task.

I think I’m not aware of any university level course on prompt engineering so far. So the only way in which people could have got familiar with this technique up until this moment is by interacting directly with AI tools themselves…

She explains how this can feed into the digital divide discussion:

Previous versions of the model, as well as comparable models that are available out there, require the purchase of prompting credits as well as some kind of identity verification. So we can really say that individuals that have had the opportunity to develop the skill of prompt engineering so far definitely have had some kind of privilege. And I think that we can see how that feeds in directly on arguments on the digital divide.

3. What do you think about gender bias within the world of Generative AI? 

Firstly, Lara begins this section by reiterating the issue of gender bias associated with generative AI technologies. She highlights an empirically demonstrated and hard truth: generative AI technologies show considerable levels of gender and intersectional bias.

Lara poses some questions for us to ponder:

How harmful is this bias? Where does it appear? Which identities does it affect the most? And how does this bias manifest once the model is deployed in the real world? These are obviously really difficult questions to answer and definitely require some long term institutional research efforts.

This bias exists, and it will take considerable effort to resolve.

Secondly, Lara describes the double-edged nature of generative AI technologies. While the newer versions of these technologies claim to be fairer than their predecessors and competitors thanks to content regulation and filtering, this same filtering hinders research efforts to understand and investigate the full potential and capability of these technologies.

Lara reminds us of the hidden labour behind the screen:

Someone had to label that content. Somebody, some human had to say that that content was harmful in the first place.

So while the results of this labour can be somehow appreciable in the model because the model really physically has slightly less instances of this toxic content, the model itself is far from ethical because of the way in which it was built. So there is this sort of terrible, really sad irony here.

4. How can we benefit from these developments without compromising the safety of the users or data workers?

Moving forward, Lara points to strategies that can help us envision a better future for academia and beyond:

  • involving users directly in technology design processes
  • paying users and data workers adequately and improving their working conditions
  • recognising their labour, while bearing in mind the implications of visibility that arise when minoritised communities are brought into data processes.

…people who actually need this work the most may be harmed by this work itself.

With that, Lara adds that a solution can only be found by having an open and honest conversation with all stakeholders involved.

To conclude the episode, Lara ties this back to academia, saying:

I think maybe in a slightly overly optimistic way, the introduction of generative technologies within labour and education, may be highlighting some longstanding issues within these fields. These issues existed already. The fact that students are overwhelmed, the fact that there is hidden labour in our workforce and supply chain of labour. So maybe this is finally a time in which we can address some of these issues comprehensively and once and for all, maybe with the help of this technology, maybe not, that will be up to us.

Lara explains how she plans on embedding such a practice within her teaching:

… the fact that this tool is there. They [students] can use it if they want, but the focus now should shift on their own relationship, the students’ relationship with the content. So what does it mean for them?

So perhaps this is a way in which we can start considering all of these identities and diversity that we have in our students and workforce and establishing a healthy relationship between them and the technology.

The next episode of this series will feature Dr James Stewart, who will focus on the big picture and the major players of the generative AI world, whilst providing some practical tips on how to use Generative AI technologies in learning, teaching and research.

Stay tuned!


(1:56) What is the impact of ChatGPT on the digital divide?
(6:13) What are the skills required for the use of ChatGPT? How do you think these skills will impact the digital divide?
(9:17) What do you think about gender bias within the world of Generative AI?
(14:18) How can we benefit from these developments without compromising the safety of the users or data workers?

Transcript of this episode↗️

Lara Dal Molin

Lara is a second-year PhD student in Science, Technology and Innovation Studies at The University of Edinburgh. She is also part of the social data science research cluster at the University of Copenhagen. Her research interests lie at the intersection of language, AI and gender, asking how these things come together and interplay. Her research project specifically aims to integrate intersectional, and especially queer, perspectives into large language models.

Irene Xi

Irene Xi is a postgraduate student currently undertaking the Sociology and Global Change programme at The University of Edinburgh. She earned a Bachelor’s degree in Communication from Monash University. Originally from China, she is enthusiastic about AI and online technologies.

Episode produced and edited by:

Sylvia Joshua Western

Sylvia is currently doing her PhD in Clinical Education at The University of Edinburgh and has a Master’s degree in Clinical Education. Her PhD research explores test-wise behaviours in the Objective Structured Clinical Examination (OSCE) context. Coming from a dental background, she enjoys learning about and researching clinical assessments. She works part-time as a PhD intern at Teaching Matters, the University’s largest blog and podcast platform, through the Employ.ed scheme at the Institute for Academic Development.

Joséphine Foucher

Joséphine is doing a PhD in Sociology at The University of Edinburgh. Her research looks at the intersection between art and politics in contemporary Cuba. She supports Jenny Scoles as the Teaching Matters Co-Editor and Student Engagement Officer through the PhD Intern scheme at the Institute for Academic Development.
