Donna Lanclos, Richard Watermeyer and Lawrie Phipps sitting at a table

And then there were questions

By Donna Lanclos, Lawrie Phipps, and Richard Watermeyer.

We have already posted our opening keynote for RIDE 2024.

After we gave our talk, conference participants broke out into conversations at their tables; they had been given one of 8 prompts that we provided for discussion about GAI tools.  They talked amongst themselves for 20 minutes, and the energy level of the room was wonderful.  This felt very close to a method of managing audience discussions after talks proposed by Dr. Eve Tuck, who has people “peer review” their questions with each other for a few minutes before opening up for questions from the entire group.  This is very effective in defusing the “more of a comment than a question” phenomenon.

We then had an additional 10 minutes for general Q&A, with some questions submitted via Poll Everywhere (used by both online participants and those in the room) and some asked live from within the room.  We kicked off the general Q&A with one question from a woman, as we know that encourages a more diverse set of people to participate in the discussion.  We had such a good discussion in the room that we only had time for one Poll Everywhere question, so we are answering the rest of them here in Part 2 of our RIDE 2024 blogging.

But if there is some tool, say an AI tool, that helps get rid of cumbersome work like drawing conclusions out of piles of papers, why is using AI to reduce that cumbersome work regarded as inappropriate? So, rather than criticizing AI, why not embrace AI so it can better help us?

Donna:  We did address this in the room: we pointed out that one person’s definition of “cumbersome” might not be another’s, and furthermore that “cumbersome” does not mean that the work should not be done.  Perhaps, rather than outsourcing the labor to GAI tools, we should think about putting more people onto the task, so it’s not all on one individual?  Perhaps we should think about what work is worth doing, and what is an administrative load with very little reason for being; that, however, requires a holistic perspective on labor in universities, not just a sense that something is hard and therefore not worth doing.

I’d also make the point that I’ve witnessed people using GAI tools to “help” and that often ends up with them having to check and redo the work…so the extent to which that saves time and labor is arguable.

Lawrie: For me, I worry about what is lost. I said on the day that I have witnessed how auto-summaries lose nuance; the long, hard work of reading papers to draw your own conclusions is really important. I might take a few days to go through those papers and elicit a new understanding, possibly new knowledge; to have a machine make a best guess, and then “average out” a response, takes something out of the process. What if there are one or a few comments that are both outliers and rich learning, and the machine misses them?

Richard: We need a wholesale reappraisal of what ‘drudge-work’ actually comprises. Some of what we may find to be the most laborious, time-consuming and least fulfilling aspects of our jobs as educators and researchers nonetheless provides the foundation for our greater contributions and achievements. It may often be the case that what feels like the most pointless and undeserving undertaking has a far from insignificant long-term reward.

How do individuals step outside the overwhelming pressure to play the game with GAI?

Donna:  Some people don’t have the power to say no.  I don’t know if anyone has been directly instructed to use GAI tools, but if they have been, I can absolutely see how it would be difficult if not impossible to refuse.  I think more often we see situations where GAI tools are embedded in systems and digital environments (like Teams, Google Suite, Zoom), and it’s unclear how to switch them off, making people vulnerable to the tools without warning or consent.  

Collective action, and communication outside of and across institutions, can be very useful here.  Social media conversations about GAI tools and opting out are widely available (especially on Bluesky, in my experience), and simply knowing about other people in the sector who have successfully said no can be the support one needs.

I also think there’s a leadership role–institutional leaders have a responsibility to listen to the people in their organization who are concerned, beyond telling them to “get over it” or that “it’s inevitable.”  They need to listen to refusal and take seriously concerns about the ways these tools can distort academic practice and violate academic ethics.  

I’d also wish for people to insist on the importance of creativity and humanity in their academic practices.  LLMs and GAI do not offer new insights, only kludged-together extrusions of what has come before (and without attribution of sources!).  They are worth saying no to, in favor of doing work that matters.

Lawrie: Leaders need to allow them to. Leaders need to model behaviors that allow any member of staff to experiment with refusing to bow to the pressure of using any technology they are uncomfortable with, not just GAI. But I have not heard leaders say “use the GAI”. The pressure is coming from people in the sector, our peers. We see AI gurus and AI thought leaders all over LinkedIn now. They are building narratives of inevitability and of “must use or be left behind”, which puts huge pressure on people.

Richard: With difficulty. It may transpire that only those in the most secure positions are able to resist the seductions of GAI tools in terms of task efficiency and labour saving; those, in other words, whose academic lives are less hostage to a productivity mania.

In an age of “lack of time” and people’s general sense of entitlement to “instant information”, do you worry that a generation of people using LLMs without thinking critically about or verifying the responses will lead us towards a potentially dangerous echo chamber of false information and propaganda?

Donna:  Sure, but I think we are already there in terms of an information environment that’s polluted by bad actors and misinformation.  GAI tools that can extrude fact-agnostic content are amplifying that problem and scaling it up, but they certainly didn’t invent it.  I’d point here to the good work of people like Mike Caulfield, and to library and information science workers, who have a long history of activism and scholarship around information and misinformation.  Project Information Literacy has a lot of useful things to say about this issue, and, like Mike, has been having this discussion since long before GAI tools showed up.

Lawrie: I agree with Donna, but would add that we will also see a conformity of information.

Richard: While GAI tools might be imagined as clearing spaces for deep thinking and intellectual craft, in reality the gaps they create tend to be automatically filled with new expectations of productive output. GAI offers no clearing space or charging point for intellectual growth, but rather a ramping up of informational dependency.

Is it all or nothing? There are some tasks where GAI might save people time, but is engaging with GAI tools at all an admission of willingness to accept all uses of GAI?

Donna:  I see people engaging with GAI tools and the people who make them in constructive and critical ways, so I don’t think that is the same as being willing to use them at all costs.  I think we as a sector need to insist on people being clear and transparent about why and how they might choose to try to engage (or not), as it’s the lack of transparency on the part of GAI and LLM venture capitalists that is a huge part of the problem we are describing.  

Lawrie: There are huge advantages to using GAI, and AI more broadly, in a range of situations, but not in places where humans are the added value, and not where the tools might mean we lose something. Right now the tools are still new and we are still in hype mode; we need to slow down, do the research, and see what the impact is and where the tools could be useful. Useful as defined by the users, not the vendors.

Richard: For me it is about having a critical conversation about the precise value proposition of GAI and why/when you might choose and not choose to use it. It’s about intelligent, critically reflexive and honest application.

Surely the biggest challenge AI presents to academia is that it comprehensively demolishes most of our assessment model?

Donna:  I don’t think we have a monolithic assessment model, and certainly the traditional assessment model of people in rooms writing essays informed by their memories of lectures is one that’s been interrogated for quite a while.  I’d point here to some recent work by Peter Bryant at the University of Sydney about creative approaches to both curriculum design and assessment that could do an end-run around much of the moral panic about students and assessment.  But assessment is not something that has suddenly become “a problem” just with GAI tools: before that there was the pandemic emergency, and before that there were all sorts of other reasons that academics worried about how to tell if their students were learning, or if staff were teaching effectively.  I also think that, based on our preliminary results, we have as much work to do on the staff side of academic integrity (if not more) as we do on the student side.  The work of Sarah Elaine Eaton is very instructive in this regard.

Lawrie: I don’t think it demolishes assessment, but it does force us to think more clearly, more strategically about the nature of learning. 

Richard: We need to move beyond the hype and hysteria framings of GAI tools to instead exploit the GAI zeitgeist as a window for critical reflection on the efficacy of established methods and processes by which we educate and research. 

Does the digital divide apply just to Africa? Our students in the UK don’t have access to GAI tools unless they pay.

Donna:  Of course the digital divide is all over the place–we mentioned South Africa in part because we have started another survey there.  And yes, these tools can end up being part of the suite of things that students don’t have access to because of lack of money, lack of connectivity and other infrastructure challenges.  

This is a question for Lawrie – What ethical considerations were taken into account when the AI tool(s) were selected for qualitative research? For example, how have the AI tools mentioned been tested for fairness and for bias?

Lawrie: I’m not sure I understand the question. I think you mean: have the tools that have come up in our research been considered with regard to ethics? Our research relied on academics telling us what they had been doing with the tools, so unless they stated that they went through ethical review before they used them, we have no way of knowing. As for our own work, yes, we went through ethical review.

Donna:  I am not Lawrie, but if your question is about the use of GAI tools in qual research (I know, for instance, that some have been embedded in CAQDAS platforms such as NVivo to generate themes), I have no idea whether they have been tested to see what implicit biases were in the training data (a problem with most GAI tools, really).  And, as Lawrie says in answer to another question, what I’d worry about in using GAI tools to assist in qual research and analysis is the loss of insight, the inability to see the outliers and exceptions that can help illuminate patterns.

No one has mentioned the environmental impact of generative AI. The amount of water and energy that these systems use is unsustainable.

Lawrie: The environmental impact of generative AI is without any doubt a significant problem (along with that of many other Silicon Valley initiatives). There is an increase in energy consumption associated with both use and training, and this will inevitably lead to more carbon emissions, especially when the electricity is derived from fossil fuels. Water consumption for cooling is also problematic. The pace of technological advancement in AI seems to outstrip efforts to mitigate its environmental footprint. There is also a reliance on non-renewable energy sources to power these systems, which not only exacerbates climate change but also underscores a pressing issue: the sustainability of technological progress (across all industries) in the face of environmental degradation. The problem is that we have very little data about the environmental impact from any of the companies. Without a concerted shift towards greener energy sources and more efficient computing practices, generative AI will become, if it isn’t already, one of the notable contributors to global ecological problems.

Donna:  We didn’t mention it in part because it wasn’t the primary point of this discussion, but yes, the environmental impact of GAI, and the server farms that companies like OpenAI want to build (and the additional power plants they demand to support them), are dire consequences that should definitely be folded into the “should we use these tools” discussions.

What are your thoughts on anthropomorphising GAI tools? Do you think we could change this approach by changing the narrative and talking about “outputs” rather than it “responding”?

Donna:  I think humans anthropomorphize things all the time, so it’s no surprise that something that seems capable of a conversation would be a likely candidate.  And it’s very important that we be clear-headed about the fact that GAI tools are not minds, no matter our shorthands for talking about them doing things like “hallucinating.”  It’s a great idea to try to be precise with our language, talking about “outputs,” as you suggest, but also making sure that we talk about LLMs, Machine Learning, and Natural Language Processing much, much more than we use the lazy shorthand of “AI.”

Lawrie: I don’t know. My main worry is that how we refer to it sometimes drives how we think of it. In my opinion, far too many people are giving it agency, where none exists. 

Is creating the technology human? Are the people behind the technologies human? The people who have created it, built it, maintain it, and develop it? If so, are they lesser than the humans behind HE?

Donna:  I would never say that the people building the tech are “lesser” than educators.  I would absolutely say that they have very different motivations for building (and selling) the tech than educators do in their own work, and that’s worth paying attention to.  

Lawrie: Always look at the motivations behind the people building the tech. 

Richard: The technology and the manner of its insertions tell us much about the human psyche/condition in the era of late capitalism… and crucially its seemingly inexorable unraveling. I’d say that the architecture of GAI is massively informative as a window onto the Self.