Misinterpretation, Misrepresentation, or Misdirection? Whatever, it missed the point of our argument.

[Screenshot of the article page for Generative AI and the Automating of Academia; link in the blog post.]

Let’s start by saying that we are so pleased with the reception we have had for our piece on Generative AI and the Automating of Academia. It’s been awesome to receive positive reviews, and we (Donna, Lawrie, Cathryn, and Richard) have been invited to some great events as a result. We are also delighted that our work is already being cited in the literature. It’s always gratifying to see the impact of our thoughts on the work of colleagues old and new.

However, like many authors, we occasionally find our words being used to support arguments we don’t actually endorse.

Some articles suggest, via citation, that we said things we didn’t actually say.

A recent publication discussed the ethical implications of using GAI tools like ChatGPT in higher education, focusing on issues such as data privacy, algorithmic bias, plagiarism, and “hallucinations”. It enthused about “the transformative potential of AI in education”, though it did stress the need to consider ethical issues. The paper suggested measures such as clear policies, innovative assessment methods, and advanced plagiarism detection to address the concerns universities might have, and it advocated collaboration among staff, AI developers, policymakers, and students to harness the tools’ potential ethically. We would judge the paper’s overall tone towards GAI to be at least “cautiously optimistic”, with a strong emphasis on benefits.

But then, we noticed that a paragraph started:

“However, the value of chatbots extends beyond saving time on administrative burdens; rather, they can additionally transform pedagogy (Watermeyer et al., 2023).”

Readers, this is not an argument that we made, implied, or otherwise intended in any way, shape, or form!

So we’d like to be clear.

In our article we:

  1. Discuss the history of our current neoliberal experience of academia
  2. Situate the arrival of GAI tools in that history and current moment
  3. Focus on academic behavior around GAI as evidenced in recent survey-based research we carried out
  4. Argue that what academics think GAI tools will be useful for is evidence of what they are struggling with in our current neoliberal experience of academia

There are moments where we suggest that, all things being equal, GAI tools might give academics the opportunity to reflect on what they do and do not want to be doing in their work. But ALL THINGS ARE NOT EQUAL: academics are embedded in a cultural web of inequalities and hierarchies, which means (as usual) that technology cannot be offered as a “solution” to these struggles. Technology, including GAI tools, acts as an amplifier of the existing problems in academia.

Do, please, cite us for that!

As in:

Brailas, A. (2024). Postdigital Duoethnography: An Inquiry into Human-Artificial Intelligence Synergies. Postdigital Science and Education, 1-30.

Felix, J., & Webb, L. (2024). Use of artificial intelligence in education delivery and assessment. UK Parliament POSTnote 712.

Hayes, S., Jopling, M., Connor, S., Johnson, M., & Riordan, S. (2024). ‘Making you aware of your own breathing’: Human data interaction, disadvantage and skills in the community. Postdigital Science and Education, 1-16.

Veletsianos, G., Jandrić, P., MacKenzie, A., & Knox, J. (2024). Postdigital Research: Transforming Borders into Connections. Postdigital Science and Education, 1-20.