Petar Jandrić describes the post-digital condition not as a mere extension of the digital but as a shift in our engagement with digital technologies. Jandrić posits that the post-digital is a state of being in which digital technologies are no longer seen as separate or revolutionary but have become integrated into the fabric of everyday human experience. The post-digital era arguably necessitates a re-evaluation of our relationship with technology, one that transcends the simplistic binaries of digital/analogue or human/machine. It calls for a more complex understanding of how the permeation of digital technologies influences societal dynamics, ethics, and educational systems (Jandrić, 2017, “Learning in the Age of Digital Reason”). Given that definition, the arrival of Large Language Models (LLMs) or Generative AI (GAI) into common usage, especially in academia, is already covered by the post-digital condition. However, the potential behavioural changes wrought through these tools may need further nuancing – when I got together with Cormier, Hall and other colleagues in 2009 to put forward the idea of “Preparing for the postdigital era”, none of us in the room had considered the affordances that GAI would bring. The postdigital era that we considered was not what we are seeing. Toward the end of our essay we wrote:
“We hold out hope for the postdigital era. We hope that it provides the framework for an environment that is good enough, firstly, to hold an individual as they identify and develop authentic personal experiences, and secondly, to stimulate that individual to extend her/his questioning and actions in the world… the digital is secondary to the relationships that form and develop. A central actor in the postdigital era, is, therefore, a significant, more-experienced other against whom the individual can securely test their authentic experiences. Within the postdigital era, the personal and emotional comes to the fore, and anchors cognitive development.”
I started thinking about some of the behaviours we have recorded in academics in their use of LLMs and GAI. Their behaviours might kindly be described as “augmented” by the technology available to them; this has always been the case, but I wanted to think through the adoption of LLMs or GAI into their practice. Does it anchor cognitive development? Does it help people identify and develop authentic personal experiences? Does the personal and the emotional come to the fore in these Augmented Academics?
Augmented Academics: Exploring the impacts of Generative AI Augmentation in scholarship.
The concept of “Augmented Academics” is my attempt to think through some form of evolutionary process in the academic landscape, reflecting the use of LLMs or GAI to enhance or extend intellectual pursuits. Most media narratives about technology and academia promise efficiency and innovation (it’s almost as if the copy was lifted straight from a vendor’s press release). But these tools have been available for a very short time, and while there are really smart people doing research on their impact, there is a huge number of academics just trying them and seeing what happens. I wanted to do a thought experiment; a what if… And I know that the GAI apologists will critique me for being overly dystopian, but I think we need some alternative narratives to cheerleading.
Loss of Rigour. One of the principal concerns is the potential erosion of intellectual rigour. Relying on GAI-generated content could result in the dilution of in-depth research and critical thinking; machine-generated outputs may not align with the practices of specific academic disciplines, and they are prone to errors.
Ethics and Plagiarism. The question of authorship is much debated in this space. When GAI-generated content significantly contributes to academic publications, traditional notions of originality and intellectual property become blurred. This could lead, and probably already has led, to issues with plagiarism or misattribution that compromise academic integrity.
Quality Control and Peer Review. GAI lacks the human faculty for discernment and context. Inconsistent or erroneous outputs will slip through the academic peer-review process, especially if reviewers are not aware that part of the content is machine-generated. And how long will it be before some journal decides, for the sake of expediency, to make ChatGPT reviewer 3?
The Echo Chamber. GAI is trained on existing data, so it will perpetuate existing biases and reinforce the status quo. The practice could inadvertently create an intellectual echo chamber, limiting innovative thought and the questioning of established theories.
Accessibility and Disparity. The availability of the tools is limited to those who can afford and access them. Restricting access to a select few will exacerbate existing disparities within academia. Scholars with access to these tools may produce work at a quicker rate, potentially overshadowing those who do not have the same resources; regardless of quality, the quantity will overshadow.
Data Privacy and Security. Finally, using GAI often involves uploading data to cloud-based systems. In the context of research involving sensitive or proprietary information, this poses a significant risk of data breaches or unethical data usage. These systems are not subject to the same safeguards we are accustomed to in the HE sector.
If we think through the natural progression of these ideas, we can assume that, even though they see the dangers, many “Augmented Academics” will still emerge. Indeed, given the neoliberal emphasis driving both teaching and research, it is highly likely that, no matter the quality, the quantity these academics push out will make them stars, with spurious new models and ideas all crafted through algorithms.
So if that happens, what futures might be in store for the higher education sector?
Algorithmic Conformity. The potential loss of individual thought and intellectual diversity as algorithms become dominant in shaping academic discourse.
Synthetic Academic Abyss. An academic environment overly dependent on machine assistance, potentially becoming barren of genuine human insight. This is a second-order academic environment where technology’s role in evaluating or creating content leads to a decline in quality and originality. The abyss is the potentially bottomless pit academia could fall into when human and machine-generated knowledge becomes indistinguishable.
Algorithmic Suppression. Whoever trains the tools controls the discourse! A GAI-controlled, mechanical academic environment where human input becomes secondary to machine algorithms designed by tech billionaires.
Neo-Dogmatism. Where these new GAI technologies could lead to a revival of uncritical acceptance of established theories: people accept what the bot tells them. This is also the state where a supposedly ‘harmonious’ relationship between technology and human academics instead leads to a stalemate, inhibiting new knowledge and potential progress.
We already see academics across the social media platforms boasting and boosting their prowess with Generative AI tools. But where is the rigour? Where is their evidence-based research looking at the impact of the machine on their practice?
As Generative AI starts to drive us toward these “Algorithmic Conformities” and “Synthetic Abysses”, where do we turn?
Librarians become one part of the essential academic checks as we reach this pivotal moment: custodians not just of scholarly resources, but also arbiters of ethical and qualitative standards in the emergent digital spaces most impacted by Generative AI. Their expertise has always been grounded in understanding information and research literacies; they now bear the responsibility of extending that work, supporting staff and students in these skills and helping them navigate a complex landscape littered with datasets and articles written by non-humans. By curating dependable databases where non-human work is clearly identified, becoming the arbiters of responsible GAI research methodologies, and of course educating the academic community in the critical evaluation of machine content, librarians serve as the first bulwark against the potential pitfalls of machine-generated academic dystopias. Their role continues to evolve into that of ethical stewards and intellectual gatekeepers, safeguarding the rigour and integrity of academic work amidst the encroaching tides of automation and algorithmic influence. But for how long can the GAI tide be held back, especially with so many academics complicit in allowing it to subvert the culture?