[Image: Roy Batty, an android, saying "I've seen things you wouldn't believe"]

What went wrong with Generative AI in Education: Reflections from 2033

Donna Lanclos sits down with Lawrie Phipps as he retires from higher education, and together they look back over the last ten years and the impact of generative AI.

“Lawrie what the F*CK?”

Bad Theories Reinforced

Despite evidence indicating that learning styles have limited impact on student achievement, government policies mandated the integration of AI platforms that perpetuated this disproven theory. As a result, students were pigeonholed into rigid categories, hindering their ability to develop well-rounded skills and adapt to diverse learning environments.

Superficial Personalisation

AI algorithms, developed without a comprehensive understanding of pedagogy, aimed to personalise learning experiences. However, these systems relied on simplistic data points, such as student preferences or engagement metrics, leading to shallow personalisation that failed to address individual needs effectively.

Biased Decision-Making

Even though the problem was widely known, AI algorithms used in educational settings remained biased due to flawed training data or inadequate algorithmic design. This bias perpetuated inequalities, disproportionately impacting precarious students and reinforcing existing systemic disparities.

Less Human Interaction

Over-reliance on AI-led instructional design gradually diminished the importance of human interaction. Students lost connection with teachers and peers, leading to decreased engagement, reduced motivation, and poorer learning outcomes.

The Neglect of Emotional Intelligence

AI in education focused primarily on cognitive development, neglecting the crucial aspect of emotional intelligence. The absence of empathetic human interactions stunted students’ social-emotional growth, hindering their ability to navigate relationships and handle emotional challenges.

Data Privacy

The government’s haste to implement AI in education overlooked critical data privacy concerns. Students’ personal information and learning data fell prey to data breaches and unauthorised access, compromising their privacy and potentially leading to unintended consequences.

Lack of Critical Thinking

AI-driven assessments prioritised rote memorisation and standardised testing, stifling students’ development of critical thinking skills. The focus on narrow metrics overshadowed the cultivation of creativity, problem-solving, and independent thought.

Digital Divide

The widespread adoption of AI in education deepened existing inequalities. Students from disadvantaged backgrounds, lacking access to reliable technology or internet connectivity, were left behind, exacerbating the digital divide.

Consequences of Automation

Automated grading systems, sold as time-savers for educators by edtech companies, inadvertently devalued the feedback process. Students received generic, algorithm-generated responses that failed to provide meaningful insights or foster a genuine understanding of their strengths and weaknesses.

Stagnant Innovation

Finally, the heavy reliance on standardised AI platforms stifled the thriving innovation culture in education. Teachers felt constrained by rigid curricula and AI-driven instructional methods, which inhibited their ability to tailor instruction to the unique needs of their students; and as more classes were added and staff were limited to “push-button” teaching, both the desire and the time to innovate were lost.

Which leaves us here

As a sector, we embraced generative AI and the changes it brought. We hailed it as a way of freeing up our time for more creative pursuits, we sought to turn our minds to higher things, and we became complicit in the very process that led us to this point. The fates again recognised our hubris, as they did with the early twentieth-century promises of mechanisation and industrialisation, and they laughed, as many of us ignored the implications and consequences of our actions.
