Last year in Dublin, I stood in a lecture theatre training a group of third-level educators on the use of AI in academia and research. As part of the session, I asked if any of them had received coursework from students that had been generated by an AI but passed off as their own. Nearly every person in the room raised a hand. Some raised two. It was, of course, by then common knowledge that education and AI were at a crossroads, but it was still quite startling to see all of those hands twitching in the air, given how seriously universities typically treat plagiarism in student assessment.
Interestingly, the tell-tale signs were not hallucinations or factual errors, but a dollop of overly grandiose prose, some American English spellings, or a sudden uptick in the clarity and quality of writing from students who, let’s say, had not demonstrated such intellectual depth or eloquence earlier in the term. Many who spoke were concerned about the challenge AI posed to their ability to evaluate their students’ comprehension.
It’s less than a year later. I think it’s fair to say that, given access to any of the current generation of large language models (LLMs), even a mediocre student today could produce a 25,000-word thesis – complete with citations – in less than a day.
I think it’s also fair to say that the final output of this single day’s work could be very difficult to distinguish from the work of a diligent and brilliant young mind who had devoted years to studying the subject.
The problem of hallucination
The issue here is self-evident: if students can produce PhD-level essays without understanding them, let alone being able to critique their accuracy, that’s a big problem.
Despite the excellence of newer AI models, they can still make serious mistakes. While hallucination is becoming rarer, it is far from extinct. More problematic is the fact that some models will happily take something written on the internet and present it as fact. Marketing copy, four-year-old Reddit reviews and published science can all sit side by side in the hierarchy of truth.
Of course, the value of education – the whole point of it – is not just in knowing the right answer to the questions on a final-term paper. The value is in learning how to learn and gaining a deep understanding of a subject. And most importantly, the value is in creating individuals who have critical skills and can think for themselves.
I spoke with Prof Alan Smeaton, who leads the Government of Ireland AI Advisory Council and has just published a report on AI and education. As one of the country’s leading authorities on AI, he’s a little biased, of course, but he is surprisingly upbeat about the issue.
The good news, he says, is that third-level institutions are nimble enough to adapt to this change. Colleges and lecturers have the freedom to design the students’ learning experience however they see fit. To do that for the modern world, though, staff in our universities and other higher-education institutions (HEIs) need to embrace these tools rather than bury their heads in the sand, thinking this whole AI thing will blow over. It will not.
Guidance available
There are pockets of proactivity across the country. University College Dublin has created a clear guide on how students should approach their use of generative AI (GenAI) applications, and Smeaton has co-published a free course on AI literacy for Dublin City University staff.
The Centre for the Integration of Research, Teaching and Learning (CIRTL) at University College Cork issued excellent guidance on this problem in a ‘Short Guide’ for its faculty: “The more transparency there is around the rationale behind assessment decisions and the lines around acceptable or forbidden AI use, the better students will be able to meet these expectations.
“Because, at the end of the day, AI is a tool and, as such, is neither inherently good nor inherently bad – what matters is how it is used. But to use it effectively – or to follow advice to avoid it completely – students need to understand what AI is, why it is or is not appropriate and, perhaps most importantly, to see the ways their own experience and expertise shape its use and effectiveness.”
The guide encourages staff to talk about AI with students, and to explain when and how it should be used. It also suggests ways of adapting assessment to the new world order, such as placing greater emphasis on collaborative work, analytical thinking, and personal reflection – or even requiring students to submit annotated drafts that show the steps they took to reach their conclusions.
Confusion remains
These are all great ideas, and similar guidance can be found on nearly every university website if you look closely enough. But the burden of learning how to use AI, deciding how to integrate it and communicating all of this to students still seems to rest mostly on the shoulders of the individual instructor. That makes things tricky for students, who have to deal with variability from one lecturer to another, and the stakes can be high for those who get it wrong.
In March 2023, Trinity College Dublin issued guidance that, “If a student generates content from a GenAI tool and submits it as their own work, it is considered plagiarism, which is defined as academic misconduct in accordance with the College Academic Integrity Policy”. Yet, as of just a couple of months ago, GenAI use is permitted by default for Trinity students in all courses, unless otherwise specified. It’s confusing, I would imagine, for students and staff alike.
Many of the educators I spoke to on the subject, across various disciplines and institutions, weren’t clear on what their role was in this domain. Some don’t even use the technology themselves, which makes policing its use an interesting challenge.
It’s clear to me that we desperately need to raise the AI literacy of educators and students alike, so they know how to use AI tools in a reliable, secure, equitable and inclusive manner.
There are myriad ways AI can help learners and educators to excel.
Not every college lecturer is inspiring, and not every learner takes in information the same way.
AI can be wonderful for student comprehension, of course. With Claude, ChatGPT or Gemini, students can essentially build their own tutor for any subject and interact with it using pictures, voice and text. NotebookLM, for example, is a tool I find learners love because it allows them to upload sources and then create study notes, study plans and even a voice podcast, which is a real stand-out feature.
With these new tools, each individual student can personalise how they learn, what they learn and at what pace. This is possible today using free tools – it’s just about knowing how to ask the right questions.
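For readers comfortable with a little code, here is a minimal sketch of what “building your own tutor” can look like under the hood. It assumes the official openai Python package and an OPENAI_API_KEY environment variable; the model name and the tutor prompt are illustrative assumptions, not a recommendation of any particular product or setup.

```python
# A minimal sketch of a do-it-yourself AI tutor, assuming the official
# openai Python package (pip install openai) and an OPENAI_API_KEY set in
# the environment. The model name and tutor prompt are illustrative.
from openai import OpenAI

client = OpenAI()

# The "right questions" live largely in the system prompt: ask the model to
# teach step by step, probe understanding and refuse to do the work outright.
TUTOR_PROMPT = (
    "You are a patient tutor for a third-level student. "
    "Explain concepts step by step, finish each answer with a short "
    "follow-up question, and never write the student's assignment for them."
)

history = [{"role": "system", "content": TUTOR_PROMPT}]

while True:
    question = input("You: ").strip()
    if not question:  # an empty line ends the session
        break
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"Tutor: {answer}")
```

The design choice worth noting is the system prompt: the same model that will happily write an essay for a student can, with different instructions, decline to do so and quiz them instead.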
Lecturers, armed with domain expertise and a rudimentary knowledge of how to leverage AI, can innovate to build better study materials, games or interactive content that aligns with their understanding of the subject matter and the desired curriculum.
Beyond study, the power of GenAI to co-create new work is only in its infancy. In the near future, we will come to view agents as collaborators and teammates in research, education and beyond: a new breed of digital colleague.
Leadership needed
We also need leadership from Government and HEIs to recognise that AI is here to stay and that we all need to adapt more quickly to this new reality.
The prevailing wind of the last two years has, understandably, been one of caution. That now needs to give way to pragmatism and to seizing the opportunity in front of us.
Preparing young adults to work side-by-side with AI is not just smart: it’s vital to their success in the real world.
India Today recently reported that AI skills will become a mandatory part of school coursework in India from the age of eight. For what it’s worth, this terrifies me a little. I am not someone who blindly believes this wild enthusiasm for AI is a good thing. But it is changing our very world. Put plainly, we need to understand GenAI now and wield it, or risk becoming its subjects.
What’s needed now is a coordinated national strategy that provides frameworks, funding and training to help educators and students navigate this digital transformation. Without this leadership, we face an inconsistent patchwork of approaches that leaves some learners and institutions behind.
The organisations leading the way have shown that adapting to AI doesn’t mean abandoning academic rigour – it just means redefining it for a world where human-AI collaboration is the new literacy.
The question is no longer whether to embrace these tools, but rather how quickly we can develop the wisdom to use them well.