Why the education sector needs to embrace AI and stop punishing students who use it - Hassan Ugail

A week ago, the Higher Education Policy Institute (HEPI) published its findings on how university students are interacting with generative AI. Its report, titled Provide or Punish? Students’ views on generative AI in higher education, found that more than half of the students polled admitted to using generative AI for help with assignments.

Since ChatGPT was released in November 2022, generative AI has caused an explosion of interest. Inevitably, large language models such as ChatGPT have emerged as a ubiquitous tool in the academic landscape.

HEPI’s survey is the first UK-wide study to explore students’ use of generative AI since ChatGPT was released. It polled more than 1,200 undergraduates, and its finding - that the use of generative AI has already become normalised - should not come as a surprise.


Like any new technology, generative AI can be used for both good and ill. However, while some people have argued that students should not use language models such as ChatGPT and Google Bard, I would argue the opposite.

A general view of the ChatGPT website. PIC: John Walton/PA Wire

Personally, as an academic whose day-to-day work involves interacting with AI, I use generative AI models such as Copilot, Stable Diffusion and ChatGPT to kickstart ideas for projects and to generate snippets of computer code while writing large programs. To me, the argument is simple: generative AI is a tool, much like the calculator, that can not only save us time but advance our understanding.

I could spend a week writing a piece of computer code, fixing syntax errors along the way. ChatGPT will do it for me in a matter of minutes. The result won’t be perfect, but that is precisely where human oversight becomes important.
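As a purely illustrative sketch (not actual ChatGPT output), a generated helper will often compile and mostly work, yet still need a human reviewer to catch an edge case:

```python
# Hypothetical example of an AI-generated helper: the mean of a list.
# A model can produce the core line in seconds, but without the guard
# below it would crash on an empty list with a ZeroDivisionError --
# exactly the kind of gap human oversight should catch.
def average(numbers):
    if not numbers:  # human-added guard for the empty-list edge case
        return 0.0
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))  # 4.0
print(average([]))         # 0.0 rather than a crash
```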

Likewise, students can also benefit from using generative AI, so long as they do not pass its output off as their own work and make it clear why they have used it.


We should recognise generative AI for what it is: a vast knowledge repository, rather than perceiving it, as some have, as an intelligent entity.

It serves as an invaluable resource akin to a sophisticated search engine. Embracing this reality, we are beginning to advocate for the constructive use of ChatGPT among students, steering clear of outright prohibition. To address plagiarism concerns, we have recently implemented a strategy in which students are encouraged to submit both their original work and versions generated by ChatGPT. This approach will not only significantly reduce the likelihood of plagiarism but also help students recognise the limits of these tools and appreciate the power of their own creative thought.

ChatGPT, and generative AI in general, is a boon for humanity. But, just as with Google, you cannot trust everything that comes from it.

When calculators were first introduced, people were against students using them. Now all students use them. We simply changed our assessment methods. It’s the same with ChatGPT. I think we should let students use it.

In my view, we should embrace it.


What needs to happen now is an acceptance of the reality that generative AI is here and that students are using it. Prohibition will not work. Therefore, academic institutions need to implement clear policy guidelines on the use of AI, because at the moment, there is confusion.

Professor Hassan Ugail is director of the Centre for Visual Computing and Intelligent Systems at the University of Bradford.
