ChatGPT and the Future of University Assessment

[Image: a robot holding a book and teaching, surrounded by other robots and people. Generated by DALL-E 2]

ChatGPT-3 is a state-of-the-art language model developed by OpenAI. It is based on the GPT-3 (Generative Pre-trained Transformer 3) architecture and has been trained on a massive amount of text data. It has the ability to generate human-like text, answer questions, and complete various language-based tasks. It can also perform well on a wide range of natural language processing (NLP) tasks, such as text summarisation, translation, and text-to-speech. Additionally, it has the ability to generate text based on given prompt, which is unique for its large size and capability. It is considered as one of the most advanced language model available and it continues to be updated and improved.

(Generated by ChatGPT-3)

ChatGPT-3 became freely available on 30 November 2022.

In the short space of time that has passed, academia has been ‘stunned’ by the technology’s essay-writing skills and usability (Hern 2022), escalating concerns that students can now easily produce AI-assisted written work with minimal effort, devaluing their university degrees. In the past few weeks alone a plethora of articles and guides has emerged advising how we might approach this new challenge to academic integrity and quality, some of them written by ChatGPT-3 itself, just to prove the point.

But let’s not forget that we have a history here: machines have traditionally been troublesome for universities. In the early 1930s student unions were debating whether the lecturer was at risk of being replaced by the gramophone (Lindsay 2017); in the 1990s the growth of ‘distance’ learning was poised to cause the eventual dissolution of the university and the profession of the university lecturer (Eamon 1999), concerns that were somewhat mirrored 30 years later during the online pivot of Covid-19. And let’s not forget that lecture capture was deemed a technology that would drive students out of lecture halls and replace faculty with recorded online provision (Fagan 2021).

The difference with ChatGPT is that it doesn’t so much present a threat to the university experience as strike directly at the heart of the purpose of a university education – its ability to ‘teach you how to think’. There have been shadows of this in the past: the hostility towards Wikipedia (Coomer 2013), the emergence of essay mills, not to mention simple, now commonplace tools such as spell checkers and calculators. I remember vividly a very angry professor in the early 2000s telling me that reading lists with hyperlinks would turn students into baby birds, mouths wide open, expecting to be spoon-fed. We have largely moved through all those advances in technology and realised their benefits, but this one, I would argue, is different. Not because it lacks benefits, but because the sheer volume and scale of what’s coming will be meaningfully different, and will ultimately challenge the foundation upon which we measure that ability to think – university assessment.

So what does ChatGPT mean for university assessment? Let’s look at five scenarios.

1. Ban it

Whilst this is a rather futile exercise, it is an option that some schools have already taken (Yang 2023), and it’s likely we’ll see more go down this path. Blocking a tool from a network or device is something most students can get around simply by logging into other networks or using other devices. Could we develop detection software, like Turnitin, trained to identify AI language algorithms? The GPT-2 Output Detector Demo has made a start here, identifying patterns of irregularities in written work that might indicate chatbot assistance, and Turnitin have announced that they will build AI detection into their products. But GPT is only going to advance: every time we prompt the free ChatGPT-3 we train it, hurtling towards ChatGPT-4, which according to OpenAI will be 100x more powerful. At this rate we may well become human batteries fuelling the machines, constantly feeding them information and data, trapped within an AI-generated writing matrix! This detection race is one we will not win; hard-exiting this cycle requires a different approach.

Seriously though, plagiarism and cheating are not new, and are already significant activities across universities. The QAA recently estimated that one in seven (14%) graduates may have paid someone to undertake their assignment (QAA 2020). Recent polling also indicated that 16% of students had cheated in their online exams in 2022, that 52% of those surveyed knew someone who had cheated in their academic assessment that year, and that only a very small percentage, 5%, had been caught. AI technology can make this practice easier and more accessible, but banning it is simply an analogue solution to a digital problem. And it’s questionable whether it is the technology or our approach to assessment that is the problem.

2. Return to exams

This is an extreme scenario, but again one I am expecting to see, especially where there is less appetite or resource to advance pedagogically and approach assessment in different ways. Of course exams still exist in many institutions, but it is increasingly recognised that placing students in highly pressurised, inflexible environments lacks inclusivity and manifests all kinds of attainment gaps. A return to pen-and-paper exams feels like a real step back, but it is one some universities are willing to take to thwart the machine (Cassidy 2023). Sitting online exams under controlled conditions, with lockdown browser technology and the ability to detect whether students are pasting large chunks of text into their authoring interface, is another option. But online proctoring software comes with financial and operational challenges, not to mention ethical considerations and known opportunities for academic dishonesty.

There may be something to be said, though, for encouraging more synchronous writing exercises. Johann N. Neem, talking to Inside Higher Ed, says that faculty could find ‘new ways to help students learn to read and write well and to help them make the connection between doing so and their own growth’ (D’Agostino 2023). An example would be offering opportunities for students to write in class, so that they learn to approach writing as a practice of learning as well as a demonstration of it.

3. Develop AI Literacies as part of Student Assessment

This is undoubtedly a good step. Over the past few weeks the higher education community has shared a range of ideas for engaging, authentic and creative assignments that incorporate ChatGPT into student assessment and develop critical AI literacies. These range from essay reflection and improvement exercises to prompt competitions and fact-checking. A growing body of guides is emerging – I particularly like ‘Update Your Course Syllabus for chatGPT’ from Ryan Watkins, which includes 10 AI-ready assignment ideas.

AI is not going away; it is technology that we are all going to be (and already are) engaging with. Developing AI literacies that enable students to use the technology responsibly and critically is part of preparing them for the world of work. I’m not on board with a recent statement by JISC that ‘We should really regard them as simply the next step up from spelling or grammar checkers: technology that can make everyone’s life easier’ (Weale 2023). We have to help students understand where AI writing tools can support them, where they can enable a better outcome, and what their limitations are. Unlike a spell checker, there is far more room for error – after all, ChatGPT-3 does not provide you with an answer to your prompt, it provides you with an output. That is a different thing altogether.

It’s also worth noting that AI literacy is a space where we should be mindful of looming social inequalities and environmental challenges. Future releases of GPT are not expected to be free to access, whilst the carbon footprint of training even a single AI model is significant (Heikkilä 2022). As institutions usually committed at some level to social justice, universities need to consider where they stand on these matters and support students in navigating them.

4. Assess ‘Humanness’

Just as we cannot ban AI tools, we cannot use them for all assignments. Currently, tools such as ChatGPT-3 cannot do a number of things generally expected of a university student: they cannot consult, critique and cite third-party sources; they cannot refer to recent real-world events or published material; and they can’t demonstrate higher-level thinking, construct an argument or have original ideas. The more detail and facts you ask for, the more the technology falters. Today, one of the best ways for students to prove that they are not a predictive language model is to demonstrate sophisticated thinking, which after all is the purpose of a university education. And we have to ask: if a machine can tell us what we need to know, what is the point in learning it? We need to shift contexts of what students need to know and how they need to learn.

Approaching assessment with the question ‘what are the cognitive tasks students need to perform without AI assistance?’ is a good start. The rapid evolution of ChatGPT could positively increase adoption of assessment techniques that measure learners on critical thinking, problem-solving and reasoning skills rather than essay-writing ability. Greater use of oral and video assessment, reflective assignments that ask students to explain their thinking process, concept diagrams, mind maps, ideation exercises and group projects are other examples. These are techniques many higher education learning professionals have been promoting for some time, in a bid to better connect university education with workplace skills and offer more authentic experiences. Professional bodies would do well to look at what value their End Point Assessments hold in an AI landscape and work with universities to explore alternative methods.

5. Using ChatGPT to support assessment processes

Much of what has come out of the sector on ChatGPT and assessment has focused on student learning and the risks of plagiarism or cheating. On the faculty side, however, there are opportunities to review how this technology can ease the assessment burden that many institutions face. The more sophisticated we make assessment in response to AI, the more difficult it will be for AI technology to support tasks such as marking. But where assessment rubrics contain generic criteria around structure, referencing and content topics, there is scope to train AI to support the grading of those elements. Multiple formative opportunities could be put in place to raise the bar on these elements, ultimately reducing the load on markers, who can then focus on feeding back on students’ demonstration of cognitive skills.

Another possible time-saver is multiple-choice question production. Back in 2018 Donald Clark wrote on the potential for AI to reduce the burden of MCQs, which are time-consuming and difficult to author, and which quite often lack the quality and quantity needed to make them as robust as they could be (Clark 2018). Not only can AI help to generate question banks, it can also enable more forms of open-input question, as the technology makes it possible to interpret answers such as typed words, numbers or short answers. Moving forwards, the possibility of ChatGPT generating customised questions for individual students based on their prior knowledge and proficiency, or supporting students in designing their own assignments, could lead to attainable personalisation of the curriculum (Barber et al 2021).

In just a few months ChatGPT-3 has laser-focused higher education on reviewing the essay as a form of assessment, on the practice of writing, how that will change and why. But we are already moving on. AI is also in the space of media production: not only can it produce art, video and audio, it can adopt voices and faces – our chatbot is evolving beyond written words. It would be naive to think we can stand still. This is a technology that is already two steps ahead of our attempts to contain it in some form of meaningful, solid assessment strategy. In writing this piece I have avoided the possible option that we ‘do nothing’ in response to ChatGPT-3. Considering how slow universities can be to implement change this is a possibility, but it is not a viable or sustainable choice. Unless we confront the implications of AI for teaching and learning, and embrace it as part of our policies and pedagogies to develop critical thinking in an AI world, we really will start to lose the value of a university education.


Barber, M., Bird, L., Fleming, J., Titterington-Giles, E., Edwards, E. and Leyland, C. (2021) ‘Gravity assist: Propelling higher education towards a brighter future’. Office for Students. Available online:

Cassidy, C. (2023) ‘Australian universities to return to “pen and paper” exams after students caught using AI to write essays’. The Guardian, 10 Jan 2023. Available online:

Clark, D. (2018) ‘Learning Designers will have to adapt or die. Here’s 10 ways they need to adapt to AI…’ [Blog] Donald Clark Plan B. Available online:

Coomer, A. (2013) ‘Should university students use Wikipedia?’. The Guardian, 13 May 2013. Available online:

D’Agostino, S. (2023) ‘ChatGPT Advice Academics Can Use Now’. Inside Higher Ed, 13 Jan 2023. Available online:

Eamon, D.B. (1999) ‘Distance education: Has technology become a threat to the academy?’. Behavior Research Methods, Instruments, & Computers 31, 197–207. Available online:

Fagan, J. (2021) ‘University of Exeter Lecturers Threaten Industrial Action over Lecture Recordings’. Available online:

Heikkilä, M. (2022) ‘We’re getting a better idea of AI’s true carbon footprint’. MIT Technology Review, 14 Nov 2022. Available online:

Hern, A. (2022) ‘AI bot ChatGPT stuns academics with essay-writing skills and usability’. The Guardian, 4 Dec 2022.

QAA (2020) ‘Contracting to Cheat in Higher Education – How to Address Contract Cheating, the Use of Third-Party Services and Essay Mills’. UK: Quality Assurance Agency (QAA).

Watkins, R. (2023) ‘Update Your Course Syllabus for chatGPT’ [Blog] Ryan Watkins. Available online:

Weale, S. (2023) ‘Lecturers urged to review assessments in UK amid concerns over new AI tool’. The Guardian, 13 Jan 2023.

Yang, M. (2023) ‘New York City schools ban AI chatbot that writes essays and answers prompts’. The Guardian, 6 Jan 2023. Available online:
