Why using AI to make more content may be the least interesting thing we can do with it

[Illustration: a shift from ‘Content Generation’ to ‘Judgement & Meaning’, from producing content to making meaning.]

Higher education is not short on content.

It has lectures, readings, videos, activities, frameworks, rubrics, and resources in abundance. What it is increasingly short on is time, clarity, and space for deep thinking. Yet as generative AI becomes embedded across the sector, much of the attention remains fixed on accelerating the very thing we already have too much of.

This tension sits at the heart of a recent survey exploring how generative AI is being used in the development of educational content and online courses across UK higher education. The study invited learning designers, digital education specialists, academic developers, heads of online learning, consultants, and edtech partners to share how AI is shaping their practice, where it adds value, and where it raises concern. The aim was not to evaluate tools, but to build a clearer picture of emerging workflows, opportunities, and challenges — and to inform future guidance on responsible, effective AI-supported course design.

What emerged from the 62 responses was not resistance to AI, nor uncritical enthusiasm. Instead, a more interesting story surfaced: one of widespread adoption, coupled with uncertainty about purpose.

And that raises a fundamental question.

Are we using AI to improve learning — or simply to produce more of it?

AI Is Widespread, But Its Use Is Narrow

The survey results make one thing clear: AI is already part of everyday practice.

Over 70% of respondents reported using AI regularly, with a further group using it occasionally. Very few said they did not use it at all. In other words, the question of whether AI belongs in course design has largely been settled. But how it is being used tells a more revealing story.

Most respondents described using AI to:

  • Draft or refine content
  • Generate ideas or outlines
  • Improve efficiency
  • Speed up routine tasks

These uses are understandable, particularly in a sector under sustained workload pressure. But they also point to a consistent pattern: AI is being used primarily as a productivity tool, not a pedagogical one.

Only a small number of respondents described using AI to reshape assessment, deepen critical engagement, or support reflective or dialogic learning. Where AI was avoided, it was often because of concerns about quality, ethics, or the risk of undermining academic judgement. Several respondents noted deliberately avoiding AI in assessment contexts; one avoided it whenever “academic judgement really mattered.”

That response is telling. When does academic judgement not matter in higher education?

We Don’t Have an Issue with AI, We Have an Issue with What We Are Asking It to Do

What emerges from the survey is not anxiety about technology, but discomfort with what its use appears to reinforce.

Much of higher education still operates within familiar models of teaching: delivering content, explaining concepts, and testing knowledge acquisition. When AI is applied within that same framework, it simply accelerates an already content-heavy approach. For some respondents, this made AI ‘feel less like innovation and more like an intensification of existing problems’.

Several participants expressed concern not because AI was being used, but because it was being used in ways that aligned too neatly with traditional delivery models rather than supporting pedagogy, learning design, or critical engagement. A number also raised concerns about AI-generated content being created without sufficient subject matter checking or critical review — particularly when time pressures made scrutiny difficult.

In an age of information abundance, this is a warning sign. The more content we produce, the harder it becomes to assure quality, coherence, and academic integrity. Scaling content creation without equivalent attention to sense-making, interpretation, and judgement risks amplifying error, surface understanding, and misplaced authority. What respondents were pointing to is not just a workload issue but a design one: when education centres on producing more material, the risk of uncritical adoption grows alongside it.

From this perspective, the challenge is not how efficiently we can generate content, but how intentionally we design learning. The real opportunity for AI lies not in accelerating production, but in supporting discovery, interpretation, and meaning-making — the things that cannot be automated.

This points to a deeper structural issue. In a sector where many academics are disciplinary experts rather than trained educators, teaching is often framed as delivery rather than design. When AI enters that environment without a shift in pedagogical intent, it risks reinforcing exactly the practices many institutions are trying to move beyond.

The discomfort is not about AI itself. It is about what its use seems to prioritise.

There is a Confidence Problem

One of the clearest messages from the survey was not resistance, but uncertainty.

Respondents consistently pointed to:

  • A lack of clear institutional guidance
  • Limited examples of good practice
  • Uncertainty around ethical and legal boundaries
  • Little support for designing AI-aware assessment

When asked about professional development, the strongest preferences were not for technical training, but for:

  • Practical exemplars
  • Short ideation and co-design workshops
  • Opportunities to learn from peers
  • Guidance grounded in real teaching contexts

The survey participants are largely not asking how to use AI; they are asking what good use looks like. What does responsible, pedagogically sound, defensible use of AI actually involve?

From Content Creation to Cognitive Design

This is where the conversation needs to shift.

The most valuable use of AI in education is not generating content faster, but helping learners think better. Used intentionally, AI can surface assumptions in student reasoning, expose gaps in understanding, challenge dominant perspectives, support reflective dialogue, and model disciplinary ways of thinking.

But this only happens when learning is designed for it.

It requires:

  • Assessments that prioritise judgement over output
  • Activities that reward reasoning rather than reproduction
  • Learning outcomes that value interpretation, not just information

Notably, 78% of respondents felt least confident about designing AI-aware assessment and alternatives to the traditional essay or report. If AI can now produce fluent academic prose with ease, writing quality can no longer serve as a reliable indicator of learning. What matters is not how polished a response appears, but the quality of judgement it demonstrates — how evidence is weighed, positions are formed, and arguments are justified within a disciplinary context.

From this perspective, AI does not undermine education. It exposes the limits of assessment practices that have long treated writing as a proxy for thinking.

When AI is Used Well

One of the most striking findings from the survey was that respondents who reported more positive experiences tended to use AI as a thinking tool rather than a writing tool.

Several respondents noted using AI to sense-check learning outcomes, explore alternative ways of framing concepts, or reflect on how students might interpret an activity or assessment design. Others used it to surface assumptions, brainstorm playful learning activities, test alignment, or support reflective and inclusive design decisions. In these cases, AI was not doing the intellectual work on behalf of the educator, but helping to make the design thinking more explicit. What these examples have in common is that AI is supporting the development of judgement rather than replacing it.

The Strategic Question Institutions Now Face

The question for higher education is no longer whether AI should be used. It is this: what kinds of thinking do we want our graduates to be capable of in an AI-rich world?

If the answer includes criticality, ethical reasoning, reflexivity, and informed judgement, then AI must be designed into learning with those outcomes in mind.

Respondents identified the following as among the biggest areas institutions need to focus on:

  • Intentional assessment design
  • Clarity about where AI adds value and where it does not
  • Support for staff to experiment safely
  • Alignment between pedagogy, policy, and practice

Without this, AI risks becoming just another layer of activity — busy, impressive, and ultimately hollow.

A Final Reflection

If there is one finding from the survey I will be taking with me, it is that AI has made content easy. That changes everything.

The role of education is no longer to provide answers, but to help learners ask better questions. To weigh evidence. To understand context. To recognise bias. To develop and defend a position with confidence and care. And that, ultimately, is where the real focus in course design now lies.