When Google published a major update on AI and Education in May 2024, it signalled a significant step in how one of the world’s largest technology companies imagined learning in an AI-enabled world. Google showcased the potential of large language models to tutor, summarise, assess and personalise content at scale. Yet it drew fair criticism, with commentators arguing that while the technology was impressive, the pedagogical framing and teacher-agency dimensions remained underdeveloped (Webb, 2024; Hill, 2024).
Google’s latest publication — AI and the Future of Learning — represents an evolution. It speaks less about what AI can do and more about how learning might be reimagined. There is now language about agency, meta-learning, “learning how to learn,” and designing AI tools that guide rather than simply answer. While the progress is notable, there remains a gap between vision and practice. The paper is still light on how this translates into curriculum design, teacher professional development, assessment reform or equity of access. And that is where the real work begins.
What Has Improved Since 2024
To Google’s credit, this new paper moves beyond AI as a homework helper. It acknowledges that the purpose of AI in education is not to provide answers, but to deepen thinking. It foregrounds personalisation, metacognition and learner autonomy. It imagines AI tutors that ask “why?” instead of just giving “what”.
It also speaks more directly to global inequities — the teacher shortages, lack of access to high-quality materials, and the barriers faced by learners who sit outside traditional systems.
But despite this shift, pedagogy is still an implied backdrop rather than the engine. There is still limited conversation about how AI fits within curriculum frameworks, or how teachers’ roles must be redesigned, not just supported. It is still unclear how assessment will evolve when AI can generate essays, debate arguments or even simulate reflective practice. Most importantly, it does not show how we cultivate the things AI cannot automate — empathy, resilience, criticality and curiosity.
Curiosity as a New Literacy
This is the part of Google’s approach I do rather favour. If AI can provide information instantly, then the real value in education becomes the ability to ask better questions. This is why curiosity must now be treated as a literacy — as essential as reading, numeracy or digital skills.
Curiosity is not accidental. It grows in spaces where students feel safe to question, where mistakes are treated as thinking, and where learning is designed as exploration rather than consumption. AI can support this — it can prompt reflection, generate counterarguments, offer alternative perspectives — but it cannot create the desire to wonder. That remains profoundly human.
To embed curiosity in learning, we must design activities that begin with student questions, not teacher answers. We need assessments that reward exploration, not reproduction. We should use AI not as a shortcut to information, but as a companion in inquiry — a thinking partner that provokes, prompts and occasionally disagrees. These are the frameworks and toolkits I am currently working on.
Linking to the Curriculum and Assessment Review
Published the day before Google’s paper, the UK Government’s Curriculum and Assessment Review: Building a World-Class Curriculum for All argues for a curriculum that is knowledge-rich yet future-facing — one that prepares learners for uncertainty, not just examinations. It emphasises agency, adaptability and meaningful application.
Google’s paper indirectly echoes this. Both documents imply a shift away from the passive acquisition of content toward active, reflective, human learning. Yet both raise the same question: how do we turn aspiration into implementation? How do we design learning journeys that are coherent across modules, that build skills progressively, and that make space for thinking, not just completion?
Online programmes in particular face this challenge — they must be flexible enough for diverse learners, but structured enough to retain academic integrity and coherence. This is precisely where thoughtful design and digital pedagogy matter.
The Skills White Paper and the Missing Digital Spine
In my recent commentary on the Government’s Skills White Paper, I argued that while the policy imagines a modular, stackable, lifelong learning system, it is missing a digital spine — the connective infrastructure that would make it possible. Without integrated systems, shared data standards, and support for learner digital capital, skills reform risks becoming a collection of disconnected pilot projects.
Google’s vision of AI-enabled personalisation aligns, in some ways, with the White Paper’s ambition. But technology alone will not deliver flexible, lifelong learning. Infrastructure will. Policy will. Pedagogy will. Digital leadership will. And unless we build systems that value digital inclusion, human connection and student persistence, AI will amplify inequality rather than reduce it.
Where the Opportunity Lies
This moment — between vision and practice — is exactly where educational institutions, online providers and policymakers need to build their capabilities, and quickly. AI will not, by itself, create better learning or solve systemic problems in the sector. But it can, if designed with intention, enable us to build learning systems that are more human, not less, and deliver experiences that prioritise curiosity, belonging and purpose. The opportunity lies in using AI not to automate what we already do, but to redesign how we teach, support and connect — creating education that is responsive, inclusive and deeply attuned to what it means to learn in an age of intelligent machines.
For institutions and online providers, the most immediate opportunities sit in these areas:
1) AI-enabled learning, grounded in pedagogy
Design programmes where AI enhances learning design and supports students to learn in an AI world — scaffolded modules, student agency and progression, not content pushed out at scale.
2) Teacher capability and AI-confident practice
Build staff confidence through workshops, toolkits and coaching so educators become co-designers with AI, not system supervisors.
3) Digital capital and inclusive participation
Create onboarding, skills pathways and support models to ensure every learner has the access, skills, literacies and confidence to succeed in AI-enabled environments.
4) A digital spine for lifelong learning
Engage advisory on interoperable systems, data standards and ethical analytics to connect modules, credentials and learner journeys across the institution.
5) Curiosity, care and connection embedded in design
Design learning that prioritises belonging, questioning and human connection — because retention is built through relationships, not dashboards.
6) Digital leadership and change capability
Help leaders move beyond slow, committee-led processes towards more agile decision-making and lead others in the redesign of processes. In an AI-driven world where skills and knowledge evolve quickly, institutions need leadership that can align teams, make ethical choices about technology, and streamline programme approval so innovation keeps pace with learning.
Conclusion
Google’s AI and the Future of Learning is not the answer. But it does provide a warning. It signals that the age of automated content has arrived, and that content alone will not build an AI-ready, AI-literate or employable workforce. If anything, it makes clear that the real value of education now lies in what AI cannot produce: curiosity, collaboration, ethical judgement, adaptability and kindness — the human skills that will define employability in an AI-shaped economy.