AI is already shaping how children learn, search, and make sense of the world. It’s in the apps children use, the answers they’re given, and the assumptions they make.
Professor Rose Luckin, Founder and CEO of Educate Ventures Research and an internationally recognised expert on AI in education, is working with Nord Anglia Education schools to ensure students learn to use these tools – in an age-appropriate and carefully guided way, without being misled by them.
Here, she explains what AI literacy really means at different ages, and how parents can help children stay curious, sceptical, and in control of their own thinking.
What does “AI literacy” mean at Nord Anglia – for a seven-year-old vs a 17-year-old?
For a seven-year-old, AI literacy is not about using generative AI. It is about building foundations. Children this age are already encountering AI through voice assistants, recommendation algorithms, and smart toys. They need to understand that these systems learn from patterns in data, that they are made by people who make choices, and that computers do not “know” things the way humans do.
Most importantly, we are developing the questioning habits and critical thinking that will serve students later. Can you trust everything you are told? How do you check? What makes a good source? These fundamentals of human intelligence – curiosity, scepticism, verification – are the bedrock of AI literacy, long before children touch a chatbot.
A 17-year-old, by contrast, is likely already using generative AI. They need sophisticated understanding: how these systems produce confident-sounding nonsense, why fluency is not accuracy, how bias enters through training data, and crucially, awareness of their own thinking. Are they using AI to enhance their capabilities, or outsourcing the cognitive work that builds their intelligence? That self-awareness is what separates effective use from dependency.
What is the simplest way to explain AI hallucinations to parents and children?
AI does not understand a single word it produces. It recognises patterns in text; it has no grounding in reality, no lived experience. When it does not know something, it does not say “I do not know”. Instead, it generates plausible-sounding text that may be completely false.
For older children actually using these tools, the message is simple: confidence is not competence. AI sounds authoritative even when it is wrong.
What are the most common ways children are misled by AI outputs?
Three patterns concern me. First, accepting fabricated facts, particularly dates, statistics, and quotations. Second, mistaking confident tone for expertise. Third, and most troubling, gradually outsourcing their thinking.
This cognitive offloading is the real danger. Recent research shows test scores dropping significantly when students rely on AI, and greater trust in generative AI correlating with reduced critical thinking. The risk is not wrong answers. It is that young people stop developing the thinking muscles they need.
How should we teach children to keep their voice and originality when using AI?
By positioning AI as a thinking partner, not an answer machine.
AI can help brainstorm or challenge ideas, but original thought must come from the student. We must therefore teach them to pause before accepting outputs, to ask whether the AI’s answer reflects what they actually think.
The goal is augmented intelligence: AI making us smarter, not replacing our thinking. And it is as much about integrity and responsibility as it is about attainment.
What is a short checklist families can use at home?
Parents do not need to be technical experts. What matters most is encouraging conversation, curiosity, and healthy scepticism. Here are four crucial questions to ask when your child uses AI for learning:
“What do I need?” Be clear about your purpose before you start. If you do not know what you want, you cannot judge whether AI has delivered it.
“Where did this come from?” AI does not cite sources reliably. Find the original.
“What is missing?” AI gives an answer, not the full picture.
“How do I check?” Never submit AI-generated content without verification.
How can we prevent “prompting skill” becoming a proxy for privilege?
This is precisely why we must teach AI literacy explicitly and systematically, and why we must not leave it to be acquired at home by those with resources.
Critical thinking, verification habits, and metacognitive awareness are not optional enrichment. They are foundational for every child navigating an AI-integrated world – and for becoming a confident, independent thinker within it.
-ENDS-
About the author
Rose Luckin is an internationally respected academic and influential voice on the future of education and technology, particularly AI. With over 30 years of experience, she is a recognised expert on AI in education, serving as an advisor to policymakers, governments, and industry globally.
Rose is Professor Emerita at University College London and Founder and CEO of Educate Ventures Research Limited (EVR), a company that provides thought leadership, training, and consultancy to help the education sector leverage AI ethically and effectively.
At Nord Anglia Education, Rose is helping students and staff navigate AI thoughtfully and critically, with a focus on ensuring school communities use AI to become smarter and more capable thinkers, not to outsource the very learning that builds their intelligence.
Rose also serves as an advisor to Cambridge University Press and Assessment and is co-founder of the Institute for Ethical AI in Education. She is President of The Self-Managed Learning Centre in Brighton and sits on a range of advisory boards within the education and training sector.
Rose was honoured with the Bett Outstanding Achievement Award in 2025, named a Leading Woman in AI EDU at the ASU-GSV AIR Show in 2024, and awarded the 2023 ISTE Impact Award, becoming the first person outside North America to receive their top honour. She holds a PhD in Cognitive and Computing Sciences and a First Class Bachelor’s degree in AI and Computer Science.