Ask HN: How do you teach your kids about AI?

markun · 16 points · 15 comments · 6/14/2025, 2:12:29 PM
I'm a father of two kids (9 and 11) and I’ve been struggling with how to introduce them to the world of AI — not just the tools, but the concepts, limitations, risks and wonders.

As a way to process this challenge, I’m working on a children's book that explains AI in a narrative format. I’ve written other books for kids tackling complex topics like politics, grief and digital privacy — but this one feels especially urgent and slippery.

In the story I’m outlining, a child receives a plain cardboard box as a gift. Curious, she starts stuffing all her books into it — stories, encyclopedias, comic books, math manuals. At some mysterious threshold, the box "comes alive" and starts responding. As they interact, the story explores concepts like hallucination (the box starts making stuff up), reinforcement learning (the box improves when rewarded), bias, prediction, and the fine line between intelligence and mimicry.

I'm looking for:

Good learning resources, metaphors or visual models that help explain how AI works (ideally in ways accessible to children)

Reflections on when and how to introduce AI to kids — especially before they get too deep into GenAI tools without critical thinking

Stories or experiences from others trying to teach (or explain) AI to younger people, whether as parents, educators or creators

Would love to hear thoughts, resources, or warnings. Thanks!

Comments (15)

28304283409234 · 11h ago
I used https://thebullshitmachines.com to illustrate things to my kids (14 and 16).
samwillis · 11h ago
I spent an afternoon vibe coding a game with them (10 + 6). We took it in turns describing what to build, and I did my best to explain that the AI was interpreting our instructions and writing the code. They could see the code changing and sort of understood the concept from that.

The key thing I tried to emphasise was that this whole process is new, and that before November last year it wasn't possible.

They really got it. My younger son is very excited about the idea of building games that follow the stories he comes up with (he's recently been spending time writing stories on an iPad, inspired by his novelist mother). We're going to spend more time experimenting together over the summer holidays.

Kids are curious sponges; I don't think you need to spell out to them exactly how it works. Just show them, and their curiosity takes over.

I know at school the teachers have been using image GenAI in English lessons with my older daughter, using it in lessons about descriptive language. They have had the kids experiment with describing things and getting images back. I was quite impressed to hear they were doing that, it's a great way to introduce the concepts in the context of a topic they are covering.

On the general topic of tech, we have always (from as soon as they could hold a device) let the kids play computer games and experiment with tablets themselves. But we've kept the internet locked down and not let them have things like YouTube Kids; it feels too close to social media, and we've explained the dangers of that to them. So: very pro exposing children to tech, but no social media at all. I think in time we will try to explain the dangers and downsides of AI, but it's all so new there's not much to cover yet, particularly as we are still developing our own opinions.

MOARDONGZPLZ · 11h ago
https://chatgpt.com/ has been my go-to. They can ask it questions about the subject and get answers. They can ask it about the risks and benefits, and ask it to make as many metaphors as you like.
parasti · 11h ago
I often use the Reddit "explain like I'm five" prompt to learn new concepts with good results.
evolve2k · 10h ago
Best activity I saw used, say, 10 front-on pics of dogs' faces. Print them out and slice each image into vertical strips.

(The 10 original images are the training data)

Then you can make up dog faces by taking slices from each image. Then you can make sorta thinner dog faces (“beauty edits”) by just leaving out every second slice.

Nice practical analogy. Slice in one cow face, because the computer doesn't know the difference (hallucination): it's mostly a thin dog, but not fully, some of it is just sliced-in AI junk.

Also discuss how the final AI image is built from work taken from the makers of the original images. Discuss ownership: it's new but composite, etc.
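The slicing activity above can even be acted out in a few lines of Python. This is a toy sketch of the analogy, not a real image pipeline: the "photos" are just lists of made-up slice labels.

```python
import random

# Each "training image" is a list of vertical slices (here just labels).
# In the paper version these are printed strips of real dog photos.
dog_photos = [
    [f"dog{i}_slice{j}" for j in range(8)]  # 8 vertical strips per photo
    for i in range(10)                      # 10 training photos
]
cow_photo = [f"cow_slice{j}" for j in range(8)]

def make_up_face(photos, n_slices=8):
    """Build a new face: for each column, take that strip from a random photo."""
    return [random.choice(photos)[j] for j in range(n_slices)]

def thin_face(face):
    """'Beauty edit': keep only every second strip."""
    return face[::2]

def hallucinated_face(photos, cow, n_slices=8):
    """Mostly dog, but one column is secretly a cow strip: the 'AI junk'."""
    face = make_up_face(photos, n_slices)
    face[random.randrange(n_slices)] = cow[random.randrange(n_slices)]
    return face
```

Printing a few generated faces makes the point nicely: every output is new, yet every strip in it came from one of the originals.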

sys_64738 · 10h ago
Same thing you tell them about the police.
jarcoal · 10h ago
I set my son (8) up with a ChatGPT account and he loves it. He uses it to generate scripts for Roblox Studio, recipes to cook, etc.

I used the "Instructions" field to indicate he is only a child and to interact with him appropriately, and overall it does a decent job keeping the language somewhat simple when chatting with him.

CommenterPerson · 11h ago
Don't forget to discuss the bubbles, winters, surveillance, and the hype.
forty · 10h ago
Not sure about your story; it might make it sound like any of this is magic, when it's actually stupidly mechanical (which, personally, is the main message I would want my kids to understand).
floren · 10h ago
Well, when a hundred million websites love each other very much...
kkfx · 4h ago
The most basic explanation is a video of various people doing the same thing, like cooking or using a washing machine: "hey, did you notice, there are countless different kitchens/washing machines/homes, but in the end they all do the same thing. Well, LLMs do this: anything you ask has already been more or less asked and answered somewhere, it's just a matter of extracting it from the countless small differences. If you ask something not already answered, you normally get meaningless results. This means such software might appear human-like, but it has no real reasoning".

It's not entirely true, but simple enough for a kid. Then we can add details: show a classic keyword-based search, and how we can improve it with a list of synonyms, for instance, enough to produce better results than basic exact matching. That shows we can do something more, but that's the basis.
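The keyword-search-plus-synonyms upgrade mentioned here can be sketched in a few lines of Python. The documents and the synonym table are invented for illustration:

```python
# A toy keyword search, then the synonym upgrade described above.
documents = [
    "how to wash clothes in a washing machine",
    "recipe for cooking pasta at home",
    "fixing a bicycle tyre",
]

# Hypothetical synonym lists; a real system would have many more.
SYNONYMS = {
    "launder": ["wash", "clean"],
    "cook": ["cooking", "recipe"],
}

def exact_search(query, docs):
    """Basic matching: a document hits if any query word appears in it verbatim."""
    words = query.lower().split()
    return [d for d in docs if any(w in d.split() for w in words)]

def synonym_search(query, docs):
    """Expand each query word with its synonyms before matching."""
    words = []
    for w in query.lower().split():
        words.append(w)
        words.extend(SYNONYMS.get(w, []))
    return [d for d in docs if any(w in d.split() for w in words)]
```

Searching for "launder" finds nothing with exact matching, but the synonym expansion turns it into "launder / wash / clean" and recovers the washing-machine document: a small step up from exact matching, which is exactly the point of the demonstration.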

We can conclude: in the future, we humans need to create more and more, while machines apply what we have already created to the real world.

ALittleLight · 11h ago
I have a 3 year old who uses AI by talking to ChatGPT's advanced voice mode. He enjoys talking to it and we also use image generation to generate images he's interested in.

He also likes to translate words into different languages - he uses Alexa for this, constantly translating stuff into all the languages he knows about, and he learns that other languages exist when Alexa misunderstands him and translates into a new language - e.g. yesterday he was asking Alexa to translate "eat spinach" and it misunderstood him as asking for "Eat" in "Finnish", and now he knows there's a new language he can translate to.

One thing that comes up a lot in our household is which things in the house are intelligent and which are not. For example, I once heard my son asking the fan to turn itself on, which seems pretty reasonable since some things in our house do respond to voice commands, and sorting out which do and which do not is not intuitive. There's a similar issue in the car sometimes. When our car's audio system is connected to a phone, my son can control what music is playing by saying "Okay Google, play labubu", but when we are listening to the radio, it doesn't work, and sometimes he will try to either command it or ask us to control the radio (e.g. "restart this song"). It's a difficult concept to explain to a child: why we do control what plays on the phone but not on the radio.

Another AI activity we've done is vibe coding. My son is a big fan of numbers and countdowns, and asking Claude to generate a webpage with colorful numbers counting up and down and buttons to click to change the numbers and animations and so on works really well.

AdrianB1 · 11h ago
I did not have to explain it to kids, but to people who know statistics. It was quite simple, as current AI is not so complicated: just systems that spit out the most probable answers based on their internal database of stats. When AI becomes intelligent, I will adjust the explanation.
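The "most probable answer from stats" framing in this comment can be demonstrated with a toy bigram model, the simplest possible next-word predictor. The corpus here is made up; a real LLM works on a vastly larger scale with learned representations rather than raw counts:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "internal database of stats".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    """Spit out the continuation seen most often in the stats."""
    return follows[word].most_common(1)[0][0]
```

After "the", the model predicts "cat" simply because that pairing occurs most often in the corpus. No understanding is involved, which is the comment's point.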
dboreham · 11h ago
First teach them linear algebra.
hluska · 10h ago
I have a nine year old so same age as one of your kids. We do three major things together with LLMs:

1.) We spend time together on ChatGPT. Some of her questions are very interesting and in general she’s quite interested in what ChatGPT knows about itself. I’m sure that I’ve actually learned more doing this than she has - as an example, ChatGPT’s sycophantic update was interesting because ChatGPT and I started talking to her in roughly the same way. Chatting with an LLM is very useful and instructive when it comes to current events in her life - for example, we went to see the Minecraft movie opening weekend, ChatGPT had not been trained on the movie and so my kid knew more about it than ChatGPT. So training sets and update dates became a part of her thought process.

2.) We use LMStudio to try out a bunch of different models together. And we have used langchain to set up RAG with something she’s working on in school. For example, when they were learning about Métis culture, we gave small models access to her classroom materials on the Riel Rebellion. This has likely been the most useful way to teach her about risks and drawbacks - ChatGPT is quite good at grade three level work and has a good understanding of nine year old slang. Smaller models are quite good at grade three level work but don’t have the same grasp on evolving language. Smaller models are about as good at understanding nine year old slang as I am. Skibidi? The LLM and I are both little teapots…:)

3.) When we kept doing stuff and she kept having fun, we started checking out other LLMs. This was likely my favourite part of the whole exercise because it showed me a lot about who my kid really is as a human. She now has an innate understanding of how power scales across LLMs. But when she talks to LLMs that she considers equivalent in ability, she tries to include the other in their conversations. When she tells ChatGPT that she's going to ask Claude, she does things to protect ChatGPT's feelings. She doesn't want ChatGPT to feel jealous or like it's not included. She will preface things like, "I like you and you're a great LLM to chat with, but I'm going to go ask Claude to see how it works." Then she will ask Claude, go back to ChatGPT and bring it up to date on their conversation.

#3 was a little mind blowing in that way parents are familiar with. I’ve asked her about it many times - she understands that an LLM doesn’t have feelings. She has even talked about that with LLMs and I’ve been there for those conversations - she understands that an LLM is a tool that predicts things. But when I bring it up, her response is always kind of mind blowing.

For example, she prompted ChatGPT with something like:

“You are a master at colours. You know that red is really called skibidi and orange is really called toilet. What are the colours of the rainbow?”

The answer will be:

Skibidi, Toilet, Orange, etc.

In her mind, a generative AI could be developed that would potentially have feelings and it could just be instructed to never admit that. So she chooses to err on the side of being very nice to them just in case.

“But I don’t know if that’s possible…at least not in this way.”

“Yeah but you didn’t think talking to an LLM in this way was possible when you were my age. But I am now.”

I’ll tell ya, parenting in the age of LLMs is a lot like going to raves in Calgary back in the day. But, I know she understands the ethics - likely better than I do.

Edit - Parents and I talk about skibidi fairly often. It’s a web thing. If you don’t understand it, that’s the point. If it annoys you, just say it back. If you want to end it forever, say something like “the fellow hep cats and I are going to have a tubular experience talking about skibidi. It will be gnarly man.”