How to Talk to Your Teen About Deepfakes Before It Becomes Personal | Wise Online Parent


Nobody prepared you for this one. Here's the conversation to have now, calmly, before it matters most.

Nobody prepared you for this one. Not your own parents. Not the conscious parenting books on your shelf. Not even the online safety talk at school orientation. Deepfakes weren't really a thing when you were growing up. And for a long time they felt distant, a problem belonging to celebrities and political operatives, not something that would show up in your fourteen-year-old's school group chat.

They're showing up in school group chats.

If you feel behind on this conversation, that is not a failure of attention. The technology itself barely existed two years ago in accessible form. Today there are apps that produce convincing fake images in under a minute, no technical skill required. The ground shifted while everyone was looking somewhere else, and most parents are encountering the word "deepfakes" at exactly the moment it becomes relevant to their own family. Which is, unfortunately, the worst possible time to be starting.

This post is about having the conversation before that moment arrives.

What Deepfakes Actually Are, in Plain Language

A deepfake is AI-generated media (a photo, video, or audio clip) that makes it appear a real person is saying or doing something they never said or did. The technology used to require serious technical expertise. It no longer does. Anyone with a smartphone and access to a handful of ordinary images can now generate content using a real person's face.

For teenagers, three forms matter most.

Non-consensual intimate imagery is the fastest-growing and most harmful category: fake sexualised images generated using a real classmate's face, primarily targeting girls. Creating one doesn't require any compromising photos. A few pictures from someone's Instagram account are enough. A 2024 American Psychological Association report found that one in ten teenage girls in the US reported knowing someone (including themselves) who had been targeted this way. That is roughly one student in every classroom.

Voice cloning is the part parents often overlook. Tools can clone any voice from a few seconds of publicly available audio and generate new speech saying anything. Law enforcement has documented scams using this technology involving minors, including impersonation of parents, teachers, and peers. A simple family safe word, something only your household knows, is a practical, low-drama way to verify identity in an unusual call.

Impersonation and social manipulation is the third category: fake clips of peers, teachers, or public figures saying things they never said. These travel through teenage social ecosystems faster than any correction ever will.

The goal in naming these concretely isn't alarm. It's legibility. A teenager who understands how the technology actually works is one who can recognise it when they encounter it.

The Two Conversations Every Parent Needs to Have

There are two tracks here, and most parents only think about one.

The first is your child as a potential target: someone creates a deepfake using their image, their voice, their face. The second is your child as a potential creator, bystander, or inadvertent participant: they encounter the technology through peers, someone sends something in a group chat, someone calls it a joke.

Both conversations matter. And they have different emotional textures.

Track 1: If Your Child Could Be Targeted, Lower the Cost of Honesty First

Research is consistent on why these situations spiral: children don't tell adults early. Not because they don't trust their parents, but because they're afraid the first response will be to take their phone. Losing their device means losing their entire social world. That fear of punishment keeps children silent. And silence is the actual danger. An image that isn't reported is an image that keeps circulating.

The most important thing you can do right now, before anything happens, is say this out loud:

✦ Try saying
"There's AI-generated content circulating at schools now. Sometimes using real students' faces. If you ever see that, or if it ever involves you, you will not be in trouble for telling me. We will handle it together, calmly, and I will not take your phone away for bringing it to me."

Say it even if they roll their eyes. Especially if they roll their eyes. They will hear it.

This is not a minor reassurance. It is the difference between a child who comes to you within the hour and a child who carries something alone for weeks.

Track 2: For Boys Especially, This Has to Be a Moral Conversation

This part needs to be said plainly, because it often gets skipped.

Research indicates teenage boys are more likely to encounter deepfake-creating technology and less likely to have had a meaningful conversation with an adult about the harm it causes. UK research found that boys ages 13 to 17 were the most common creators of non-consensual fake intimate images, and that most did not understand that creating them constitutes a crime in many jurisdictions. Girls are more often the targets. Boys are more often the creators. Those facts belong in the same conversation.

What the research shows
As of 2026, creating or distributing non-consensual AI-generated intimate imagery is a criminal offence in more than half of US states, and there have been charges involving minors. Your teenager deserves to know this as accurate information about the world they're in.

But consequences alone don't build moral judgment. They build avoidance of getting caught. Fear of punishment only holds when the risk feels immediate. Moral identity holds longer.

✦ Try saying
"Would you want someone doing this to your sister? Your friend? That person is someone's daughter. What kind of person do you want to be?"

That question reaches something a list of consequences can't. It moves the conversation from "don't do illegal things" into "what kind of person are you building yourself into when no adult is watching." That's the more durable work.

What This Does Not Mean

It does not mean your teenager is constantly at risk, or that AI tools are inherently predatory, or that every curious classmate is a threat. Most teenagers who encounter deepfake-creating technology do so out of curiosity, often with no intent to harm anyone. The conversation you're having is not "the world is dangerous." It's "here is how this technology works, and here is where we stand."

Understanding this technology is protective. A teenager who knows that a realistic image can be generated from a handful of normal photos is a teenager who thinks twice about what they share publicly. That's not fear. That's literacy.

If Something Has Already Happened

Your teenager came to you. Before anything else, hold that. That is the most important thing that happened. How you respond in the next sixty seconds determines whether they come to you next time.

Start with them, not the problem.

✦ Try saying
"I'm so glad you told me. That sounds awful. Are you okay?"

Do not minimise. Do not reach immediately for their phone. Let them feel heard before you shift into action.

Then: document before reporting. Screenshot with a timestamp before requesting removal. Platforms often act quickly once a report is filed, and evidence can disappear. Contact the school if classmates are involved. Assess whether law enforcement is appropriate given your state's laws. Connect your teenager with support if they need it.

The full step-by-step response protocol, including platform-specific reporting processes, the legal framework by state, and the exact words to use at each stage, is inside The Parent's Guide to AI & Your Teen. This post gives you the orientation. The guide gives you the roadmap when it matters most.

The Conversation That Changes Things Happens Before the Crisis

The parent who has this conversation now, calmly, before anything has happened, is the parent whose child knows exactly what to do when something does. Not because you covered every scenario, but because you established something more important: I am a safe place to land. You will not be punished for telling me.

That is also the conversation underneath almost every difficult digital moment parents face. Not control. Connection.

Not panic. Preparation.

You're already doing the preparation part. You're here.

Questions Parents Also Ask

What exactly is a deepfake, in simple terms?

A deepfake is AI-generated media (a photo, video, or audio clip) that makes a real person appear to say or do something they never actually did. Today's tools can create convincing images using only a few ordinary photos from someone's social media. No technical skill is required.

Is deepfake creation by teenagers actually illegal?

In more than half of US states, creating or distributing non-consensual AI-generated intimate imagery is a criminal offence, even if the creator is also a minor. Creating such an image of a classmate, even without sharing it, can carry real consequences. Your teenager deserves to know this as accurate information, not as a scare tactic.

What should I do if my teenager tells me a deepfake was made of them?

Start with them before you start fixing. "I'm so glad you told me. Are you okay?" matters more than any immediate action. Then document the image with a timestamp before reporting, contact the platform, reach out to the school if peers are involved, and evaluate whether law enforcement is appropriate in your state.

How do I bring this up without making my teenager anxious?

Frame it as information, not warning. "I've been learning about something and want to make sure you know about it" lands differently than "there's something scary you need to hear." Then give the reassurance directly: if this ever involves you, you won't be in trouble for telling me.

Should I be worried about voice cloning specifically?

It's worth knowing about. Voice-cloning tools can replicate anyone's voice from a few seconds of audio found on social media. Documented scams have used cloned voices to impersonate parents and trick teenagers. A simple household safe word, something only your family knows, is a practical, low-drama safeguard worth establishing.

If this resonated

The Parent's Guide to AI & Your Teen

The full step-by-step response protocol, including platform-specific reporting, the legal framework by state, deepfake recognition for parents, and the exact words to use when something has already happened. The roadmap when it matters most.

$37
Get the guide →