AI is just a tool. In the right hands, it’s magic.
But in the wrong hands?
It’s a horror movie.
And when kids are the audience?
It can get downright dangerous.
Here are five kinds of AI-generated “educational” kids’ videos on YouTube: the lazy, the broken, and the downright disturbing.
More is More Slop
CocoFelon Slop
Confidently Wrong Slop
Fever Dream Slop
You Can’t Unsee This Slop
And it’s happening in every feed.
I’m Dr. Carla Engelbrecht. I’ve spent 25 years creating children’s media at Netflix, Sesame Street, and PBS Kids. I hold a doctorate in educational technology and media. My newsletter AI meets ABCs helps parents, teachers, and creators use AI responsibly to build high-quality educational media.
1. More is More Slop
JoJoFunland launched August 5, 2025. By January 20, 2026, the channel had posted 7,770 videos and racked up more than 3.1 million views.
That’s a new video roughly every 30 minutes. For five months straight.
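If you want to sanity-check a channel’s upload cadence yourself, the arithmetic takes only a few lines. Here’s a quick sketch (using the JoJoFunland dates and counts above):

```python
from datetime import date

# Figures from the JoJoFunland example above
launched = date(2025, 8, 5)
observed = date(2026, 1, 20)
videos = 7770

days_active = (observed - launched).days            # 168 days
per_day = videos / days_active                      # ~46 videos per day
minutes_between = days_active * 24 * 60 / videos    # ~31 minutes
print(f"{per_day:.0f} videos/day, one every {minutes_between:.0f} minutes")
```

No human studio sustains that pace; any channel whose numbers pencil out this way is almost certainly automated.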
Channels like these churn out content at a pace no human team could match. The videos are colorful, kid-friendly, and generally nonsensical. Nothing memorable. Nothing learned.
Low-effort content has always existed. But it still required some effort. Someone had to animate. Someone had to record. Someone had to show up.
Now one person with AI prompts can flood the zone. The economics have flipped: why make one good video when you can make 5,000 mediocre ones and hope you hit the algorithmic jackpot?
It’s like swiping right on every person on Tinder and hoping you’ll magically find your soulmate.
And if you think this is a one-off… here are three more I found with next to no searching.
Joined Aug 1, 2025 · 4.93K subscribers · 7,860 videos · 2.3 million views
Joined Aug 24, 2025 · 1.62K subscribers · 3,463 videos · 680K views
Joined Aug 6, 2025 · 3.41K subscribers · 4,053 videos · 1.5 million views
Also, is it kinda weird that they all started in August? 🤔
How to Spot It
Take a look at the channel page, then click More in the description. A page will pop up with additional information. Scroll to the bottom to see the date the channel was created and the number of videos posted.
For comparison, the Cocomelon channel has existed since 2006 and has 1.6K videos.
JoJoFunland has existed since August 2025 and has 7.8K videos.
Use this as an opportunity to talk with your child about quality vs quantity.
They’ll get it intuitively. Ask: “Would you rather have one really good birthday present or a hundred boring ones?”
It’s never too early to help kids recognize when something was made with care versus when it was made to grab their attention. That skill will serve them for the rest of their lives.
2. CocoFelon Slop
Cocomelon’s JJ and family are basically royalty in the toddler universe. Their Wheels on the Bus video has nearly 8.5 billion views.
Which makes them the perfect target for slop.
This video from JIR Creation has 3.1 million views. It’s just the original videos with visual effects slapped on top. Distorted colors. Weird filters. Stretched frames. Presumably to dodge copyright detection.
The channel has nearly 950K subscribers, 13 million views, and dozens of these bus-slop videos.
Bootleggers used to have to hire artists or use complicated programs to modify the videos. AI production tools make it simple and quick. They just describe what they want to rip off and automate posting.
Baby Shark is swimming in the same waters. Search “Baby Shark” and filter to more recent uploads, and you’ll find videos that have nothing to do with Baby Shark, Pinkfong, or anything resembling the original. Just the name in the metadata, grabbing clicks from toddlers who can’t tell the difference.
SmartKidsJenny is just one of many doing this. I’ve written about this before.
AI makes it easy to generate thumbnails, produce knockoffs, and automate posting.
Perhaps the most disturbing of the recent IP infringers is Baby Anna and Dinosaur at the Window.
In two weeks, this video amassed nearly 21 million views and counting, combining dinosaurs, an adorable baby, a predictable lullaby, and shock value into one viral mess of slop.
There’s Joker dinosaur.
There’s Captain America dinosaur.
There’s no shame.
No licensing.
No brand protection.
Just characters kids recognize, mashed together by AI, chasing a baby and chasing views from the algorithm.
What to Do
YouTube’s policies technically address these videos from three sides.
Misleading Metadata: This prohibits using names like “Baby Shark” or “JJ” to bait clicks for unrelated videos.
Inauthentic Content: Recently updated from the “Repetitious Content” policy, this specifically targets mass-produced, low-effort videos that offer no unique value or human perspective.
Kids Quality Principles: These rules allow YouTube to demonetize content that is “highly repetitive” or purely commercial, specifically to protect the toddler feed.
The Copyright Loophole
These creators often hide behind a fundamental misunderstanding of Fair Use. They claim that adding a color filter or stretching a frame makes the content “transformative.”
Under the law, these are actually derivative works. To be transformative, a video must add new meaning or a different purpose, like a parody or a critical review. Simply distorting a Cocomelon video to bypass Content ID pattern-matching is straight-up copyright infringement, even if the algorithm hasn’t caught the “fingerprint” yet.
The AI Factory
Still, enforcement for YouTube is a game of whack-a-mole with infinite moles. For every channel that gets flagged, ten more pop up. AI has turned what was once a manual bootlegging hustle into an automated factory. It is now effortless for bad actors to churn out thousands of thumbnails and flood the feed with slop.
According to a 2025 study by Kapwing reported in ZDNET, over 20% of the content served to new YouTube Shorts users is classified as “AI slop” or low-quality content designed solely to harvest clicks.
How to Fight Back
If you spot it, talk to your child about it. It’s an ethics lesson hiding in plain sight. Kids understand fairness. They understand “that’s not yours.” You can explain that real people worked hard to create Cocomelon and Baby Shark, and these other channels are taking their work without asking.
Consider also reporting the content.
Thumb down and report: Use the “Spam or misleading” or “Infringes my rights” reporting tools.
Use the “Don’t recommend channel” option: This is often more effective for your personal feed than just a report.
Reporting isn’t just about cleaning up the algorithm. It’s modeling to your child what it looks like to stand up for creators. Your kid is watching how you respond.
And what parent ever thought they’d be saying, “Well, actually, Cocomelon isn’t that bad?”
3. Confidently Wrong Slop
This is where slop stops being lazy and starts being actively harmful to the viewer.
This video from Fantastic Fun Times uses Honeydew Melon for H, then shows a whole melon morphing into this edible moment. (Also enjoy the morphing accent, which shifts from British to American to Australian by the time K is for kiwi.) All of it is taught by “Teacher Shan,” implying trust where there should be none.
The creator doesn’t disclose that this is AI or which creation tool they used, but I’d bet they used Google’s Veo with ingredient-to-video prompts to create scenes with consistent visuals, lending the video a false sense of authority.
Then there’s any number of ones like this Train Ride to Learn ABC. It looks like premium content. The 3D rendering mimics Pixar. Beautiful lighting. Rich textures. The polish tricks parents into thinking they’ve found something good.
But the writing on the side of the train isn’t an actual word or a label. It’s word salad, a jumbled string of hallucinated letter-shapes that look like text but mean nothing. In an alphabet video. For toddlers learning to read.
AI doesn’t know what it doesn’t know. It generates with confidence regardless of accuracy. And if no human is reviewing before upload, no one catches the errors. Automation often means no adult in the room.
For a three-year-old learning the building blocks of language and logic, this isn’t just unhelpful. It’s actively confusing. Their brain is working overtime trying to make sense of nonsense instead of learning the alphabet.
What to Do
When the content looks polished but the information is “word salad” or factually broken, you are looking at a Media Literacy 101 moment. Use these steps to pivot from frustration to education:
Play “Spot the Glitch”: Make it a game. Say, “Wait, the letters on that train don’t look right.” Then sound it out! Or spot the individual letters. This teaches kids to look critically at the screen rather than just absorbing it as truth and supports their literacy learning.
The Reality Check: If the video shows a banana magically morphing into a peeled banana, ask, “Does that happen to you? Is this how bananas get peeled?” Grounding the digital experience in their physical world helps them distinguish between reality, cartoon humor, and AI hallucinations.
Explain that AI makes mistakes: Be explicit. “The computer made a mistake there. It thought those squiggles were words, but we know they aren’t.” This demystifies the “magic” of the screen and reinforces the importance of critical thinking.
4. Fever Dream Slop
This is the uncanny valley of children’s content. AI-generated imagery where something is clearly, viscerally wrong. Kids might not be able to articulate what’s off, but if you see your kid watching it, you’ll immediately know it.
Take this video, which features a “Pixar-style” baby eating an apple. And then oozing red stuff (blood?) from the mouth.
The video is riddled with weird moments: a spoon with scoops on both sides, orange juice that bubbles like dish soap, and a breakfast of in-the-shell walnuts and an orange squeezed over a giant stack of pancakes.
This is high-end aesthetics masking a complete absence of educational substance.
The child’s brain is working overtime just to resolve the visual chaos.
Other signature hallucination moments include animals with extra limbs or mouths, strange clothing choices, humans with animal ears and tails, and kids standing on counters while baking. This Math is Fun video with 202K views is a treasure hunt of hallucinations plus educational misdirection.
These aren’t just lazy mistakes. These are classic failures of generative AI. The tools hallucinate limbs, merge bodies, lose track of physics. Without careful prompting and human review, you get imagery that an adult would never approve and a child can’t unsee.
What to Do
The “uncanny valley” moments that give you the ick (extra limbs, oozing fruit, or physics-defying bodies) can be viscerally unsettling for kids. Your goal here is to validate their intuition and protect their sense of safety.
Validate the “Ick” Factor: Kids often feel a sense of unease they can’t name. If you see them squinting or pulling back, say, “That looks a little bit weird, doesn’t it? I don’t like looking at that cow with five legs.” Giving them words for their discomfort gives them permission to look away.
The “Why” of the Weird: Explain that sometimes computers try to draw things but don’t understand how bodies work. This helps the child realize the “scary” or “weird” image is just a technical failure, not something that exists in the real world.
It is okay to just turn it off. Say, “This video feels a bit messy and confusing. Let’s find something made by people who know how to draw real apples.”
5. You Can’t Unsee This Slop
Content so bizarre, so disconnected from anything appropriate for children, that you question everything.
Or is that just me?
The example that haunts me: An alphabet video. J is for Jelly. On screen, a woman in the style of a mukbang video pulls a banknote out of jelly with her teeth. Coins are embedded in the jelly.
This is for toddlers.
This isn’t a one-off glitch. An image search revealed this same footage appears in at least one other alphabet video, plus this and this short. It’s stock footage. Somewhere, this exists in a library, tagged as “jelly.”
But it only got 790 views, so one might think the scope of the problem can’t be that bad.
That’s still 40 school buses full of toddlers who watched a realistic-looking woman eat money.
Automated content creation tools make it easy to remix anything with everything. Adult content formats blended with children’s educational framing. Fetish aesthetics laundered through alphabet songs. This is the result of Context Blindness.
An AI tool doesn’t know the situational difference between a “mukbang” video for adults and a “letter J” video for a three-year-old. It just sees patterns. It sees a “container” and “objects” and “mouths” and assumes they are interchangeable parts. The tools don’t have judgment. They are blind to the social and developmental context of the images they generate.
And apparently, the people using them are just as blind. They aren’t looking for meaning; they are looking for math. They are betting that if they throw enough context-free garbage at the wall, something will eventually stick to the algorithm.
Oh, and remember our Honeydew friends from Confidently Wrong Slop? They also included E is for elderberry, and the photorealistic teacher ate one raw.
Elderberries must be cooked to remove the poison.
What to Do
When you hit this “disturbing” tier, the strategy shifts from education to boundary setting and safety.
Address the Danger
Because toddlers are natural mimics, you must address the safety hazard immediately and clearly.
Say: “That is really dangerous. We never put money or coins in our mouths because they can make us very sick or hurt us. This video is showing something that is not safe for real people to do.”
This stops the behavior before it starts while teaching them that the screen is not always a safe guide for real life.
The “Report and Remove” Ritual
Once you have addressed the physical danger, involve them in the cleanup to build their digital agency.
Say: “This video is pretending to be for kids, but it is teaching things that are unsafe. We are going to tell YouTube it does not belong here so it doesn’t hurt anyone else.”
Click the “Don’t Recommend Channel” or “Report” button together. This teaches them that when they see something that feels “wrong” or looks dangerous, the right move is to stop and tell a grown-up immediately.
The Scope of the Problem
I didn’t have to look hard to find these examples and many others. That’s the problem.
I searched “ABC learning for kids” and “alphabet songs for toddlers” on YouTube and then I started watching. No deep digging. No obscure corners of the internet. Just the same searches a tired parent makes while trying to get food on the table.
Once a slop video appeared, I watched it.
And then the algorithm went to work.
Each day, more examples would appear. Some were low on views but many had thousands if not more views.
I’ve written two other deep dives into slop content. In both cases, I restricted my search to videos posted in the last hour and still found 25 pieces of slop.
This isn’t a few bad actors. This is the new normal.
The algorithm doesn’t distinguish between Sesame Street and slop. It just knows what gets clicks. And when the slop is infinite, it drowns out everything else.
What This Means (And What We Can Do)
I’m not anti-AI. I use AI to make educational content for kids. In the right hands, it is an incredible tool. But I use it with intention. With review. With 25 years of understanding what a three-year-old brain actually needs. The problem is not the tool. The problem is the absence of care.
For Parents: How to Vet the Feed
You do not need a Ph.D. in child development to spot slop. You just need to look for the “seams” in the production.
Check the “About” Stats: If a channel has existed for six months but has 5,000 videos, it is a factory, not a studio. High volume almost always equals low quality.
The Mute Test: Turn off the sound. Does the story make sense visually? Or is it just a series of disconnected, bright images designed to keep a toddler staring?
Look for “AI Hallucinations”: Look at the hands, the text on background signs, and the way characters move. If a child has six fingers or the “ABC” blocks have squiggles instead of letters, it is an automated upload that no human reviewed.
For Platforms: A Call for Accountability
YouTube’s “Kids Quality Principles” are a good start, but they are currently losing the war against the volume of AI production.
Aggressive Demonetization: Platforms must stop rewarding “quantity over sanity.” If a channel’s upload frequency is physically impossible for a human team, it should be ineligible for ad revenue until a manual review proves its educational value.
Prominent AI Labeling: AI labeling is largely optional, especially for animated content. That needs to be reconsidered given how common errors and dangerous content have become in kids’ videos.
Algorithmic Weighting: The algorithm needs additional weights and inputs when it comes to what to recommend for children. Slop gets clicks and view time because it is so visually disturbing that it stuns you into watching.
For Creators: The Human-in-the-Loop Mandate
You cannot just press “generate” and walk away. The kids on the other end of that screen deserve better.
Be the Adult in the Room: AI can help you brainstorm or render, but a human must check every frame for accuracy, safety, and logic. If you are not willing to watch the video yourself, why are you asking a child to watch it?
Focus on Pedagogy: “Colorful” is not a curriculum. Before you upload, ask yourself: what is the one thing a child will actually learn from this? If the answer is “nothing,” do not post it.
I’m running regular workshops to help support the community of creators who want to do good. You can sign up here.
I’ve spent 25 years making content for kids. I’ve worked with teams who agonized over every single frame. Who test content multiple times with hundreds of children. Who asked “what will they actually learn?” before anything went live.
That care isn’t optional. It’s the whole point.
The slop factories don’t care. They have prompts and publish buttons and an algorithm that rewards volume over value.
But your actions matter, too. Every time you pause a video and ask your kid a question, you’re teaching them to think. Every time you report a channel, you’re pushing back. Every time you choose quality over convenience, you’re casting a vote for what kids content should be.
The tools are neutral. The intentions aren’t.
Choose care.