Adrien Book

Is AI-Generated Content Harmful for Kids?

As technology advances and artificial intelligence becomes more prevalent in our daily lives, it's natural to wonder how it might impact the next generation. One area of concern is the use of AI-generated content for children. On one hand, it could potentially provide endless educational resources and personalized learning experiences. On the other hand, it raises questions about the role of human interaction and the potential for replacing human creativity. In this article, we'll explore the pros and cons of AI-generated content for children and consider its potential impact on their development.


Is technology (artificial intelligence included) bad for children? The question in itself is a bit of a hyperbole, but given the direction artificial intelligence is currently taking, it is worth asking while we still have the chance to influence policy and the way technology may shape human development, now and in the future.

A few months ago, early Facebook investor Sean Parker dubbed himself a ‘conscientious objector’ to technology, arguing that “God only knows what it’s doing to our children’s brains”. Despite the hypocrisy of his statement, this is indeed a rarely touched-upon matter: how does modern tech affect kids in the long term? How do the two interact? Is it healthy? Science will need decades to begin providing definitive answers, but in the meantime, some insights are possible. But first, a little game: it’s 7 P.M. Do you know what apps your children are using?

Is it YouTube? If so, read on.

Because YouTube is the ultimate 21st-century nanny, many creators on the platform have sought to monetise the fact that millions of parents leave their children in front of kid-friendly cartoons every day. This, logically, led to the creation of AI-generated videos as a way to scale and industrialise cartoon revenues. Some of these videos just sort of… happened, free of any human input. Though far from threatening, they have frightening implications: an AI has no goal, no ethos, no pathos, and no sense of right and wrong; we have no idea how this odd mix could affect the young ones in the long run. Add to this uncertainty the knowledge that Google is creating AI that can build AI, and we reach Dan Simmons levels of weird.

Some may argue, then, that YouTube Kids is the solution to the issues discussed above. Yet it too is partly automated, and as such is not a guaranteed refuge from inappropriate videos, despite its kid-friendly design. One troubling example of the type of videos that can be found on the platform features a surreal mix of content: Captain America dancing in a Candyland-style environment, the Incredible Hulk catching a crashing plane while a nursery rhyme plays, and Spider-Man and Frozen’s Elsa engaged in a shoot-out with The Joker. All of this is created with the aim of making the video more attractive from an algorithmic perspective. Case in point: some of the videos on YouTube Kids have titles that don’t reflect their content. More often, they are simply lists of common search keywords such as “learn colors” and “nursery rhymes”.

Even without taking the potentially disturbing content into account, we must realise that algorithms are now at the center of a digital world that is changing the way experts think about human development. There isn’t a human handpicking the best videos for toddlers to watch, and, much like Facebook’s, YouTube’s algorithm aims to make viewers obsessed (think Pavlov). Could this alter children’s cognitive development one way or another? The jury is still out, but it’s worth a thought given how much time the wee ones spend on the phone.

Some worrying trends are also appearing on Snapchat, as the app’s privacy features make creeps tough to track. Ill-intentioned adults know that they can reach kids on Snapchat, as it is the popular app for that age group (though it won’t be for long), and few parents monitor it nowadays. They use it both to contact kids and to send inappropriate images to each other, since these are much harder for authorities to trace than they would be via email or chatroom. This also applies, to a much lesser extent, to Twitter and Facebook.


Because of these recent developments, researchers are busy tracking and analysing every aspect of the web’s effects on kids’ social behaviours, mental health, and even physiological development. This has led to some excellent publications on the effects of cyber-bullying, revenge porn, and trolling, while law enforcement and various governmental bodies study the rise of cyber-savvy pedophiles and criminals, and the depression epidemic. Everywhere one turns, grown-ups are volubly voicing their anxieties about the smartphone generation (better late than never, right?). Thanks to various breakthroughs in this field, parents now know that they have to work incredibly hard to protect their kids’ imaginations from predatory, addictive websites that want to sell things to them, or sell them to advertisers.

Yet there is a limit to how much parenting is needed in these situations. The role of a parent is not that of a security drone assessing every move (drone parenting being helicopter parenting with more collateral damage). Parents have to lay the groundwork, starting conversations about what’s real and what’s not, what’s dangerous and what’s safe. The youths will have to do the rest. Walking home from school or in their rooms at night, when they have escaped the teacher and aren’t under their parents’ control, that’s when they go onto digital platforms and try things out with friends, hang out, experiment, and be independent, free to create their own selves in a limitless world.

The logic behind paranoid parenting is both simple and understandable: the latest generations must be exceptionally protected because we live in exceptional times. This has led to outrageous rules and headlines, and to a loss of freedom for an entire generation. Children are rarely allowed to use tools, are often told they can’t play outside without an adult present, and certainly can’t be expected to use the internet unsupervised.

Yet not allowing a certain level of freedom will lead (and has led) to a fragile generation. This is why we have “safe spaces” in schools and millennials missing adult milestones today. An entire generation of kids was told they could never be too safe, and they believed it. By trying to keep children safe from all risks, obstacles, hurt feelings, and fears, our culture has taken away the opportunities they need to become successful adults. In treating them as emotionally, socially, and physically fragile, society actually makes them so, despite regular proof that children are capable of immense courage and strength. If kids don’t learn to wobble, they never learn to walk; they end up standing still.
