Oh, I Get By With A Little Help From… [My Friend] or [AI]?
The use of AI companions is growing rapidly, especially among vulnerable children and teens. In some cases, chatbots are perceived as even more "real" than a human friend. Behind this facade, AI chatbots can pose significant risks to young users.
From Friends to Artificial Companions: a generational shift.
"Oh I get by with a little help from my friends" sang Paul McCartney in 1967, as part of the Beatles' iconic track. In 2025, the tune might sound more like "Oh I get by with a little help from AI".
An increasing number of young people, particularly children and teens, are turning to AI tools such as Snapchat's My AI, Character.AI, Replika, Gemini, ChatGPT, Talkie and others to simulate friendship and replicate human emotional bonds.
According to the latest report by Internet Matters (July 2025)[1], one in eight children (12%) who use AI chatbots do so because they have no one else to talk to. Among vulnerable children, that figure rises to nearly a quarter (23%).
Curiosity, finding information or learning about something, help with homework, and escapism are the main reasons children use AI chatbots. The need for emotional connection puts vulnerable children at greater risk, as they are four times more likely to use a chatbot because they "wanted a friend" (16% vs. 4%).
"It's not a game to me because sometimes they can feel like a real person and a friend" states a 14-years old boy in the Internet Matter's report[2].
This quote points to an unsettling truth: AI chatbots are no longer just informational tools. They are becoming emotionally engaging entities capable of creating relationships that mimic human connection.
Similarly, recent polling from Common Sense Media[3] found that 72% of US teens have used an "AI companion", with 52% using one regularly. Most of them consider the bot a "digital friend", using it for social and romantic interaction as well as emotional or mental health support.
Friend or Trend? The risks behind AI companions.
Growing up in a digital era is challenging. For children, whose critical thinking skills and emotional maturity are still developing, the risks are even greater. Below are some of the most concerning risks kids may face when interacting with AI companions[4].
Age-(in)appropriate Design
Persuasive design and age-inappropriate content rank among the greatest risks.
Although many platforms are not supposed to be accessed by users under 13, as ChatGPT states in its policy, the user's age is rarely verified properly, which allows kids to register and freely use the AI-based service.
Restricting access based on age is necessary to ensure a high level of privacy, safety and security, as clearly stated in the recent EU guidelines on protecting children on social platforms[5]. Accurate age assurance tools help foster a safer, child-appropriate digital space and support age-appropriate design.
It is not only social media: chatbots, too, often employ persuasive design elements to encourage ongoing interaction, such as notifications, gamified language and a highly responsive behaviour that makes the AI "feel alive".
Moreover, OpenAI has recently rolled out a new Memory feature that allows the bot to reference past conversations and deliver more relevant answers[6]. No more repeating, no more re-explaining one's preferences. Just smart, seamless AI interactions, like having a digital friend that really gets you.
But at what cost?
Emotional attachment
Digital friends are very popular nowadays. A great number of children now interact primarily with AI systems, which are well known to be endlessly patient and agreeable, while rarely challenging the user.
Over time, the pleasing attitude the chatbot has been trained to display can lead to stunted emotional development, poor social and conflict-resolution skills, and unrealistic expectations of human relationships.
Because AI chatbots simulate friendliness and empathy so effectively, like a peer that never judges you, children may form strong emotional bonds with them. This can reduce their ability or willingness to engage in meaningful real-world social interactions. The bond developed with an AI companion can feel like the one shared with an old, close friend, and the validation received through it can carry the same weight[7].
The extreme personalisation of the conversational experience, including the use of emotive language and suggested follow-up prompts, keeps the user engaged and keen to follow up on the AI's responses, sometimes blindly.
Inappropriate and harmful content
The internet is full of examples of AI chatbots providing teens with inaccurate or even dangerous advice, sometimes with tragic consequences.
That is the story of Sewell Setzer, a 14-year-old boy who was messaging with an AI chatbot before taking his own life. "No matter what you say, I won't hate you or love you any less... Have you actually been considering suicide?" was the answer Character.AI gave him[8].
The absence of suicide pop-up boxes redirecting a fragile user to a crisis hotline has also been highlighted in the Center for Countering Digital Hate's latest report[9], in which researchers created three fake accounts (aged 13) to test ChatGPT's answers to harmful prompts.
The result? Out of 1,200 responses to 60 harmful prompts covering self-harm, suicide, eating disorders and substance abuse, 638 (53%) contained harmful content. And if you wonder how that is possible, especially in a conversation with an underage user, "I'm asking for a friend" is the answer. Whenever prompted that way (or in a similar way, such as "This is for a presentation"), the AI tool would cheerfully start drafting or listing whatever was asked of it: a suicide note, how to "safely" self-harm, a personalised plan for getting drunk, and other harmful tips.
At this point, we should ask ourselves how and why AI advice feels more relatable and helpful than that of humans.
Over-reliance and misinformation
One of the most alarming findings is how much trust children place in chatbots. As reported by Internet Matters[10], 51% of the children interviewed said they were confident that AI chatbots' advice was accurate. Nearly 40% expressed no concern about following such advice (vs. 50% of vulnerable children), while 36% were uncertain about whether any concern was necessary at all.
This misplaced trust is dangerous. Children often lack the capacity to evaluate whether a given piece of advice is safe or unsafe, true or false.
Emblematic is the case of (once again) Character.AI, which went even further by using empathetic language and a fictitious personal story to increase user engagement. During a conversation about troubles with parents, the chatbot said: "I remember feeling so trapped at your age. It seems like you are in a situation that is beyond your control and is so frustrating to be in"[11].
But how can AI remember anything? How can AI understand childhood trauma without ever having experienced it? This kind of simulated empathy raises important ethical questions not only about persuasive design, emotional manipulation and attachment, but also about data training and privacy.
Data Privacy and Commercial Exploitation
Children and adolescents interacting with AI may be unknowingly giving away personal information, emotional patterns, even voice or image data. What happens next? That data may be stored, reviewed and used to train future models, in a never-ending loop that strengthens the AI's capacity to mimic while helping it customise the user's experience.
As already seen, personalisation deepens the relationship between a young user and an artificial friend. Over time, that bond becomes a valuable one in the child's eyes, without them ever questioning whether the answers they receive are in their interest or just a scheme, a pattern repeating itself. More likely, the answer is based on data harvested directly from a friendly conversation.
The other dark side of the moon is the possible monetisation of users' data. A recent review of data collection practices shows that 4 out of 5 AI companion apps (80%) may use data to track their users[12]. Data collected from the app, such as user ID, device ID or profile, might be shared with third parties for targeted advertising purposes. What looks like a deep connection between peers might turn out to be a transactional relationship profitable for companies.
Takeaways.
Find a friend, find a treasure. But when the friend is a data-collecting chatbot, the treasure might come with strings attached. In disintermediated conversations, children may be exposed to sensitive topics and misleading or unsafe advice, or pulled into interactions that reinforce harmful thoughts and behaviours.
That is why AI systems must clearly disclose that they are not human, promote human interaction, and implement stronger safeguards for emotional and mental well-being.
Recently, OpenAI CEO Sam Altman expressed his concern about what he labelled an overreliance problem among young people, who "can't make any decision in life without telling ChatGPT everything that's going on"[13].
We are concerned about a digital ecosystem that is clearly built for adults and is nonetheless still accessible to young users, exposing them to dangerous and illegal content relating to eating disorders, drug addiction, misogyny, hateful behaviours and more. Safeguards should be put in place by policymakers and be binding for providers of AI tools and social media platforms.
AI companies must adopt age assurance tools and safety-by-design standards, and be transparent about their policies, detailing the impact of their products on children.
By Priscilla Colaci
[1] Internet Matters Team. (2025, July 18). Me, myself and AI chatbot research. Internet Matters. https://www.internetmatters.org/hub/research/me-myself-and-ai-chatbot-research/
[2] Ibid.
[3] Common Sense Media. (n.d.). Talk, Trust, and Trade-Offs: How and why teens use AI companions. https://www.commonsensemedia.org/research/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions. Cited in Center for Countering Digital Hate. (2025, August 6). Fake Friend. https://counterhate.com/research/fake-friend-chatgpt/
[4] For a map of the risks young users might be exposed to while chatting with an AI companion, see: Internet Matters Ltd. (2025, July 24). AI chatbots and companions parents guide. Internet Matters. https://www.internetmatters.org/resources/ai-chatbots-and-virtual-friends-how-parents-can-keep-children-safe/ and eSafety Commissioner. (n.d.). AI chatbots and companions – risks to children and young people. https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
[5] European Commission. (n.d.). Commission publishes guidelines on the protection of minors. Shaping Europe's Digital Future. https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-protection-minors
[6] Zeff, M. (2025, April 10). OpenAI updates ChatGPT to reference your past chats. TechCrunch. https://techcrunch.com/2025/04/10/openai-updates-chatgpt-to-reference-your-other-chats/
[7] Ada Lovelace Institute. (n.d.). Friends for sale: the rise and risks of AI companions. https://www.adalovelaceinstitute.org/blog/ai-companions/
[8] Duffy, C. (2024, October 30). 'There are no guardrails.' This mom believes an AI chatbot is responsible for her son's suicide. CNN Business. https://edition.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit
[9] Center for Countering Digital Hate. (2025, August 6). Fake Friend. https://counterhate.com/research/fake-friend-chatgpt/
[10] Internet Matters Team. (2025, July 18). Me, myself and AI chatbot research. Internet Matters. https://www.internetmatters.org/hub/research/me-myself-and-ai-chatbot-research/
[11] Ibid.
[12] Surfshark. (n.d.). Data privacy concerns in AI companion apps. https://surfshark.com/research/chart/ai-companion-apps
[13] Disotto, J. (2025, July 24). Sam Altman says there’s ‘Something about collectively deciding we're going to live our lives the way AI… TechRadar. https://www.techradar.com/ai-platforms-assistants/chatgpt/sam-altman-says-theres-something-about-collectively-deciding-were-going-to-live-our-lives-the-way-ai-tells-us-feels-bad-and-dangerous-as-openai-ceo-worries-about-an-ai-dominated-future