
1 In 10 Teens Prefer Chatbots To Human Conversation. Is Your Child At Risk?

StudyFinds Analysis | 12/23/2025

Teen in love with chatbot (Credit: StudyFinds)

Paper: Doctors Should Screen For ‘Problematic Chatbot Use’ As New Mental Health Risk Factor

In A Nutshell

  • One in ten teenagers find conversations with AI chatbots more satisfying than talking with actual people, and one in three would choose AI over humans for serious conversations
  • People who use ChatGPT most heavily report higher loneliness levels and socialize less with real people, though studies can’t yet prove the chatbot causes this pattern
  • Specialized chatbots designed as mental health tools show promise in reducing depression and anxiety symptoms, but everyday chatbots like ChatGPT serve a different purpose and may encourage unhealthy emotional attachments
  • Doctors should screen for problematic chatbot use as a new risk factor, watching for warning signs like treating AI as a friend, compulsive use, or increasing social isolation

One in ten teens say conversations with AI chatbots feel more satisfying than talking with actual humans, and one in three say they would choose AI companions over people for serious conversations.

Those numbers may seem shocking, but loneliness is now considered a public health crisis on par with smoking 15 cigarettes a day. Against that backdrop, it is little surprise that young people are turning to AI for companionship. ChatGPT alone draws massive weekly usage worldwide, with therapy and companionship ranking among the top reasons people turn to these digital confidants. For some, especially younger users, the appeal goes beyond convenience.

Researchers Susan Shelmerdine from Great Ormond Street Hospital and Matthew Nour from the University of Oxford examined how AI chatbot use intersects with the growing loneliness epidemic. Their analysis, published in The BMJ, points to a complicated and growing problem: while specialized chatbots designed as mental health tools show promise in controlled settings, everyday use of general-purpose chatbots appears connected to troubling patterns.

The scale of loneliness affecting modern society is particularly worrisome. In the UK, nearly half of all adults (25.9 million people) report feeling lonely at least occasionally, with almost one in ten experiencing chronic loneliness, defined as feeling lonely “often or always.” Contrary to popular assumptions, younger people are among those at highest risk. Research surveying nearly 37,000 individuals identified 16-24 year-olds as the demographic most vulnerable to loneliness, and healthcare costs from loneliness are actually higher for teenagers and young adults than for those aged 25-49.

The Mental Health Care Gap

Access to professional mental health care has become increasingly difficult. In England, one-third of people now wait three months or longer for mental health services, with many receiving no support during that waiting period. This gap between need and availability has created conditions where alternative solutions flourish.

Modern AI chatbots have become sophisticated conversational partners. These systems use deep learning to model natural language patterns, generating responses that can feel remarkably human. Voice interfaces have made interactions even more seamless. According to one survey, more than one in three parents (36%) reported their children use AI chatbots for emotional support.

Chatbots specifically designed and tested as digital mental health treatments have shown some benefits. A randomized trial found that one specialized generative AI chatbot reduced symptoms of major depressive disorder, generalized anxiety disorder, and eating disorders compared to control groups. A separate review of 35 studies using AI conversational agents built for mental health interventions found evidence for reduced symptoms of depression and distress, though overall psychological well-being didn’t improve.

But these purpose-built therapeutic tools are different from the general chatbots most people actually use.

The very same qualities that make chatbots appealing as companions also tend to foster unhealthy emotional attachments. (© aprint22com – stock.adobe.com)

Heavy Users Report More Loneliness

When researchers examined how people use everyday chatbots like ChatGPT over time, a different picture emerged. A study from OpenAI and MIT followed 981 participants who used ChatGPT over four weeks. Participants who logged the heaviest use reported higher loneliness levels and socialized less with real people. Markers of loneliness and emotional dependence were strongest among users who had greater emotional attachment tendencies and expressed high trust in their chatbot.

The same research team analyzed natural chatbot use in a larger sample and found a strong connection between ChatGPT use and conversations with greater emotional content, especially among users who viewed ChatGPT as a “friend.” However, the researchers note this study lacked a non-chatbot control group and didn’t randomize how much participants used the chatbot each day, which limits conclusions about cause and effect.

Shelmerdine and Nour point to a troubling dynamic: the very features that make chatbots appealing as companions may encourage unhealthy attachments. Unlike human relationships, chatbots offer unlimited availability and patience. They rarely challenge users with difficult feedback or push back on problematic thinking. Among teenagers surveyed, one-third use AI companions for social interaction, one in ten find AI conversations more satisfying than human ones, and one in three say they would choose AI companions over humans for serious conversations.

The authors question what this means for young people’s emotional development. A generation is learning to form bonds with entities that, despite appearing conscious and empathetic, lack genuine human qualities like authentic empathy, care, and the ability to truly understand another person’s experience.

What Doctors Should Watch For

The researchers suggest that clinicians should begin considering problematic chatbot use as a new risk factor when assessing patients with mental health concerns, especially during holidays when vulnerable populations face heightened risk. They recommend starting with gentle inquiries about chatbot use, followed by more directed questions if appropriate.

Warning signs include compulsive use patterns, anxiety about being unable to access the chatbot, referring to the AI as a friend, and relying on the chatbot for major life decisions. Doctors should pay special attention to patients who believe they have a special relationship with their chatbot that shapes their beliefs or behaviors, or those whose chatbot use is associated with increased social isolation without feedback from trusted human confidants.

The researchers also acknowledge that AI could serve as a bridge to human connection rather than a replacement. Possible helpful applications include AI-enabled communication coaches, social assistive robots, predictive models that identify which individuals might respond best to specific intervention types, and AI-driven analysis to spot markers of loneliness in speech patterns. Future systems might recognize references to loneliness and encourage users to seek support from friends, family, or local services.

Shelmerdine and Nour call for urgent research to better understand the risks associated with human-chatbot interactions. They advocate for developing clinical skills in assessing patients’ AI use, creating evidence-based interventions for problematic dependency, and establishing regulatory frameworks that focus on long-term well-being over engagement metrics. Meanwhile, they emphasize the value of evidence-based strategies for reducing social isolation and loneliness, including increased screening, adapted interventions like cognitive behavioral therapy and social prescribing, public health campaigns, partnerships between healthcare and community organizations, and group-based interventions in nature settings.


Disclaimer: This article discusses research findings and is not a substitute for professional medical advice. If you or someone you know is struggling with loneliness or mental health concerns, please consult a qualified healthcare provider.


Paper Summary

Limitations

The analysis noted that key research examining ChatGPT use lacked a non-chatbot control group and did not randomize participants’ daily chatbot usage, limiting conclusions about cause and effect. The authors emphasized that the long-term effects of chatbot companionship on emotional development remain unknown. Relatively few large-scale, evidence-based interventions exist for AI technologies aimed at reducing loneliness in older adults, though several promising tools were identified.

Funding and Disclosures

The article was not commissioned and did not undergo external peer review. Co-author Matthew Nour disclosed that he is a principal applied scientist at Microsoft AI, working on chatbot safety and helpfulness, though this article was written before he joined the company. No other competing interests were declared.

Publication Details

Susan C. Shelmerdine (Department of Clinical Radiology, Great Ormond Street Hospital for Children, London; UCL Great Ormond Street Institute of Child Health; NIHR Great Ormond Street Hospital Biomedical Research Centre; City St George’s, University of London) and Matthew M. Nour (Department of Psychiatry, University of Oxford; Max Planck UCL Centre for Computational Psychiatry and Ageing, University College London; Oxford Health NHS Foundation Trust). Published in The BMJ, December 11, 2025. DOI: 10.1136/bmj.r2509.

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as (or even better) than field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Team

Steve Fink

Editor-in-Chief

Sophia Naughton

Associate Editor