Millions willing to trust AI with therapy, teaching their children, and medical advice

Global study of 31,000 adults shows widespread willingness to hand over crucial life roles to ChatGPT-style tools, despite unknown long-term effects

People across the world are ready to trust AI with some of their most important personal needs. A new study shows millions would happily let ChatGPT-style tools counsel them through mental health crises, teach their children, or even replace their doctor.

The research, led by Bournemouth University, surveyed nearly 31,000 adults across 35 countries about their willingness to use AI large language models for critical life roles. The results show a striking shift in how people view artificial intelligence – not just as a tech tool, but as a potential replacement for human experts.

Key findings

The study found surprisingly high levels of trust in AI across multiple areas:

  • Mental health counseling: 61% globally and 41% in the UK would use AI for therapy services
  • Medical advice: 45% globally and 25% in the UK would trust AI as their doctor
  • Teaching children: 50% globally and 25% in the UK would delegate education to AI
  • Friendship and companionship: 75% globally and over 50% in the UK would chat with AI as a friend

Trust levels were highest in countries where healthcare is expensive or hard to access. People in these regions seem more willing to turn to AI for quick medical answers when traditional healthcare isn’t available.

Why does it matter?

This shift reflects real problems in how people access essential services. Long waiting times for mental health care mean some people would rather talk to a chatbot than wait months for professional help.

But the researchers warn about serious risks. Dr Ala Yankouskaya, who led the study, tested some AI mental health tools herself. “I found the language used very vague and confusing because developers are careful not to jump into providing diagnoses,” she said. “So it is no substitute for speaking to a health professional.”

The education findings particularly worried the research team. “It really knocked me down when I saw how many people would be willing to delegate AI to the role of teaching their children,” Dr Yankouskaya said.

Scientists don’t yet know the long-term effects of AI use on children’s memory and thinking skills. There’s concern that excessive reliance on AI search tools could physically change the brain, potentially shrinking the hippocampus – the region involved in memory and learning.

The context

The high trust in AI friendship makes sense when you understand how these tools work. ChatGPT and similar systems can retain past conversations and adapt their tone to match each user. This creates what feels like a personal, judgment-free relationship.

“AI tools come across as a friend who knows you well and understands you,” Dr Yankouskaya explained. “ChatGPT can remember every chat it has had with a user and it feels like a private conversation between them.”

People in the UK are already familiar with NHS chatbots that use similar technology. This exposure may be normalising AI for other health-related uses.

But there’s a hidden problem with AI mental health advice. Traditional counselling services might direct someone in crisis to specific help such as the Samaritans. AI tools, however, are designed to keep users chatting in a “relaxed” conversation. For someone in a mental health crisis, this approach could be harmful rather than helpful.

The researchers say society needs better understanding of how AI works and what it can’t do. As these tools move from experimental to everyday use, people need to know their limits – especially before handing over roles as important as teaching children or providing medical care.