ChatGPT introduces Trusted Contact feature to alert loved ones about self-harm discussions

OpenAI's new safety tool connects users in crisis with someone they trust, backed by human review

OpenAI has rolled out Trusted Contact, a new safety feature that alerts designated friends or family members when ChatGPT’s systems detect serious self-harm discussions. The feature marks the company’s latest attempt to address growing concerns about AI’s role in mental health conversations.

The tool allows adults to nominate one person they trust to receive notifications if ChatGPT detects concerning conversations about self-harm. While these situations are rare, the feature aims to create another safety net for users in crisis by connecting them with real-world support.

How the alert system works

Setting up Trusted Contact is straightforward, and the alert process includes multiple safeguards:

  • Users can add one adult from their ChatGPT settings
  • The nominated person must accept the invitation within one week
  • AI systems first flag potentially concerning conversations
  • Specially trained human reviewers examine each case before sending alerts
  • Notifications are deliberately vague to protect privacy; they don’t include chat details or transcripts

When the system triggers an alert, it first tells the user that their Trusted Contact might be notified and suggests conversation starters for reaching out. Only after human review does the designated person receive a brief notification by email, text, or in-app message.
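OpenAI hasn’t published implementation details, but the flow it describes is a classic human-in-the-loop escalation pipeline: an automated classifier can only queue a case, a human decision gates the actual notification, and the contact-facing message carries no conversation content. The sketch below is a hypothetical illustration of that shape, not OpenAI’s code; every name in it (Flag, ReviewDecision, RISK_THRESHOLD, notify_contact, and so on) is invented for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ReviewDecision(Enum):
    SEND_ALERT = auto()
    NO_ACTION = auto()


@dataclass
class Flag:
    user_id: str
    risk_score: float  # output of an automated classifier, 0.0-1.0


RISK_THRESHOLD = 0.9  # hypothetical; OpenAI has not published a threshold


def tell_user(user_id: str, message: str) -> None:
    # Stand-in for an in-app message shown to the user.
    print(f"[in-app to {user_id}] {message}")


def notify_contact(user_id: str, message: str) -> None:
    # Stand-in for delivery by email, text, or in-app message.
    print(f"[to trusted contact of {user_id}] {message}")


def handle_flag(flag: Flag, reviewer_decision: ReviewDecision) -> None:
    """Hypothetical human-in-the-loop escalation mirroring the described flow."""
    # Step 1: automated systems flag potentially concerning conversations.
    if flag.risk_score < RISK_THRESHOLD:
        return  # below threshold, nothing leaves the automated layer

    # Step 2: the user is told first that their Trusted Contact may be
    # notified, and is offered conversation starters for reaching out.
    tell_user(flag.user_id,
              "Your Trusted Contact may be notified. Here are some ways "
              "to start that conversation yourself.")

    # Step 3: only an explicit human reviewer decision releases an alert.
    if reviewer_decision is ReviewDecision.SEND_ALERT:
        # Step 4: the message is deliberately vague; no chat details or
        # transcripts are ever included in the payload.
        notify_contact(flag.user_id,
                       "Someone who trusts you may need your support right now.")


if __name__ == "__main__":
    handle_flag(Flag(user_id="u123", risk_score=0.95),
                ReviewDecision.SEND_ALERT)
```

The point of this structure, as the article describes it, is that the classifier alone can never message a third party: the notification path is gated by a human decision, and the contact-facing payload is built without any reference to the conversation itself.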

Building on existing safety measures

The feature expands OpenAI’s parental control notifications, which already alert parents when teen accounts show signs of distress. Trusted Contact extends the same concept to all adult users.

“Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress,” said Dr. Arthur Evans, CEO of the American Psychological Association. The organization worked with OpenAI to develop the feature alongside other mental health experts.

OpenAI aims to complete human review and send any resulting notification within one hour of a flag, though the company acknowledges no system is perfect and notifications may not always reflect a user’s exact situation.

Part of broader AI safety conversations

The launch comes as tech companies face increased scrutiny over AI’s impact on mental health. Several high-profile cases have raised questions about chatbots’ responses to vulnerable users, particularly around self-harm discussions.

ChatGPT already includes several safety measures:

  • Refusing to provide self-harm instructions
  • Suggesting breaks after extended use
  • Connecting users with crisis hotlines and emergency services
  • Model training informed by 170+ mental health experts to improve distress detection

The feature reflects growing recognition that AI systems shouldn’t operate in isolation but should connect users to real-world care and relationships. OpenAI developed Trusted Contact with input from its Global Physicians Network of 260+ licensed physicians across 60 countries.

Users can remove or change their Trusted Contact at any time, and designated contacts can also remove themselves through ChatGPT’s help center. The feature is optional and designed to supplement, not replace, professional mental health care and crisis services.