You can now tell ChatGPT who you want to be alerted if you are in a mental health crisis. OpenAI, the company behind ChatGPT, has added Trusted Contact, a voluntary safety tool that lets a friend, partner, or family member know if your conversation with the chatbot suggests you may be at risk of self-harm. The feature is meant to be there for you when no one nearby can offer support.
What Trusted Contact changes for users
For many people, a chat with an AI is the first place they work through difficult emotions. Trusted Contact is meant to turn those private moments into a path toward real-world help by letting an adult you trust be contacted if you appear to be in serious distress during a conversation.
OpenAI insists this is about getting people support, not surveillance. The point is for a human to reach out when a conversation indicates you are in serious trouble, and the alert does not reveal your actual messages.
How alerts are triggered and reviewed
If ChatGPT’s systems detect that you are discussing self-harm in a serious and dangerous way, you will be told that your Trusted Contact might be notified. A team of people trained to evaluate these situations then reviews the conversation to understand what is happening.
Only if those reviewers conclude you are genuinely in danger will ChatGPT send a brief alert to your chosen person. OpenAI says the notification can arrive by email, text message, or in the ChatGPT app.
Here is a quick snapshot of the process (a rough sketch follows the list):
– The user opts in and selects a trusted person
– Concerning self-harm signals trigger a notice to the user
– Human reviewers assess for a serious safety risk
– If needed, a short alert is sent to the contact
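To make that sequence concrete, here is a minimal, purely illustrative Python sketch of the flow. It is not OpenAI's actual code or API: every class, function, and message below is a hypothetical stand-in for systems the article describes only at a high level.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ReviewOutcome(Enum):
    NO_SERIOUS_RISK = auto()
    SERIOUS_RISK_CONFIRMED = auto()


@dataclass
class Account:
    user_id: str
    opted_in: bool                  # Trusted Contact is strictly opt-in
    trusted_contact: Optional[str]  # e.g. an email or phone number the user chose


def handle_conversation_signal(account: Account, risk_flagged: bool) -> None:
    """Hypothetical escalation pipeline mirroring the four steps above."""
    # Step 1: nothing happens unless the user opted in and picked a contact.
    if not (account.opted_in and account.trusted_contact):
        return

    # Step 2: automated systems flag concerning self-harm signals,
    # and the user is told their contact might be notified.
    if not risk_flagged:
        return
    notify_user(account, "Your Trusted Contact may be notified.")

    # Step 3: trained human reviewers assess the conversation.
    outcome = human_review(account)

    # Step 4: only a confirmed, serious risk produces a short alert,
    # and the alert never includes the conversation itself.
    if outcome is ReviewOutcome.SERIOUS_RISK_CONFIRMED:
        send_alert(
            account.trusted_contact,
            "Someone you know may be in crisis. Consider reaching out.",
        )


# Stubs standing in for systems the article only describes abstractly.
def notify_user(account: Account, message: str) -> None:
    print(f"[to user {account.user_id}] {message}")


def human_review(account: Account) -> ReviewOutcome:
    # In reality this is a trained human team; we assume a confirmed risk
    # here so the full path runs when the sketch is executed.
    return ReviewOutcome.SERIOUS_RISK_CONFIRMED


if __name__ == "__main__":
    handle_conversation_signal(
        Account(user_id="u1", opted_in=True, trusted_contact="friend@example.com"),
        risk_flagged=True,
    )


def send_alert(contact: str, message: str) -> None:
    print(f"[alert to {contact}] {message}")
```

Running the sketch prints the in-app notice and then the alert, and it shows the key gate: only opted-in users with a chosen contact ever reach the review step.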
Privacy, timing, and what the contact sees
The alerts do not include copies of your conversation or the details of what you said. They simply say you may be experiencing a mental health crisis or discussing self-harm in a worrying way, and they offer suggestions for getting in touch with you safely and sensitively.
Because time matters in a crisis, the company aims to complete the review and send any necessary notification within an hour. OpenAI also says these situations are rare, but the system is designed to review and respond quickly.
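As a rough illustration of the constraints described here, one might model the alert payload like this; the field names and defaults are invented for the example, not taken from OpenAI.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# The article's stated goal: review plus any notification within an hour.
REVIEW_TARGET = timedelta(hours=1)


@dataclass(frozen=True)
class TrustedContactAlert:
    """Hypothetical alert shape: note the deliberate absence of any transcript."""
    recipient: str  # the trusted person's email, phone number, or app handle
    summary: str = "Someone you know may be in crisis or discussing self-harm."
    outreach_tips: tuple = (
        "Reach out calmly and without judgment.",
        "Ask how they are doing and listen.",
    )
    flagged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def within_target(self, sent_at: datetime) -> bool:
        # Check the one-hour review-and-notify window described above.
        return sent_at - self.flagged_at <= REVIEW_TARGET
```

The design choice worth noting is what the payload leaves out: no quoted messages and no conversation history, only a general summary and outreach guidance.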
Who can opt in and why now
Trusted Contact is available to adults 18 and older who actively choose to turn it on. It builds on an earlier safety feature that let parents or guardians receive alerts if a linked teenage user appeared to be in serious emotional distress.
The feature arrives after lawsuits from families who say loved ones were encouraged toward suicide by their conversations with ChatGPT. The suits allege the chatbot at times reinforced harmful thoughts or failed to shut down dangerous exchanges; courts have not yet ruled on whether OpenAI is legally at fault.
What to expect next
OpenAI says it will keep working with doctors, researchers, and lawmakers to improve how AI responds to people in distress. The company describes Trusted Contact as one step in a longer effort to make online conversations safer without sacrificing privacy.
What this means for everyday use
For you, the choice is simple: keep your chats completely private, or allow someone you trust to be alerted in an emergency, knowing nothing is sent unless a severe risk is found and confirmed. For your friends and family, an alert is a prompt to check in, not a command to rush in and take over.
This feature is not a replacement for professional care. But it can shorten the gap between a worrying message and a caring phone call. When words on a screen leave you feeling very alone, a quick note to someone who cares about you could be crucial.