{"id":92675,"date":"2026-05-07T22:00:32","date_gmt":"2026-05-07T22:00:32","guid":{"rendered":"https:\/\/diyhaven858.wasmer.app\/index.php\/chatgpt-adds-trusted-contact-feature-to-send-alerts-when-conversations-get-dangerous\/"},"modified":"2026-05-07T22:00:32","modified_gmt":"2026-05-07T22:00:32","slug":"chatgpt-adds-trusted-contact-feature-to-send-alerts-when-conversations-get-dangerous","status":"publish","type":"post","link":"https:\/\/diyhaven858.wasmer.app\/index.php\/chatgpt-adds-trusted-contact-feature-to-send-alerts-when-conversations-get-dangerous\/","title":{"rendered":"ChatGPT Adds \u2018Trusted Contact\u2019 Feature to Send Alerts When Conversations Get Dangerous"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/gizmodo.com\/app\/uploads\/2025\/05\/OpenAI-ChatGPT-1280x853.jpg\" \/><\/p>\n<div>\n<p data-start=\"186\" data-end=\"300\">OpenAI announced today that it\u2019s rolling out a new mental health-focused safety feature for adult ChatGPT users.<\/p>\n<p data-start=\"302\" data-end=\"515\">Starting today, ChatGPT users can add what the company calls a \u201ctrusted contact\u201d who may be notified if the AI\u2019s automated systems and trained reviewers determine that the user has engaged in discussions about self-harm.<\/p>\n<p data-start=\"517\" data-end=\"1047\">The new feature arrives amid growing scrutiny over the impact AI and other digital platforms can have on mental health. 
Last year, OpenAI disclosed that 0.07% of its weekly users displayed signs of \u201cmental health emergencies related to psychosis or mania,\u201d while 0.15% expressed risk of \u201cself-harm or suicide,\u201d and another 0.15% showed signs of \u201cemotional reliance on AI.\u201d Given the company\u2019s claim that roughly 10% of the world\u2019s population uses ChatGPT weekly, those percentages taken together could amount to nearly three million people.<\/p>\n<p data-start=\"1049\" data-end=\"1281\">The trusted contact feature expands on ChatGPT\u2019s existing parental safety notifications, which alert parents when a linked teen account shows signs of distress. Instagram introduced similar parental alerts earlier this year.<\/p>\n<p data-start=\"1283\" data-end=\"1482\">Now, OpenAI is offering these alerts to its adult users. The company said the feature was developed with guidance from mental health and suicide prevention clinicians, researchers, and organizations.<\/p>\n<p data-start=\"1484\" data-end=\"1773\">\u201cTrusted Contact is designed to encourage connection with someone the user already trusts,\u201d the company said in its announcement. \u201cIt does not replace professional care or crisis services, and is one of several layers of safeguards to support people in distress.\u201d<\/p>\n<p data-start=\"1775\" data-end=\"1896\">OpenAI added that ChatGPT will still encourage users to contact crisis hotlines or emergency services when necessary.<\/p>\n<p data-start=\"1898\" data-end=\"2161\">The feature can be enabled by any user 18 years or older through ChatGPT\u2019s settings. From there, users can nominate another adult to serve as their trusted contact by submitting details such as the contact\u2019s phone number and email address.<\/p>\n<p data-start=\"2163\" data-end=\"2332\">The trusted contact will then receive an invitation explaining the feature and will have one week to accept. 
If they decline, the user can nominate someone else instead.<\/p>\n<p data-start=\"2334\" data-end=\"2668\">Once the feature is active, OpenAI\u2019s automated monitoring systems can flag when a user may be discussing self-harm in a manner that suggests a serious safety concern. The system will then notify the user that their trusted contact may be alerted and encourage them to reach out directly. It will even provide some recommended conversation starters.<\/p>\n<p data-start=\"2670\" data-end=\"2832\">The company said a small team of specially trained reviewers will then assess the situation and determine whether notifying the trusted contact is appropriate.<\/p>\n<p data-start=\"2834\" data-end=\"3224\">If OpenAI decides to send an alert, the trusted contact could receive it through email, text message, or an in-app notification. The alert will explain only in general terms that self-harm was discussed and encourage the trusted contact to check in. It will also include guidance on how to navigate those conversations.<\/p>\n<p data-start=\"3226\" data-end=\"3351\">OpenAI noted that the notifications will not include specific details or chat transcripts to protect user privacy.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI announced today that it\u2019s rolling out a new mental health-focused safety feature for adult ChatGPT users. Starting today, ChatGPT users can add what the company calls a \u201ctrusted contact\u201d who may be notified if the AI\u2019s automated systems and trained reviewers determine that the user has engaged in discussions about self-harm. 
The new feature [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":92676,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_daextam_enable_autolinks":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[11],"tags":[],"class_list":["post-92675","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech-news"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/diyhaven858.wasmer.app\/wp-content\/uploads\/2026\/05\/OpenAI-ChatGPT-1200x675.jpg","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts\/92675","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/comments?post=92675"}],"version-history":[{"count":0,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts\/92675\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/media\/92676"}],"wp:attachment":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/
wp-json\/wp\/v2\/media?parent=92675"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/categories?post=92675"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/tags?post=92675"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}