{"id":93073,"date":"2026-05-08T09:23:12","date_gmt":"2026-05-08T09:23:12","guid":{"rendered":"https:\/\/diyhaven858.wasmer.app\/index.php\/openai-introduces-new-trusted-contact-safeguard-for-cases-of-possible-self-harm\/"},"modified":"2026-05-08T09:23:12","modified_gmt":"2026-05-08T09:23:12","slug":"openai-introduces-new-trusted-contact-safeguard-for-cases-of-possible-self-harm","status":"publish","type":"post","link":"https:\/\/diyhaven858.wasmer.app\/index.php\/openai-introduces-new-trusted-contact-safeguard-for-cases-of-possible-self-harm\/","title":{"rendered":"OpenAI introduces new &#8216;Trusted Contact&#8217; safeguard for cases of possible self-harm"},"content":{"rendered":"<div xmlns:default=\"http:\/\/www.w3.org\/2000\/svg\">\n<p id=\"speakable-summary\" class=\"wp-block-paragraph\">On Thursday OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party if a user expresses thoughts of self-harm in a conversation. The feature allows an adult ChatGPT user to designate another person, such as a friend or family member, as a trusted contact within their account. In cases where a conversation may turn to self-harm, OpenAI will now encourage the user to reach out to that contact. It also sends an automated alert to the contact, encouraging them to check in with the user.<\/p>\n<p class=\"wp-block-paragraph\">OpenAI has faced a wave of lawsuits from the families of people who died by suicide after talking with its chatbot. In a number of cases, the families say ChatGPT encouraged their loved one to kill themselves \u2014 or even helped them plan it out.<\/p>\n<p class=\"wp-block-paragraph\">OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers alert the company\u2019s system to possible suicidal ideation, and the system then relays the information to a human safety team. 
The company claims that every time it receives this kind of notification, the incident is reviewed by a human. \u201cWe strive to review these safety notifications\u00a0in under one hour,\u201d the company says. <\/p>\n<p class=\"wp-block-paragraph\">If OpenAI\u2019s internal team decides that the situation represents a serious safety risk, ChatGPT proceeds to send the trusted contact an alert \u2014 either by email, text message, or an in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. It does not include detailed information about what was being discussed, as a means of protecting the user\u2019s privacy, the company says.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" height=\"383\" width=\"680\" src=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?w=680\" alt=\"\" class=\"wp-image-3120418\" srcset=\"https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png 2268w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=150,84 150w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=300,169 300w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=768,432 768w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=680,383 680w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=1200,675 1200w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=1280,720 1280w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=430,242 430w, 
https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=720,405 720w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=900,506 900w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=800,450 800w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=1536,864 1536w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=2048,1152 2048w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=668,375 668w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=1097,617 1097w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=708,398 708w, https:\/\/techcrunch.com\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-07-at-12.26.25-PM.png?resize=50,28 50w\" sizes=\"auto, (max-width: 680px) 100vw, 680px\"\/><figcaption class=\"wp-element-caption\"><span class=\"wp-block-image__credits\"><strong>Image Credits:<\/strong> OpenAI<\/span><\/figcaption><\/figure>\n<p class=\"wp-block-paragraph\">The Trusted Contact feature follows the safeguards the company introduced last September, which gave parents some oversight of their teens\u2019 accounts, including safety notifications designed to alert a parent if OpenAI\u2019s system believes their child is facing a \u201cserious safety risk.\u201d For some time now, ChatGPT has also shown automated prompts urging users to seek professional mental health services should a conversation trend toward self-harm.<\/p>\n<p class=\"wp-block-paragraph\">Crucially, Trusted Contact is optional and, even if the protection is activated on a particular account, a user can sidestep it by opening additional ChatGPT accounts. 
OpenAI\u2019s parental controls are also optional, presenting a similar limitation.<\/p>\n<p class=\"wp-block-paragraph\">\u201cTrusted Contact is part of OpenAI\u2019s broader effort to build AI systems that\u00a0help people during difficult moments,\u201d the company wrote in the announcement post. \u201cWe will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress.\u201d<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>On Thursday OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party if mentions of self-harm are expressed within a conversation. The feature allows an adult ChatGPT user to designate another person as a trusted contact within their account, such as a friend or family member. 
In cases where a [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":60216,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_daextam_enable_autolinks":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[11],"tags":[],"class_list":["post-93073","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech-news"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/diyhaven858.wasmer.app\/wp-content\/uploads\/2026\/03\/GettyImages-2195918462.jpg","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts\/93073","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/comments?post=93073"}],"version-history":[{"count":0,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts\/93073\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/media\/60216"}],"wp:attachment":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/
wp-json\/wp\/v2\/media?parent=93073"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/categories?post=93073"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/tags?post=93073"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}