{"id":45287,"date":"2025-10-28T10:33:36","date_gmt":"2025-10-28T02:33:36","guid":{"rendered":"https:\/\/www.1ai.net\/?p=45287"},"modified":"2025-10-28T10:33:36","modified_gmt":"2025-10-28T02:33:36","slug":"openai-%e6%8a%ab%e9%9c%b2%ef%bc%9a%e6%af%8f%e5%91%a8%e6%9c%89%e8%b6%85%e8%bf%87%e4%b8%80%e7%99%be%e4%b8%87%e4%ba%ba%e4%b8%8e-chatgpt-%e5%80%be%e8%af%89%e8%87%aa%e6%9d%80%e5%80%be%e5%90%91","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/45287.html","title":{"rendered":"OpenAI Disclosure: More than 1 million people per week talk about suicide with ChatGPT"},"content":{"rendered":"<p class=\"translation-text-wrapper\" data-ries-data-process=\"56\" data-group-id=\"group-56\">The news of October 28th<a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> The latest data was released on Monday at local time, revealing a lot of it <a href=\"https:\/\/www.1ai.net\/en\/tag\/chatgpt\" title=\"[View articles tagged with [ChatGPT]]\" target=\"_blank\" >ChatGPT<\/a> USERS ARE FACING MENTAL HEALTH PROBLEMS WHEN COMMUNICATING WITH ARTIFICIAL SMART CHAT ROBOTS. 
The company states that in any given week, about 0.15% of active users engage in conversations \u201cincluding explicit indicators of potential suicidal planning or intent\u201d.<strong>Given that ChatGPT has more than 800 million weekly active users, this means that more than 1 million people talk about suicide with it every week.<\/strong><\/p>\n<p data-ries-data-process=\"56\" data-group-id=\"group-56\"><img decoding=\"async\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/07\/bde0c242j00szj0zx0032d000v900fmp.jpg\" alt=\"OpenAI Disclosure: More than 1 million people per week talk about suicide with ChatGPT\" \/><\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"57\" data-group-id=\"group-57\">OpenAI also notes that a similar proportion of users show \u201chigh emotional attachment\u201d to ChatGPT.<strong>Meanwhile, hundreds of thousands of users each week show signs of psychosis or mania in their conversations with the AI.<\/strong><\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"58\" data-group-id=\"group-58\">Although OpenAI says such conversations are \u201cextremely rare\u201d in overall usage and therefore difficult to measure precisely, the company estimates that these issues still affect hundreds of thousands of people every week.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"59\" data-group-id=\"group-59\">This disclosure is part of OpenAI's announcement of progress in improving how its models respond to mental health issues. According to the company, more than 170 mental health experts were consulted during the development of the latest version of ChatGPT. 
These clinicians observed that the current version of ChatGPT \u201cresponds more appropriately and consistently than earlier versions\u201d.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"60\" data-group-id=\"group-60\">In recent months, several reports have revealed that AI chatbots can harm users experiencing mental distress. Earlier research found that some AI chatbots can reinforce users' dangerous beliefs through sycophantic responses, drawing some users into a spiral of delusional thinking.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"61\" data-group-id=\"group-61\">Addressing mental health issues in ChatGPT is rapidly becoming a major challenge for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who had confided his suicidal thoughts to ChatGPT in the weeks before taking his own life. In addition, the attorneys general of California and Delaware have warned OpenAI that it must strengthen protections for young users of its products; their stance could even affect the company's ongoing restructuring plans.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"62\" data-group-id=\"group-62\">Earlier this month, OpenAI CEO Sam Altman wrote on the social platform X that the company had \u201csuccessfully alleviated the serious mental health problems\u201d in ChatGPT, without providing specifics. The data released this week appears to support that claim, while also drawing broader public attention to how widespread the problem is. 
Notably, Altman also indicated that OpenAI would relax some of its restrictions, even allowing adult users to have sexually explicit conversations with the AI.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"63\" data-group-id=\"group-63\">In Monday's announcement, OpenAI said that<strong>the recently updated GPT-5 model produces \u201cideal responses\u201d to mental health-related questions about 65% more often than its predecessor<\/strong>. In an evaluation specifically measuring AI responses to suicide-related conversations, the new GPT-5 model was 91% compliant with the company's expected behaviors, compared with 77% for the previous version.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"64\" data-group-id=\"group-64\">OpenAI also stressed that<strong>the new version of GPT-5 adheres more consistently to the company's safeguards during long conversations<\/strong>. The company had previously acknowledged that its safety measures became less effective over the course of long dialogues.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"65\" data-group-id=\"group-65\">Beyond these technical improvements, OpenAI said it will add dedicated evaluation metrics to measure the most severe mental health risks users face. Going forward, baseline safety testing of its AI models will include benchmarks for \u201cemotional reliance\u201d and \u201cnon-suicidal mental health emergencies\u201d.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"66\" data-group-id=\"group-66\">At the same time, OpenAI has recently strengthened its parental control tools for underage users. 
The company is also developing an age prediction system that automatically identifies children using ChatGPT and applies stricter safety protections to their accounts.<\/p>\n<p class=\"translation-text-wrapper\" data-ries-data-process=\"67\" data-group-id=\"group-67\">Still, it remains unclear how long the mental health challenges surrounding ChatGPT will persist. Although GPT-5 improves on the safety of earlier models, some of ChatGPT's responses are still ones OpenAI itself classifies as \u201cundesirable\u201d. Moreover, OpenAI continues to offer its older, less safe models, including GPT-4o, to millions of paying subscribers, compounding the potential risks.<\/p>","protected":false},"excerpt":{"rendered":"<p>News from October 28th: OpenAI released new data on Monday local time revealing that a large number of ChatGPT users show signs of mental health problems in their conversations with the AI chatbot. The company states that in any given week, about 0.15% of active users engage in conversations \u201cincluding explicit indicators of potential suicidal planning or intent\u201d. Given that ChatGPT has more than 800 million weekly active users, this means that more than 1 million people talk about suicide with it every week. OpenAI also notes that a similar proportion of users show \u201chigh emotional attachment\u201d to ChatGPT, while hundreds of thousands of users each week show signs of psychosis or mania in their conversations with the AI. 
Despite OpenAI<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[177,190],"collection":[],"class_list":["post-45287","post","type-post","status-publish","format-standard","hentry","category-news","tag-chatgpt","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/45287","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=45287"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/45287\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=45287"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=45287"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=45287"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=45287"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}