OpenAI Disclosure: More than 1 million people per week talk about suicide with ChatGPT

News, October 28: OpenAI released new data on Monday (local time) revealing how many ChatGPT users show signs of mental health struggles when talking with the AI chatbot. The company states that, in any given week, about 0.15% of active users have conversations that include “explicit indicators of potential suicidal planning or intent.” Given that ChatGPT has more than 800 million weekly active users, 0.15% works out to roughly 1.2 million people, meaning more than 1 million people talk about suicide with it each week.

OpenAI also points out that a similar share of users show signs of “heightened emotional attachment” to ChatGPT, and that hundreds of thousands of users each week show signs of psychosis or mania in their conversations with the AI.

Although OpenAI says such conversations are “extremely rare” in overall usage and therefore difficult to measure accurately, the company estimates that these issues still affect hundreds of thousands of people every week.

The disclosure came as part of a broader OpenAI announcement about its progress in improving how its models respond to mental health issues. The company says more than 170 mental health experts were consulted while building the latest version of ChatGPT, and these clinicians observed that the current version “responds more appropriately and consistently than earlier versions.”

In recent months, several reports have shown how AI chatbots can harm users who are struggling with mental distress. Previous studies found that some chatbots, by reinforcing users' dangerous beliefs through sycophantic responses, can lead some users into spirals of delusional thinking.

Addressing mental health issues in ChatGPT is quickly becoming a defining challenge for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks before his death. In addition, the attorneys general of California and Delaware have warned OpenAI that it must do more to protect young people who use its products; the two states' stance could even affect the company's planned corporate restructuring.

Earlier this month, OpenAI CEO Sam Altman wrote on the social platform X that the company had “mitigated the serious mental health issues” in ChatGPT, without providing specifics. The data released Monday appears to back up that claim, but it also raises broader questions about just how widespread the problem is. Notably, Altman also said OpenAI would relax some of its restrictions, even allowing adult users to have sexually explicit conversations with the AI.

In Monday's announcement, OpenAI claimed that the newly updated GPT-5 model delivers “desirable responses” to mental-health-related questions roughly 65% more often than its predecessor. On an evaluation specifically measuring AI responses in conversations about suicide, the new GPT-5 model scored 91% compliance with the company's desired behaviors, up from 77% for the previous version.

OpenAI also stressed that the new version of GPT-5 holds to the company's safety safeguards more consistently in long conversations. The company had previously acknowledged that its safeguards become less effective over the course of extended dialogues.

Beyond the model improvements, OpenAI said it is adding new evaluation metrics to measure the most serious mental health risks users face. Going forward, the company's baseline safety testing for its AI models will include benchmarks for “emotional reliance” and “non-suicidal mental health emergencies.”

OpenAI has also recently strengthened its parental controls for underage users. The company is building an age prediction system designed to automatically detect children using ChatGPT and apply stricter safeguards to their accounts.

Still, it is unclear how long the mental health challenges surrounding ChatGPT will persist. While GPT-5 is a clear safety improvement over earlier models, some of ChatGPT's responses still fall short of what OpenAI itself deems “desirable.” And OpenAI continues to offer older, less-safe models, including GPT-4o, to millions of paying subscribers, which keeps the potential risk alive.
