{"id":48336,"date":"2026-01-06T12:12:49","date_gmt":"2026-01-06T04:12:49","guid":{"rendered":"https:\/\/www.1ai.net\/?p=48336"},"modified":"2026-01-06T12:12:49","modified_gmt":"2026-01-06T04:12:49","slug":"openai%ef%bc%9a%e5%85%a8%e7%90%83%e6%af%8f%e5%a4%a9%e6%9c%89%e8%b6%85%e8%bf%87-4000-%e4%b8%87%e4%ba%ba%e4%bd%bf%e7%94%a8-chatgpt-%e8%8e%b7%e5%8f%96%e5%81%a5%e5%ba%b7%e4%bf%a1%e6%81%af","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/48336.html","title":{"rendered":"OpenAI: More than 40 million people worldwide use ChatGPT to access health information every day"},"content":{"rendered":"<p>On January 6, according to a report that artificial intelligence company <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"View articles tagged with OpenAI\" target=\"_blank\" >OpenAI<\/a> provided to the American news site Axios, more than 40 million people worldwide use <a href=\"https:\/\/www.1ai.net\/en\/tag\/chatgpt\" title=\"View articles tagged with ChatGPT\" target=\"_blank\" >ChatGPT<\/a> every day to access health information.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-48337\" title=\"f3bf1dfej00t8fd0c00eld000v9000kup\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/01\/f3bf1dfej00t8fd0c00eld000v900kup.jpg\" alt=\"f3bf1dfej00t8fd0c00eld000v9000kup\" width=\"1125\" height=\"750\" \/><\/p>\n<p>The American health-care system is known for its complexity and lack of transparency, and Americans are increasingly relying on artificial intelligence tools to navigate it.<\/p>\n<p>The research tool Knit analysed anonymized ChatGPT interaction data and conducted user research. 
The results show that, when responding to medical questions, patients see ChatGPT as a trusted \u201chelper\u201d.<\/p>\n<p>Users rely on ChatGPT in a wide range of situations: interpreting medical bills, identifying overcharges, appealing insurance claim denials, and, where access to care is limited, even self-diagnosing or self-managing health matters.<\/p>\n<p>Globally, more than 5% of all ChatGPT conversations are related to <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%8c%bb%e7%96%97%e5%81%a5%e5%ba%b7\" title=\"View articles tagged with healthcare\" target=\"_blank\" >healthcare<\/a>.<\/p>\n<p>OpenAI found that users submit between 1.6 million and 1.9 million health insurance-related queries to ChatGPT every week, covering health-plan comparisons, billing, and other coverage-related issues.<\/p>\n<p>In rural areas where medical resources are scarce, users send close to 600,000 health-related messages per week on average. Seven out of ten ChatGPT medical conversations take place outside regular clinic hours.<\/p>\n<p>Patients can enter their symptoms, advice previously given by their doctor, and other background health information into ChatGPT, which can provide early warning about the severity of certain illnesses. 
When timely medical care is unavailable, this advice helps patients decide whether they can wait for an appointment or need to go to the emergency room immediately.<\/p>\n<p>In its report, OpenAI states that \u201creliability would be significantly enhanced if responses could incorporate individualized patient information, such as health-insurance plan documents, clinical guidance, and data from health-service platforms\u201d.<\/p>\n<p>It should be noted, however, that the advice ChatGPT gives may be erroneous or even dangerous, especially in mental health conversations. OpenAI is currently facing multiple lawsuits over such incidents, with some plaintiffs claiming that their relatives or friends self-harmed or died by suicide after using the tool.<\/p>\n<p>1AI noted that several U.S. states have introduced new regulations governing AI chatbots, explicitly prohibiting applications or services from providing mental health guidance or intervening in users' treatment decisions.<\/p>\n<p>OpenAI says it is working to optimize how ChatGPT responds in health-care contexts. The company continuously evaluates the relevant models to reduce harmful or misleading responses, while working with clinicians to identify potential risks and improve the tool.<\/p>\n<p>According to the company, the GPT-5 model is more inclined to ask users for additional information, search the internet for the latest research, use more careful wording, and, when necessary, direct users to a professional medical assessment.<\/p>","protected":false},"excerpt":{"rendered":"<p>On January 6, according to a report provided by the artificial intelligence company OpenAI to the American news site Axios, over 40 million people worldwide access health information every day through ChatGPT. 
The American health-care system is known for its complexity and lack of transparency, and Americans are increasingly relying on artificial intelligence tools to navigate it. The research tool Knit analysed anonymized ChatGPT interaction data and conducted user research. The results show that, when responding to medical questions, patients see ChatGPT as a trusted \u201chelper\u201d. Users rely on ChatGPT in a wide range of situations: interpreting medical bills, identifying overcharges, appealing insurance claim denials; access to medical care<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[177,190,2689],"collection":[],"class_list":["post-48336","post","type-post","status-publish","format-standard","hentry","category-news","tag-chatgpt","tag-openai","tag-2689"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/48336","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=48336"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/48336\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=48336"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=48336"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=48336"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=48336"}],"curies":[{"name":"wp"
,"href":"https:\/\/api.w.org\/{rel}","templated":true}]}}