On January 6, a report OpenAI shared with the U.S. news site Axios showed that more than 4 million people use ChatGPT every day to access health information.

The American health-care system is known for its complexity and opaque information, and Americans are increasingly relying on artificial-intelligence tools to navigate it.
The AI tool company Knit analyzed anonymized ChatGPT interaction data and conducted user research. The results show that, when dealing with medical issues, patients treat ChatGPT as a trusted "helper."
Users rely on ChatGPT in a wide range of contexts: interpreting medical bills, identifying overcharges, appealing insurance denials, and, where access to care is limited, even self-diagnosing or managing their own health.
Globally, more than 5% of all ChatGPT conversations are health-care related.
OpenAI found that users submit between 1.6 million and 1.9 million health-insurance-related queries to ChatGPT each week, covering plan comparisons, billing, and other coverage issues.
In rural areas where medical resources are scarce, users send close to 600,000 health-related messages per week on average. Seven out of every ten medical conversations on ChatGPT take place outside regular clinic hours.
Patients can enter their symptoms, advice previously given by their doctor, and other health background into ChatGPT, which can flag the potential severity of certain conditions. When timely medical care is unavailable, this guidance helps patients decide whether they can wait for an appointment or need emergency care immediately.
In its report, OpenAI states that "reliability would be significantly enhanced if responses could incorporate individualized patient information, such as insurance plan documents, clinical guidelines, and data from health-service platforms."
It should be noted, however, that the recommendations ChatGPT gives can be erroneous or even dangerous, particularly in mental-health conversations. OpenAI currently faces multiple lawsuits over such incidents, with some plaintiffs alleging that relatives or friends self-harmed or died by suicide after using the tool.
1AI noted that several U.S. states have introduced new regulations governing AI chatbots, explicitly prohibiting applications or services from providing mental-health guidance or intervening in users' treatment decisions.
OpenAI says it is working to optimize how ChatGPT responds in health-care contexts. The company continuously evaluates the relevant models to reduce harmful or misleading responses, while collaborating with clinicians to identify potential risks and improve the tool.
According to the company, the GPT-5 model is more likely to ask users for additional information, search the internet for the latest research, use more careful wording, and direct users to a professional medical evaluation when necessary.