Following OpenAI's lead, Anthropic announces online AI medical services

According to reports on January 12th, Anthropic announced on January 11th local time that its AI product Claude will launch a medical service compliant with the U.S. Health Insurance Portability and Accountability Act (HIPAA), open to hospitals, medical institutions, and individual users for processing protected health data. Meanwhile, Claude has integrated a variety of scientific databases and enhanced its capabilities to support biomedical research.

For individual users, the service allows them to export their own health data from applications such as Apple Health and Function Health, making it easier to organize medical records and share them with healthcare providers.


Anthropic is currently in financing talks at a valuation of about $350 billion (roughly 2.4 trillion yuan at the current exchange rate). Mike Krieger, Anthropic's Chief Product Officer and a co-founder of Instagram, said that once regulatory and data compliance issues are properly addressed, the health sector will be one of the most influential applications of AI. The goal of the new tool is to help people gain more insight from their own data and communicate more fully with healthcare providers.

According to Anthropic, Banner Health, one of the largest non-profit medical systems in the United States, has more than 22,000 clinical staff using Claude, 85% of whom say their work efficiency and accuracy have improved. The company is also collaborating with Novo Nordisk and Stanford.

According to Bloomberg, Anthropic faces not only challenges from OpenAI but also competition from established technology companies and emerging start-ups. These major players are trying to apply AI to areas such as drug research and development, medical administrative processes, and patient record analysis, though such applications also pose new risks to privacy protection and safety.

Anthropic stressed that its medical responses are based on authoritative sources such as PubMed and the NPI registry, and that it does not use users' medical data to train its models.
