Study: Training Data Containing 0.001% of Misinformation Is Enough to "Poison" Medical AI Models
January 14, 2025 - A study from New York University has highlighted a risk in training large language models (LLMs) on medical information: even when as little as 0.001% of the training data consists of misinformation, the model can be led to produce inaccurate medical answers. Data "poisoning" is a relatively simple concept. LLMs are typically trained on very large amounts of text, most of it drawn from the Internet. By injecting specific claims into that training data, an attacker can get the model to repeat those claims as fact when generating answers. This approach does not even require direct access to the LLM itself; it only requires...
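To put the 0.001% figure in perspective, here is a minimal Python sketch (not from the study) that mixes fabricated "poisoned" documents into a clean corpus at that rate; the build_corpus helper, the document names, and the corpus size are all illustrative assumptions.

```python
# Hypothetical illustration only: what a 0.001% poisoning rate means in
# document counts when assembling a web-scraped training corpus.
import random

def build_corpus(clean_docs, poisoned_docs, poison_rate=0.00001):
    """Mix poisoned documents into a clean corpus at the given rate.

    poison_rate=0.00001 corresponds to the 0.001% figure cited in the
    article; the document lists and this helper are illustrative, not
    the study's actual pipeline.
    """
    n_poison = max(1, int(len(clean_docs) * poison_rate))
    corpus = clean_docs + random.sample(poisoned_docs, n_poison)
    random.shuffle(corpus)
    return corpus, n_poison

# Example: a corpus of one million documents needs only about 10 poisoned pages.
clean = [f"doc_{i}" for i in range(1_000_000)]
poisoned = [f"bad_doc_{i}" for i in range(50)]
corpus, n_poison = build_corpus(clean, poisoned)
print(f"{n_poison} poisoned documents out of {len(corpus)} total "
      f"({n_poison / len(corpus):.5%})")
```

Running this prints roughly "10 poisoned documents out of 1000010 total (0.00100%)", which illustrates how few injected pages such a rate represents.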
