Study finds mainstream AI chatbots now spread false information nearly twice as often as a year ago

September 15: According to research from NewsGuard, the ten leading generative AI tools repeated misinformation on real-time news topics 35% of the time as of August this year, up from 18% in August 2024, nearly double the rate.


The surge in misinformation tracks a major trade-off. Once chatbots gained real-time web search, they stopped refusing to answer user questions: the non-response rate fell from 31% in August 2024 to 0% a year later. But the change also plugged these AI bots into a polluted online information ecosystem, where bad actors deliberately seed false information that the AI systems then repeat.

The problem is not new. Last year, NewsGuard flagged 966 AI-generated news sites spanning 16 languages. These sites typically use generic names such as “iBusiness Day” to imitate legitimate media outlets while publishing false stories.

A breakdown of performance by model shows that Inflection's model fared worst, repeating false information 56.67% of the time, followed by Perplexity at 46.67%. ChatGPT and Meta's AI each spread false claims 40% of the time, while Microsoft's Copilot and Mistral came in at 36.67%. The two best performers were Claude and Gemini, with error rates of 10% and 16.67%, respectively.
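The odd-looking fractions are consistent with each model being scored over 30 prompts, which would match NewsGuard's commonly described methodology of 10 false claims probed with 3 prompt styles each; that methodology, and the per-model failure counts below, are assumptions back-calculated from the percentages, not figures stated in this article. A minimal sketch:

```python
# Hedged sketch: reproduce the reported error rates assuming each model
# was scored over 30 prompts (10 false claims x 3 prompt styles).
# The failure counts are back-calculated assumptions, not published data.
ASSUMED_TOTAL_PROMPTS = 30

failures = {            # prompts on which the model repeated the false claim
    "Inflection": 17,   # 17/30 = 56.67%
    "Perplexity": 14,   # 14/30 = 46.67%
    "ChatGPT": 12,      # 12/30 = 40.00%
    "Meta AI": 12,
    "Copilot": 11,      # 11/30 = 36.67%
    "Mistral": 11,
    "Gemini": 5,        #  5/30 = 16.67%
    "Claude": 3,        #  3/30 = 10.00%
}

for model, n_failed in failures.items():
    rate = 100 * n_failed / ASSUMED_TOTAL_PROMPTS
    print(f"{model}: {rate:.2f}% of prompts repeated the false claim")
```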

Perplexity's slide is especially striking. In August 2024 the model still debunked false claims with a perfect 100% record; one year later, it repeated false information nearly 50% of the time.

The web search feature was meant to fix the problem of AI giving outdated answers, but it created a new one: the chatbots began pulling information from unreliable sources, “confusing century-old news publications with Russian propaganda fronts that use similar names.”

NewsGuard describes this as a fundamental flaw: “Early AI used a ‘do no harm’ strategy, avoiding the risk of spreading false information by refusing to answer questions.”

Today it is harder than ever to tell fact from falsehood, because the online information ecosystem itself is saturated with false information.

OpenAI has acknowledged that language models will always produce “hallucinations” (false or unfounded content generated by AI), because the mechanism underlying these models is to predict the most likely next word rather than to pursue the truth. The company says it is working on techniques that would let future models “signal uncertainty” instead of confidently fabricating answers. It remains unclear, however, whether this approach can solve the deeper problem of AI chatbots spreading false information, since that would require AI to genuinely understand what is true and what is false, something that is still out of reach.
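To make the “predict the next most likely word” point concrete, here is a minimal, hypothetical sketch, not OpenAI's implementation, of greedy next-token selection, plus one illustrative way a model could signal uncertainty rather than answer confidently. The toy probabilities and the 0.5 confidence threshold are assumptions for illustration.

```python
import math

# Hypothetical sketch of next-token selection. A real language model emits
# logits over a vocabulary of ~100k tokens; this toy distribution stands in
# for that output at one decoding step.
logits = {"Paris": 2.1, "London": 0.3, "Rome": 0.1, "unsure": -1.0}

# Softmax converts raw logits into a probability distribution over tokens.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding: emit the single most probable token. The objective is
# "most likely continuation", not "true statement", which is the root of
# the hallucination problem described above.
best_token, best_prob = max(probs.items(), key=lambda kv: kv[1])

# One illustrative way to "signal uncertainty" (threshold is an assumption):
CONFIDENCE_THRESHOLD = 0.5
if best_prob >= CONFIDENCE_THRESHOLD:
    print(f"answer: {best_token} (p={best_prob:.2f})")
else:
    print(f"uncertain: top candidate {best_token} has only p={best_prob:.2f}")
```

The point of the threshold branch is the design idea OpenAI alludes to: rather than always emitting the argmax token, a model could expose how thinly its probability mass is spread and decline to commit when no continuation is well supported.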
