International study: AI assistants often distort news content, with significant issues in 45% of answers

On October 27, a study coordinated by the European Broadcasting Union (EBU) and led by the British Broadcasting Corporation (BBC) found that AI assistants, now a daily gateway to information for millions of people, routinely misrepresent news content regardless of which language, region, or platform is tested.
This unprecedented international study, the broadest and largest of its kind to date, was unveiled at an EBU press conference in Naples. Twenty-two public service media (PSM) organizations from 18 countries, working in 14 languages, took part, and the study revealed multiple systemic issues across the four main AI tools examined.

Professional journalists from the participating public service media evaluated more than 3,000 responses generated by ChatGPT, Copilot, Gemini, and Perplexity against key criteria such as accuracy, sourcing, the distinction between fact and opinion, and the provision of context.

The study's key findings:

  • 45% of all AI answers had at least one significant issue.
  • 31% of responses had serious sourcing problems, including missing, misleading, or incorrect attribution.
  • 20% of responses contained major accuracy issues, including fabricated details and outdated information.
  • Gemini performed worst, with significant issues in 76% of its responses, more than double the rate of the other AI assistants, largely due to its poor sourcing.
  • Compared with the results of the BBC's study earlier this year, some AI tools have improved, but error rates remain high.

AI assistants are gradually replacing traditional search engines as the preferred information gateway for many users. According to the Reuters Institute's Digital News Report 2025, 7% of online news consumers get their news through AI assistants, a share that rises to 15% among those under 25.

Jean Philip De Tender, Media Director and Deputy Director General of the EBU, said: "This research clearly shows that these problems are not isolated incidents; they are systemic, cross-border, and multilingual. We believe this endangers public trust in the media. When people cannot judge what to trust, they may end up trusting nothing at all, which undermines participation in democratic society."

Peter Archer, director of the BBC's Generative AI program, said: "We are excited about AI and believe it can help us create more value for audiences. But the premise is that people must be able to trust what they read, watch, and access. Despite some improvement, these AI assistants still have significant problems. We want these technologies to succeed and are willing to work with AI companies to deliver a positive impact for audiences and society."

The research team also launched the News Integrity in AI Assistants Toolkit, which aims to provide practical solutions to the problems identified in the report. The toolkit covers both improving the quality of AI assistants' responses and strengthening users' media literacy. Drawing on the large body of cases and insights collected in this study, it focuses on two core questions: what makes a good AI assistant response, and which problems most urgently need fixing?

In addition, the EBU and its member organizations are urging EU and national regulators to strictly enforce existing laws on information integrity, digital services, and media pluralism. Given the rapid pace of AI development, continued independent monitoring is essential; to that end, the EBU is exploring the establishment of a standing, rolling research program to track the performance of AI assistants over time.

This study builds on preliminary research published by the BBC in February 2025, which first revealed serious deficiencies in how AI assistants handle news content. The second phase, extended to a global scale, further confirmed that these problems are universal and not limited to specific languages, markets, or any single AI assistant.

Additional research published by the BBC on the same day shows that public habits and perceptions around using AI assistants for news are also worrying: more than a third of British adults now trust AI-generated news summaries to be accurate, and among those under 35 the proportion is close to half.

These findings raise serious concerns: many people wrongly assume that AI-generated summaries are accurate when they are not, and when errors are discovered, audiences often blame both the news organization and the AI developer, even when the mistake was caused entirely by the AI assistant. Over time, such problems could seriously undermine public trust in news itself and in news brands.