Wikipedia cracks down on AI-generated bad entries: deletes them as soon as they're discovered

Aug. 6, 2025 - According to a report by 404 Media on the 5th, Wikipedia editors have adopted a new policy in response to the large number of AI-generated entries flooding the platform. The new policy gives administrators the authority, under certain conditions, to quickly delete AI-generated articles.


Ilyas Lebleu, a founding member of WikiProject AI Cleanup, said: "While certain signs can help us recognize AI content (for example, characteristic wording, use of dashes, and bulleted lists with bolded headings), these features are usually not conclusive, and we don't want to mistakenly delete entries just because a passage sounds like it was written by AI."

"Overall, the proliferation of AI content has been described as an 'existential threat' to Wikipedia: our workflows have always relied on (often lengthy) discussion and consensus building, and the fact that AI can generate large amounts of false content extremely efficiently would be a serious problem without a corresponding rapid-deletion mechanism. Of course, AI content is not necessarily especially bad; humans can write equally bad content, but human output is nowhere near as fast as AI's. Our tools are designed for a completely different scale of content production."

1AI notes from the report that Wikipedia's solution is to quickly remove entries that are clearly AI-generated and meet one of two general conditions. The first is that the entry contains "user-directed communication", i.e., statements that are clearly a large language model's responses to user prompts, such as "Here's your Wikipedia entry...", "As of my last training update...", or "As a large language model...". This kind of language has long been a telltale feature used to identify AI-generated social media posts and scientific papers.
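As a rough illustration (this is not Wikipedia's actual tooling, and the phrase list and function name below are hypothetical), the kind of "user-directed communication" check the policy describes could be sketched as a simple substring scan:

```python
# Illustrative sketch only: flag text containing phrases that read as a
# chatbot's reply to a user prompt rather than encyclopedia prose.
# The phrase list is a hypothetical sample, not an official one.
LLM_TELL_PHRASES = [
    "here's your wikipedia entry",
    "as of my last training update",
    "as a large language model",
    "as an ai language model",
]

def looks_like_llm_reply(text: str) -> bool:
    """Return True if the text contains a telltale LLM reply phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in LLM_TELL_PHRASES)
```

A real system would need to be far more careful (e.g., ignoring quoted material that legitimately discusses these phrases), which is exactly why the policy reserves speedy deletion for unambiguous cases.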

Lebleu said that the team "has seen a lot of these features" and, more importantly, that such statements suggest the submitter did not even read through the article before posting it. "If a user hasn't checked even this most basic issue, we can reasonably presume they haven't reviewed any of the content at all and have merely copied and pasted it; such entries are no different from white noise."

The second scenario that qualifies for speedy deletion is an entry with obviously erroneous citations, another common failing of large language models. This includes cited books, articles, or papers that simply do not exist, external links that do not open, or linked content that is completely irrelevant to the topic.
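A minimal sketch of how a reviewer's script might triage citation links (again, not Wikipedia's actual tooling; the function names here are assumptions for illustration) could look like this:

```python
# Illustrative sketch only: flag citation URLs that are malformed or
# that fail to respond. Function names are hypothetical.
from urllib.parse import urlparse
from urllib.request import urlopen
from urllib.error import URLError

def is_well_formed(url: str) -> bool:
    """A citation URL should at least have an http(s) scheme and a host."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def check_citation_urls(urls, timeout=5):
    """Return the subset of URLs that look malformed or dead."""
    suspect = []
    for url in urls:
        if not is_well_formed(url):
            suspect.append(url)
            continue
        try:
            # HTTPError (4xx/5xx) is a URLError subclass, so dead
            # links are caught below along with network failures.
            with urlopen(url, timeout=timeout) as resp:
                if resp.status >= 400:
                    suspect.append(url)
        except (URLError, OSError):
            suspect.append(url)
    return suspect
```

Note that an automated check like this can only catch links that fail outright; a reference that resolves but is irrelevant to the topic still requires human judgment, which matches the policy's reliance on administrator review.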

Lebleu stated that expedited deletion is a "stopgap measure" that can handle the most visible cases, but that the AI problem will persist, since a large amount of AI content does not necessarily meet either deletion criterion. He added that AI is a potentially useful tool that may have a positive impact on Wikipedia in the future.
