{"id":45041,"date":"2025-10-22T12:17:57","date_gmt":"2025-10-22T04:17:57","guid":{"rendered":"https:\/\/www.1ai.net\/?p=45041"},"modified":"2025-10-22T12:17:57","modified_gmt":"2025-10-22T04:17:57","slug":"%e6%9c%80%e6%96%b0%e7%a0%94%e7%a9%b6%ef%bc%9a%e6%8c%81%e7%bb%ad%e5%96%82%e5%85%bb%e4%bd%8e%e8%b4%a8%e6%96%87%e6%9c%ac%e4%bc%9a%e5%af%b9-ai-%e9%80%a0%e6%88%90%e4%b8%8d%e5%8f%af%e9%80%86%e3%80%8c","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/45041.html","title":{"rendered":"Recent study: continuously feeding low-quality text causes irreversible 'brain damage' in AI"},"content":{"rendered":"<p>According to reports on October 22, a recent paper on \"brain rot\" in large language models, jointly published by several universities, found that <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e5%9e%8b%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large-scale language model]]\" target=\"_blank\" >large language models<\/a> can suffer a \"cognitive decline\" similar to that seen in humans after continuous exposure to low-quality text from social networking platforms.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-45042\" title=\"852b737cj00t4imku0078d000u000htm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/10\/852b737cj00t4imku0078d000u000htm.jpg\" alt=\"852b737cj00t4imku0078d000u000htm\" width=\"1080\" height=\"641\" \/><\/p>\n<p>The researchers constructed \"junk data\" and \"control data\" from posts on the X platform and ran continual pre-training experiments on several models.<\/p>\n<p>The results show that when models are trained on a high proportion of junk data, they exhibit significant declines in reasoning, long-context understanding, safety, and personality traits.<\/p>\n<p>Error analysis shows that the models' main failure mode is \"thought-skipping\": a growing tendency to truncate or skip the key reasoning steps needed to solve a problem.<\/p>\n<p>Comparing different types of social media posts, the researchers further found that \"engagement\" is the strongest indicator of toxicity: the more viral a piece of content, the more severely it degrades a model's cognition.<\/p>\n<p>More worrying still, the decline is persistent. Even when high-quality data is introduced in later stages for fine-tuning or continued pre-training, the models recover only partially and still show signs of drift.<\/p>\n<p>The research team noted that this phenomenon resembles the \"brain rot\" humans develop from long-term exposure to fragmented, low-nutrient information, and that it highlights the critical role of data quality in the continual training of large models. The team called on the industry to include \"cognitive health checks\" in model maintenance processes to prevent long-term capability degradation.<\/p>\n<p>\u2022 Research project: https:\/\/llm-brain-rot.github.io\/<\/p>","protected":false},"excerpt":{"rendered":"<p>According to reports on October 22, a recent paper on \"brain rot\" in large language models, jointly published by several universities, found that large language models can suffer a \"cognitive decline\" similar to that seen in humans after continuous exposure to low-quality text from social networking platforms. The researchers constructed \"junk data\" and \"control data\" from posts on the X platform and ran continual pre-training experiments on several models. When trained on a high proportion of junk data, the models exhibited significant declines in reasoning, long-context understanding, safety, and personality traits, with error analysis pointing to \"thought-skipping\" as the main failure mode.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[411,371],"collection":[],"class_list":["post-45041","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-371"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/45041","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=45041"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/45041\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=45041"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=45041"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=45041"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=45041"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}