{"id":14730,"date":"2024-07-04T09:39:09","date_gmt":"2024-07-04T01:39:09","guid":{"rendered":"https:\/\/www.1ai.net\/?p=14730"},"modified":"2024-07-04T09:39:09","modified_gmt":"2024-07-04T01:39:09","slug":"%e7%a0%94%e7%a9%b6%e6%98%be%e7%a4%ba%ef%bc%8cai-%e7%94%9f%e6%88%90%e7%9a%84%e6%96%87%e7%ab%a0%e6%bb%a5%e7%94%a8%e7%89%b9%e5%ae%9a%e8%af%8d%e6%b1%87","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/14730.html","title":{"rendered":"AI-generated articles misuse certain words, study shows"},"content":{"rendered":"<p>An analysis of scientific papers from the past decade shows that <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%ba%ba%e5%b7%a5%e6%99%ba%e8%83%bd%e6%a8%a1%e5%9e%8b\" title=\"_Other Organiser\" target=\"_blank\" >artificial intelligence models<\/a> overuse certain &quot;style&quot; words that were rarely used just a few years ago.<\/p>\n<p>In a new study, which has not yet been peer-reviewed, researchers took a novel, epidemiology-like approach to reveal how large language models tend to overuse certain words, analyzing &quot;excess word usage&quot; in biomedical papers. The results offer interesting insights into the impact of artificial intelligence on academia, suggesting that at least 10% of abstracts were processed using large language models in 2024.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14731\" title=\"202405161743155421_7\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/07\/202405161743155421_7.jpg\" alt=\"202405161743155421_7\" width=\"1000\" height=\"666\" \/><\/p>\n<p>Source note: The image was generated by AI and is licensed via Midjourney.<\/p>\n<p>The study was an extensive analysis of 14 million biomedical abstracts published in PubMed between 2010 and 2024. The researchers used papers published before 2023 as a baseline, comparing them against papers published after large language models such as ChatGPT came into wide use. 
They found that some words once considered &quot;uncommon&quot;, such as &quot;delves&quot;, are now used 25 times more frequently than in the past, while others, such as &quot;showcasing&quot; and &quot;underscores&quot;, have seen similarly sharp increases. Some &quot;common&quot; words have also become more frequent: words like &quot;potential&quot;, &quot;findings&quot; and &quot;crucial&quot; have increased in frequency by up to 4%.<\/p>\n<p>The researchers note that an increase of this magnitude is essentially unprecedented in the absence of some major global event to explain it. Among the excess words between 2013 and 2023 were nouns closely tied to real-world events, such as &quot;Ebola,&quot; &quot;coronavirus,&quot; and &quot;lockdown.&quot; Among the excess words in 2024, by contrast, almost all were &quot;style&quot; words. In terms of quantity, of the 280 excess &quot;style&quot; words identified in 2024, about two-thirds were verbs and about one-fifth were adjectives.<\/p>\n<p>Using these excess style words as &quot;markers&quot; of ChatGPT use, the researchers estimate that about 15% of papers from non-English-speaking countries and regions such as China, South Korea, and Taiwan are now processed with AI, compared with about 3% in English-speaking countries such as the UK. Large language models may therefore be an effective tool for non-native speakers working to succeed in a field dominated by English.<\/p>","protected":false},"excerpt":{"rendered":"<p>An analysis of scientific papers from the past decade has revealed that artificial intelligence models overuse a number of \"style\" words that were rarely used just a few years ago. In a new study, which has not yet been peer-reviewed, researchers took a novel, epidemiology-like approach to reveal how large language models tend to overuse certain vocabulary by analyzing \"excess vocabulary use\" in biomedical papers. 
The findings provide interesting insights into the impact of AI in academia, suggesting that at least 10% of abstracts were processed using large language models in 2024. Image source note: image generated by AI, licensed via Midjourney. The study was an analysis of 14 million biomedical abstracts published in PubMed between 2010 and 2024.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[599],"collection":[],"class_list":["post-14730","post","type-post","status-publish","format-standard","hentry","category-news","tag-599"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/14730","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=14730"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/14730\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=14730"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=14730"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=14730"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=14730"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}