{"id":12255,"date":"2024-06-05T09:38:07","date_gmt":"2024-06-05T01:38:07","guid":{"rendered":"https:\/\/www.1ai.net\/?p=12255"},"modified":"2024-06-05T09:38:07","modified_gmt":"2024-06-05T01:38:07","slug":"%e6%96%af%e5%9d%a6%e7%a6%8f%e5%9b%a2%e9%98%9f%e4%b8%ba%e6%8a%84%e8%a2%ad%e6%b8%85%e5%8d%8e%e7%b3%bb%e9%9d%a2%e5%a3%81%e6%99%ba%e8%83%bd-ai-%e6%a8%a1%e5%9e%8b%e9%81%93%e6%ad%89%ef%bc%9allama3-v","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/12255.html","title":{"rendered":"Stanford team apologizes for plagiarizing Tsinghua&#039;s AI model: Llama3-V model will be removed"},"content":{"rendered":"<p data-vmark=\"0409\">Recently, a <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%96%af%e5%9d%a6%e7%a6%8f%e5%a4%a7%e5%ad%a6\" title=\"View articles tagged with Stanford University\" target=\"_blank\" >Stanford University<\/a> AI research team&#039;s Llama3-V open-source model was accused of plagiarizing &quot;Little Steel Cannon&quot; MiniCPM-Llama3-V 2.5, an open-source model developed by Tsinghua-affiliated star startup <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%9d%a2%e5%a3%81%e6%99%ba%e8%83%bd\" title=\"View articles tagged with Mianbi Intelligence\" target=\"_blank\" >Mianbi Intelligence<\/a>, sparking heated discussion online.<\/p>\n<p data-vmark=\"0e5c\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-12256\" title=\"55eaa870-6322-4b9e-aee3-8867d0d92eb2\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/55eaa870-6322-4b9e-aee3-8867d0d92eb2.jpg\" alt=\"55eaa870-6322-4b9e-aee3-8867d0d92eb2\" width=\"940\" height=\"501\" \/><\/p>\n<p>Image source: Pexels<\/p>\n<p data-vmark=\"eea5\">On May 29, a Stanford AI team announced online that it had trained, for only $500, a SOTA multimodal large model that surpasses GPT-4V, but netizens soon discovered that the model structure and code used in the project were highly similar to those of &quot;Little Steel Cannon&quot;, with only some variable names 
changed.<\/p>\n<p data-vmark=\"b2a3\">Late on the night of June 2, the Mianbi Intelligence team confirmed that the Stanford model could not only recognize the Warring States-period ancient characters in the &quot;Tsinghua Bamboo Slips&quot;, but that even its incorrect recognition results were identical to those of the MiniCPM model. The Mianbi Intelligence team had spent several months scanning and manually annotating these characters from the Tsinghua Bamboo Slips, and the data had never been made public, confirming the plagiarism.<\/p>\n<p data-vmark=\"b2a3\">At 1:27 am Beijing time, two authors of the Stanford Llama3-V team, Siddharth Sharma and Aksh Garg, formally apologized to the MiniCPM team on the social platform X for the academic misconduct and promised to take down all Llama3-V models. IT Home noted that they had posted an apology with similar content a few hours earlier, but it was quickly deleted.<\/p>","protected":false},"excerpt":{"rendered":"<p>Recently, a Stanford AI research team's Llama3-V open-source model was accused of plagiarizing the open-source model MiniCPM-Llama3-V 2.5 developed by Tsinghua-affiliated star startup Mianbi Intelligence, sparking heated debate online. On May 29, a Stanford AI team claimed online that it had trained, for only $500, a SOTA multimodal large model surpassing GPT-4V, but netizens soon found that the model structure and code used in the project were highly similar to MiniCPM's, with only some variable names changed. 
Late on June 2, the Mianbi Intelligence team confirmed that the Stanford model could not only recognize the Warring States-period ancient characters in the \"Tsinghua Bamboo Slips\", but that even its incorrect recognition results matched those of MiniCPM.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1998,2184],"collection":[],"class_list":["post-12255","post","type-post","status-publish","format-standard","hentry","category-news","tag-1998","tag-2184"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/12255","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=12255"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/12255\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=12255"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=12255"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=12255"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=12255"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}