{"id":699,"date":"2023-10-21T17:08:47","date_gmt":"2023-10-21T09:08:47","guid":{"rendered":"https:\/\/www.1ai.net\/?p=699"},"modified":"2023-10-21T17:08:47","modified_gmt":"2023-10-21T09:08:47","slug":"%e4%ba%ba%e5%b7%a5%e6%99%ba%e8%83%bd%e6%a8%a1%e5%9e%8b%e9%80%8f%e6%98%8e%e5%ba%a6%e8%af%84%e4%bc%b0%ef%bc%9allama-2%e4%bd%8d%e5%88%97%e7%ac%ac%e4%b8%80%ef%bc%8cgpt-4%e9%80%8f%e6%98%8e%e5%ba%a6","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/699.html","title":{"rendered":"AI model transparency assessment: Llama 2 ranks first, GPT-4 has poor transparency"},"content":{"rendered":"<p>In recent years, the transparency of mainstream models in the field of artificial intelligence has become a focus of attention. Stanford University, MIT, Princeton University and other institutions have jointly proposed the &quot;Foundation Model Transparency Index&quot; to evaluate the transparency of the top ten mainstream <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e6%a8%a1%e5%9e%8b\" title=\"View articles tagged with AI models\" target=\"_blank\" >AI models<\/a>. The results show that <a href=\"https:\/\/www.1ai.net\/en\/tag\/llama\" title=\"View articles tagged with Llama\" target=\"_blank\" >Llama 2<\/a> ranked <span class=\"spamTxt\">first<\/span>, while models such as GPT-4 are far less transparent.<\/p>\n<p>Despite the growing social impact of AI models, many questions remain about how these models are built, trained, and used, including their data sources and labor practices. 
However, the evaluation system has also sparked controversy, with some developers arguing that it is naive to require companies to disclose trade secrets.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-700\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2023\/10\/6383347948735393505500575.png\" alt=\"\" width=\"554\" height=\"338\" \/><\/p>\n<p>Paper address: https:\/\/arxiv.org\/pdf\/2310.12941.pdf<\/p>\n<p>Nevertheless, transparency is crucial to the development and application of AI models, especially in the field of generative AI: these models can improve productivity, but they can also be used to cause harm. A lack of transparency invites misuse, so developers need to pay more attention to transparency, disclosing how their models are built, what they can do, and what risks they carry.<\/p>\n<p>However, most major foundation model developers currently fail to provide sufficient transparency, which highlights the urgent need for improvement across the AI industry. Open-source foundation models such as Llama 2 and BLOOMZ received high scores, but there is still room for improvement: only a very small number of developers transparently disclose the limitations of their models and other key information.<\/p>\n<p>In the current policy debate, whether AI models should be open source has become a point of contention, but open or closed, transparency is a key factor in keeping the negative impacts of AI models under control.<\/p>","protected":false},"excerpt":{"rendered":"<p>In recent years, the focus has been on the transparency of mainstream models in the field of artificial intelligence, and the \u201cFoundation Model Transparency Index\u201d has been developed by institutions such as Stanford University, MIT and Princeton University to assess the transparency of the ten mainstream AI models. 
The results showed that Llama 2 ranked first, while models such as GPT-4 were less transparent. Despite the growing social impact of AI models, many questions remain about how they are built, trained and used, including data sources and labor practices. However, the assessment has also given rise to some controversy, with some developers finding it naive to require companies to disclose trade secrets. Paper address: https:\/\/arxiv.org\/pdf\/2310.12941.pdf<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[167,184],"collection":[],"class_list":["post-699","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-llama"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/699","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=699"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/699\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=699"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=699"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=699"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=699"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}