{"id":10288,"date":"2024-05-14T09:37:54","date_gmt":"2024-05-14T01:37:54","guid":{"rendered":"https:\/\/www.1ai.net\/?p=10288"},"modified":"2024-05-14T09:37:54","modified_gmt":"2024-05-14T01:37:54","slug":"%e8%8b%b1%e5%9b%bd%e6%8e%a8%e5%87%ba%e5%bc%80%e6%ba%90%e5%85%8d%e8%b4%b9-ai%e8%af%84%e4%bc%b0%e5%b9%b3%e5%8f%b0-inspect%ef%bc%8c%e5%8f%af%e4%b8%ba%e6%a8%a1%e5%9e%8b%e7%9f%a5%e8%af%86-%e6%8e%a8","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/10288.html","title":{"rendered":"The UK launches an open-source, free AI assessment platform, Inspect, which can score model knowledge\/reasoning capabilities"},"content":{"rendered":"<p data-vmark=\"b725\"><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8b%b1%e5%9b%bd\" title=\"[Sees articles with [British] labels]\" target=\"_blank\" >U.K.<\/a>The AI Safety Institute recently launched a new<a href=\"https:\/\/www.1ai.net\/en\/tag\/inspect\" title=\"[See articles with [Inspect] labels]\" target=\"_blank\" >Inspect<\/a>&quot;of <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [AI models]]\" target=\"_blank\" >AI Models<\/a>Security Assessment Platform,<span class=\"accentTextColor\">The platform adopts open source licensing and is open to AI engineers around the world for free.<\/span>, allowing engineers to evaluate the performance and safety of their own models.<\/p>\n<p data-vmark=\"6285\">The Inspect platform consists of three main frameworks: Dataset, Solver, and Scorer, which can be used to evaluate specific aspects of each AI model.<span class=\"accentTextColor\">Contains the model&#039;s core knowledge reserves, reasoning capabilities, and autonomous capabilities.<\/span>, the relevant framework will score each item one by one according to the model test results; in addition to a series of built-in testers, Inspect also allows developers to plug in other test frameworks with Python.<\/p>\n<p data-vmark=\"7209\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-10289\" title=\"d4d0fde7-203c-4415-9f35-8339c0f1bafc\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/d4d0fde7-203c-4415-9f35-8339c0f1bafc.png\" alt=\"d4d0fde7-203c-4415-9f35-8339c0f1bafc\" width=\"1440\" height=\"754\" \/><\/p>\n<p data-vmark=\"2b89\">Ian Hogarth, director of the UK AI Safety Institute, claimed that the reason they launched the Inspect platform was &quot;belief in the power of open source&quot;, which can encourage more people to contribute, while also improving the transparency and reproducibility of AI models and reducing costs for engineers.<\/p>","protected":false},"excerpt":{"rendered":"<p>The United Kingdom Institute for Artificial Intelligence Security (AI Safetty Institute) recently launched an AI Model Security Assessment Platform called \u201cInspect\u201d, which is open free of charge to global AI engineers and allows engineers to assess their own model performance and safety. 