{"id":25043,"date":"2024-12-13T06:28:37","date_gmt":"2024-12-12T22:28:37","guid":{"rendered":"https:\/\/www.1ai.net\/?p=25043"},"modified":"2024-12-12T21:29:55","modified_gmt":"2024-12-12T13:29:55","slug":"mlcommons-%e5%8f%91%e5%b8%83-pc-ai-%e5%9f%ba%e5%87%86%e6%b5%8b%e8%af%95-mlperf-client-%e9%a6%96%e4%b8%aa%e5%85%ac%e5%bc%80%e7%89%88%e6%9c%ac-0-5","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/25043.html","title":{"rendered":"MLCommons Releases First Public Version 0.5 of PC AI Benchmark MLPerf Client"},"content":{"rendered":"<p>Open machine learning engineering consortium <a href=\"https:\/\/www.1ai.net\/en\/tag\/mlcommons\" title=\"_Other Organiser\" target=\"_blank\" >MLCommons<\/a> yesterday, California local time, announced version 0.5 of MLPerf Client, a <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%9f%ba%e5%87%86%e6%b5%8b%e8%af%95\" title=\"[See articles with [baseline test] labels]\" target=\"_blank\" >benchmarking<\/a> tool for measuring AI performance on consumer PCs, marking <strong>the first public release of the test<\/strong>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-25044\" title=\"31b5245bj00sodusd004nd000v900hjp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/31b5245bj00sodusd004nd000v900hjp.jpg\" alt=\"31b5245bj00sodusd004nd000v900hjp\" width=\"1125\" height=\"631\" \/><\/p>\n<p>MLCommons says the MLPerf Client benchmark was created through a collaborative effort among stakeholders such as AMD, Intel, Microsoft, NVIDIA, Qualcomm and top PC OEMs, all of which contributed a wealth of expertise and resources to the test.<\/p>\n<p>MLPerf Client benchmark version 0.5 is based on Meta's Llama 2 7B open-source LLM and contains four AI tasks (content generation, creative writing, and two summarization tasks using texts of different lengths); it currently supports both DirectML-based ONNX and OpenVINO (Intel GPUs only) acceleration paths.<\/p>\n<p>The benchmark <strong>initially supports Windows 
x86-64 devices only<\/strong>, and can be run on the following hardware:<\/p>\n<ul>\n<li>AMD Radeon RX 7900 series discrete graphics cards;<\/li>\n<li>Intel Arc discrete graphics with a minimum of 8GB of video memory;<\/li>\n<li>NVIDIA GeForce RTX 40-series discrete graphics with a minimum of 12GB of video memory;<\/li>\n<li>AMD Ryzen AI 9 series processors (minimum 32GB RAM);<\/li>\n<li>Intel Core Ultra 200-series processors with Intel Arc graphics (minimum 16GB RAM).<\/li>\n<\/ul>\n<p>MLCommons says MLPerf Client\u00a0<strong>will add support for macOS and Windows on Arm devices in the future<\/strong>, and will also support additional hardware acceleration paths and introduce broader test scenarios with a range of AI models.<\/p>\n<p data-vmark=\"b017\">Attached is the link to the MLPerf Client v0.5 official webpage:<\/p>\n<ul class=\"list-paddingleft-2\">\n<li>\n<p data-vmark=\"f36d\"><a href=\"https:\/\/mlcommons.org\/benchmarks\/client\/\" target=\"_blank\" rel=\"noopener\"><span class=\"link-text-start-with-http\">https:\/\/mlcommons.org\/benchmarks\/client\/<\/span><\/a><\/p>\n<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>MLCommons, the open machine learning engineering consortium, yesterday announced the release of version 0.5 of the MLPerf Client benchmark for measuring AI performance on consumer PCs, the first public version of the test. MLCommons said the MLPerf Client benchmark is the result of a collaborative effort by stakeholders such as AMD, Intel, Microsoft, NVIDIA, Qualcomm, and the top PC OEMs, all of whom contributed their expertise and resources to the test. 
Version 0.5 of the MLPerf Client benchmark is based on Meta's Llama 2 7B open-source LLM and includes four AI tasks (<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[5206,204,5192],"collection":[],"class_list":["post-25043","post","type-post","status-publish","format-standard","hentry","category-news","tag-mlcommons","tag-204","tag-5192"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/25043","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=25043"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/25043\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=25043"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=25043"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=25043"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=25043"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}