{"id":7377,"date":"2024-04-08T09:10:39","date_gmt":"2024-04-08T01:10:39","guid":{"rendered":"https:\/\/www.1ai.net\/?p=7377"},"modified":"2024-04-08T09:10:48","modified_gmt":"2024-04-08T01:10:48","slug":"amd%ef%bc%9a%e9%94%90%e9%be%99-8040-%e7%b3%bb%e5%88%97%e5%a4%84%e7%90%86%e5%99%a8-ai%e6%80%a7%e8%83%bd%e5%ae%8c%e8%83%9c%e8%8b%b1%e7%89%b9%e5%b0%94%e9%85%b7%e7%9d%bfultra%e5%a4%84%e7%90%86%e5%99%a8","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/7377.html","title":{"rendered":"AMD: Ryzen 8040 series processor AI performance beats Intel Core Ultra processor"},"content":{"rendered":"<p data-vmark=\"a2a5\"><a href=\"https:\/\/www.1ai.net\/en\/tag\/amd\" title=\"[SEE ARTICLES WITH [AMD] LABELS]\" target=\"_blank\" >AMD<\/a> recently published a series of benchmarks claiming that its Riptide Mobile 7040 Phoenix series and 8040 series processors are capable of running the<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e5%9e%8b%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large-scale language model]]\" target=\"_blank\" >Large Language Models<\/a> (LLMs), with a performance lead of up to<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8b%b1%e7%89%b9%e5%b0%94\" title=\"[Sees articles with [Intel] labels]\" target=\"_blank\" >Intel<\/a>The latest Core Ultra Meteor Lake CPU reaches 79%.<\/p>\n<p data-vmark=\"2747\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-7379\" title=\"871245fb-b148-48b4-a6b2-4a90a5c18a77\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/871245fb-b148-48b4-a6b2-4a90a5c18a77.jpg\" alt=\"871245fb-b148-48b4-a6b2-4a90a5c18a77\" width=\"650\" height=\"365\" \/><\/p>\n<p data-vmark=\"946f\">This test compares the AMD Renegade 7 7840U and Intel Core Ultra 7 155H processors, both of which are equipped with hardware neural network processing units (NPUs.) 
AMD showed multiple slides comparing the performance of these two processors on large language models such as Mistral 7B, Llama v2, and Mistral Instruct 7B. In the Llama v2 dialog test, the AMD processor outperformed the Core Ultra 7 by 14% at Q4 bitwidth, while in the Mistral Instruct test, the AMD processor was 17% faster at the same bitwidth.<\/p>\n<p data-vmark=\"7d37\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-7380\" title=\"98fd8d18-9649-4bfa-80ad-041129a83d3f\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/98fd8d18-9649-4bfa-80ad-041129a83d3f.jpg\" alt=\"98fd8d18-9649-4bfa-80ad-041129a83d3f\" width=\"999\" height=\"562\" \/><\/p>\n<p data-vmark=\"e5fd\">In the same large language model tests, the AMD processor also showed an advantage in responsiveness. For example, it outperformed the Core Ultra 7 by 79% in the \"generate first token for example prompt\" speed test of the Llama v2 dialog benchmark, and by 41% in the Mistral Instruct test.<\/p>\n<p data-vmark=\"2852\">AMD also showed another set of test results covering the Llama v2 7B dialog test at various bitwidths, block sizes, and quality levels. Overall, the Ryzen 7 7840U outperformed the Intel competition by an average of 55%, with gains reaching 70% at the Q8 bitwidth setting. AMD added that when running a large language model in real-world applications, 4-bit K M quantization strikes a better balance, while tasks such as coding that require high precision can use the 5-bit K M quantization setting.<\/p>\n<p data-vmark=\"0d1f\">For the moment, AMD is ahead of Intel in artificial intelligence performance. Although the Ryzen 7040 series is comparable to Meteor Lake in theoretical AI performance (TOPS), at the end of last year Tom's Hardware found that AMD usually beat Meteor Lake on AI workloads.
This appears to be more of a software optimization issue than a hardware or driver issue.<\/p>\n<p data-vmark=\"c045\">This performance gap shouldn't last long, though. After all, the development of neural network processing units is just beginning. If more apps don't adopt the OpenVINO framework, expect Intel to change its strategy and look for an optimization path that's easier for developers to adopt. Intel is also planning to release its next-generation Lunar Lake mobile CPU architecture later this year, which will reportedly deliver 3x the AI performance of Meteor Lake (as well as a significant increase in IPC in the CPU cores).<\/p>\n<p data-vmark=\"5654\">All in all, AMD currently has the edge in neural network processing unit performance, especially with the Ryzen 8040 series processors and their stronger NPUs. However, with the release of Intel's Lunar Lake architecture at the end of the year and its AI optimization plans, the tide could turn.<\/p>","protected":false},"excerpt":{"rendered":"<p>AMD recently released a series of benchmarks claiming that its Ryzen Mobile 7040 Phoenix series and 8040 series processors outperform Intel's latest Core Ultra Meteor Lake CPUs by up to 79% when running large language models (LLMs). The tests compare AMD's Ryzen 7 7840U and Intel's Core Ultra 7 155H processors, both equipped with hardware neural network processing units (NPUs).
<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[333,371,240],"collection":[],"class_list":["post-7377","post","type-post","status-publish","format-standard","hentry","category-news","tag-amd","tag-371","tag-240"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7377","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=7377"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7377\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=7377"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=7377"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=7377"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=7377"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}