{"id":17652,"date":"2024-08-10T09:48:03","date_gmt":"2024-08-10T01:48:03","guid":{"rendered":"https:\/\/www.1ai.net\/?p=17652"},"modified":"2024-08-10T09:48:03","modified_gmt":"2024-08-10T01:48:03","slug":"qwen2-math-%e5%bc%80%e6%ba%90ai%e6%a8%a1%e5%9e%8b%e5%8f%91%e5%b8%83%ef%bc%9a%e9%98%bf%e9%87%8c%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae%e5%ae%b6%e6%97%8f%e6%96%b0%e6%88%90%e5%91%98%ef%bc%8c%e6%95%b0","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/17652.html","title":{"rendered":"Qwen2-Math open source AI model released: a new member of the Alibaba Tongyi Qianwen family, with mathematical ability exceeding GPT-4o"},"content":{"rendered":"<p class=\"pgc-p\" data-track=\"31\" data-pm-slice=\"1 1 []\"><strong>Alibaba's <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae\" title=\"[View articles tagged with [Tongyi Qianwen]]\" target=\"_blank\" >Tongyi Qianwen<\/a> Qwen2 <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >open-source<\/a> family has a new member, <a href=\"https:\/\/www.1ai.net\/en\/tag\/qwen2-math\" title=\"[View articles tagged with [Qwen2-Math]]\" target=\"_blank\" >Qwen2-Math<\/a>.<\/strong> It is available in three versions, with 1.5 billion, 7 billion, and 72 billion parameters, and is a language model built on the Qwen2 LLM specifically for solving mathematical problems.<\/p>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"39\">Introduction<\/h1>\n<p data-track=\"40\">Qwen2-Math is a series of language models built on the Qwen2 LLM specifically for solving mathematical problems. Its mathematical capabilities significantly surpass those of open-source models, and even closed-source models such as GPT-4o. 
The team hopes these models will help the scientific community solve advanced mathematical problems that require complex multi-step logical reasoning.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-17653\" title=\"get-260\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/get-260.jpg\" alt=\"get-260\" width=\"1024\" height=\"576\" \/><\/div>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"41\">Performance<\/h1>\n<p data-track=\"42\">The team evaluated the math-specific Qwen2-Math models on a series of math benchmarks. On the MATH benchmark, the largest model, Qwen2-Math-72B-Instruct, surpassed the most advanced models, including GPT-4o, Claude-3.5-Sonnet, Gemini-1.5-Pro, and Llama-3.1-405B.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-17654\" title=\"get-261\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/get-261.jpg\" alt=\"get-261\" width=\"1024\" height=\"576\" \/><\/div>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-17655\" title=\"get-262\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/get-262.jpg\" alt=\"get-262\" width=\"1488\" height=\"746\" \/><\/div>\n<h1 class=\"pgc-h-arrow-right\" spellcheck=\"false\" data-track=\"45\">Developing multilingual models<\/h1>\n<p data-track=\"46\">According to reports, the new Qwen2-Math model series focuses on mathematical skills and currently supports only English. 
The team plans to launch a bilingual model supporting both English and Chinese, and then to develop multilingual models.<\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>Ali Tongyi Qwen2 open source family welcomes a new member, Qwen2-Math, with three versions of 1.5 billion parameters, 7 billion parameters, and 72 billion parameters, which is a language model based on the Qwen2 LLM and dedicated to mathematical problem solving. Introduction Qwen2-Math is a series of specialized mathematical problem solving language models built on the Qwen2 LLM, whose mathematical power significantly exceeds that of open-source models, and even closed-source models such as GPT-4o, and which are officially intended to contribute to the scientific community's efforts to solve high-level mathematical problems requiring complex multi-step logical reasoning. Performance The team evaluated the math-specific model, Qwen2-Math, on a series of math benchmarks. on Math<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[3948,219,331],"collection":[],"class_list":["post-17652","post","type-post","status-publish","format-standard","hentry","category-news","tag-qwen2-math","tag-219","tag-331"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/17652","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=17652"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/17652\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v
2\/media?parent=17652"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=17652"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=17652"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=17652"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}