{"id":25101,"date":"2024-12-14T01:20:52","date_gmt":"2024-12-13T17:20:52","guid":{"rendered":"https:\/\/www.1ai.net\/?p=25101"},"modified":"2024-12-13T17:22:46","modified_gmt":"2024-12-13T09:22:46","slug":"%e5%be%ae%e8%bd%af%e6%8e%a8%e5%87%ba-14b-%e5%8f%82%e6%95%b0%e5%b0%8f%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b-phi-4%ef%bc%9a%e4%b8%93%e6%94%bb%e6%95%b0%e5%ad%a6%e7%ad%89%e9%a2%86%e5%9f%9f%e5%a4%8d%e6%9d%82","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/25101.html","title":{"rendered":"Microsoft Introduces Phi-4, a 14B-Parameter Small Language Model Specializing in Complex Reasoning in Mathematics and Other Areas"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%be%ae%e8%bd%af\" title=\"View articles tagged with Microsoft\" target=\"_blank\">Microsoft<\/a> today announced Phi-4, a 14B-parameter \"state-of-the-art\" small language model (SLM) that, in addition to traditional language processing, excels at <strong><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%8d%e6%9d%82%e6%8e%a8%e7%90%86\" title=\"View articles tagged with complex reasoning\" target=\"_blank\">complex reasoning<\/a> in mathematics and other fields<\/strong>. Phi-4 is the latest addition to the Phi family of small language models, and officials say it demonstrates Microsoft's continued exploration of the boundaries of what SLMs can do.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-25102\" title=\"6e8618c3j00sofdzb0048d000v900hkp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/6e8618c3j00sofdzb0048d000v900hkp.jpg\" alt=\"6e8618c3j00sofdzb0048d000v900hkp\" width=\"1125\" height=\"632\" \/><\/p>\n<p>Officials say Phi-4 benefits from technological advances in several areas, including the use of <strong>high-quality synthetic datasets and carefully curated high-quality organic data<\/strong>, as well as post-training innovations. Thanks to these, Phi-4 outperforms <strong>models of the same 
class as well as larger-scale models<\/strong> in mathematical reasoning. The benchmark results are summarized in the table below; on math competition problems, its performance surpasses several much larger models, including Gemini Pro 1.5. 1AI has attached the technical paper on the benchmark results: <a href=\"https:\/\/arxiv.org\/abs\/2412.08905\">click here to read it<\/a><\/p>\n<p>Microsoft says it is bringing \"powerful and responsible\" AI capabilities to all customers through the Phi family of models, which includes Phi-3.5-mini. Phi-4 is now available on Azure AI Foundry.<\/p>","protected":false},"excerpt":{"rendered":"<p>Microsoft today announced Phi-4, a 14B-parameter \"state-of-the-art\" small language model (SLM) that specializes in complex reasoning in areas such as mathematics, in addition to traditional language processing. Phi-4 is the latest addition to the Phi family of small language models, and officials say it demonstrates Microsoft's continued exploration of the possibilities of SLMs. Officials say Phi-4 outperforms similar and larger models in mathematical reasoning thanks to a number of technological advances, including the use of high-quality synthetic datasets, carefully curated high-quality organic data, and post-training innovations. 
It outperforms several larger-scale models, including Gemini Pro 1.5, on math competition problems. 1AI has attached the technical paper on the benchmark results.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[5219,3565,280],"collection":[],"class_list":["post-25101","post","type-post","status-publish","format-standard","hentry","category-news","tag-5219","tag-3565","tag-280"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/25101","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=25101"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/25101\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=25101"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=25101"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=25101"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=25101"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}