{"id":23609,"date":"2024-11-21T22:36:18","date_gmt":"2024-11-21T14:36:18","guid":{"rendered":"https:\/\/www.1ai.net\/?p=23609"},"modified":"2024-11-21T22:36:18","modified_gmt":"2024-11-21T14:36:18","slug":"%e6%8e%a8%e7%90%86%e6%a8%a1%e5%9e%8b-deepseek-r1-lite-%e9%a2%84%e8%a7%88%e7%89%88%e4%b8%8a%e7%ba%bf%ef%bc%8c%e5%8f%b7%e7%a7%b0%e5%aa%b2%e7%be%8e-openai-o1-preview","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/23609.html","title":{"rendered":"Preview of inference model DeepSeek-R1-Lite goes live, claims to rival OpenAI o1-preview"},"content":{"rendered":"<p>November 21st.<a href=\"https:\/\/www.1ai.net\/en\/tag\/deepseek\" title=\"[View articles tagged with [DeepSeek]]\" target=\"_blank\" >DeepSeek<\/a> announced that the newly developed<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%8e%a8%e7%90%86%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [inference model]]\" target=\"_blank\" >inference model<\/a> The preview version of DeepSeek-R1-Lite is now available.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-23610\" title=\"14e1d863j00snb1va0071d000u000ozp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/11\/14e1d863j00snb1va0071d000u000ozp.jpg\" alt=\"14e1d863j00snb1va0071d000u000ozp\" width=\"1080\" height=\"899\" \/><\/p>\n<p>Officially, the DeepSeek R1 series of models are trained using reinforcement learning, and the reasoning process involves a great deal of reflection and validation, with chains of thought that can be tens of thousands of words long. 
On math, code, and a variety of complex logical reasoning tasks, the series <strong>achieved reasoning results comparable to OpenAI o1-preview, and showed users the complete thought process that o1 did not make public<\/strong>.<\/p>\n<p>On AIME, the highest-difficulty tier of the American Mathematics Competitions (AMC), as well as in the world's top programming competition (Codeforces), the DeepSeek-R1-Lite preview model <strong>outperforms well-known models such as GPT-4o<\/strong>.<\/p>\n<p>DeepSeek-R1-Lite's reasoning process is long and includes a great deal of reflection and verification. The graph below shows that the model's score on a math competition rises steadily with the length of reasoning the test allows.<\/p>\n<p>1AI notes that DeepSeek-R1-Lite is still in iterative development: for now it supports web use only, with no API access. It is also built on a smaller base model, which cannot fully unleash the potential of a long chain of thought.<\/p>\n<p>Officially, <strong>the full release of the DeepSeek-R1 model will be completely open-sourced<\/strong>, with a technical report published and API services deployed for the public.<\/p>","protected":false},"excerpt":{"rendered":"<p>On November 21st, DeepSeek announced that the preview version of its newly developed inference model DeepSeek-R1-Lite is officially live. Officially, the DeepSeek R1 series of models is trained using reinforcement learning, with a reasoning process that includes a great deal of reflection and verification and chains of thought that can be tens of thousands of words long. The models achieve reasoning results comparable to OpenAI o1-preview in math, code, and a variety of complex logical reasoning tasks, and show users the complete thinking process that o1 did not disclose. 
The DeepSeek-R1-Lite preview model was tested on AIME, the highest-difficulty tier of the American Mathematics Competitions (AMC), as well as the world's top programming competition (Codeforces).<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[3606,5023],"collection":[],"class_list":["post-23609","post","type-post","status-publish","format-standard","hentry","category-news","tag-deepseek","tag-5023"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/23609","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=23609"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/23609\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=23609"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=23609"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=23609"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=23609"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}