{"id":52424,"date":"2026-04-24T14:17:45","date_gmt":"2026-04-24T06:17:45","guid":{"rendered":"https:\/\/www.1ai.net\/?p=52424"},"modified":"2026-04-24T14:17:45","modified_gmt":"2026-04-24T06:17:45","slug":"%e8%bf%88%e5%85%a5%e7%99%be%e4%b8%87%e4%b8%8a%e4%b8%8b%e6%96%87%e6%99%ae%e6%83%a0%e6%97%b6%e4%bb%a3%ef%bc%9adeepseek-v4-%e6%a8%a1%e5%9e%8b%e9%a2%84%e8%a7%88%e7%89%88%e6%ad%a3%e5%bc%8f%e4%b8%8a","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/52424.html","title":{"rendered":"Into the millions of context inclusive age: DeepSeek-V4 model preview officially online and synchronized open source"},"content":{"rendered":"<p>April 24th news, this morning<a href=\"https:\/\/www.1ai.net\/en\/tag\/deepseek\" title=\"[View articles tagged with [DeepSeek]]\" target=\"_blank\" >DeepSeek<\/a>V4 <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%a8%a1%e5%9e%8b\" title=\"_Other Organiser\" target=\"_blank\" >Model<\/a>Preview version formally online and synchronized<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >Open Source<\/a>.<\/p>\n<p>DeepSeek-V4 has a million-word super-long context, leading both domestic and open source areas in Agent capabilities, world knowledge and reasoning. The model is divided into two versions by size:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-52428\" title=\"92608cb0j00tdzisv002cd000u05tp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/04\/92608cb0j00tdzisv002cd000u0005tp.jpg\" alt=\"92608cb0j00tdzisv002cd000u05tp\" width=\"1080\" height=\"209\" \/><\/p>\n<p>Access to the website chat.deepseek.com or official App<strong>,<\/strong>You can talk to the latest DeepSeek-V4 and explore a new experience of a 1M super-long context. 
The API service has been updated in sync and can be called by setting model_name to deepseek-v4-pro or deepseek-v4-flash.<\/p>\n<p>DeepSeek-V4 model open-source links:<\/p>\n<ul>\n<li>https:\/\/huggingface.co\/collections\/deepseek-ai\/deepseek-v4<\/li>\n<li>https:\/\/modelscope.cn\/collections\/deepseek-ai\/DeepSeek-V4<\/li>\n<\/ul>\n<p>DeepSeek-V4 technical report:<\/p>\n<ul>\n<li>https:\/\/huggingface.co\/deepseek-ai\/DeepSeek-V4-Pro\/blob\/main\/DeepSeek_V4.pdf<\/li>\n<\/ul>\n<p>The official introductions of the two models are as follows:<\/p>\n<p>DeepSeek-V4-Pro<\/p>\n<ul>\n<li><strong>Significantly improved Agent capabilities:<\/strong> The Agent capabilities of DeepSeek-V4-Pro are substantially improved over the previous generation. V4-Pro reaches the best level among current open-source models on the Agentic Coding evaluation and performs equally well on other Agent-related evaluations. DeepSeek-V4 has already become the Agentic Coding model used internally by company employees, and internal feedback rates the experience as better than Sonnet 4.5, with delivery quality close to the non-thinking Opus 4.6 model, though a gap with Opus 4.6 remains.<\/li>\n<li><strong>Rich world knowledge:<\/strong> DeepSeek-V4-Pro leads other open-source models by a wide margin on world-knowledge evaluations and is only slightly below the top closed-source model Gemini-Pro-3.1.<\/li>\n<li><strong>World-class reasoning:<\/strong> In mathematics, STEM, and competitive coding, DeepSeek-V4-Pro outperforms all currently publicly evaluated open-source models and achieves results on par with the world's top closed-source models.<\/li>\n<\/ul>\n<p>DeepSeek-V4-Flash<\/p>\n<ul>\n<li>Compared with DeepSeek-V4-Pro, DeepSeek-V4-Flash is slightly weaker in world-knowledge reserves but demonstrates reasoning ability close to that of V4-Pro. 
Because its total and activated parameters are smaller, V4-Flash provides faster and more economical API service.<\/li>\n<li>In Agent evaluations, DeepSeek-V4-Flash matches DeepSeek-V4-Pro on simple tasks, but a gap remains on difficult tasks.<\/li>\n<\/ul>\n<p>DeepSeek-V4 introduces an entirely new attention architecture that compresses the token dimension and combines it with DSA (DeepSeek Sparse Attention), achieving a global lead in long-context capability while requiring significantly less compute and memory than traditional approaches. <strong>From now on, the 1M (one million) token context is standard across all official DeepSeek services.<\/strong><\/p>\n<p>DeepSeek-V4 is adapted and optimized for mainstream Agent products such as Claude Code, OpenClaw, OpenCode, and CodeBudy, with improved performance on code tasks, document-generation tasks, and more. The figure below shows an example of a PPT page generated by V4-Pro within an Agent framework:<\/p>\n<p>Both V4-Pro and V4-Flash support a maximum context length of 1M. Both support <strong>non-thinking<\/strong> and <strong>thinking<\/strong> modes, and thinking mode supports adjusting the thinking strength (high\/max) via the reflection_effort parameter. For complex Agent scenarios, thinking mode with strength set to max is recommended.<\/p>\n<p>The two legacy API model names, deepseek-chat and deepseek-reasoner, will be discontinued in three months (2026-07-24). At the current stage, these two model names point to deepseek-v4-flash in <strong>non-thinking and thinking<\/strong> modes, respectively.<\/p>","protected":false},"excerpt":{"rendered":"<p>News on April 24: this morning, the DeepSeek-V4 model preview went officially live and was simultaneously open-sourced. DeepSeek-V4 offers an ultra-long context of one million tokens and leads domestic and open-source models in Agent capabilities, world knowledge, and reasoning. 
The model comes in two sizes. Visit chat.deepseek.com or the official app to chat with the latest DeepSeek-V4 and experience the new 1M ultra-long context. The API service has been updated in sync and can be called by setting model_name to deepseek-v4-pro or deepseek-v4-flash.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[3606,219,1489],"collection":[],"class_list":["post-52424","post","type-post","status-publish","format-standard","hentry","category-news","tag-deepseek","tag-219","tag-1489"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/52424","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=52424"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/52424\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=52424"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=52424"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=52424"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=52424"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}