{"id":52583,"date":"2026-04-30T11:39:18","date_gmt":"2026-04-30T03:39:18","guid":{"rendered":"https:\/\/www.1ai.net\/?p=52583"},"modified":"2026-04-30T11:39:18","modified_gmt":"2026-04-30T03:39:18","slug":"deepseek-%e5%86%85%e6%b5%8b%e3%80%8c%e8%af%86%e5%9b%be%e6%a8%a1%e5%bc%8f%e3%80%8d%ef%bc%8c%e5%a4%9a%e6%a8%a1%e6%80%81%e6%96%b0%e6%a8%a1%e5%9e%8b%e6%88%96%e5%b0%86%e5%8f%91%e5%b8%83","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/52583.html","title":{"rendered":"DeepSeek beta-tests \"Image Recognition Mode\"; a new multimodal model may be on the way"},"content":{"rendered":"<p>April 30th. <a href=\"https:\/\/www.1ai.net\/en\/tag\/deepseek\" title=\"[View articles tagged with [DeepSeek]]\" target=\"_blank\" >DeepSeek<\/a> yesterday began testing an \"Image Recognition Mode\", which sits alongside the existing \"Fast Mode\" and \"Expert Mode\" and offers full multimodal image understanding rather than simple OCR text recognition.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-52586\" title=\"c8172134j00teafg70000nd000uhsm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/04\/c8172134j00teafg7000nd000u000hsm.jpg\" alt=\"c8172134j00teafg70000nd000uhsm\" width=\"1080\" height=\"640\" \/><\/p>\n<p>In hands-on testing, DeepSeek's recognition is accurate in most cases, and with thinking mode turned off it returns answers in about half a second. Common subjects such as film and TV stills, abstract images, and product photos are identified and understood well.<\/p>\n<p>More interesting is the thinking process: beyond describing the picture's content, the model actively probes the poster's identity and the image's metaphors and subtext, and it self-corrects repeatedly while reasoning; before drawing conclusions it even spontaneously lists questions and verifies its premises one by one, a chain of reasoning close to human reading habits.<\/p>\n<p>Still, the image mode has clear limitations. 
In the classic \"counting fingers\" test, DeepSeek answered wrong on the first attempt and claimed it could not tell, but it was able to give the right answer after the user prompted or hinted at it.<\/p>\n<p>Moreover, the image mode does not support web search; it relies on the model's own knowledge base and cannot identify relatively new things, such as the mascot \"Finder sauce\" that Apple launched this year.<\/p>\n<p>And just yesterday, DeepSeek multimodal team researcher Xiaokang Chen posted \"now, we see you.\" on X, along with an image of DeepSeek's whale mascot going from \"eyes closed\" to \"eyes open\", which was widely interpreted as a signal that the new multimodal model is about to launch.<\/p>","protected":false},"excerpt":{"rendered":"<p>On April 30th, DeepSeek began testing an \"Image Recognition Mode\" yesterday, alongside the existing \"Fast Mode\" and \"Expert Mode\", offering full multimodal image understanding rather than simple OCR text recognition. In hands-on testing, DeepSeek's recognition is accurate in most cases, and with thinking mode turned off it returns answers in about half a second. Common subjects such as film and TV stills, abstract images, and product photos are identified and understood well. 
More interesting is the thinking process: beyond describing the picture's content, the model actively probes the poster's identity and the image's metaphors and subtext, self-corrects repeatedly while reasoning, and before drawing conclusions spontaneously lists questions and verifies its premises one by one, in a manner close to human reading habits.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[3606,1863],"collection":[],"class_list":["post-52583","post","type-post","status-publish","format-standard","hentry","category-news","tag-deepseek","tag-1863"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/52583","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=52583"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/52583\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=52583"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=52583"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=52583"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=52583"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}