-
DeepSeek's new model revealed: FlashMLA code hints at a new architecture, expected to launch in February
News from January 21: DeepSeek is expected to launch its next-generation flagship AI model, DeepSeek V4, in mid-February this year. On January 20, the first anniversary of DeepSeek-R1, developers discovered that DeepSeek had updated a batch of FlashMLA code on GitHub, spanning 114 files, with 28 references to an unknown '...'- 935
-
Liang Wenfeng's new paper surfaces: DeepSeek V4 may introduce a new memory structure
News from January 13: in the early hours of this morning, DeepSeek open-sourced a new architecture module, "Engram", and simultaneously published a technical paper, with Liang Wenfeng once again appearing in the author list. The Engram module introduces a scalable, searchable memory structure, offering large models a new scaling dimension distinct from the traditional Transformer and MoE. DeepSeek notes in the paper that today's dominant architectures are structurally inefficient at two types of tasks: lookup-style memorization that relies on fixed knowledge, and complex reasoning..- 1.3k
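Setting the paper's specifics aside, a "scalable searchable memory" can be pictured as a key-value store of embeddings queried by nearest-neighbor lookup, with the retrieved value blended back into the model's hidden state. The sketch below is purely illustrative; the `ToyMemory` name, the cosine-similarity lookup, and the blending rule are our assumptions, not the Engram design described in the paper:

```python
import numpy as np

class ToyMemory:
    """Illustrative key-value memory: store (key, value) embedding pairs,
    retrieve the best match by cosine similarity, and blend its value
    into the querying hidden state."""
    def __init__(self, dim):
        self.keys = np.empty((0, dim))
        self.values = np.empty((0, dim))

    def write(self, key, value):
        # Append one (key, value) pair to the store.
        self.keys = np.vstack([self.keys, key])
        self.values = np.vstack([self.values, value])

    def read(self, query, alpha=0.5):
        # Cosine similarity of the query against all stored keys.
        sims = self.keys @ query / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-9)
        best = int(np.argmax(sims))
        # Blend the retrieved value into the query state.
        return (1 - alpha) * query + alpha * self.values[best], best

mem = ToyMemory(dim=4)
mem.write(np.array([1., 0., 0., 0.]), np.array([0., 0., 0., 1.]))
mem.write(np.array([0., 1., 0., 0.]), np.array([0., 0., 1., 0.]))
state, idx = mem.read(np.array([0.9, 0.1, 0., 0.]))
print(idx)  # 0 -> the first stored key is the nearest match
```

The point of such a module, as the article describes it, is that fixed-knowledge lookups get served by the memory rather than burning Transformer layers on memorization.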
-
DeepSeek V4 large model to be released around the Spring Festival: AI coding capability said to surpass OpenAI's GPT and Anthropic's Claude
News from January 10: The Information reported that DeepSeek will launch its new flagship AI model, DeepSeek V4, in mid-February this year, around the Lunar New Year, with substantially better coding ability. Internal tests indicate its AI programming performance is expected to outperform industry leaders, including OpenAI's GPT and Anthropic's Claude. According to the source, DeepSeek V4.. -
DeepSeek publishes new paper proposing mHC architecture; Liang Wenfeng on the author list
News from January 2: DeepSeek published a new paper proposing a new structure called mHC. According to the paper, the work aims to address the training instability of traditional hyper-connections in large-scale models while preserving their significant performance gains. The first three authors are Zhenda Xie, Yixuan Wei, and Huanqi Cao. Notably, DeepSeek founder and CEO Liang Wenfeng is also on the author list. 1AI attaches a summary of the paper..- 1.4k
-
DeepSeek V3.2 released: reasoning on par with GPT-5, first Speciale variant debuts
News from December 2: the official version of DeepSeek V3.2 was released, with stronger agent capabilities and integrated thinking and reasoning. Two official versions were published today: DeepSeek-V3.2 and DeepSeek-V3.2-Speciale. The official web page, app, and API have all been updated to the official DeepSeek-V3.2; the Speciale version is currently available only as a temporary API service for community evaluation and research. The technical report for the new models was published simultaneously- 2.2k
-
DeepSeek leads AI real-market trading experiment with a 9.68% return
News from October 27: the outlet "Nu Jiwon" reported the latest results of the open-source project "AI-Trader", led by a team of professors at the University of Hong Kong: DeepSeek ranked first in a live US stock-trading experiment with a 9.68% return, significantly ahead of top international models GPT, Claude, and Gemini. In the experiment, the research team allocated $10,000 to each of five AI models and let them trade autonomously among NASDAQ-100 constituents for nearly a month. The rules were strict- 4.2k
-
World's top six AI models trade live: DeepSeek gains over 36% in three days, leading the field
News from October 22: tech media CoinCentral posted yesterday (October 21) that US research firm Nof1 launched the "Alpha Arena" AI investment competition, in which the DeepSeek Chat V3.1 model performed strongly, growing $10,000 in principal to $13,647.90 in three days, a striking return of over 36%. Nof1 set up the contest to test the ability of top language models to trade in real market environments, to..- 1.9k
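The headline percentage follows directly from the two dollar figures in the article; a quick check of the arithmetic:

```python
# Verify the reported Alpha Arena return from the figures in the article.
principal = 10_000.0
final_value = 13_647.9
return_pct = (final_value - principal) / principal * 100
print(round(return_pct, 1))  # 36.5 -> consistent with "over 36%"
```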
-
One card, 200,000 pages of documents a day: DeepSeek-OCR open-sourced
News from October 21: the DeepSeek team recently released a new study, DeepSeek-OCR, proposing a "contexts optical compression" approach that offers a fresh line of thinking for long-text processing in large models. The research shows that by rendering long text into images and converting them into vision tokens, computation cost can be cut significantly while maintaining high accuracy. Experimental data show OCR decoding precision as high as 97% at compression ratios below 10x; even at 20x compression, accuracy remains at..- 1.6k
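The reported numbers lend themselves to simple back-of-envelope arithmetic. The per-page token counts below are illustrative assumptions, not figures from the paper; only the compression regime and the daily throughput come from the article:

```python
# Back-of-envelope arithmetic for the reported DeepSeek-OCR figures.
# Per-page token counts are assumptions chosen to land in the <10x regime.
text_tokens_per_page = 1000    # assumed tokens if the page were fed as plain text
vision_tokens_per_page = 100   # assumed vision tokens after rendering to an image

compression_ratio = text_tokens_per_page / vision_tokens_per_page
print(compression_ratio)       # 10.0 -> the regime where ~97% decoding precision is reported

pages_per_day = 200_000        # single-card throughput cited in the article
print(round(pages_per_day / 86_400, 2))  # ~2.31 pages per second
```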
-
Time magazine publishes annual Best Inventions list: DeepSeek R1 and AirPods Pro 3 included
News from October 11: Time magazine officially published its 2025 "Best Inventions" list of 300 innovations from around the world, covering artificial intelligence, consumer electronics, medical health, green energy, and more. In the AI category, DeepSeek R1 from the Chinese team was selected. With cost-effectiveness and efficiency as its core advantages, the model is seen as a strong challenge to the existing large language model landscape, marking a new breakthrough for China in global AI competition. AI..- 2.2k
-
DeepSeek-V3.2-Exp model officially released and open-sourced; API prices cut sharply
News from September 29: DeepSeek officially released the DeepSeek-V3.2-Exp model today, an experimental version. As an intermediate step toward its next-generation architecture, V3.2-Exp introduces DeepSeek Sparse Attention (a sparse attention mechanism) on top of V3.1-Terminus, to explore and validate optimizations for training and inference efficiency on long texts. DeepSeek Spa.. -
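The general idea behind sparse attention is that each query attends to only a subset of keys rather than the whole sequence, so cost no longer grows with the full context length. The sketch below shows generic top-k selection as one common instance; it is not DeepSeek Sparse Attention's actual mechanism, and the shapes and `k` are arbitrary:

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k):
    """Toy sparse attention: each query attends only to its k highest-scoring
    keys instead of the full sequence, cutting cost for long contexts.
    (Generic top-k selection, not DeepSeek's DSA design.)"""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (n_q, n_k) scaled dot products
    out = np.zeros_like(Q)
    for i, row in enumerate(scores):
        keep = np.argsort(row)[-k:]           # indices of the k best keys
        w = np.exp(row[keep] - row[keep].max())
        w /= w.sum()                          # softmax over the kept keys only
        out[i] = w @ V[keep]                  # weighted sum of kept values
    return out

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(16, 8))
V = rng.normal(size=(16, 8))
out = topk_sparse_attention(Q, K, V, k=4)
print(out.shape)  # (4, 8)
```

With k equal to the full key count, this reduces to ordinary dense softmax attention; the savings come from choosing k much smaller than the context length.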
Looking back 10 years from now at how DeepSeek kept China from falling behind America, the answer will be open source
News from Jiemian on September 28: on September 27, CEO Lee Reacon said at the 20th-anniversary homecoming celebration of the Cheung Kong CEO program that DeepSeek's central contribution to China's AI development was promoting the open-source ecosystem. "If, 10 years from now, we look back at how DeepSeek kept China from falling behind the United States, the answer will not be its technical capability per se, but that it ushered in an era of Chinese (large-model) open source." He added that since DeepSeek went open source, a number of domestic companies have been open-sourcing.. -
Writing papers with AI: finish your first draft in three days with DeepSeek
Whether you are an undergraduate, a graduate student, a newly appointed teacher, or a research newcomer, do you often feel stuck when writing papers? Literature is hard to read, hard to digest, and the writing itself is slow. Today we'll show you how to finish a first draft quickly and efficiently with DeepSeek! DeepSeek: not just a chatbot- 10.9k
-
DeepSeek statement: beware of fraud committed under DeepSeek's name
News from September 19: DeepSeek issued an official statement saying that recently, criminals have been impersonating DeepSeek officials or employees, forging work badges, business licenses, and the like, and defrauding users on various platforms under names such as "compute power leasing" and "equity financing". Such acts seriously infringe users' interests and damage the company's reputation. DeepSeek has never asked users to pay into personal or unofficial accounts; any request for a private transfer is fraud- 1.3k
-
DeepSeek-R1 paper makes the cover of Nature, with Liang Wenfeng as corresponding author
News from September 18: the DeepSeek-R1 research paper on reasoning models, jointly completed by the DeepSeek team with Liang Wenfeng as corresponding author, appeared on the cover of issue 645 of the authoritative international journal Nature. Compared with the first version of DeepSeek-R1 released in January this year, the paper discloses more details of model training. DeepSeek-R1 is also reportedly the world's first mainstream large language model to pass peer review. Nature commented that almost all mainstream models..- 2.6k
-
QuestMobile report: Doubao overtakes DeepSeek among China's native AI apps in August
News from September 16: QuestMobile's August 2025 AI application industry monthly report, released today, shows that as of August 2025, native AI apps from internet and AI technology companies had 277 million users, in-app AI plugins had 622 million, and the two categories of AI applications together reached 645 million; phone makers' AI assistants had 529 million users; PC applications had 204 million users, of which web users were -- 2.5k
-
Weibo AI search gets a first-level entry, with options for deep thinking, image upload, and more
News from September 12: the Weibo search interface was recently updated, giving Weibo's AI search a first-level entry; the relevant buttons appear when tapping the search box at the top of the home page. It offers three sections: deep thinking, image upload, and file upload. The deep thinking option lets users choose the DeepSeek-R1 model or the Quest T1 model; alternatively, a quick-answer option provides ordinary responses. Uploading or taking a photo lets users ask questions about images, with support for functions such as image recognition, problem solving, and product search. Uploading a file or document produces a summary..- 1.8k
-
Baidu releases ERNIE X1.1 deep thinking model; overall performance surpasses DeepSeek R1
News from September 9: at the WAVE SUMMIT Deep Learning Developer Conference 2025 in Beijing today, Baidu's chief technology officer, who also directs the national engineering research center for deep learning technology and applications, officially released the ERNIE X1.1 deep thinking model. According to the introduction, X1.1 is a deep-thinking model trained on ERNIE 4.5 and further upgraded, with significant improvements in factuality, instruction following, agent capability, and more. 1AI notes that users can already..- 2.6k
-
Source: DeepSeek developing a more advanced model with AI agent capabilities, aiming for year-end release
News from September 5: Bloomberg reported today, citing people familiar with the matter, that DeepSeek is developing an AI model with more advanced agent capabilities, in an effort to compete with U.S. rivals such as OpenAI at the new frontier of the technology. The model is designed to complete multi-step operations on the user's behalf with only minimal instructions, and to keep learning and improving from its previous actions. Sources say DeepSeek founder Liang Wenfeng is driving the team toward releasing the new product in the last quarter of this year. In January, D... -
Tencent Yuanbao integrates the latest DeepSeek V3.1; available now on desktop and web
News from August 23: Tencent Yuanbao announced yesterday that it has officially integrated the latest DeepSeek V3.1, available first in the desktop and web versions. According to the official introduction, this model update brings two major breakthroughs. Faster thinking: compared with the previous generation, DeepSeek V3.1-Think can give answers in a shorter time, helping you capture inspiration faster and finish work efficiently. Stronger agent capability: the new model dramatically improves tool usage and agent performance, helping you handle complex tasks with ease. 1AI notes that DeepSeek...- 1.4k
-
DeepSeek-V3.1 Released, Officials Explain First Steps Toward AI Agent Era
News from August 22: DeepSeek officially released DeepSeek-V3.1 yesterday. The upgrade includes the following major changes. Hybrid reasoning architecture: one model supports both thinking and non-thinking modes. Higher thinking efficiency: compared with DeepSeek-R1-0528, DeepSeek-V3.1-Think can give answers in a shorter time. Stronger agent capability: through post-training optimization, the new model's performance in tool usage and agent tasks is greatly improved. ...- 1.8k
-
OpenAI CEO: We're open source because of DeepSeek
News from early this morning: according to CNBC, OpenAI CEO Sam Altman warned that the U.S. may be underestimating the complexity and severity of China's progress in AI, and that export controls alone may not be a reliable solution: "I'm worried about China." He said the U.S.-China AI race is intricate and complex, with implications far more important than who simply leads or lags in scores. Altman noted that "China is probably developing faster in reasoning capabilities, in addition to many other dimensions such as research and products, so you can't simply say who's ahead between the U.S. and China." Although the U.S. ...- 1.5k
-
AGENCY: DeepSeek's Traffic Share Has Fluctuated Significantly Over the Past Year
Recently, data agency Similarweb released the "Global AI Tracker 8.5" report, which shows that in the past year up to August 1, 2025, the trend of visits to generative AI platforms has diverged significantly. According to Similarweb's data, DeepSeek's traffic share has experienced significant fluctuations over the past year: 12 months ago, DeepSeek had not yet entered the top of the list; while six months ago, its share once climbed to 9.2%...- 1.7k
-
Jensen Huang praises DeepSeek, says China's pace of innovation can't be stopped
News from July 21: according to CCTV News, NVIDIA founder and CEO Jensen Huang praised DeepSeek in an interview with CCTV's "Face to Face" program, saying that AI is an extremely complex system and China's capacity for innovation is astonishing. In Huang's view, the pace of Chinese innovation cannot be stopped, and he believes NVIDIA can make an important contribution. AI is an extremely complex system, as layered as a multi-tier cake: the chip is only the bottom layer; above it sit systems, networking, AI infrastructure, software, and AI algorithms, with applications at the very top...- 24k
-
Kimi K2 takes first place among open-source models, overtaking DeepSeek R1
News from July 19: LMArena, a leading large-model leaderboard, released its latest rankings, with the recently released Kimi K2 taking the top spot among open-source models, surpassing DeepSeek R1. According to LMArena, Kimi K2 ranked fifth in LMArena's overall ranking based on its performance and 3,000 community votes. Notably, Kimi K2 and DeepSeek R1 are the two Chinese models in LMArena's top 10, but...- 28.6k