-
OpenAI reportedly preparing to launch an open-source AI model next week with o3-mini-level reasoning capability
July 10 news: tech outlet The Verge published a post today (July 10) reporting that OpenAI is preparing to launch an open-source AI model, a move that could deepen the rift between the company and Microsoft. OpenAI is reportedly renegotiating its partnership with Microsoft as it seeks to restructure into a for-profit company, and releasing an open-source large language model at this juncture could drive an even bigger wedge between the two. Note: OpenAI has long been the archetype of closed-source AI models; previously…
-
Skywork-R1V 3.0 Released and Open-Sourced by Kunlun Wanwei; Multimodal Reasoning Approaches Human-Expert Level
July 9 news: Kunlun Wanwei (Kunlun Tech) announced the launch and open-sourcing of the latest Skywork-R1V 3.0. According to the company, Skywork-R1V 3.0 uses reinforcement learning strategies in the post-training phase to deeply activate the model's cross-modal reasoning ability, achieving a dual leap in complex logical modeling and cross-disciplinary generalization. Skywork-R1V 3.0 builds on the previous-generation reasoning model, Skywork-R1V 2.0, distilling data for a "cold start" and constructing high-quality multimodal reasoning training via rejection sampling…
WebSailor, Alibaba Tongyi's Open-Source Web Agent, Tops the Open-Source Web-Agent Leaderboard
July 7 news: Alibaba Cloud announced today that Tongyi has officially open-sourced the web agent WebSailor. The agent has strong reasoning and retrieval capabilities, and after release it topped the open-source web-agent leaderboard on the agent benchmark BrowseComp. 1AI notes that WebSailor's construction pipeline and part of its dataset are already open-sourced on GitHub. According to Alibaba Cloud, WebSailor can handle retrieval tasks in complex scenarios: for ambiguous questions it can quickly search across different web pages and reason over the results to verify…
-
Bilibili open-sources AniSora V3, an anime video generation model: faster, higher quality
July 7 news: AniSora, the open-source anime video generation model from the Bilibili team, was updated on July 2 to the AniSora V3 preview. As part of the Index-AniSora project, V3 further improves generation quality, motion smoothness, and style diversity, giving anime, manga, and VTuber content creators a more powerful tool. AniSora supports one-click generation of video footage in a wide range of anime styles, including drama clips, Chinese animation, manga adaptations, VTube…
-
ByteDance open-sources Trae-Agent, the core component of its AI IDE tools
July 7 news: Trae, ByteDance's AI-native integrated development environment (IDE), announced on July 4 that it has officially open-sourced its core component, Trae-Agent. Trae also said it is looking for active users and agent developers willing to contribute to building an open agent ecosystem. According to the GitHub page, Trae Agent is an LLM-based agent for general software engineering tasks. It provides a CLI that understands natural-language commands and uses various…
-
Apple Releases DiffuCode-7B-cpGRPO Programming AI Model: Based on Qwen 2.5-7B, Can Generate Code Out of Order
July 5 news: Apple quietly released an open-source AI model called DiffuCode-7B-cpGRPO on Hugging Face. Its innovation is generating code out of order, with performance comparable to top open-source coding models. Note: traditional large language models (LLMs) generate code in a left-to-right, top-to-bottom order, much the way most humans read text, mainly because these LLMs use autoregression…
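To make the contrast concrete, here is a toy sketch (purely illustrative, with made-up confidence scores; it is not Apple's actual DiffuCode algorithm): an autoregressive decoder commits to positions strictly left to right, while a diffusion-style decoder may fill positions in any order, for example highest-confidence first.

```python
# Toy illustration of decoding order, NOT the real DiffuCode method.

def autoregressive_order(n):
    """An autoregressive decoder always fills positions left to right."""
    return list(range(n))

def confidence_first_order(confidences):
    """A diffusion-style decoder may fill the most confident positions first."""
    return sorted(range(len(confidences)), key=lambda i: -confidences[i])

scores = [0.2, 0.9, 0.5, 0.8]          # hypothetical per-position confidences
print(autoregressive_order(4))          # [0, 1, 2, 3]
print(confidence_first_order(scores))   # [1, 3, 2, 0]
```

The point of the sketch is only the ordering freedom: an out-of-order decoder can write the "easy" parts of a code snippet first and fill in the rest later.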
-
Alibaba Tongyi open-sources its first audio generation model, ThinkSound: it thinks like a "professional sound engineer"
July 4 news: Alibaba's "Tongyi Large Model" official account announced today that ThinkSound, the first audio generation model from Tongyi Lab, is now officially open source, breaking through the limits of "silent footage". ThinkSound applies chain-of-thought (CoT) reasoning to audio generation for the first time, letting the AI learn to reason step by step about the relationship between on-screen events and sound. This enables high-fidelity, tightly synchronized spatial audio generation: not just "dubbing the picture" but genuinely "understanding the picture". …
-
Zhipu Open-Sources Next-Generation General-Purpose Visual Language Model
Yesterday, Zhipu officially launched and open-sourced GLM-4.1V-Thinking, a new generation of general-purpose visual language model, which it calls "a key leap from perception to cognition for the GLM series of visual models". Specifically, GLM-4.1V-Thinking is a general-purpose reasoning large model that supports multimodal inputs such as images, videos, and documents, and is designed for complex cognitive tasks. Building on the GLM-4V architecture, it introduces chain-of-thought (CoT) reasoning and adopts a reinforcement learning with curriculum sampling (RLCS) strategy, which systematically…
-
Microsoft Open Sources GitHub Copilot Chat Extension for VS Code to Help Automate AI Programming
July 2 news: tech media outlet BleepingComputer published a blog post yesterday (July 1) reporting that Microsoft has open-sourced the GitHub Copilot Chat extension for Visual Studio Code under the MIT license. This means the development community can now inspect the complete implementation of the chat-based coding assistant, including how "agent mode" is implemented, what contextual data is sent to the large language model (LLM), and how the system prompts are designed. Microsoft has made the G…
-
Together with Huawei, Shanghai Ruijin Hospital Open Sources RuiPath Pathology Models
June 30 news: Shanghai Ruijin Hospital and Huawei today open-sourced the core visual base model of the RuiPath pathology model, which covers seven major high-incidence cancer types, including lung cancer and colorectal cancer, together accounting for 90% of China's new cancer cases each year. According to reports, the model was first released on February 18 and is built on Huawei's DCS AI solution; the open-sourced content includes the core visual architecture and multi-cancer test datasets, marking an important breakthrough for digital pathology AI in China. As the first pathology model open-sourced by a medical institution in Shanghai, RuiPa…
-
Huawei Announces Open-Sourcing of Pangu 7B Dense Model and 72B Mixture-of-Experts Model
June 30 news: Huawei today officially announced the open-sourcing of the Pangu 7-billion-parameter dense model, the Pangu Pro MoE 72-billion-parameter mixture-of-experts model, and Ascend-based model inference technology. Huawei said, "This move is another key step for Huawei in carrying out its Ascend ecosystem strategy, promoting research and innovation in large model technology, and accelerating the application and value creation of AI across thousands of industries." The Pangu Pro MoE 72B model weights and base inference code are now live on the open-source platform. The ultra-large-scale MoE model inference code based on Ascend…
-
Baidu officially open-sources the Wenxin (ERNIE) 4.5 model family
June 30 news: Baidu today officially open-sourced the Wenxin 4.5 model family, ten models in total, covering mixture-of-experts (MoE) models with 47B and 3B activated parameters as well as a 0.3B dense model, with pre-training weights and inference code fully open-sourced. The Wenxin 4.5 open-source series can currently be downloaded and deployed from platforms such as the PaddlePaddle AI Studio (Xinghe) community and Hugging Face, and the open-source models are also available as API services on Baidu Intelligent Cloud's Qianfan large model platform. Wenxin 4.5 was released in …
Industry first: Tencent releases and open-sources the Hunyuan-A13B model, deployable on a single low-end GPU in extreme conditions
June 27 news: Tencent's Hunyuan model family today announced its newest member, Hunyuan-A13B, the industry's first 13B-activation-level open-source hybrid reasoning MoE model. Built on a mixture-of-experts (MoE) architecture with 80 billion total parameters and 13 billion activated parameters, Hunyuan-A13B is claimed to be "comparable to top open-source models in effectiveness, while dramatically reducing inference latency and computational overhead". Tencent Hunyuan said this is good news for individual developers and small and medium-sized enterprises (SMEs); in extreme conditions only…
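The 80B-total / 13B-active split is the defining property of MoE models: a gate routes each token to only a few experts, so most parameters sit idle per token. A minimal sketch of top-k routing (toy sizes and gate values, not Tencent's implementation):

```python
import numpy as np

# Toy sketch of mixture-of-experts (MoE) top-k routing.
# Sizes and gate logits are hypothetical, for illustration only.

def topk_route(gate_logits, k):
    """Indices of the k experts with the highest gate scores for one token."""
    return np.argsort(gate_logits)[-k:][::-1]

def moe_active_fraction(n_experts, k):
    """Fraction of expert parameters that run per token with top-k routing."""
    return k / n_experts

gate = np.array([0.1, 2.0, -0.5, 1.2])  # hypothetical gate logits, 4 experts
print(topk_route(gate, 2))               # experts 1 and 3 are activated
print(moe_active_fraction(16, 2))        # only 1/8 of experts run per token
```

This is why an A13B-class model can approach the quality of a much larger dense model while keeping per-token compute, and hence inference cost, close to that of a 13B model.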
-
First in China: NetEase Youdao open-sources the "Confucius3-Math" model, runnable on a single consumer GPU
June 23 news: NetEase Youdao today announced the open-sourcing of Confucius3-Math, the mathematics model in its "Confucius3" large model series, claiming it is the first open-source reasoning model focused on mathematics education that runs efficiently on a single consumer-grade GPU. NetEase Youdao's official test data show that on datasets such as CK12-math (internal), GAOKAO-Bench (Math), MathBench (K12), and MATH500, the 14B lightweight Confucius3-Math …
-
MiniMax Releases M1, the World's First Open-Source Large-Scale Hybrid-Architecture Reasoning Model: 456B Parameters, Outperforms DeepSeek-R1
On June 17, MiniMax announced that it will release important updates for five consecutive days, starting today with the open-source reasoning model MiniMax-M1. According to the official introduction, MiniMax-M1 is the world's first open-source large-scale hybrid-architecture reasoning model. MiniMax says M1 is the best open-source model for complex, productivity-oriented scenarios, surpassing domestic closed-source models and approaching the most advanced overseas models, while offering the industry's best price/performance ratio. The official blog also mentioned that M1 rests on two major technological innovations, Min…
-
Harvard open-sources AI training dataset 'Institutional Books 1.0', covering 983,000 books in its collection
With support from Microsoft and OpenAI, the Harvard Law School Library last week officially open-sourced its first open dataset for AI training, "Institutional Books 1.0". The dataset is said to contain 983,000 books from Harvard's collection, covering 245 languages and totaling 242 billion tokens. 1AI attached the project address (https://huggingface.co/datasets/institutional/institutional-). …
-
Tencent open-sources the Hunyuan 3D 2.1 large model: the first fully open-sourced industrial-grade 3D generation model, and it even runs on a PC
June 14 news: Tencent's official account announced early this morning that at CVPR 2025 (one of the top conferences in computer vision), Tencent open-sourced the Hunyuan 3D 2.1 model, the industry's first industrial-grade 3D generation model to be open-sourced across the full pipeline. The new model not only improves the quality of geometry generation but also opens up the PBR (physically based rendering) material generation model, further improving the texture, lighting, and shading of 3D assets and saying goodbye to the "plastic look". According to the official introduction, the new model improves detail modeling, yielding higher mesh accuracy and better topological consistency…
French AI Lab Mistral Launches the Magistral Family of Reasoning Models; the Small Version Is Open Source
June 11 news: French AI lab Mistral announced Tuesday that it is entering the field of reasoning AI models. On June 10, Mistral officially launched Magistral, its first family of reasoning models, which solves problems step by step to improve consistency and reliability in disciplines such as mathematics and physics, similar to other reasoning models such as OpenAI's o3 and Google's Gemini 2.5 Pro. The Magistral series comes in two versions: Magist…
-
Alibaba open-sources new Qwen3 models, Embedding and Reranker, bringing strong multilingual and cross-lingual support
June 6 news: early this morning Alibaba open-sourced the Qwen3-Embedding series of models (Embedding and Reranker), designed for text representation, retrieval, and ranking tasks and trained on the Qwen3 base model. According to the official announcement, the Qwen3-Embedding family has demonstrated excellent performance on text representation and ranking tasks across multiple benchmarks. The family has the following characteristics: excellent generalization: the Qwen3-Embedding family achieves industry…
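An embedding model and a reranker typically work as a two-stage pipeline: embeddings retrieve a broad candidate set cheaply, then the reranker rescores the top hits more accurately. A minimal sketch with made-up vectors and reranker scores (illustrative only, not the Qwen3-Embedding API):

```python
import numpy as np

# Retrieve-then-rerank sketch. All vectors and scores are hypothetical.

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stage 1: embedding retrieval ranks documents by similarity to the query.
query = np.array([1.0, 0.0])
docs = {"d1": np.array([0.9, 0.1]), "d2": np.array([0.1, 0.9])}
candidates = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)

# Stage 2: a reranker rescores the shortlist (scores here are invented).
rerank_scores = {"d1": 0.95, "d2": 0.30}
final = sorted(candidates, key=rerank_scores.get, reverse=True)
print(final)  # ['d1', 'd2']
```

The design point is cost: the embedding pass is a cheap vector comparison over the whole corpus, while the more expensive reranker only sees the shortlist.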
-
Microsoft Open-Sources Athena Agent: AI Reinvents the Teams Workflow, Code PR Reviews Up 58%
June 5 news: Microsoft published a blog post yesterday (June 4) announcing that the Teams application has integrated an AI agent called Athena to streamline the product development process, and that the related source code has been open-sourced and hosted on GitHub for organizations and individuals to customize. 1AI cited the blog post as saying that Microsoft publicly demonstrated Athena to developers at the Build 2025 developer conference; without requiring users to switch between multiple applications, Athena can intelligently determine the next step in the workflow, directly assisting team members in the …
-
Xiaomi open-sources multimodal large model MiMo-VL, officially claimed to lead Qwen2.5-VL-7B in many areas
May 30 news: Xiaomi MiMo's official account announced that MiMo-VL, Xiaomi's large multimodal model, is now officially open source. Officially, it significantly outperforms the same-size benchmark multimodal model Qwen2.5-VL-7B on a number of tasks, including general Q&A and comprehension and reasoning over images, videos, and language, and it matches dedicated models on the GUI grounding task, positioning it for the agent era. MiMo-VL-7B retains the text-only reasoning capability of MiMo-7B while …
-
Hugging Face Launches HopeJR and Reachy Mini, Open Source Humanoid Robots
TechCrunch published a blog post on May 29 reporting that AI development platform Hugging Face has pushed further into robotics with the launch of its latest open-source humanoid robots, HopeJR and Reachy Mini. HopeJR is a full-sized robot with 66 degrees of freedom, capable of walking, waving its arms, and other complex movements, while Reachy Mini is a smaller, desktop robot…
Alibaba open-sources autonomous search AI agent WebAgent
May 30 news: Alibaba yesterday open-sourced on GitHub its innovative autonomous search AI agent, WebAgent, which has end-to-end autonomous information retrieval and multi-step reasoning capabilities and can, like a human, actively perceive, decide, and act in a web environment. For example, when a user wants to understand the latest research results in a particular field, WebAgent can actively search multiple academic databases, filter out the most relevant literature, and conduct in-depth analysis and summarization according to the user's needs. According to the introduction, WebAgent can not only recognize the literature in…
-
Ant Group Officially Open-Sources Ming-lite-omni, a New Unified Multimodal Large Model from Its Bailing (Ling) Family
On May 28, Ant Group's Bailing (Ling) large-model team officially open-sourced the unified multimodal large model Ming-lite-omni. According to the introduction, Ming-lite-omni is an omni-modal model built on Ling-lite's MoE architecture, with 22B total parameters and 3B activated parameters, supporting cross-modal fusion and unified understanding and generation. In several understanding and generation evaluations, Ming-lite-omni, with only 3B of activated parameters, performs on par with 10B…