-
Jellyfish: a one-stop AI tool for generating short dramas (mini-dramas / micro short videos), turning a script into a storyboard with one click
Jellyfish is a one-stop AI comic-style video production tool that automatically generates storyboards, characters, scenes, and complete videos from novel text. Its overall positioning is an "AI shorts factory": the core goal is to upgrade short-drama production from a manual or semi-automatic workflow to an industrial pipeline. The tool is not only fully open source, supporting local deployment and secondary development, but at the technical level it also targets the biggest pain points of AI video generation head-on. Jellyfish features: Script input: just provide a text script, with Chinese and English supported. Smart storyboarding..- 499
-
NemoClaw: open-source platform and tool for deploying secure AI assistants
NVIDIA NemoClaw is an open-source tool for deploying secure AI assistants. It installs with a single command, helping users quickly build and run secure, autonomous AI assistants for a variety of scenarios. NemoClaw strengthens AI-assistant security while simplifying deployment. NemoClaw features: One-command secure deployment: rapidly deploy a secure, persistent AI assistant with a single command. NemoClaw combines security and privacy controls, making it easier for developers to build and run AI assistants. Supports..- 1k
-
OpenAI offers open-source project developers six months of free ChatGPT Pro, with no hard requirements such as star counts or monthly downloads
On March 7, OpenAI announced the Codex open-source program, which offers open-source project maintainers and developers a six-month free ChatGPT Pro subscription. OpenAI said that open-source maintainers quietly carry out important work for the global software ecosystem, and that over the past year the Codex Open Source Fund has supported a number of projects needing API credits, totaling $1 million (note: roughly RMB 6.917 million at the current exchange rate). Meanwhile, getting a free ChatGPT Pro..- 667
-
Alibaba CEO confirms Lin Junyang's departure: the open-source strategy is unchanged and AI investment keeps growing
Yesterday, Alibaba CEO Wu Yongming sent an internal email to all staff of the Tongyi Lab, formally confirming that technical lead Lin Junyang has left the company. In the letter, Wu Yongming announced that the company would set up a foundation-model support group, coordinated by Wu Yongming himself, Tongyi Lab head Zhou Jingren, and Fan Xian, to support the building of the foundation model. APPSO understands that a new round of organizational and strategic adjustment is under way inside Alibaba, with plans to upgrade the base model across the board and to recruit top technical talent at scale- 616
-
Alibaba desktop Agent tool Copaw goes open source: free access to local models; supports DingTalk, Feishu, QQ, and more
On March 2, Alibaba Cloud announced that its Agent tool Copaw is now officially open source. Developers can build on Copaw for secondary development, connect local models for free, write Skills, and integrate proprietary messaging apps to meet more customized scenario needs. Copaw reportedly has native support for chat software and platforms such as DingTalk, Feishu, QQ, Discord, and iMessage, ships with multiple Skills, and can be deployed locally with one click or to the cloud with one click via Alibaba Cloud's Compute Nest and the ModelScope community..- 1.8k
-
NanoClaw: open-source lightweight personal AI assistant, a safer OpenClaw alternative
NanoClaw is an open-source AI assistant and a lightweight alternative to OpenClaw: each Agent runs in its own sandbox and can only access explicitly mounted directories. NanoClaw supports multi-channel access via WhatsApp, Telegram, Discord, and others, and pioneers Agent Swarms cluster-collaboration capability for personal AI assistants. NanoClaw abandons traditional configuration: users issue natural-language commands to have Claude Code modify the source code directly for customization..- 2.9k
-
Ant Group releases and open-sources the full-modality large model Ming-Flash-Omni 2.0: sees better, hears better, and is more stable
On February 11, Ant Group open-sourced the full-modality large model Ming-Flash-Omni 2.0. In a number of public benchmarks, the model stands out in key capabilities such as visual-language understanding, controllable speech generation, and image generation and editing. Ming-Flash-Omni 2.0 is described as the industry's first unified audio generation model able to generate speech, ambient sound, and music in the same track. Using only natural-language instructions, users can precisely control timbre, speed, tone, volume, emotion, and dialect. The model..- 3.6k
-
Moonshot AI launches its strongest open-source model
On January 28, Moonshot AI officially released the latest version of its flagship model, Kimi K2.5, delivering across-the-board upgrades in vision, multimodal understanding, code generation, and agentic capability. Kimi K2.5 reportedly uses a native multimodal architecture, supports text, image, and video input, and can perform tasks such as image analysis, video analysis, and visual programming. Official demos show the model can generate 3D models from floor plans, reconstruct web interfaces from video, and achieve higher accuracy in path planning and visual debugging on image-reasoning tasks..- 1.3k
-
Alibaba Cloud open-sources the 6B-parameter Z-Image Base model: generated photos shed the telltale "AI face"
On January 28, Alibaba's Tongyi team officially launched the Z-Image Base model. The model is 6B in size, preserves the full weight distribution as a non-distilled base model, supports the CFG guidance mechanism, and provides a training base for fine-tuning tasks such as LoRA and ControlNet. Z-Image claims to break through the expressive limits of a single dimension: whether it is photorealism chasing light and shadow, or dynamic, emotionally charged digital art, Z-Image captures and reconstructs every..- 1.8k
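The CFG (classifier-free guidance) mechanism mentioned above is a standard diffusion-model technique: at each denoising step the model predicts noise twice, once with the prompt and once without, and the two predictions are blended so the prompt's influence can be dialed up. A minimal sketch of that blending rule, with toy numbers that have no relation to Z-Image's actual implementation:

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: push the noise prediction away from the
    unconditional estimate, toward the prompt-conditioned one.
    eps = eps_uncond + s * (eps_cond - eps_uncond)."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy predictions standing in for the model's two forward passes.
u = np.array([0.1, -0.2])   # unconditional noise estimate
c = np.array([0.3,  0.0])   # prompt-conditioned noise estimate

print(cfg_combine(u, c, 1.0))  # s = 1 reproduces the conditional prediction exactly
print(cfg_combine(u, c, 3.0))  # s > 1 amplifies the prompt's influence
```

Supporting CFG matters for a base model because fine-tunes such as LoRA or ControlNet inherit the same guidance interface at inference time.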
-
Clawdbot is here: set yourself up a 24/7 AI assistant
A recently open-sourced AI assistant, Clawdbot, has been extremely popular: it runs on a server around the clock, and users message it through an instant-messaging platform to direct it to do all kinds of work. Descriptions alone don't convey its ability, so an example: Clawdbot is far better at "getting things done" than an ordinary AI chatbot; in the case above, it handled all the work of downloading the YouTube video itself. How do you get one? I. Prerequisites for deploying Clawdbot. 1. Telegram: this is the simplest, officially recommended..- 4.9k
-
Clawdbot installation and getting started (a beginner's guide)
What is Clawdbot? Clawdbot is an open-source, self-hosted AI assistant framework maintained by community developers (site: clawd.bot). It lets you integrate AI models (e.g., Anthropic Claude, OpenAI GPT, or other API-supported models) into chat applications. Through natural-language conversation, you can have the AI execute server commands, read and write files, search the web, manage calendars, send email, control other services, and even access a phone camera or send push notifications..- 39.7k
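At its core, a self-hosted assistant of this kind is a loop: receive a chat message, let a model decide whether it maps to a tool or shell action, run only allow-listed commands, and reply with the output. A minimal sketch of that pattern; every name here is hypothetical and the model call is stubbed out, so this is not Clawdbot's actual code:

```python
import subprocess

# Hypothetical allow-list; real frameworks gate command execution similarly.
ALLOWED = {"uptime", "df", "date"}

def handle_message(text: str) -> str:
    """Toy dispatcher: if the (stubbed) model decides the user wants a shell
    command, run it only when allow-listed, and return its output."""
    # Stand-in for a real model call that maps user intent to a tool invocation.
    cmd = text.strip().removeprefix("run ").split()[0] if text.startswith("run ") else None
    if cmd is None:
        return "chat reply (model call elided)"
    if cmd not in ALLOWED:
        return f"refused: '{cmd}' is not on the allow-list"
    result = subprocess.run([cmd], capture_output=True, text=True, timeout=10)
    return result.stdout or result.stderr

print(handle_message("run date"))  # executes an allow-listed command
print(handle_message("run rm"))   # refused: not allow-listed
```

The allow-list is the safety-critical piece: letting a chat message trigger arbitrary shell commands on your server is exactly the risk such frameworks must contain.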
-
Coderrr: a powerful open-source AI coding assistant CLI tool that writes, debugs, and ships code
Coderrr is an open-source AI coding assistant designed to accelerate development. From natural-language descriptions it generates, debugs, and deploys code, applying to a variety of development scenarios. Coderrr features: AI-driven code generation: generates production-ready code directly from natural-language descriptions, simplifying development. Self-healing error recovery: automatically analyzes errors and retries with fixes, improving code quality and development efficiency. Multi-turn iterative development: work proceeds through natural dialogue, making the process more fluid and closer to how people think. Smart codebase understanding:..- 1.4k
-
Musk announces X has open-sourced its new recommendation algorithm: core code fully public
On January 21, it was reported that Musk had announced the day before that X officially open-sourced its new recommendation algorithm, simultaneously publishing the complete code repository on GitHub. The X engineering team wrote that the new algorithm is built on the same Transformer architecture as xAI's Grok model and covers all of the platform's core recommendation logic for ranking "organic content" and "advertising content." Musk added that X will update the algorithm every four weeks going forward, accompanied by developer notes, so that outsiders can follow changes to the recommendation mechanism. He..- 1.2k
-
Former Google CEO Schmidt: Europe must either invest in open-source AI or depend on Chinese models
On January 21, according to Bloomberg, former Google CEO and technology investor Eric Schmidt said on Tuesday that Europe must invest in building its own open-source AI labs and fix its soaring energy prices, or it will soon find itself dependent on Chinese models. Speaking at the World Economic Forum in Davos, Schmidt said: "In the United States, companies are largely turning to closed source, which means these technologies will be bought, licensed, and so on. Meanwhile, China's approach is largely open-weight and open-source. Unless Europe is willing..- 5.8k
-
StepFun open-sources Step3-VL-10B, with performance rivaling hundred-billion-parameter models
On January 21, StepFun open-sourced Step3-VL-10B. With only 10B parameters, Step3-VL-10B reportedly reaches same-scale SOTA in a series of benchmarks covering visual perception, logical reasoning, math competitions, and general dialogue. 1AI excerpts the official introduction as follows: with only 10B parameters, Step3-VL-10B in visual perception,.. -
Zhipu's GLM-4.7-Flash model released and open-sourced, free to use
On January 20, Zhipu officially released and open-sourced the GLM-4.7-Flash model. GLM-4.7-Flash is a hybrid-reasoning model with 30B total parameters and 3B activated parameters, claimed to be SOTA at its scale, offering a new option for lightweight deployment that balances performance and efficiency. Starting today, GLM-4.7-Flash replaces GLM-4.5-Flash, going live on the Zhipu open platform BigModel.cn and available to call for free..- 1.7k
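The "30B total / 3B activated" split comes from a mixture-of-experts (MoE) design: the model stores many expert feed-forward blocks, but a router sends each token through only a few of them, so the compute per token tracks the activated count, not the total. A toy sketch of that routing idea; the dimensions and expert counts below are made up for illustration and are not GLM-4.7-Flash's real configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen small enough to run instantly.
d_model, d_ff = 64, 256
n_experts, top_k = 8, 1

# Each expert is a small feed-forward block: W_in (d_model x d_ff), W_out (d_ff x d_model).
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.02,
     rng.standard_normal((d_ff, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route each token to its top-k experts; only those experts' weights are used."""
    logits = x @ router                              # (tokens, n_experts) routing scores
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # indices of the top-k experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for e in top[t]:
            w_in, w_out = experts[e]
            out[t] += np.maximum(x[t] @ w_in, 0.0) @ w_out  # ReLU feed-forward
    return out

total = sum(w_in.size + w_out.size for w_in, w_out in experts)
active = top_k * (experts[0][0].size + experts[0][1].size)
print(total, active)  # total grows with n_experts; active is fixed by top_k
```

This is why an MoE model can claim same-scale SOTA quality while keeping inference cost closer to a small dense model: every token pays only for the experts it is routed to.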
-
Nano Banana Pro gets a new rival: Zhipu's multimodal SOTA model, the first fully trained on domestically made chips
On January 14, it was reported that Zhipu, jointly with Huawei, announced the open-sourcing of its new-generation image generation model GLM-Image. Completing the full pipeline from data to training on Atlas 800T A2 hardware and the MindSpore AI framework, it is the first SOTA multimodal model to finish full training on domestically made chips. GLM-Image unifies image generation with language models through an independently developed "autoregressive + diffusion decoder" hybrid architecture. 1AI excerpts GLM-Image's core highlights as follows.. -
"Shennong", the country's first open-source vertical large language model for general agriculture
On January 14, Nanjing Agricultural University officially announced that last week, at the "Technology Empowering the Full-Dimension Transformation of Agricultural and Forestry Education" sub-forum of the 2025 annual conference of the Higher Agricultural and Forestry Education Branch of the China Association of Higher Education, the university officially launched the Shennong large language model. Developed by Nanjing Agricultural University, it is the country's first open-source vertical large language model for the general field of agriculture. The release of the Shennong model marks a new breakthrough in the university's research on, and application of, AI foundation models in agriculture. "Shennong.. -
10 trillion tokens! NVIDIA contributes the world's largest open-source dataset and ships four open-source AI models
On January 6, in his CES 2026 keynote, NVIDIA CEO Jensen Huang announced a large-scale expansion of the company's open-source model lineup, releasing new models and datasets covering four major areas (language, robotics, autonomous driving, and medicine) and further accelerating industry-wide AI innovation. NVIDIA contributed open-source training frameworks and the world's largest open multimodal dataset, including 10 trillion language-training tokens, 500,000 robot trajectories, 455,000 protein structures, and 100 TB of vehicle sensor..- 1.4k
-
Tencent open-sources translation model 1.5: runs in 1GB of phone memory, surpasses commercial APIs
On December 31, it was announced that the open-sourced translation model 1.5 comprises two models, Tencent-HY-MT1.5-1.8B and Tencent-HY-MT1.5-7B, supporting translation across 33 languages plus 5 Chinese ethnic-minority languages and dialects. Beyond common languages such as Chinese, English, and Japanese, it also covers smaller languages such as Czech, Marathi, Estonian, and Icelandic. Both models are now live online and on open-source communities such as GitHub and Hugging Face..- 3.5k
-
Open-source virtual-human video generation model LongCat-Video-Avatar: lifelike even when it isn't speaking
On December 19, according to a post from the LongCat official account, Meituan's LongCat team officially released and open-sourced the SOTA-class virtual-human video generation model LongCat-Video-Avatar. Built on the LongCat-Video base, the model continues the core "One Model for Multitask" design, supporting core functions such as Audio-Text-to-Video, Audio-Text-Image-to-Video, and video continuation, as well as.. -
Xiaomi suddenly releases a new model that goes toe-to-toe with DeepSeek-V3.2
On December 17, the new MiMo-V2-Flash model was officially released and open-sourced. Looking first at performance: MiMo-V2-Flash has 30.9 billion total parameters and 15 billion active parameters, uses a mixture-of-experts (MoE) architecture, and can go toe-to-toe with frontier open-source models such as DeepSeek-V3.2 and Kimi-K2. Setting the "open source" label aside, MiMo-V2-Flash's real killer feature is a radical architectural innovation that pushes reasoning speed to 150 to..- 4.2k
-
Alibaba Tongyi launches new voice model versions: "clone" a voice in 3 seconds, across 9 languages and 18 dialects
On December 16, the Tongyi large model official account announced that two speech models were officially open-sourced and two models upgraded. According to the introduction, three seconds of your voice is enough to switch seamlessly among languages, dialects, and emotions (Mandarin, Chinese dialects, Japanese, English; happy, angry): nine languages and 18 dialects in all. Fun-CosyVoice3 model upgrade: first-packet latency cut by 50%, Chinese and English accuracy doubled, with support for 9 languages and 18 dialects, cross-lingual cloning, and emotion control; ..- 4.8k
-
VoxCPM 1.5 speech-generation AI model open-sourced: high-sample-rate audio cloning, doubled efficiency
On December 11, ModelBest announced that VoxCPM 1.5 is officially live, bringing a number of core capability upgrades while continuing to improve the developer experience. VoxCPM is a speech-generation base model with 0.5B parameters, first released in September this year. 1AI summarizes VoxCPM 1.5's update highlights: high-sample-rate audio cloning: the AudioVAE sampling rate rises from 16kHz to 44.1kHz, letting the model reproduce richer detail from high-quality audio; ..- 4.2k