Sand AI Releases Open-Source Video Generation Model MAGI-1; Tsinghua Special Prize Winner's Team Goes Viral Overnight

April 23, 2025

Another heavyweight open-source player has entered the video generation field. On April 21, 2025, Sand AI, the startup founded by Marr Prize and Tsinghua Special Prize winner Cao Yue, released its own large video generation model, MAGI-1. It is a world model that generates video by autoregressively predicting sequences of video chunks; its output is natural and smooth, and several versions are available for download.

According to the official description, video generated by MAGI-1 has the following characteristics:

1. Smooth and lag-free, with unlimited continuation. It can generate long, continuous video scenes in one pass, without awkward cuts or jarring splices, as smooth and natural as a film.

2. Precise timeline control. MAGI-1 is the only model with second-by-second timeline control: you can shape every second exactly as you envisioned it.

3. More natural, more lifelike motion. In many AI-generated videos, the on-screen action is either sluggish or stiff, with movements that are too small. MAGI-1 overcomes these problems, producing smoother, more dynamic motion and cleaner scene transitions.

MAGI-1 is built on a diffusion transformer architecture and introduces innovations such as block causal attention, parallel attention blocks, and sandwich normalization. It generates video efficiently chunk by chunk (24 frames per chunk), and its pipelined design supports parallel processing: up to four chunks can be generated concurrently, greatly improving throughput.

The model is released under the Apache 2.0 license, with code, weights, and inference tools available on GitHub and Hugging Face, giving developers worldwide a powerful creative tool.

Through a fast distillation technique, the model supports flexible inference budgets, and it excels at physical-behavior prediction and temporal consistency in long narratives and complex dynamic scenes. MAGI-1's unlimited video extension feature allows seamless continuation of video content; combined with second-by-second timeline control, users can achieve scene transitions and fine-grained editing through chunk-by-chunk prompting, meeting the needs of film production and storytelling.

In image-to-video tasks, the model delivers high-fidelity output at a native resolution of 1440x2568 px, with smooth motion and realistic detail. As an open-source model, MAGI-1 ships with Docker deployment support. The 24B-parameter version requires 8 H100 GPUs; a forthcoming 4.5B version will run on a single RTX 4090, lowering the barrier to entry.

Community feedback has praised its generation quality and instruction following, rating it above Kling 1.6 and Wan 2.1, though there is still room for improvement on non-photorealistic content.

In the highly competitive video generation space, MAGI-1 stands out for its open-source release and autoregressive architecture. Sand AI plans to release lighter versions and deepen hardware optimization, which may enable real-time generation, virtual reality, and other applications in the future.

GitHub: https://github.com/SandAI-org/Magi-1

Hugging Face: https://huggingface.co/sand-ai/MAGI-1
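The chunk-wise pipelined generation described earlier can be illustrated with a toy scheduling sketch. This is plain Python with no real model: the 24-frame chunk size and the four-chunk pipeline depth come from the article, while `DENOISE_STEPS` and the whole `generate` scheduler are hypothetical stand-ins for the actual diffusion process.

```python
from collections import deque

CHUNK_FRAMES = 24    # each chunk covers 24 frames (from the article)
PIPELINE_DEPTH = 4   # up to four chunks denoised concurrently (from the article)
DENOISE_STEPS = 8    # hypothetical number of denoising steps per chunk

def generate(num_chunks):
    """Simulate chunk-wise autoregressive generation with a pipelined schedule.

    Chunks enter the pipeline in order, at most PIPELINE_DEPTH at a time.
    On every tick, each in-flight chunk advances one denoising step; earlier
    chunks are always further along, so a later chunk can attend causally to
    the partially denoised prefix that precedes it.
    """
    in_flight = deque()   # each entry: [chunk_index, steps_done]
    finished = []
    next_chunk = 0
    while len(finished) < num_chunks:
        # Admit the next chunk if there is room in the pipeline.
        if next_chunk < num_chunks and len(in_flight) < PIPELINE_DEPTH:
            in_flight.append([next_chunk, 0])
            next_chunk += 1
        # One synchronous pipeline tick: every in-flight chunk takes a step.
        for chunk in in_flight:
            chunk[1] += 1
        # Chunks complete strictly in order (the front is always furthest along).
        while in_flight and in_flight[0][1] >= DENOISE_STEPS:
            finished.append(in_flight.popleft()[0])
    return finished

chunks = generate(10)
total_frames = len(chunks) * CHUNK_FRAMES  # 10 chunks x 24 frames = 240 frames
```

In the real model, each "step" would be a diffusion-transformer denoising pass over the chunk's latents; here it is just a counter, so the sketch shows only the scheduling that lets four chunks overlap, not the generation itself.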