{"id":30666,"date":"2025-03-13T17:31:14","date_gmt":"2025-03-13T09:31:14","guid":{"rendered":"https:\/\/www.1ai.net\/?p=30666"},"modified":"2025-03-13T17:31:14","modified_gmt":"2025-03-13T09:31:14","slug":"%e6%bd%9e%e6%99%a8%e7%a7%91%e6%8a%80%e6%8e%a8%e5%87%ba%e5%bc%80%e6%ba%90%e8%a7%86%e9%a2%91%e7%94%9f%e6%88%90%e6%a8%a1%e5%9e%8b-open-sora-2-0%ef%bc%8c%e6%80%a7%e8%83%bd%e6%8e%a5%e8%bf%91-openai-sora","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/30666.html","title":{"rendered":"Lucent Technologies Launches Open-Sora 2.0, an Open Source Video Generation Model with Performance Close to OpenAI Sora"},"content":{"rendered":"<p>Today, March 13th.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%bd%9e%e6%99%a8%e7%a7%91%e6%8a%80\" title=\"[Sees the article with the [morning technology] label]\" target=\"_blank\" >Lucent Technologies<\/a>Announcing the release of Open-Sora 2.0 and a comprehensive<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >Open Source<\/a>Model weights, inference code and distributed training of the whole process.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-30667\" title=\"221fd3f8j00st22el005od000u000g9p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/03\/221fd3f8j00st22el005od000u000g9p.jpg\" alt=\"221fd3f8j00st22el005od000u000g9p\" width=\"1080\" height=\"585\" \/><\/p>\n<p>It is described as a new open source SOTA <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%a7%86%e9%a2%91%e7%94%9f%e6%88%90%e6%a8%a1%e5%9e%8b\" title=\"_Other Organiser\" target=\"_blank\" >Video Generation Model<\/a>In the past few years, the company has successfully trained a commercial-grade 11B-parameter video generation model with only $200,000 (Note: Currently, it's about 1,449,000 RMB), or 224 GPUs, with a performance that matches that of Tencent's Mixed Meta and 30B-parameter Step-Video.<\/p>\n<p>According to Lucent 
Technologies, after upgrading from Open-Sora 1.2 to 2.0, the performance gap with the OpenAI Sora closed-source model \"has been reduced from 4.52% to only 0.69%, which is almost a full performance parity\".<\/p>\n<p>References:<\/p>\n<ul>\n<li><strong>GitHub Open Source Repository<\/strong>:: https:\/\/github.com\/hpcaitech\/Open-Sora<\/li>\n<li>Technical report: https:\/\/github.com\/ hpcaitech \/ Open-Sora-Demo \/ blob \/ main \/ paper \/ Open_Sora_2_tech_report.pdf<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>March 13 news, today, Lucent Technologies announced the launch of Open-Sora2.0, and fully open source model weights, inference code and distributed training of the entire process. According to reports, this is a new open source SOTA video generation model, only 200,000 U.S. dollars (Note: the current about 1.449 million yuan), that is, 224 GPUs to successfully train a large model of commercial-grade 11B parameter video generation, the performance of Tencent mixed yuan and 30B parameter Step-Video. 
Luchen Technology said that after upgrading from Open-Sora 1.2 to Open-Sora 2.0, the performance gap with the closed-source OpenAI Sora model \"has been reduced from 4.52% to only 0.69%\".<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[219,5964,460],"collection":[],"class_list":["post-30666","post","type-post","status-publish","format-standard","hentry","category-news","tag-219","tag-5964","tag-460"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30666","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=30666"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30666\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=30666"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=30666"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=30666"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=30666"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}