{"id":14339,"date":"2024-06-30T09:11:32","date_gmt":"2024-06-30T01:11:32","guid":{"rendered":"https:\/\/www.1ai.net\/?p=14339"},"modified":"2024-06-30T09:11:32","modified_gmt":"2024-06-30T01:11:32","slug":"sora%e5%bc%ba%e6%95%8c%e6%9d%a5%e5%95%a6%ef%bc%81runway%e7%9a%84-gen-3-alpha%e5%bc%80%e5%90%af%e6%b5%8b%e8%af%95","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/14339.html","title":{"rendered":"Sora&#039;s strong enemy is coming! Runway&#039;s Gen-3 Alpha begins testing"},"content":{"rendered":"<div class=\"pgc-img\" data-pm-slice=\"0 0 []\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14340\" title=\"get-814\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-814.jpg\" alt=\"get-814\" width=\"1108\" height=\"574\" \/><\/div>\n<p data-track=\"1\" data-pm-slice=\"1 1 []\">On June 29, the famous generative AI platform<a href=\"https:\/\/www.1ai.net\/en\/tag\/runway\" title=\"[See articles with [Runway] label]\" target=\"_blank\" >Runway<\/a>announced that its Bunsen video platform, Gen-3Alpha, is opening up to some users the<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%b5%8b%e8%af%95\" title=\"[See articles with [test] labels]\" target=\"_blank\" >test<\/a>.<\/p>\n<p data-track=\"2\">Gen-3Alpha is the latest product launched by Runway on the 17th of this month. Compared with the previous generation, it achieves substantial improvement in light and shadow, quality, composition, textual semantic restoration, physical simulation, and action consistency\/coherence, and refers to OpenAI's Sora.<\/p>\n<p data-track=\"14\">It should be noted that<strong>Gen-3 can't generate a soundtrack, the sounds for all these pieces are added by themselves<\/strong>. Currently, only Google's VideoFX can generate videos with music.<\/p>\n<p data-track=\"15\">Someone also made a short video with Gen-3 that focuses on the inspirational story of racing, dreaming, and never giving up. 
The story framing, camera movement, and close-ups all hold together as a complete short film.<\/p>\n<p data-track=\"27\">The following video demonstrates Gen-3's strong prompt comprehension; the author states that the prompt was \"hand-drawn pencil art style rabbit fur girl\".<\/p>\n<p data-track=\"28\"><strong>\"Rabbit fur\" here is a slip in the prompt; it should have been \"rabbit-eared girl\", yet Gen-3's final result is still spot-on.<\/strong><\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-14341\" title=\"get-815\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/06\/get-815.jpg\" alt=\"get-815\" width=\"554\" height=\"466\" \/><\/div>\n<p data-track=\"71\">Runway says it will soon be available to everyone as testing continues.<\/p>\n<p data-track=\"72\">Gen-3 address: https:\/\/runwayml.com\/blog\/introducing-gen-3-alpha\/<\/p>","protected":false},"excerpt":{"rendered":"<p>On June 29, Runway, a well-known generative AI platform, announced that its text-to-video platform Gen-3 Alpha has opened beta testing to some users. Gen-3 Alpha is Runway's latest product, launched on the 17th of this month. Compared with the previous generation, it achieves substantial improvements in lighting and shadow, image quality, composition, prompt fidelity, physics simulation, and motion consistency and coherence, squarely targeting OpenAI's Sora. Note that Gen-3 cannot generate background music; the sound in all of these works was added manually. Currently, only Google's VideoFX can generate videos with music. Someone also made a short video with Gen-3 centered on an inspirational story of racing, dreaming, and never giving up. 
Story framing, camera movement, close-ups<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[195,1694],"collection":[],"class_list":["post-14339","post","type-post","status-publish","format-standard","hentry","category-news","tag-runway","tag-1694"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/14339","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=14339"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/14339\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=14339"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=14339"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=14339"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=14339"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}