{"id":37147,"date":"2025-06-10T17:25:28","date_gmt":"2025-06-10T09:25:28","guid":{"rendered":"https:\/\/www.1ai.net\/?p=37147"},"modified":"2025-06-10T17:25:28","modified_gmt":"2025-06-10T09:25:28","slug":"%e7%81%ab%e5%b1%b1%e5%bc%95%e6%93%8e%e6%98%8e%e6%97%a5%e5%8f%91%e5%b8%83%e5%85%a8%e6%96%b0%e8%b1%86%e5%8c%85%e8%a7%86%e9%a2%91%e7%94%9f%e6%88%90%e6%a8%a1%e5%9e%8b%ef%bc%8c%e6%94%af%e6%8c%81%e6%97%a0","status":"publish","type":"post","link":"http:\/\/www.1ai.net\/en\/37147.html","title":{"rendered":"Volcano Engine to Release All-New Doubao Video Generation Model Tomorrow, Supporting Seamless Multi-Camera Narrative"},"content":{"rendered":"<p>June 10 news - ByteDance's <a href=\"http:\/\/www.1ai.net\/en\/tag\/%e7%81%ab%e5%b1%b1%e5%bc%95%e6%93%8e\" title=\"[View articles tagged Volcano Engine]\" target=\"_blank\" >Volcano Engine<\/a> announced on its official public account today that it will release an <strong>all-new <a href=\"http:\/\/www.1ai.net\/en\/tag\/%e8%b1%86%e5%8c%85\" title=\"[View articles tagged Doubao]\" target=\"_blank\" >Doubao<\/a> <a href=\"http:\/\/www.1ai.net\/en\/tag\/%e8%a7%86%e9%a2%91%e7%94%9f%e6%88%90%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged video generation model]\" target=\"_blank\" >video generation model<\/a><\/strong> on June 11.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-37148\" title=\"d31218d1j00sxmvhe001rd000i900o5p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/06\/d31218d1j00sxmvhe001rd000i900o5p.jpg\" alt=\"d31218d1j00sxmvhe001rd000i900o5p\" width=\"657\" height=\"869\" \/><\/p>\n<p>According to the announcement, the new Doubao video generation model offers a number of \"hardcore capabilities\", which 1AI illustrates with examples as follows:<\/p>\n<ul>\n<li><strong>Supports seamless multi-camera narrative<\/strong>: through an efficient model architecture, multimodal positional encoding, and unified multi-task modeling, the model delivers consistent and stable multi-camera expression.<\/li>\n<li><strong>Supports multiple actions and flexible camera movement<\/strong>: having fully learned a rich set of scenes, subjects, and behavioral actions, the model responds more accurately to users' fine-grained commands and smoothly generates complex video content with multiple subjects, multiple actions, and free camera movement.<\/li>\n<li><strong>Supports stable motion and realistic aesthetics<\/strong>: picture and subject dynamics are more natural and better structured, with a lower failure rate, and the model can generate video content in different styles according to instructions, such as realism, animation, film and television, and advertising.<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>June 10 news: ByteDance's Volcano Engine announced on its official public account today that it will release the all-new Doubao video generation model on June 11. According to the announcement, the new Doubao video generation model offers a number of \"hardcore capabilities\", which 1AI illustrates with examples as follows: it supports seamless multi-camera narrative, i.e., through an efficient model architecture, multimodal positional encoding, and unified multi-task modeling, the model delivers consistent and stable multi-camera expression. It supports multiple actions and flexible camera movement, i.e., having fully learned rich scenes, subjects, and behavioral actions, the model responds more accurately to users' fine-grained commands and smoothly generates complex video content with multiple subjects, multiple actions, and free camera movement.
It supports stable motion and realistic aesthetics, i.e., more natural picture and subject dynamics, better structure, a lower failure rate, and the ability to generate video content in different styles according to instructions.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[3066,460,2248],"collection":[],"class_list":["post-37147","post","type-post","status-publish","format-standard","hentry","category-news","tag-3066","tag-460","tag-2248"],"acf":[],"_links":{"self":[{"href":"http:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/37147","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=37147"}],"version-history":[{"count":0,"href":"http:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/37147\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=37147"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=37147"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=37147"},{"taxonomy":"collection","embeddable":true,"href":"http:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=37147"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}