{"id":31158,"date":"2025-03-20T20:21:29","date_gmt":"2025-03-20T12:21:29","guid":{"rendered":"https:\/\/www.1ai.net\/?p=31158"},"modified":"2025-03-20T20:23:03","modified_gmt":"2025-03-20T12:23:03","slug":"%e9%98%b6%e8%b7%83%e6%98%9f%e8%be%b0-step-video-ti2v-%e5%9b%be%e7%94%9f%e8%a7%86%e9%a2%91%e6%a8%a1%e5%9e%8b%e5%bc%80%e6%ba%90%ef%bc%9a%e8%bf%90%e5%8a%a8%e5%b9%85%e5%ba%a6%e5%92%8c%e9%95%9c%e5%a4%b4","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/31158.html","title":{"rendered":"Step Star Open-Sources Step-Video-TI2V Image-to-Video Model: Controllable Motion Amplitude and Camera Movement"},"content":{"rendered":"<p>March 20, 2025 - In February of this year,<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%98%b6%e8%b7%83%e6%98%9f%e8%be%b0\" title=\"[View articles tagged with [Step Star]]\" target=\"_blank\" >Step Star<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >open-sourced<\/a> two Step-series multimodal large models -- the Step-Video-T2V video generation model and the Step-Audio speech model. Today, Step Star continues by open-sourcing an <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%9b%be%e7%94%9f%e8%a7%86%e9%a2%91%e6%a8%a1%e5%9e%8b\" title=\"[See articles with tags]\" target=\"_blank\" >image-to-video model<\/a> -- Step-Video-TI2V, an image-to-video model trained on the 30B-parameter Step-Video-T2V.<strong>It supports the generation of 102-frame, 5-second, 540P-resolution videos with two core features, controllable motion amplitude and controllable camera movement, as well as an inherent ability to generate certain special effects.<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-31159\" title=\"1d786b76j00stf8yw007jd000gq00dam\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/03\/1d786b76j00stf8yw007jd000gq00dam.jpg\" alt=\"1d786b76j00stf8yw007jd000gq00dam\" width=\"602\" height=\"478\" 
\/><\/p>\n<p>According to Step Star, Step-Video-TI2V not only raises the upper limit for research in this field in terms of parameter scale, but its controllable motion amplitude also balances the dynamism and stability of image-to-video results, giving creators more flexible options.<\/p>\n<p>Meanwhile, Step-Video-TI2V has been adapted to Huawei's Ascend computing platform and released on the Modelers community.<\/p>\n<p>1AI notes the core features of Step-Video-TI2V as follows:<\/p>\n<p>1. Controllable motion amplitude: switch freely between dynamic and stable.<\/p>\n<p>Step-Video-TI2V supports controlling how much \"motion\" appears in the video, balancing the dynamism and stability of the generated content. Whether for a calm, stable shot or a highly dynamic action scene, Step-Video-TI2V can meet creators' needs.<\/p>\n<p>2. Versatile camera movement control<\/p>\n<p>In addition to controlling the subject's movement within the frame, Step-Video-TI2V supports a wide range of camera moves, enabling precise control of camera motion in the generated video for blockbuster-quality cinematography. From basic push-in, pull-out, pan, and tilt moves to complex cinematic effects, Step-Video-TI2V can handle them all.<\/p>\n<p>3. Particularly strong animation results<\/p>\n<p>Step-Video-TI2V performs especially well on animation tasks, making it well suited to animation creation, short-video production, and similar applications.<\/p>\n<p>4. Multiple aspect ratios supported<\/p>\n<p>Step-Video-TI2V supports image-to-video generation at multiple sizes: the wide field of view of landscape, the immersive feel of portrait, and the classic look of square format are all handled with ease. 
Users can freely choose the frame size to suit different creative needs and platform characteristics, without worrying about stretched or disproportionate images.<\/p>\n<p>The Step-Video-TI2V model is now officially open source, and it is live on both the Step AI web version and the App.<\/p>\n<p>In addition, Step-Video-TI2V can already generate special effects, and Step Star says it will continue to unlock the model's special-effects potential through LoRA and other techniques.<\/p>\n<p>Model and technical report links:<\/p>\n<p data-vmark=\"1297\">GitHub:<\/p>\n<p data-vmark=\"9f09\"><a href=\"https:\/\/github.com\/stepfun-ai\/Step-Video-TI2V\" target=\"_blank\" rel=\"noopener\"><span class=\"link-text-start-with-http\">https:\/\/github.com\/stepfun-ai\/Step-Video-TI2V<\/span><\/a><\/p>\n<p data-vmark=\"265f\">Github-ComfyUI:<\/p>\n<p data-vmark=\"cdc0\"><a href=\"https:\/\/github.com\/stepfun-ai\/ComfyUI-StepVideo\" target=\"_blank\" rel=\"noopener\"><span class=\"link-text-start-with-http\">https:\/\/github.com\/stepfun-ai\/ComfyUI-StepVideo<\/span><\/a><\/p>\n<p data-vmark=\"b382\">Technical report:<\/p>\n<p data-vmark=\"d4d8\"><a href=\"https:\/\/arxiv.org\/abs\/2503.11251\" target=\"_blank\" rel=\"noopener\"><span class=\"link-text-start-with-http\">https:\/\/arxiv.org\/abs\/2503.11251<\/span><\/a><\/p>","protected":false},"excerpt":{"rendered":"<p>On March 20, following its open-sourcing of the Step-Video-T2V video generation model and the Step-Audio speech model in February, Step Star open-sourced Step-Video-TI2V, an image-to-video model trained on the 30B-parameter Step-Video-T2V. It supports generating 102-frame, 5-second, 540P-resolution videos, with controllable motion amplitude and controllable camera movement as its two core features, plus an inherent ability to generate certain special effects. 
Step-Video-TI2V is more than just an open-source video model<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1711,219,1893],"collection":[],"class_list":["post-31158","post","type-post","status-publish","format-standard","hentry","category-news","tag-1711","tag-219","tag-1893"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/31158","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=31158"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/31158\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=31158"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=31158"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=31158"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=31158"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}