On April 21st, Kunlun Wanwei officially released and open-sourced SkyReels-V2, billed as the world's first model for infinite-duration film generation built on the Diffusion-Forcing framework. The model combines a Multimodal Large Language Model (MLLM), multi-stage pretraining, reinforcement learning, and the Diffusion-Forcing framework in a jointly optimized pipeline. According to the official announcement, the model pushes past the boundaries of current video generation technology and opens a new era of "unlimited-length movie generation".

1AI has attached the open-source links below:
SkyReels-V2
- GitHub: https://github.com/SkyworkAI/SkyReels-V2
- Paper: https://arxiv.org/abs/2504.13074
SkyReels-A2
- Hugging Face: https://huggingface.co/Skywork/SkyReels-A2
- GitHub: https://github.com/SkyworkAI/SkyReels-A2
- Paper: https://arxiv.org/pdf/2504.02436
It is reported that existing techniques often sacrifice motion dynamics to enhance visual stability, cap video duration (typically 5-10 seconds) to prioritize high resolution, and produce insufficiently shot-aware output because general-purpose multimodal large language models (MLLMs) cannot decode cinematic grammar (e.g., shot composition, actor expression, and camera motion). These interrelated limitations hinder realistic compositing and professional cinematic-style generation for long videos.
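The core idea behind the Diffusion-Forcing framework named above is that, during training, each frame in a sequence receives its own independent noise level rather than one timestep shared by the whole clip, which is what later allows a model to denoise new frames conditioned on cleaner past frames and extend a video autoregressively. A minimal toy sketch of that per-frame noising step follows; the function name, the linear noise schedule, and the latent shapes are illustrative assumptions, not SkyReels-V2's actual implementation.

```python
import numpy as np

def per_frame_noising(frames, num_levels=1000, rng=None):
    """Toy illustration of Diffusion-Forcing-style noising:
    each frame gets an independently sampled noise timestep,
    instead of one timestep for the whole sequence.
    (Linear schedule and shapes are illustrative assumptions.)"""
    rng = rng or np.random.default_rng(0)
    num_frames = frames.shape[0]
    # Independent per-frame timesteps -- the key difference from
    # full-sequence diffusion, where all frames share one timestep.
    k = rng.integers(0, num_levels, size=num_frames)
    alpha = 1.0 - k / num_levels          # toy linear noise schedule
    noise = rng.standard_normal(frames.shape)
    noisy = alpha[:, None] * frames + (1.0 - alpha[:, None]) * noise
    return noisy, k

frames = np.zeros((8, 16))   # 8 frames, 16-dim toy latents
noisy, k = per_frame_noising(frames)
```

At inference, this training signal is what lets the model keep past frames near-clean while generating future frames from high noise, frame by frame, without a fixed sequence-length limit.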
Beyond its technical breakthroughs, SkyReels-V2 supports a range of practical application scenarios, including story generation, image-to-video, camera-movement direction, and multi-subject consistent video generation (SkyReels-A2).
SkyReels-V2 now supports generating 30-second and 40-second videos, with high motion quality, high consistency, and high fidelity.
According to the official announcement, SkyReels-V2 excels in motion dynamics, generating smooth and lifelike video content that meets film production's demand for high-quality motion.