{"id":18457,"date":"2024-08-22T09:27:58","date_gmt":"2024-08-22T01:27:58","guid":{"rendered":"https:\/\/www.1ai.net\/?p=18457"},"modified":"2024-08-22T09:27:58","modified_gmt":"2024-08-22T01:27:58","slug":"%e7%88%b1%e8%af%97%e7%a7%91%e6%8a%80ai%e8%a7%86%e9%a2%91%e7%94%9f%e6%88%90%e4%ba%a7%e5%93%81pixverse-v2-5%e4%bb%8a%e6%97%a5%e9%9d%a2%e5%90%91%e5%85%a8%e7%90%83%e7%94%a8%e6%88%b7%e5%bc%80%e6%94%be","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/18457.html","title":{"rendered":"Aishi Technology&#039;s AI video generation product PixVerse V2.5 is now available to users worldwide"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%88%b1%e8%af%97%e7%a7%91%e6%8a%80\" title=\"View articles tagged Aishi Technology\" target=\"_blank\" >Aishi Technology<\/a> announced that PixVerse V2.5 will be officially available to global users on August 22.<\/p>\n<p>In July this year, Aishi Technology released its newly upgraded <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e8%a7%86%e9%a2%91\" title=\"View articles tagged AI Video\" target=\"_blank\" >AI video<\/a> generation product, PixVerse V2, which is built on a Diffusion+Transformer (DiT) architecture and introduces a number of innovations in video generation technology. PixVerse V2 offers longer, more consistent, and more engaging video generation, supporting the creation of multiple video clips at a time, with a single clip up to 8 seconds long and a multi-clip video totaling up to 40 seconds.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-18458\" title=\"9d5f319aj00silipw000ad000n300cxm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/9d5f319aj00silipw000ad000n300cxm.jpg\" alt=\"9d5f319aj00silipw000ad000n300cxm\" width=\"831\" height=\"465\" \/><\/p>\n<p>PixVerse V2 stands out in technological innovation. 
It introduces a self-developed spatiotemporal attention mechanism, which significantly improves the model&#039;s perception of space and time and its handling of complex scenes. The product also uses multimodal models to strengthen text comprehension, achieving precise alignment between text and video information and enhancing the model&#039;s understanding and expressiveness.<\/p>\n<p>In addition, PixVerse V2 improves on the baseline model by applying a weighted loss to speed up model convergence, thereby improving training efficiency. Drawing on user feedback and community discussions, the Aishi Technology team placed particular emphasis on consistency in video creation: PixVerse V2 supports one-click generation of one to five continuous video clips while maintaining the consistency of the main subject, visual style, and scene elements.<\/p>\n<p>PixVerse V2 also supports re-editing of generated results. It intelligently identifies and automatically associates content, letting users flexibly replace or adjust a video&#039;s subject, actions, style, and camera movement, enriching creative possibilities. Aishi Technology said it will ship multiple iterations and upgrades over the next three months to deliver a better AI video generation experience.<\/p>","protected":false},"excerpt":{"rendered":"<p>Aishi Technology announced that PixVerse V2.5 will be officially available to global users on August 22. In July of this year, Aishi Technology released the newly upgraded AI video generation product PixVerse V2, which adopts a Diffusion+Transformer (DiT) architecture and realizes a number of innovations in video generation technology. PixVerse V2 can provide longer, more consistent, and more interesting video generation capabilities, and supports the generation of multiple video clips at a time. 
PixVerse V2 provides longer, more consistent, and more interesting video generation, with the ability to generate multiple video clips at once: up to 8 seconds per clip and up to 40 seconds in total across clips. PixVerse V2 stands out in technological innovation, introducing a self-developed spatiotemporal attention mechanism that significantly improves its perception of space and time.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[956,1044,3707],"collection":[],"class_list":["post-18457","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-pixverse","tag-3707"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/18457","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=18457"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/18457\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=18457"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=18457"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=18457"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=18457"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}