{"id":10355,"date":"2024-05-15T09:26:58","date_gmt":"2024-05-15T01:26:58","guid":{"rendered":"https:\/\/www.1ai.net\/?p=10355"},"modified":"2024-05-15T09:26:58","modified_gmt":"2024-05-15T01:26:58","slug":"%e5%89%91%e6%8c%87-sora%ef%bc%8c%e8%b0%b7%e6%ad%8c%e6%8e%a8%e5%87%ba-veo-%e6%96%87%e7%94%9f%e8%a7%86%e9%a2%91%e6%a8%a1%e5%9e%8b%ef%bc%9a%e6%97%b6%e9%95%bf%e8%b6%85-1-%e5%88%86%e9%92%9f%e3%80%81","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/10355.html","title":{"rendered":"Aiming at Sora, Google launches Veo Vincent video model: over 1 minute long, up to 1080P, supports movie techniques"},"content":{"rendered":"<p data-vmark=\"99e5\">OpenAI launched text-to-video three months ago <a href=\"https:\/\/www.1ai.net\/en\/tag\/sora\" title=\"[See articles with [Sora] label]\" target=\"_blank\" >Sora<\/a>, sparking widespread discussion among netizens, media and people in the industry.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"[View articles tagged with [Google]]\" target=\"_blank\" >Google<\/a>AT TODAY'S 2024 I \/ O DEVELOPER'S CONGRESS, THE TARGET PRODUCT WAS ALSO LAUNCHED <a href=\"https:\/\/www.1ai.net\/en\/tag\/veo\" title=\"_Other Organiser\" target=\"_blank\" >Veo<\/a>,<strong>It can generate &quot;high quality&quot; videos with a length of more than 1 minute, a resolution of up to 1080P, and a variety of visual and cinematic styles.<\/strong><\/p>\n<p data-vmark=\"f815\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-10356\" title=\"5f0e51cc-7407-48f8-ace6-cc4bcf6c080f\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/5f0e51cc-7407-48f8-ace6-cc4bcf6c080f.jpg\" alt=\"5f0e51cc-7407-48f8-ace6-cc4bcf6c080f\" width=\"1080\" height=\"1920\" \/><\/p>\n<p data-vmark=\"41ab\">According to Google&#039;s official press release, Veo has advanced understanding of natural language and can understand movie terms such as &quot;time-lapse photography&quot; and &quot;aerial 
landscape&quot;.<\/p>\n<p data-vmark=\"37d4\">Users can use text, image, or video prompts to guide their desired output, and Google says the resulting videos are &quot;more coherent and consistent,&quot; with more realistic movements of people, animals, and objects throughout the shot.<\/p>\n<p data-vmark=\"9750\">Demis Hassabis, CEO of Google DeepMind, said at a media preview on Monday that video results can be refined with additional prompts, and that Google is exploring more features that would enable Veo to produce storyboards and longer scenes.<\/p>","protected":false},"excerpt":{"rendered":"<p>Three months ago, OpenAI launched its text-to-video model Sora, sparking widespread discussion among netizens, the media and industry insiders. Google also launched its competing product, Veo, at today's 2024 I\/O developer conference; it can generate \u201chigh quality\u201d videos of more than one minute in length, at resolutions of up to 1080P, in a variety of visual and cinematic styles. According to Google's official press release, Veo has an advanced understanding of natural language and can understand film terms such as \u201ctime-lapse photography\u201d and \u201caerial landscape\u201d. 
Users can use text, image or video prompts to guide their desired output, and Google says the resulting videos are \u201cmore coherent and consistent\u201d, with more realistic movements of people, animals and objects throughout the shot.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1249,2597,1248,281],"collection":[],"class_list":["post-10355","post","type-post","status-publish","format-standard","hentry","category-news","tag-sora","tag-veo","tag-1248","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/10355","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=10355"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/10355\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=10355"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=10355"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=10355"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=10355"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}