{"id":46719,"date":"2025-12-02T11:23:42","date_gmt":"2025-12-02T03:23:42","guid":{"rendered":"https:\/\/www.1ai.net\/?p=46719"},"modified":"2025-12-02T11:23:42","modified_gmt":"2025-12-02T03:23:42","slug":"runway-%e6%8e%a8%e5%87%ba-gen-4-5-ai%e8%a7%86%e9%a2%91%e6%a8%a1%e5%9e%8b%ef%bc%8c%e6%b5%8b%e8%af%95%e6%88%90%e5%8a%9f%e5%87%bb%e8%b4%a5%e8%b0%b7%e6%ad%8c-veo3%e3%80%81openai-sora-2","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/46719.html","title":{"rendered":"Runway, launch Gen 4.5 AI video model, test successful Google Veo3, OpenAI Sora 2"},"content":{"rendered":"<p>DECEMBER 2ND. U.S. CNBC REPORTS, AI INITIAL <a href=\"https:\/\/www.1ai.net\/en\/tag\/runway\" title=\"[See articles with [Runway] label]\" target=\"_blank\" >Runway<\/a> It was released today<strong>New video model Gen 4.5<\/strong>I don't know. Independent baseline tests show that the model's performance<strong>More than Google and OpenAI<\/strong>.<\/p>\n<p>Gen 4.5 produces a high-resolution video based on the text tips entered by the user, with an accurate understanding<strong>Sports, man moves, camera movement and causality<\/strong>I don't know. Runway states that the model is also significantly elevated in the understanding of physical law\u3002<\/p>\n<p>The model tops the Video Arena list maintained by Artificial Analysis, an independent AI baseline analyser. 
The ranking is generated through blind testing: users watch video outputs from two different models without knowing which model produced which, then vote for the better one.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-46720\" title=\"8d5cf5f9j00t6mhej004vd000v9000m3p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/12\/8d5cf5f9j00t6mhej004vd000v900m3p.jpg\" alt=\"8d5cf5f9j00t6mhej004vd000v9000m3p\" width=\"1125\" height=\"795\" \/><\/p>\n<p>On this leaderboard, Google's Veo 3 ranked second and OpenAI's Sora 2 Pro seventh.<\/p>\n<p>Runway CEO Crist\u00f3bal Valenzuela said in an interview with CNBC: \u201c<strong>A company of only about 100 people can beat multi-billion-dollar technology giants<\/strong>. With enough focus and diligence, we can stay at the front line.\u201d<\/p>\n<p>Runway, founded in 2018, focuses on AI research and the development of video models and world models. World models are trained on video and observational data to better simulate the physical characteristics of the real world.<\/p>\n<p>Runway's clients include media organizations, video studios, brands, designers, creators, and students. According to PitchBook, the company's valuation has risen to $3.55 billion (note: about RMB 25.136 billion at the current exchange rate).<\/p>\n<p>Valenzuela revealed that Gen 4.5 was developed under the codename \"David,\" a reference to \"David versus Goliath.\" \u201cThis is <strong>an overnight success seven years in the making<\/strong>. This is an era of efficiency and research, and we want to make sure generative AI is <strong>not a monopoly of two or three companies<\/strong>.\u201d<\/p>\n<p>Gen 4.5 is rolling out gradually and <strong>will be available to all Runway users by the end of the week<\/strong>. 
The company also plans to launch major updates in the coming period.<\/p>\n<p>According to Valenzuela, Gen 4.5 will be made available through Runway's platform, API, and some partner channels.<\/p>","protected":false},"excerpt":{"rendered":"<p>On December 2nd, according to CNBC in the United States, AI startup Runway released a new video model today, Gen 4.5. Independent benchmark tests showed that the model outperformed comparable products from Google and OpenAI. Gen 4.5 generates high-resolution video from text prompts entered by the user, with a precise understanding of motion, human movement, camera movement, and causality. Runway indicates that the model's understanding of physical laws is also significantly improved. The model tops the Video Arena leaderboard maintained by Artificial Analysis, an independent AI benchmarking firm. The ranking is generated by blind testing: users are shown outputs from two different models \u201cblind\u201d<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[868,195],"collection":[],"class_list":["post-46719","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-runway"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/46719","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=46719"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/46719\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=4671
9"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=46719"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=46719"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=46719"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}