{"id":26250,"date":"2025-01-04T17:25:47","date_gmt":"2025-01-04T09:25:47","guid":{"rendered":"https:\/\/www.1ai.net\/?p=26250"},"modified":"2025-01-04T17:25:47","modified_gmt":"2025-01-04T09:25:47","slug":"%e8%b0%b7%e6%ad%8c-deepmind-%e6%8e%a8-cat4d%ef%bc%9aai-%e9%ad%94%e6%b3%95%e7%aa%81%e7%a0%b4%e6%ac%a1%e5%85%83%e5%a3%81%ef%bc%8c%e6%99%ae%e9%80%9a%e8%a7%86%e9%a2%91%e6%b4%bb%e5%8f%98-3d-%e5%a4%a7","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/26250.html","title":{"rendered":"Google DeepMind Launches CAT4D: AI Magic Breaks Through Dimensional Walls, Turning Ordinary Videos Into 3D Blockbusters"},"content":{"rendered":"<p>January 4, 2025 - Technology media outlet The Decoder published a blog post yesterday (January 3) reporting that <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"[View articles tagged with [Google]]\" target=\"_blank\" >Google<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/deepmind\" title=\"[View articles tagged with [DeepMind]]\" target=\"_blank\" >DeepMind<\/a>, together with researchers at Columbia University and the University of California, San Diego, has developed an AI system called <a href=\"https:\/\/www.1ai.net\/en\/tag\/cat4d\" title=\"[View articles tagged with [CAT4D]]\" target=\"_blank\" >CAT4D<\/a>.<strong>It can transform ordinary video into dynamic 3D scenes, lowering the threshold for 3D content creation and opening up new possibilities for multiple industries.<\/strong><\/p>\n<p>The CAT4D system uses a diffusion model to convert a single-view video into multiple views and builds them into a dynamic 3D scene, allowing the user to view the subject of the video from different angles as if they were there. 
The attached demo is shown below:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-26251\" title=\"05656fb7j00spk4ty005gd000le006kp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/01\/05656fb7j00spk4ty005gd000le006kp.jpg\" alt=\"05656fb7j00spk4ty005gd000le006kp\" width=\"770\" height=\"236\" \/><\/p>\n<p>Previously, achieving similar effects required multiple cameras recording the same scene at the same time; CAT4D simplifies the process by requiring only ordinary video footage. The technology is expected to revolutionize game development, filmmaking, and augmented reality, among other fields.<\/p>\n<p>While training the AI, the Google DeepMind team found that little existing data was available. To solve this problem, the team mixed real-world footage with computer-generated content. The training data consisted of multi-view images of static scenes, single-view videos, and synthesized 4D data, from which the diffusion model learned to generate an image from a specified viewpoint at a specified moment in time.<\/p>\n<p>The 3D scenes generated by the system at this stage are shorter than the original footage, but the quality of CAT4D's imaging is already superior to comparable systems. CAT4D technology has a wide range of applications: game developers can use it to create virtual environments, and filmmakers and AR developers can integrate it into their workflows.<\/p>","protected":false},"excerpt":{"rendered":"<p>January 4 news: technology media outlet The Decoder released a blog post yesterday (January 3) reporting that Google DeepMind, together with researchers at Columbia University and the University of California, San Diego, has developed an AI system called CAT4D that can turn ordinary video into dynamic 3D scenes, lowering the threshold for 3D content creation and bringing new possibilities to a number of industries. 
The CAT4D system uses a diffusion model to convert a single-view video into multiple views and builds them into a dynamic 3D scene, allowing the user to see the subject of the video from different angles as if they were there.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[5431,593,281],"collection":[],"class_list":["post-26250","post","type-post","status-publish","format-standard","hentry","category-news","tag-cat4d","tag-deepmind","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/26250","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=26250"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/26250\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=26250"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=26250"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=26250"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=26250"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}