{"id":24033,"date":"2024-11-28T21:23:14","date_gmt":"2024-11-28T13:23:14","guid":{"rendered":"https:\/\/www.1ai.net\/?p=24033"},"modified":"2024-11-28T21:23:14","modified_gmt":"2024-11-28T13:23:14","slug":"%e6%b6%88%e6%81%af%e7%a7%b0%e4%ba%9a%e9%a9%ac%e9%80%8a%e6%ad%a3%e5%bc%80%e5%8f%91%e8%a7%86%e9%a2%91-ai-%e6%a8%a1%e5%9e%8b%ef%bc%8c%e5%87%8f%e5%b0%91%e5%af%b9-anthropic-%e7%9a%84%e4%be%9d%e8%b5%96","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/24033.html","title":{"rendered":"Amazon is developing video AI models to reduce reliance on Anthropic, sources say"},"content":{"rendered":"<p>According to The Information, <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%ba%9a%e9%a9%ac%e9%80%8a\" title=\"[View articles tagged with [Amazon]]\" target=\"_blank\" >Amazon<\/a> has developed a new set of generative <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [AI models]]\" target=\"_blank\" >AI models<\/a> that can process images and video in addition to text, reducing its reliance on <a href=\"https:\/\/www.1ai.net\/en\/tag\/anthropic\" title=\"[View articles tagged with [Anthropic]]\" target=\"_blank\" >Anthropic<\/a>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24034\" title=\"d7e73732j00snnx5m005qd000gc00l3p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/11\/d7e73732j00snnx5m005qd000gc00l3p.jpg\" alt=\"d7e73732j00snnx5m005qd000gc00l3p\" width=\"588\" height=\"759\" \/><\/p>\n<p>The new model, code-named Olympus, will be able to understand scenes in images and videos and, using simple text prompts, search for specific clips or scenes in a video, such as a decisive play in a basketball game, according to the report.<\/p>\n<p>The model can also respond to simple text prompts such as the \"best tasting coffee\" or \"raindrops on the ground\", along with other functionality, potentially revolutionizing the way customers interact with visual data and 
making searches faster, more intuitive, and more specific.<\/p>\n<p>Amazon will make an announcement about the model as early as next week at the AWS re:Invent technology conference, people familiar with the matter said.<\/p>\n<p>Just last week, Amazon announced an additional investment of $4 billion (currently around Rs. 28,996 million) in Anthropic, bringing Amazon's total investment in it to $8 billion.<\/p>","protected":false},"excerpt":{"rendered":"<p>According to The Information, Amazon has developed a new set of generative AI models that can process images and videos in addition to text, reducing its reliance on Anthropic. The new model, code-named Olympus, will be able to understand scenes in images and videos and, using simple text prompts, search for specific clips or scenes in videos, such as a decisive play in a basketball game, according to the report. It will also respond to simple text prompts such as the \"best tasting coffee\" or \"raindrops on the ground\", along with other functionality, potentially revolutionizing the way customers interact with visual data and making searches faster, more intuitive, and more specific. 
Sources familiar with the matter said that Amazon may announce the model as early as next week<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[167,320,370],"collection":[],"class_list":["post-24033","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-anthropic","tag-370"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/24033","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=24033"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/24033\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=24033"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=24033"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=24033"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=24033"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}