{"id":24377,"date":"2024-12-04T09:32:25","date_gmt":"2024-12-04T01:32:25","guid":{"rendered":"https:\/\/www.1ai.net\/?p=24377"},"modified":"2024-12-04T09:32:25","modified_gmt":"2024-12-04T01:32:25","slug":"%e4%ba%9a%e9%a9%ac%e9%80%8a%e5%8f%91%e5%b8%83-nova-%e7%b3%bb%e5%88%97-ai-%e6%a8%a1%e5%9e%8b%ef%bc%8c%e6%8f%90%e4%be%9b%e6%96%87%e6%9c%ac%e3%80%81%e5%9b%be%e5%83%8f%e5%92%8c%e8%a7%86%e9%a2%91%e7%94%9f","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/24377.html","title":{"rendered":"Amazon Releases Nova Series of AI Models with Text, Image and Video Generation Capabilities"},"content":{"rendered":"<p>December 4 news: <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%ba%9a%e9%a9%ac%e9%80%8a\" title=\"View articles tagged with Amazon\" target=\"_blank\" >Amazon<\/a> today announced a new series of AI foundation models branded \"<a href=\"https:\/\/www.1ai.net\/en\/tag\/nova\" title=\"View articles tagged with Nova\" target=\"_blank\" >Nova<\/a>\", which will be available through AWS's Amazon Bedrock model library.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24378\" title=\"e1f3f6b3j00sny48x00kmd000v900hkp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/e1f3f6b3j00sny48x00kmd000v900hkp.jpg\" alt=\"e1f3f6b3j00sny48x00kmd000v900hkp\" width=\"1125\" height=\"632\" \/><\/p>\n<p>Amazon said in a blog post that there are currently three \"understanding\" models to choose from:<\/p>\n<ul>\n<li>Amazon Nova Micro: a text-only model optimized for \"speed and cost\".<\/li>\n<li>Amazon Nova Lite: a \"very low-cost\" multimodal model that takes image, video, and text input and generates text output.<\/li>\n<li>Amazon Nova Pro: a \"powerful\" multimodal model.<\/li>\n<\/ul>\n<p>1AI notes that the company is also training a model called Amazon Nova Premier, <strong>said to be \"the most powerful multimodal model for complex reasoning tasks.\"<\/strong> Amazon aims to make Nova Premier available \"by early 2025\".<\/p>\n<p>Amazon has also released two content generation models: <strong>Amazon Nova Canvas (an image generation model) and Amazon Nova Reel (a video generation model)<\/strong>. The company said the models include \"watermarking capabilities\" to \"promote responsible AI use\".<\/p>\n<p>In addition, Amazon plans to release speech-to-speech and \"native multimodal-to-multimodal\" models later in 2025.<\/p>\n<p>Amazon announced the new models at the AWS re:Invent conference currently underway in Las Vegas. At the event, the company also said it is working with Anthropic (in which Amazon has invested $8 billion) to build a massive AI compute cluster based on its Trainium 2 chips. \"When complete, it promises to be the world's largest AI compute cluster to date for Anthropic to build and deploy its future models,\" Amazon said.<\/p>\n<p>Amazon is also working on a redesigned, AI-powered Alexa; the voice assistant was initially slated to launch this fall, but its release has reportedly been pushed back to next year.<\/p>","protected":false},"excerpt":{"rendered":"<p>December 4, 2024 - Amazon today announced a new set of AI foundation models, branded \"Nova,\" that will be available through AWS' Amazon Bedrock model library. In a blog post, Amazon said there are now three \"understanding\" models to choose from: Amazon Nova Micro: a text-only model optimized for \"speed and cost\". Amazon Nova Lite: a \"very low-cost\" multimodal model that can be fed images, video and text to generate text. Amazon Nova Pro: a \"powerful\" multimodal model. 
1AI notes that the company is also training a multimodal model called Amazon Nova P<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[167,5121,370],"collection":[],"class_list":["post-24377","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-nova","tag-370"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/24377","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=24377"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/24377\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=24377"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=24377"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=24377"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=24377"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}