{"id":1501,"date":"2023-11-27T09:50:11","date_gmt":"2023-11-27T01:50:11","guid":{"rendered":"https:\/\/www.1ai.net\/?p=1501"},"modified":"2023-11-27T09:50:11","modified_gmt":"2023-11-27T01:50:11","slug":"%e5%be%ae%e8%bd%afazure-ai%e6%96%b0%e5%a2%9ephi%e3%80%81jais%e7%ad%89%ef%bc%8c40%e7%a7%8d%e6%96%b0%e5%a4%a7%e6%a8%a1%e5%9e%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/1501.html","title":{"rendered":"Microsoft Azure AI adds Phi, Jais, and 40 new large models"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%be%ae%e8%bd%af\" title=\"View articles tagged with Microsoft\" target=\"_blank\" >Microsoft<\/a> has officially announced the addition of 40 new models, including Falcon, Phi, Jais, Code Llama, CLIP, Whisper V3, and Stable Diffusion, to the Azure AI cloud development platform, covering text, image, code, speech, and other types of content generation.<\/p>\n<p><strong>Developers can quickly integrate these models into their applications through an API or SDK.<\/strong> The platform also supports data fine-tuning, instruction optimization, and other customization features.<\/p>\n<p>In addition, developers can quickly find the right model in Azure AI's \"Model Supermarket\" by searching for keywords; for example, typing \"code\" will display the relevant models.<\/p>\n<p>Try it: https:\/\/ai.azure.com\/<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-1502\" title=\"2023112708492931440\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2023\/11\/2023112708492931440.jpg\" alt=\"2023112708492931440\" width=\"554\" height=\"187\" \/><\/p>\n<p>Here is a brief overview of some of the better-known additions.<\/p>\n<p><strong>Whisper V3<\/strong><\/p>\n<p>Whisper V3 is OpenAI's latest speech model. It was trained on 1 million hours of weakly labeled audio and 4 million hours of pseudo-labeled 
multilingual audio, covering both speech recognition and speech translation. It supports speech translation and transcription.<\/p>\n<p><strong>Stable Diffusion<\/strong><\/p>\n<p>Stable Diffusion is a text-to-image diffusion model developed by Stability AI. It can generate sketches, oil paintings, cartoons, 3D renders, and other types of images, and is among the strongest open-source diffusion models available.<\/p>\n<p>Microsoft Azure AI will offer Stable-Diffusion-V1-4, Stable-Diffusion-2-1, Stable-Diffusion-V1-5, Stable-Diffusion-Inpainting, and Stable-Diffusion-2-Inpainting.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-1503\" title=\"2023112708492931441\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2023\/11\/2023112708492931441.jpg\" alt=\"2023112708492931441\" width=\"999\" height=\"593\" \/><\/p>\n<p><strong>Phi<\/strong><\/p>\n<p>Phi-1-5 is a Transformer-architecture model with 1.3 billion parameters. It was trained on the same data as Phi-1, with the addition of a new data source consisting of various synthetic NLP texts.<\/p>\n<p>On benchmarks testing common sense, language understanding, and logical reasoning, Phi-1.5 ranks among the best models with fewer than 10 billion parameters. 
The model can write poetry, draft emails, create stories, summarize text, write Python code, and more.<\/p>\n<p>Phi-2 has 2.7 billion parameters. It significantly improves on Phi-1-5 in reasoning ability and safety, and although it has fewer parameters than other Transformer-architecture models in the industry, it still delivers strong performance.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-1504\" title=\"2023112708492931442\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2023\/11\/2023112708492931442.jpg\" alt=\"2023112708492931442\" width=\"554\" height=\"242\" \/><\/p>\n<p><strong>Falcon<\/strong><\/p>\n<p>Falcon is a large language model from the Technology Innovation Institute in Abu Dhabi, UAE. It was trained on a dataset of 1 trillion tokens and supports text generation, content summarization, and more. It is available in four versions: Falcon-40b, Falcon-40b-Instruct, Falcon-7b-Instruct, and Falcon-7b.<\/p>\n<p><strong>SAM<\/strong><\/p>\n<p>SAM (Segment Anything Model) is an image segmentation model developed by Meta that can quickly segment images based on prompts. SAM was trained on a dataset of 11 million images and 1.1 billion masks.<\/p>\n<p>SAM supports zero-shot transfer to new image segmentation tasks, and is currently available in three versions: Facebook-Sam-Vit-Large, Facebook-Sam-Vit-Huge, and Facebook-Sam-Vit-Base.<\/p>\n<p><strong>CLIP<\/strong><\/p>\n<p>CLIP is a multimodal AI model developed by OpenAI. Trained on a large number of image-text pairs, it can understand image content and relate it to natural language descriptions. Through joint representation learning of images and text, CLIP greatly improves a wide variety of computer vision tasks, including classification, object detection, image captioning, and more.<\/p>\n<p>There are currently three versions: OpenAI-CLIP-Image-Text-Embeddings-ViT-Base-Patch32, OpenAI-CLIP-ViT-Base-Patch32, and 
OpenAI-CLIP-ViT-Large-Patch14.<\/p>\n<p><strong>Code Llama<\/strong><\/p>\n<p>Code Llama is a model developed by Meta that focuses on software development. It can generate, review, and rewrite code from text prompts, comes in 8 versions including CodeLlama-34b-Python and CodeLlama-13b-Instruct, and is among the strongest open-source code models available.<\/p>","protected":false},"excerpt":{"rendered":"<p>Microsoft has officially announced the addition of 40 new models such as Falcon, Phi, Jais, Code Llama, CLIP, Whisper V3, and Stable Diffusion to the Azure AI cloud development platform, covering text, image, code, speech, and other types of content generation. Developers can quickly integrate the models into their applications through an API or SDK, with support for data fine-tuning, instruction optimization, and other customization features. In addition, developers can quickly find the right model in Azure AI's \"Model Supermarket\" by searching for keywords; for example, typing \"code\" will display the relevant models. 
Experience: https:\/\/ai.azure.com<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[216,280],"collection":[],"class_list":["post-1501","post","type-post","status-publish","format-standard","hentry","category-news","tag-216","tag-280"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/1501","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=1501"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/1501\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=1501"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=1501"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=1501"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=1501"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}