{"id":18847,"date":"2024-08-29T10:35:23","date_gmt":"2024-08-29T02:35:23","guid":{"rendered":"https:\/\/www.1ai.net\/?p=18847"},"modified":"2024-08-29T10:35:23","modified_gmt":"2024-08-29T02:35:23","slug":"%e8%b0%b7%e6%ad%8c%e5%8f%91%e5%b8%83-3-%e6%ac%be-gemini-%e5%ae%9e%e9%aa%8c-ai%e6%a8%a1%e5%9e%8b%ef%bc%9a1-5-pro-%e5%86%b2%e6%a6%9c%e7%ac%ac%e4%ba%8c%e3%80%811-5-flash-%e4%bb%8e%e7%ac%ac-23-%e8%b9%bf","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/18847.html","title":{"rendered":"Google released three Gemini experimental AI models: 1.5 Pro ranked second, and 1.5 Flash jumped from 23rd to 6th"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"[View articles tagged with [Google]]\" target=\"_blank\" >Google<\/a> AI Studio Product Director Logan Kilpatrick announced in a post on X (August 28) <strong>the launch of three <a href=\"https:\/\/www.1ai.net\/en\/tag\/gemini\" title=\"[View articles tagged with [Gemini]]\" target=\"_blank\" >Gemini<\/a> experimental models<\/strong>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-18848\" title=\"6e07abbdj00siyjoi0035d000dn00jep\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/6e07abbdj00siyjoi0035d000dn00jep.jpg\" alt=\"6e07abbdj00siyjoi0035d000dn00jep\" width=\"491\" height=\"698\" \/><\/p>\n<p>The three experimental Gemini AI models Google launched this time are as follows:<\/p>\n<p><strong>Gemini 1.5 Flash-8B<\/strong><\/p>\n<p>Gemini 1.5 Flash-8B is a smaller variant of Gemini 1.5 Flash with 8 billion parameters, designed for multimodal tasks, including high-volume workloads and long-text summarization.<\/p>\n<p><strong>Gemini 1.5 Pro Exp-0827<\/strong><\/p>\n<p>Its main improvements are in coding and handling complex prompts. It is now available for free through Google AI Studio and the Gemini API as &quot;gemini-1.5-pro-exp-0827&quot;.<\/p>\n<p>Kilpatrick said the new Gemini 1.5 Pro Exp 0827 model is superior in
every way to the experimental model released in early August and is currently ranked No. 2 on LMSYS, just behind OpenAI\u2019s GPT-4o-latest model.<\/p>\n<p>Starting September 3, Google will automatically redirect requests for the gemini-1.5-pro-exp-0801 model to the new gemini-1.5-pro-exp-0827 model.<\/p>\n<p>The gemini-1.5-pro-exp-0801 model will then be removed from Google AI Studio and the API.<\/p>\n<p><strong>Gemini 1.5 Flash Exp-0827<\/strong><\/p>\n<p>The Gemini 1.5 Flash (0827) version delivers significant performance improvements, and its ranking on LMSYS has risen from 23rd to 6th.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-18849\" title=\"eb298fc3j00siyjpj0076d000dr00jcp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/eb298fc3j00siyjpj0076d000dr00jcp.jpg\" alt=\"eb298fc3j00siyjpj0076d000dr00jcp\" width=\"495\" height=\"696\" \/><\/p>\n<p>Users can access these two models through the Gemini API and Google AI Studio as gemini-1.5-pro-exp-0827 and gemini-1.5-flash-exp-0827, respectively.<\/p>","protected":false},"excerpt":{"rendered":"<p>Logan Kilpatrick, Director of AI Studio Products at Google, announced the launch of three experimental Gemini models in a post on X (August 28). The three experimental Gemini AI models are as follows: Gemini 1.5 Flash-8B Gemini 1.5 Flash-8B is a smaller variant of Gemini 1.5 Flash with 8 billion parameters, designed for multimodal tasks, including high-volume workloads and long-text summarization. 
Gemini 1.5 Pro Exp-0827 features major improvements in coding and complex prompts<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[167,436,281],"collection":[],"class_list":["post-18847","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-gemini","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/18847","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=18847"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/18847\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=18847"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=18847"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=18847"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=18847"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}