{"id":7564,"date":"2024-04-10T09:59:22","date_gmt":"2024-04-10T01:59:22","guid":{"rendered":"https:\/\/www.1ai.net\/?p=7564"},"modified":"2024-04-10T09:59:22","modified_gmt":"2024-04-10T01:59:22","slug":"openai%e5%8f%91%e5%b8%83gpt-4-turbo-%e6%ad%a3%e5%bc%8f%e7%89%88-%e5%8f%af%e8%af%86%e5%88%ab%e5%9b%be%e7%89%87","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/7564.html","title":{"rendered":"OpenAI releases GPT-4-Turbo official version that can recognize images"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> has released the official version of <a href=\"https:\/\/www.1ai.net\/en\/tag\/gpt-4\" title=\"[SEE ARTICLES WITH [GPT-4] LABELS]\" target=\"_blank\" >GPT-4<\/a>-Turbo, a model with vision capabilities and a 128k context window. The model is now generally available under the name &quot;gpt-4-turbo&quot;; the latest version is &quot;gpt-4-turbo-2024-04-09&quot;.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-7565\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/6384833694944565873389396.png\" alt=\"\" width=\"582\" height=\"264\" \/><\/p>\n<p>Model documentation: https:\/\/platform.openai.com\/docs\/models\/continuous-model-upgrades<\/p>\n<p>Pricing: https:\/\/openai.com\/pricing<\/p>\n<p>Rate limits: https:\/\/platform.openai.com\/docs\/guides\/rate-limits\/usage-tiers?context=tier-five<\/p>\n<p>According to OpenAI, the model's core capabilities have been significantly improved. Vision is built in, so requests no longer need the separate GPT-4V interface, and vision requests can now also use JSON mode and function calling.
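<\/p>
<p>As a rough sketch of what such a request looks like, the payload below combines a text part and an image part in a single user message and enables JSON mode. This is an illustrative example, not OpenAI's own sample code; the helper name and the image placeholder are assumptions:<\/p>

```python
def build_vision_request(prompt, image_url):
    # Multimodal chat payload for the 'gpt-4-turbo' model:
    # one user message whose content mixes a text part and an
    # image_url part; response_format turns on JSON mode.
    return {
        'model': 'gpt-4-turbo',
        'response_format': {'type': 'json_object'},
        'messages': [{
            'role': 'user',
            'content': [
                {'type': 'text', 'text': prompt},
                {'type': 'image_url', 'image_url': {'url': image_url}},
            ],
        }],
    }

# 'IMAGE_URL_HERE' stands in for a real, publicly reachable image URL.
req = build_vision_request('Describe this photo as JSON.', 'IMAGE_URL_HERE')
print(req['model'])  # gpt-4-turbo
```

<p>A payload of this shape would then be sent to the chat completions endpoint, for example via the official Python SDK.<\/p>
<p>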
In addition, the model's training data runs through December 2023.<\/p>\n<p>In terms of pricing, GPT-4-Turbo costs the same as the previous GPT-4-Turbo:<\/p>\n<ul>\n<li>Input: $10.00 \/ 1 million tokens<\/li>\n<li>Output: $30.00 \/ 1 million tokens<\/li>\n<li>Image input: from $0.00085 per image<\/li>\n<\/ul>\n<p>As for rate limits, taking the highest tier (Tier 5) as an example, requests are capped at 10,000 per minute and throughput at 1,500,000 tokens per minute.<\/p>\n<p>OpenAI also showcased several use cases built on GPT-4-Turbo with vision. For example, Devin, built by @cognition_labs, is an AI software engineering assistant powered by GPT-4-Turbo that can perform a variety of coding tasks using vision.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-7566\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/6384833696378213054053339.png\" alt=\"\" width=\"555\" height=\"537\" \/><\/p>\n<p>Another example is Snap, built by the @healthifyme team with GPT-4-Turbo with vision, which recognizes food photos from around the world and gives users nutritional insights. Finally, Make Real, developed by @tldraw, lets users draw a UI on a whiteboard and uses GPT-4-Turbo with vision to generate website code directly.<\/p>\n<p>Overall, GPT-4-Turbo is a powerful model, and its release opens up new possibilities for the AI field.<\/p>","protected":false},"excerpt":{"rendered":"<p>OpenAI released the official version of GPT-4-Turbo, a vision-capable model with a 128k context window. The model is now generally available as \u201cgpt-4-turbo\u201d, the latest version of which is \u201cgpt-4-turbo-2024-04-09\u201d. 
Model documentation: https:\/\/platform.openai.com\/docs\/models\/continuous-model-upgrades Pricing: https:\/\/openai.com\/pricing Rate limits: https:\/\/platform.openai.com\/docs\/g<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[510,190],"collection":[],"class_list":["post-7564","post","type-post","status-publish","format-standard","hentry","category-news","tag-gpt-4","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7564","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=7564"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7564\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=7564"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=7564"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=7564"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=7564"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}