{"id":10292,"date":"2024-05-14T09:40:13","date_gmt":"2024-05-14T01:40:13","guid":{"rendered":"https:\/\/www.1ai.net\/?p=10292"},"modified":"2024-05-14T09:40:13","modified_gmt":"2024-05-14T01:40:13","slug":"openai%e5%85%a8%e8%83%bd%e6%a8%a1%e5%9e%8bgpt-4o%e5%8f%91%e5%b8%83-%e8%83%bd%e5%90%ac%e8%83%bd%e7%9c%8b%e8%83%bd%e8%af%b4%e8%bf%98%e5%85%8d%e8%b4%b9","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/10292.html","title":{"rendered":"OpenAI releases GPT-4o, an all-around model that can hear, see, and speak, and is free"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a>\u00a0has released its latest flagship large model,\u00a0<a href=\"https:\/\/www.1ai.net\/en\/tag\/gpt-4o\" title=\"[View articles tagged with [GPT-4o]]\" target=\"_blank\" >GPT-4o<\/a>. The model is not only free to use, but also combines the ability to hear, see, and speak, providing a smooth, low-latency interactive experience, as if you were having a video call with a human being.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-10293\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/6385127327655452191875481.png\" alt=\"\" width=\"840\" height=\"312\" \/><\/p>\n<p><strong>Features of GPT-4o<\/strong><\/p>\n<ul>\n<li>Multimodal Input and Output: GPT-4o can accept any combination of text, audio, and images as input and generate any combination of text, audio, and image output.<\/li>\n<li>Fast Response: The model responds to audio input in as little as 232 ms, with an average of 320 ms, comparable to human response times in conversation.<\/li>\n<li>Free and Open: GPT-4o will be free for all users and includes features previously exclusive to ChatGPT Plus, such as vision, web browsing, memory, and code execution.<\/li>\n<\/ul>\n<p>During the 
livestream, CTO Mira Murati demonstrated GPT-4o's real-time interactive capabilities, including the ability to interrupt the conversation at any time and to respond with a rich, expressive tone of voice.<\/p>\n<p>Researcher William Fedus revealed that GPT-4o was one of the models previously A\/B tested in the large model arena (Chatbot Arena), where it outperformed GPT-4-Turbo.<\/p>\n<p><strong>API Provision<\/strong><\/p>\n<p>GPT-4o will also be available through the API at half the price, with twice the speed and five times the rate limit.<\/p>\n<p>Users are already envisioning application scenarios for GPT-4o, such as helping blind or visually impaired people better understand the world.<\/p>\n<p><strong>Demo Highlights<\/strong><\/p>\n<p>During the livestream, OpenAI President Greg Brockman demonstrated GPT-4o's real-time translation capabilities, as well as a conversation, and even singing, between two ChatGPT instances.<\/p>\n<p><strong>Technical Details<\/strong><\/p>\n<p>GPT-4o is a new model trained end-to-end, with all inputs and outputs processed by the same neural network, a significant improvement over the previous pipeline of separate speech models.<\/p>\n<p><strong>Future Outlook<\/strong><\/p>\n<p>Although OpenAI has not yet released a detailed technical report, the successful demonstration of GPT-4o has attracted widespread attention and discussion.<\/p>\n<p>The release of GPT-4o not only demonstrates OpenAI's latest progress in AI, but also provides the public with a powerful and easy-to-use AI tool. As the technology continues to advance, we can expect GPT-4o to enable even richer and more innovative applications in the future.<\/p>","protected":false},"excerpt":{"rendered":"<p>OpenAI has released its flagship large model, GPT-4o, which is not only free to use, but also combines the ability to listen, see, and speak, providing a smooth, low-latency interactive experience, as if it were a video call with a person. 
Features of GPT-4o Multimodal Input and Output: GPT-4o can accept any combination of text, audio, and images as input and generate corresponding text, audio, and image output. Fast Response: The model responds to audio input in as little as 232 ms, with an average of 320 ms, comparable to human conversational speed. Free and Open: GPT-4o will be free for all users and includes ChatGPT Plus features such as vision, web browsing, memory, and code execution. In the live broadcast, C<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[2582,190,2585],"collection":[],"class_list":["post-10292","post","type-post","status-publish","format-standard","hentry","category-news","tag-gpt-4o","tag-openai","tag-2585"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/10292","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=10292"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/10292\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=10292"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=10292"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=10292"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=10292"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}