{"id":10408,"date":"2024-05-15T09:56:40","date_gmt":"2024-05-15T01:56:40","guid":{"rendered":"https:\/\/www.1ai.net\/?p=10408"},"modified":"2024-05-15T09:56:40","modified_gmt":"2024-05-15T01:56:40","slug":"openai%e9%87%8d%e7%a3%85%e5%8f%91%e5%b8%83%e5%85%a8%e8%83%bd%e6%a8%a1%e5%9e%8bgpt-4o%ef%bc%8c%e5%85%8d%e8%b4%b9%e5%bc%80%e6%94%be%e7%bb%99%e6%89%80%e6%9c%89%e7%94%a8%e6%88%b7%e4%bd%bf%e7%94%a8","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/10408.html","title":{"rendered":"OpenAI releases the all-round model GPT-4o, which is free for all users!"},"content":{"rendered":"<p data-pm-slice=\"0 0 []\">At its Spring Update event, <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> released <a href=\"https:\/\/www.1ai.net\/en\/tag\/gpt-4o\" title=\"[View articles tagged with [GPT-4o]]\" target=\"_blank\" >GPT-4o<\/a>, its new flagship artificial intelligence model (the \"o\" stands for \"omni,\" meaning \"all\"). The model is open to all users, free and paid alike.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-10409\" title=\"get-275\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-275.jpg\" alt=\"get-275\" width=\"750\" height=\"381\" \/><\/div>\n<p data-track=\"195\">This shows that OpenAI is actively promoting the popularization of artificial intelligence, making it more convenient for more people to use.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-10410\" title=\"get-276\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-276.jpg\" alt=\"get-276\" width=\"543\" height=\"700\" \/><\/div>\n<p data-track=\"196\">GPT-4o performs comparably to GPT-4 Turbo but is faster at processing audio, image, and text inputs.<\/p>\n<p data-track=\"197\">The model focuses on understanding the intonation of speech and provides 
a real-time audio and visual experience. Compared with GPT-4 Turbo, it is twice as fast, 50% cheaper, and has a rate limit five times higher.<\/p>\n<p data-track=\"198\">OpenAI demonstrated the new voice assistant in an online livestream, letting everyone see and understand its latest progress.<\/p>\n<p data-track=\"200\"><strong>How to use OpenAI GPT-4o?<\/strong><\/p>\n<p data-track=\"201\">GPT-4o is now available to all ChatGPT users (including free users). Previously, only paid subscribers could use GPT-4-class models.<\/p>\n<p data-track=\"202\">Paid users, however, retain message limits up to five times higher than those of free users.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-10411\" title=\"get-277\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-277.jpg\" alt=\"get-277\" width=\"1083\" height=\"634\" \/><\/div>\n<p data-track=\"203\"><strong>What improvements does GPT-4o have over previous GPTs?<\/strong><\/p>\n<p data-track=\"204\">Before GPT-4o, the average latency of Voice Mode conversations in ChatGPT was 2.8 seconds with GPT-3.5 and 5.4 seconds with GPT-4.<\/p>\n<p data-track=\"205\">That pipeline involved three separate models: one that transcribes audio to text, a central GPT model that takes text input and outputs text, and one that converts text back into audio.<\/p>\n<p data-track=\"206\">This means that the main source of intelligence, GPT-4, loses a lot of information\u2014it cannot directly observe intonation, multiple speakers, or background noise, and cannot output laughter, singing, or emotionally expressive speech.<\/p>\n<p data-track=\"207\">GPT-4o is an end-to-end model trained across text, vision, and audio data. That is, all inputs and outputs are processed by the same neural network. 
This is the first model OpenAI has developed that spans all of these modalities, so the exploration of GPT-4o&#039;s capabilities is still in its early stages.<\/p>\n<p data-track=\"208\"><strong>GPT-4o Evaluation and Performance<\/strong><\/p>\n<p data-track=\"209\">As measured on traditional benchmarks, GPT-4o&#039;s performance in text, reasoning, and coding intelligence reaches the level of GPT-4 Turbo, while setting new highs in multilingual, audio, and vision capabilities.<\/p>\n<p data-track=\"210\">Additionally, OpenAI developed a new tokenizer for the model that compresses text more efficiently across languages.<\/p>\n<p data-track=\"211\">OpenAI explains the model\u2019s capabilities in detail in its release blog using many different examples.<\/p>\n<div class=\"pgc-img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-10412\" title=\"get-278\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/05\/get-278.jpg\" alt=\"get-278\" width=\"769\" height=\"791\" \/><\/div>\n<p data-track=\"212\">The researchers also discussed the limitations and safety of the model.<\/p>\n<blockquote>\n<p data-track=\"213\">We realize that GPT-4o&#039;s audio modalities may introduce a variety of new risks. Today, we are publicly releasing text and image inputs and text outputs. In the coming weeks and months, we will work on the technical infrastructure, post-training to improve usability, and the safety needed to release the other modalities. For example, at launch, audio output will be limited to a selection of preset voices and will comply with our existing safety policies. We will share more details about the full range of GPT-4o&#039;s modalities in an upcoming system card.<\/p>\n<p data-track=\"214\">\u2014 OpenAI<\/p>\n<\/blockquote>\n<p data-track=\"215\">Artificial intelligence companies have been working hard to improve computing power. 
In the previous voice-interaction setup, the three models (transcription, intelligence, and text-to-speech) worked in sequence, resulting in high latency that undermined the immersive experience.<\/p>\n<p data-track=\"216\">With GPT-4o, however, this all becomes possible: the model can adjust the characteristics of its speech output and communicate naturally and fluently with the user, with almost no waiting time. GPT-4o is undoubtedly a fascinating and impressive tool for all users!<\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>OpenAI released a new flagship artificial intelligence model called GPT-4o (the \"o\" stands for \"omni,\" meaning \"all\") at its Spring Update event. The model is open to all users, whether free or paid. This indicates that OpenAI is actively promoting the spread of artificial intelligence to make it easier for more people to use. GPT-4o performs comparably to GPT-4 Turbo but is faster when processing audio, image, and text input. The model focuses on understanding the tone of voice and provides a real-time audio and visual experience. 
Compared to GPT-4 Turbo, it is twice as fast and 50% cheaper.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[148,146],"tags":[2582,190,2585],"collection":[],"class_list":["post-10408","post","type-post","status-publish","format-standard","hentry","category-headline","category-news","tag-gpt-4o","tag-openai","tag-2585"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/10408","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=10408"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/10408\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=10408"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=10408"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=10408"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=10408"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}