{"id":23686,"date":"2024-11-23T09:34:15","date_gmt":"2024-11-23T01:34:15","guid":{"rendered":"https:\/\/www.1ai.net\/?p=23686"},"modified":"2024-11-23T09:34:15","modified_gmt":"2024-11-23T01:34:15","slug":"openai-%e9%a6%96%e5%b8%ad%e4%ba%a7%e5%93%81%e5%ae%98%ef%bc%9achatgpt-%e7%bd%91%e9%a1%b5%e7%ab%af%e6%9c%ac%e5%91%a8%e5%bc%95%e5%85%a5%e9%ab%98%e7%ba%a7%e8%af%ad%e9%9f%b3%e6%a8%a1%e5%bc%8f","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/23686.html","title":{"rendered":"OpenAI Chief Product Officer: ChatGPT Web Version Introduces Advanced Voice Mode This Week"},"content":{"rendered":"<p>Early Wednesday morning Beijing time, <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> Chief Product Officer Kevin Weil confirmed on the X platform that Advanced Voice Mode went fully live this week on the <a href=\"https:\/\/www.1ai.net\/en\/tag\/chatgpt\" title=\"[View articles tagged with [ChatGPT]]\" target=\"_blank\" >ChatGPT<\/a> web version, opening the feature to paid users.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-23687\" title=\"a24944e9j00sndr06002cd000gh00i7p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/11\/a24944e9j00sndr06002cd000gh00i7p.jpg\" alt=\"a24944e9j00sndr06002cd000gh00i7p\" width=\"593\" height=\"655\" \/><\/p>\n<p>Back in September, OpenAI first demonstrated ChatGPT's voice capabilities, though at the time the feature was limited to the iOS and Android apps. Built on the latest GPT-4o model, Advanced Voice Mode has native audio processing capabilities and can interact with users in natural language. 
It can even sense non-verbal cues such as intonation and speaking speed, and mimic emotion in its responses to sound more vivid and lifelike.<\/p>\n<p>For subscribers on a paid plan, Advanced Voice Mode can be activated on the web version by clicking the voice icon in the lower-right corner of the prompt window. Note, however, that <strong>Plus and Teams users have a cap on their daily voice usage<\/strong>; the system will alert you when the limit is reached.<\/p>\n<p>Weil also revealed on X that OpenAI is optimizing voice mode interactions to make them less \"interruptive\". Until then, users are advised to organize their thoughts before speaking.<\/p>\n<p>1AI understands that the highly anticipated feature was first made available to beta users in July and rolled out to paid subscribers at the end of September. In an October tweet, OpenAI revealed that free users will also get the opportunity to experience Advanced Voice Mode, while Plus and free users in the EU will have to be patient.<\/p>","protected":false},"excerpt":{"rendered":"<p>Early Wednesday morning Beijing time, OpenAI Chief Product Officer Kevin Weil confirmed on the X platform that Advanced Voice Mode went fully live this week on the ChatGPT web version, open to paid users. As early as September this year, OpenAI demonstrated ChatGPT's voice capabilities for the first time, though at the time the feature was limited to the iOS and Android apps. Advanced Voice Mode, built on the latest GPT-4o model, has native audio processing capabilities and can interact with users in natural language. It can even sense non-verbal cues, such as tone and speaking speed, and imitate emotion in its responses, making them more vivid and lifelike.
<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[177,190],"collection":[],"class_list":["post-23686","post","type-post","status-publish","format-standard","hentry","category-news","tag-chatgpt","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/23686","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=23686"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/23686\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=23686"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=23686"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=23686"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=23686"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}