{"id":48222,"date":"2026-01-02T18:25:00","date_gmt":"2026-01-02T10:25:00","guid":{"rendered":"https:\/\/www.1ai.net\/?p=48222"},"modified":"2026-01-02T18:25:00","modified_gmt":"2026-01-02T10:25:00","slug":"%e6%b6%88%e6%81%af%e7%a7%b0-openai-%e5%a4%a7%e5%8a%9b%e7%a0%94%e5%8f%91%e9%9f%b3%e9%a2%91-ai-%e6%a8%a1%e5%9e%8b%ef%bc%8c%e5%8a%a0%e7%b4%a7%e5%a4%87%e6%88%98%e9%a6%96%e6%ac%be%e6%97%a0","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/48222.html","title":{"rendered":"Message says OpenAI is working hard on the audio AI model and stepping up the war's first \"screenless\" hardware device"},"content":{"rendered":"<p>On January 2, according to The Information<a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"[View articles tagged with [OpenAI]]\" target=\"_blank\" >OpenAI<\/a> I'm fully enhancing my voice<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%ba%ba%e5%b7%a5%e6%99%ba%e8%83%bd\" title=\"[View articles tagged with [artificial intelligence]]\" target=\"_blank\" >AI<\/a>Capability to introduce a voice-centred individual for the future <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e8%ae%be%e5%a4%87\" title=\"[SEE ARTICLES WITH [AI DEVICE] LABELS]\" target=\"_blank\" >AI Devices<\/a>Pave the road. A number of sources have revealed that the equipment will be<strong>Main form of hearing interaction<\/strong>instead of relying on the screen\u3002<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-48223\" title=\"47211819j00t88fkkk002bd000u00z9p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/01\/47211819j00t88fkk002bd000uu00z9p.jpg\" alt=\"47211819j00t88fkkk002bd000u00z9p\" width=\"1110\" height=\"1269\" \/><\/p>\n<p>Currently, ChatGPT does not have the same voice function as the model used behind the text answer. 
Researchers inside OpenAI believe the existing audio models <strong>clearly lag in accuracy and response speed<\/strong>, which over the past two months has pushed the company to pool its engineering, product, and research forces to shore up the audio models' weaknesses.<\/p>\n<p>This adjustment points directly at OpenAI's hardware goal: building a <strong>consumer-grade device operated by natural voice commands<\/strong>. Earlier reports said the first product is <strong>at least a year away<\/strong>.<\/p>\n<p>Built on a new architecture, the audio model can generate <strong>more natural, more expressive<\/strong> voice responses, along with expanded conversational capabilities. OpenAI plans to officially release the model in <strong>the first quarter of 2026<\/strong>.<\/p>\n<p>On hardware form, OpenAI's judgement is similar to that of Google, Amazon, Meta, and Apple: existing mainstream devices were not designed for future AI interaction. The OpenAI team wants users to interact with the device by <strong>\"speaking\" rather than \"looking at a screen\"<\/strong>, believing that voice is the mode of interaction closest to human instinct.<\/p>\n<p>Jony Ive, who is working on the hardware project with OpenAI, has also stressed that a screenless design is not only more natural but <strong>also helps avoid user addiction<\/strong>. In his view, the new generation of devices should correct the negative effects of past consumer electronics and take responsibility for doing so.<\/p>\n<p>However, OpenAI still faces real challenges. Internal sources point out that many ChatGPT users are not in the habit of using voice features, partly because the audio model underperforms and partly because users are simply unaware the feature exists. 
Before launching an audio-first AI device, OpenAI will have to change users' habits.<\/p>\n<p>At the organizational level, OpenAI has formed a dedicated team to advance its audio AI strategy. Kundan Kumar, a voice researcher who joined from Character.AI, is responsible for the overall direction; Ben Newhouse is rebuilding the underlying audio infrastructure; and Jackie Shannon, a ChatGPT product manager, is also taking part.<\/p>\n<p>OpenAI does not intend to launch just one device, but rather is planning a product line, including <strong>smart glasses and a screenless smart speaker<\/strong>. Internally, the company envisions such devices existing as \u201ccompanion assistants\u201d that proactively perceive the environment and user needs and, when authorized, provide continuous assistance through audio and video.<\/p>\n<p>To support this long-term plan, OpenAI spent nearly US$6.5 billion in early 2025 (note: approximately 45.506 billion yuan at current exchange rates) to acquire io, the company co-founded by Jony Ive, while advancing the supply chain, industrial design, and prototyping in parallel.<\/p>","protected":false},"excerpt":{"rendered":"<p>On January 2, according to The Information, OpenAI is fully upgrading its voice-based artificial intelligence, paving the way for the future roll-out of a voice-centred personal AI device. A number of informed sources revealed that the device would rely on audio interaction rather than a screen. Currently, ChatGPT's voice feature does not run on the same model that powers its text answers. According to researchers inside OpenAI, the existing audio model clearly lags in accuracy and response speed, prompting the company to pool its engineering, product, and research forces over the past two months to shore up the audio model's weaknesses. 
This adjustment points directly at OpenAI's hardware goal: building a consumer device that can be operated by natural voice commands.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[891,190,204],"collection":[],"class_list":["post-48222","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-openai","tag-204"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/48222","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=48222"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/48222\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=48222"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=48222"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=48222"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=48222"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}