{"id":6118,"date":"2024-03-25T09:34:11","date_gmt":"2024-03-25T01:34:11","guid":{"rendered":"https:\/\/www.1ai.net\/?p=6118"},"modified":"2024-03-25T09:34:11","modified_gmt":"2024-03-25T01:34:11","slug":"%e6%b6%88%e6%81%af%e7%a7%b0%e8%8b%b9%e6%9e%9c%e7%a0%94%e7%a9%b6%e4%ba%ba%e5%91%98%e6%ad%a3%e6%8e%a2%e7%b4%a2%e5%85%8d%e5%94%a4%e9%86%92%e8%af%8d%e5%91%bc%e5%8f%ab-siri%ef%bc%8c%e7%94%a8-ai-%e8%81%86","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/6118.html","title":{"rendered":"Apple researchers are reportedly exploring the possibility of calling Siri without the wake-up word, replacing it with AI listening"},"content":{"rendered":"<p data-vmark=\"3d87\">According to MIT Technology Review, a paper published on Friday (22nd) local time showed that <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8b%b9%e6%9e%9c\" title=\"[View articles tagged with [apple]]\" target=\"_blank\" >Apple<\/a>&#039;s researchers are exploring <span class=\"accentTextColor\">the possibility of using artificial intelligence to detect when a user is talking to a device such as an iPhone<\/span>, thereby eliminating the technical requirement for trigger phrases like \u201cHey, <a href=\"https:\/\/www.1ai.net\/en\/tag\/siri\" title=\"_Other Organiser\" target=\"_blank\" >Siri<\/a>\u201d.<\/p>\n<p data-vmark=\"5269\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6119\" title=\"5ed6ff9b-ce57-45fe-a362-e04e09a3df8a\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/5ed6ff9b-ce57-45fe-a362-e04e09a3df8a.jpg\" alt=\"5ed6ff9b-ce57-45fe-a362-e04e09a3df8a\" width=\"800\" height=\"453\" \/><\/p>\n<p data-vmark=\"498c\">In the study, which was uploaded to arXiv and has not yet been peer-reviewed, researchers <span class=\"accentTextColor\">trained a large language model using speech captured by a smartphone as well as acoustic data from background noise<\/span> to look for patterns that \u201cmay indicate that the user needs assistance 
from the device.\u201d<\/p>\n<p data-vmark=\"75c5\">The paper states that the model is partly based on OpenAI&#039;s GPT-2<span class=\"accentTextColor\"> because it is relatively lightweight and can run on devices such as smartphones<\/span>. The paper also describes more than 129 hours of data and additional text data used to train the model, but does not specify the source of the recordings in the training set. According to LinkedIn profiles, six of the seven authors list their affiliation as Apple, three of whom work on Apple&#039;s Siri team.<\/p>\n<p data-vmark=\"5062\">The paper\u2019s conclusions are \u201cencouraging,\u201d claiming that the model makes more accurate predictions than audio-only or text-only models, and that performance improves further as the model scales.<\/p>\n<p data-vmark=\"dc9f\">Currently, Siri works by retaining only a small amount of audio: <span class=\"accentTextColor\">it won&#039;t start recording or prepare to answer user prompts until it hears a trigger phrase like &quot;Hey, Siri&quot;<\/span>.<\/p>\n<p data-vmark=\"5dba\">Jen Kim, a privacy and data policy researcher at the Stanford Institute for Human-Centered AI, said removing the \u201cHey, Siri\u201d prompt could increase concerns about devices \u201calways listening.\u201d<\/p>","protected":false},"excerpt":{"rendered":"<p>According to MIT Technology Review, a paper published on Friday (22nd) local time showed that Apple\u2019s researchers are exploring the possibility of using artificial intelligence to detect when users are talking to devices such as the iPhone, thereby eliminating the technical need for trigger phrases such as \u201cHey, Siri\u201d. In the study, uploaded to arXiv and not yet peer-reviewed, researchers trained a large language model on speech captured by smartphones and acoustic data from background noise to look for patterns that \u201cmay indicate that the user needs assistance from the device.\u201d 
According to the paper, the model is partly based on OpenAI GPT-2 because it is relatively lightweight and can run on devices such as smartphones. It also describes the data used to train the model<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1843,345],"collection":[],"class_list":["post-6118","post","type-post","status-publish","format-standard","hentry","category-news","tag-siri","tag-345"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/6118","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=6118"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/6118\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=6118"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=6118"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=6118"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=6118"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}