{"id":18312,"date":"2024-08-20T09:26:04","date_gmt":"2024-08-20T01:26:04","guid":{"rendered":"https:\/\/www.1ai.net\/?p=18312"},"modified":"2024-08-20T09:26:04","modified_gmt":"2024-08-20T01:26:04","slug":"%e7%a7%91%e5%a4%a7%e8%ae%af%e9%a3%9e%e6%8e%a8%e5%87%ba%e6%98%9f%e7%81%ab%e6%9e%81%e9%80%9f%e8%b6%85%e6%8b%9f%e4%ba%ba%e4%ba%a4%e4%ba%92%ef%bc%9a%e5%8f%af%e6%a8%a1%e4%bb%bf%e5%ad%99","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/18312.html","title":{"rendered":"iFLYTEK launches &quot;Spark Ultra-fast Human Interaction&quot;: can imitate the voice, tone, and personality of Sun Wukong, Crayon Shin-chan, Peppa Pig, etc."},"content":{"rendered":"<p>News on August 19,<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%a7%91%e5%a4%a7%e8%ae%af%e9%a3%9e\" title=\"[Sees articles with tags]\" target=\"_blank\" >iFLYTEK<\/a>Announced an update to the Spark Voice Master Model, bringing &quot;Spark Ultra-Fast Super-Anthropomorphic Interaction&quot;, scheduled to <strong>Available August 30<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%ae%af%e9%a3%9e%e6%98%9f%e7%81%ab\" title=\"[Sees articles with tags]\" target=\"_blank\" >iFlytek Spark<\/a> App.<\/strong><\/p>\n<p>&quot;Spark Extreme Super Anthropomorphic Interaction&quot; uses a unified neural network to achieve speech-to-speech<strong>End-to-end modeling<\/strong>Officials said that even if they are frequently interrupted, they can still &quot;respond quickly&quot;, which is more in line with daily conversation situations.<\/p>\n<p><a href=\"https:\/\/weibo.com\/tv\/show\/1034:5069030453608463?from=old_pc_videoshow\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-18313\" title=\"5b0b6d01j00sihtaa001kd001hc00u0m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/5b0b6d01j00sihtaa001kd001hc00u0m.jpg\" alt=\"5b0b6d01j00sihtaa001kd001hc00u0m\" width=\"1920\" height=\"1080\" \/><\/a><\/p>\n<p>The upgraded version can be used for 
happiness, sadness, anger, fear, etc.<strong>Emotion recognition<\/strong>; Automatically bring in conversations that fit the situation and respond with the appropriate emotional tone.<\/p>\n<p>It is reported that the emotional expression of &quot;Spark Extreme Speed Super Anthropomorphic Interaction&quot; is &quot;more flexible&quot; and can follow user instructions to control dozens of emotions, styles, and dialects during communication, and supports adjusting the speaking speed.<\/p>\n<p>&quot;Spark Speedy Super Anthropomorphic Interaction&quot; can imitate the voice and tone of many characters including Sun Wukong, Crayon Shin-chan, Peppa Pig, etc., and can also imitate their personalities to chat with users.<\/p>\n<p>Spark Speed Super Anthropomorphic Interaction Project<strong>\u00a0<\/strong><strong>The iFlytek Spark App will be launched on August 30 and will be open for trial use<\/strong>.<\/p>","protected":false},"excerpt":{"rendered":"<p>August 19 news, KU Xunfei announced an update to the Starfire speech model, bringing \"Starfire high-speed super anthropomorphic interaction\", which is scheduled to go online on August 30 Xunfei Starfire App. \"Starfire high-speed super anthropomorphic interaction\" adopts a unified neural network to realize the end-to-end modeling of speech to speech. The \"Starfire Extremely Fast Hyper-Animatronic Interaction\" uses a unified neural network to achieve end-to-end modeling of speech. Officially, it can \"react quickly\" even if it is frequently interrupted, which is more in line with daily conversational situations. The upgraded version can recognize emotions such as happiness, sadness, anger, fear, etc.; automatically bring in dialogues that fit the context, and reply with the appropriate emotional tone. 
According to the introduction, the emotional expression of \"Spark Ultra-Fast Super-Anthropomorphic Interaction\" is \"more flexible\": following user instructions, it can switch among dozens of emotions, styles, and dialects during a conversation, and it supports adjusting the speaking speed. \"Spark Ultra-Fast Super-Anthropomorphic Interaction\" is capable of<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1116,362],"collection":[],"class_list":["post-18312","post","type-post","status-publish","format-standard","hentry","category-news","tag-1116","tag-362"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/18312","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=18312"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/18312\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=18312"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=18312"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=18312"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=18312"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}