{"id":2690,"date":"2024-01-08T16:19:28","date_gmt":"2024-01-08T08:19:28","guid":{"rendered":"https:\/\/www.1ai.net\/?p=2690"},"modified":"2024-01-08T16:19:28","modified_gmt":"2024-01-08T08:19:28","slug":"ai%e5%8a%a9%e6%89%8b%e9%9d%a2%e4%b8%b4%e7%94%a8%e6%88%b7%e6%b5%81%e5%a4%b1%e5%8d%b1%e6%9c%ba-%e5%ae%89%e5%85%a8%e8%80%83%e9%87%8f%e6%88%96%e8%ae%a9%e5%88%9b%e4%b8%9a%e5%85%ac%e5%8f%b8%e5%a4%b1","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/2690.html","title":{"rendered":"AI assistants face user loss crisis; security concerns may cause startups to lose opportunities"},"content":{"rendered":"<p>According to foreign media reports,<a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e5%88%9b%e4%b8%9a%e5%85%ac%e5%8f%b8\" title=\"[SEES ARTICLES WITH LABELS]\" target=\"_blank\" >AI startups<\/a>Antropic has been facing a tough time lately. The company's<a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e4%ba%a7%e5%93%81\" title=\"_OTHER ORGANISER\" target=\"_blank\" >AI Products<\/a>After Claude 2.1 was launched, many users found that it became difficult to communicate and use, often refusing to execute commands for no apparent reason.<\/p>\n<p>The root of the problem is that Claude 2.1 has become overly cautious and law-abiding in its safety and moral judgments by strictly adhering to its published constitution of AI ethics. This resulted in Antropic having to sacrifice some of its product performance in its pursuit of AI security.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-2691\" title=\"202310180948538535_0\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/01\/202310180948538535_0.jpg\" alt=\"202310180948538535_0\" width=\"680\" height=\"357\" \/><\/p>\n<p>The result is that a large number of paying subscribers have expressed strong dissatisfaction. 
They have taken to social media platforms to complain that Claude 2.1 is \"dead\" and say they are ready to cancel their subscriptions in favor of competitor ChatGPT.<\/p>\n<p>Industry insiders point out that Anthropic's predicament once again highlights the dilemma startups face in the AI space. Strict self-regulation to ensure AI safety is undoubtedly a step in the right direction, but over-weighting ethical and legal considerations could cost companies their head start and put them at a disadvantage as competition intensifies.<\/p>\n<p>Anthropic is now deep in crisis, and maintaining safety without losing flexibility is a challenge for any AI company. The industry will be watching to see how Anthropic responds to the current dilemma and retains the users drifting to its competitors.<\/p>","protected":false},"excerpt":{"rendered":"<p>According to foreign media reports, AI startup Anthropic has recently faced a difficult situation. After the launch of the company's AI product Claude 2.1, many users found it difficult to communicate with and use, as it often refused to execute commands for no reason. The root of the problem was that Claude 2.1 followed its published AI ethics constitution to the letter, becoming overly cautious in its safety and moral judgments. This resulted in Anthropic having to sacrifice some of its product performance in its pursuit of AI safety. As a result, a large number of paying subscribers expressed strong dissatisfaction. They have taken to social media platforms to complain that Claude 2.1 is \"dead\" and have indicated that they are ready to cancel their subscriptions and switch to competitor ChatGPT. 
Industry insiders have pointed out that Anthropic's plight re<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[873,872],"collection":[],"class_list":["post-2690","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/2690","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=2690"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/2690\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=2690"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=2690"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=2690"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=2690"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}