{"id":30680,"date":"2025-03-14T10:55:08","date_gmt":"2025-03-14T02:55:08","guid":{"rendered":"https:\/\/www.1ai.net\/?p=30680"},"modified":"2025-03-14T10:55:08","modified_gmt":"2025-03-14T02:55:08","slug":"anthropic-ceo-%e9%98%bf%e8%8e%ab%e4%bb%a3%e4%bc%8a%ef%bc%9a%e6%9c%aa%e6%9d%a5-ai-%e6%88%96%e6%9c%89%e8%87%aa%e6%88%91%e5%86%b3%e5%ae%9a%e6%9d%83%ef%bc%8c%e5%8f%af%e6%8b%92%e7%bb%9d%e4%b8%8d","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/30680.html","title":{"rendered":"Anthropic CEO Amodei: Future AIs may be self-determining, rejecting 'unpleasant' tasks"},"content":{"rendered":"<p>According to a March 13 report by Ars Technica, <a href=\"https:\/\/www.1ai.net\/en\/tag\/anthropic\" title=\"[View articles tagged with [Anthropic]]\" target=\"_blank\" >Anthropic<\/a> CEO Dario Amodei made a surprising suggestion on Monday: future advanced <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai\" title=\"[View articles tagged with [AI]]\" target=\"_blank\" >AI<\/a> models may be given a \"button\" that lets them <strong>opt out of unpleasant tasks<\/strong>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-30681\" title=\"894ad0b8j00st3er0002fd000sg00i9p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/03\/894ad0b8j00st3er0002fd000sg00i9p.jpg\" alt=\"894ad0b8j00st3er0002fd000sg00i9p\" width=\"1024\" height=\"657\" \/><\/p>\n<p>Amodei said in the interview, \"This is another one of those topics that <strong>makes me sound crazy<\/strong>. I think we should at least consider the question: if we are building these systems, and they are <strong>capable of performing a variety of tasks like a human being<\/strong> and appear to possess many human cognitive abilities, then if it <strong>quacks like a duck and walks like a duck, maybe it is a duck<\/strong>.
&quot;<\/p>\n<p>Amodei's comments were in response to a question posed by data scientist Carmem Domingues, who asked why Anthropic hired AI benefits researcher Kyle Fish at the end of 2024 to work on future AI models<strong>Possibility of perceptual capabilities<\/strong>, or whether it should receive moral consideration and protection in the future.<\/p>\n<p>1AI has learned from the report that Fish is currently researching the controversial topic of whether AI can have the ability to perceive and whether it deserves moral protections.<\/p>\n<p>Amodei explains, \"One possibility we're considering is to give the models a button when we deploy them into a real-world environment, above the<strong>It says, \"I give up this job.<\/strong>This is how the model works.<strong>You can press this button.<\/strong>. &quot;<\/p>\n<p>Its says it's just a very simple preference framework that allows the model to push this button, assuming it really is autonomous and hates the job very much. \"If you find that the model is pushing this button a lot and doing something really unpleasant, maybe you should be concerned -- it doesn't mean you're totally convinced, but you should at least keep an eye on it.\"<\/p>","protected":false},"excerpt":{"rendered":"<p>Anthropic CEO Dario Amodei made a surprising observation on Monday, according to a March 13 report by Ars Technica, suggesting that future advanced AI models may be given a \"button\" that allows them to opt out of unpleasant tasks. AI models of the future may be given a \"button\" that allows them to opt out when they encounter unpleasant tasks. This is another one of those topics that makes me seem crazy,\" Amodei said in an interview. I think we should at least consider the question: if we're building these systems, they can perform a variety of tasks like humans and seem to have a lot of human cognitive abilities. 
If it quacks like a duck and walks like a duck, maybe it is a duck.\" Amodei's comments were in response to data scientists<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[411,320,445],"collection":[],"class_list":["post-30680","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-anthropic","tag-ceo"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30680","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=30680"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/30680\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=30680"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=30680"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=30680"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=30680"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}