{"id":34040,"date":"2025-04-25T14:47:27","date_gmt":"2025-04-25T06:47:27","guid":{"rendered":"https:\/\/www.1ai.net\/?p=34040"},"modified":"2025-04-25T14:47:27","modified_gmt":"2025-04-25T06:47:27","slug":"ai-%e4%bc%9a%e6%9c%89%e6%84%8f%e8%af%86%e5%90%97%ef%bc%9fanthropic-%e5%90%af%e5%8a%a8%e6%96%b0%e9%a1%b9%e7%9b%ae%ef%bc%8c%e6%8e%a2%e7%b4%a2%e5%af%bb%e6%b1%82%e7%ad%94%e6%a1%88","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/34040.html","title":{"rendered":"Will AI ever be conscious? Anthropic launches new project to explore the search for answers"},"content":{"rendered":"<p>April 25, 2012 - If <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai\" title=\"[View articles tagged with [AI]]\" target=\"_blank\" >AI<\/a> What to do when consciousness sprouts?<a href=\"https:\/\/www.1ai.net\/en\/tag\/anthropic\" title=\"[View articles tagged with [Anthropic]]\" target=\"_blank\" >Anthropic<\/a> Researchers at the University of California at Berkeley have launched the Model Welfare research program to explore this cutting-edge issue.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-34041\" title=\"ef96f8daj00sv9hi5005md000v900g2p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/04\/ef96f8daj00sv9hi5005md000v900g2p.jpg\" alt=\"ef96f8daj00sv9hi5005md000v900g2p\" width=\"1125\" height=\"578\" \/><\/p>\n<p>Many people say \"please\" and \"thank you\" when interacting with chatbots, and OpenAI CEO Sam Altman has revealed that these polite phrases cost tens of millions of dollars a year to compute.<\/p>\n<p>Anthropic researchers are delving into a more cutting-edge question: What if AI systems are not just tools, but have some kind of \"experience\/emotion\/consciousness\"?<\/p>\n<p>Anthropic's Model Welfare research program seeks to explore whether AI can be conscious, and what this means for ethical design and AI development.<\/p>\n<p>\"We're very unsure whether AI will ever sprout consciousness, and there's no consensus on how to even tell\", said Anthropic team member Kyle Fish. The researchers do not believe that mainstream models such as Claude are already conscious.<strong>Internal experts estimate the probability of Claude 3.7 Sonnet possessing consciousness to be only between 0.15% and 15%.<\/strong><\/p>\n<p>Fish says that the research on \"model welfare\" is driven by both ethical and safety considerations. On the one hand, if AI systems can really experience positive or negative feelings, should we care if they \"suffer\" or \"are happy\"; on the other hand, this question involves AI alignment and how to ensure that AI can safely perform tasks. tasks.<\/p>\n<p>Fish notes, \"We want AIs to be happy with the task. If they show dissatisfaction, it's not just an ethical issue, it's a security risk.\" Anthropic is currently exploring ways for models to express preferences or reject \"painful\" tasks, while looking for architectural features that resemble human consciousness through interpretability studies.<\/p>","protected":false},"excerpt":{"rendered":"<p>April 25, 2011 - What if AI develops a consciousness? Anthropic researchers have launched a \"model welfare\" research program to explore this cutting-edge question. Many people say \"please\" and \"thank you\" when interacting with chatbots, and OpenAI CEO Sam Altman has revealed that these polite phrases cost tens of millions of dollars a year to compute. 
Anthropic researchers are delving into a more cutting-edge question: What if AI systems are not just tools, but have some kind of \"experience\/emotion\/consciousness\"? Anthropic is launching a \"model welfare\" (mod<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[411,320],"collection":[],"class_list":["post-34040","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-anthropic"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/34040","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=34040"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/34040\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=34040"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=34040"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=34040"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=34040"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}