{"id":18688,"date":"2024-08-27T09:07:13","date_gmt":"2024-08-27T01:07:13","guid":{"rendered":"https:\/\/www.1ai.net\/?p=18688"},"modified":"2024-08-27T09:07:13","modified_gmt":"2024-08-27T01:07:13","slug":"anthropic-%e5%85%ac%e5%bc%80-claude-ai-%e6%a8%a1%e5%9e%8b%e7%9a%84%e7%b3%bb%e7%bb%9f%e6%8f%90%e7%a4%ba%e8%af%8d","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/18688.html","title":{"rendered":"Anthropic releases system prompts for Claude AI model"},"content":{"rendered":"<p>Tech media TechCrunch reported yesterday (August 26) that <strong><a href=\"https:\/\/www.1ai.net\/en\/tag\/anthropic\" title=\"View articles tagged [Anthropic]\" target=\"_blank\" >Anthropic<\/a> has published the \u201c<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%b3%bb%e7%bb%9f%e6%8f%90%e7%a4%ba%e8%af%8d\" title=\"View articles tagged [system prompts]\" target=\"_blank\" >system prompts<\/a>\u201d for its Claude AI models.<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-18689\" title=\"8285d833j00siur1u00qkd000v900hkp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/8285d833j00siur1u00qkd000v900hkp.jpg\" alt=\"8285d833j00siur1u00qkd000v900hkp\" width=\"1125\" height=\"632\" \/><\/p>\n<p>To help an AI model better follow human instructions, prompting actually involves two core layers, the user prompt and the system prompt:<\/p>\n<ul>\n<li>User prompt: the text the user enters; the AI model generates its answer based on this input.<\/li>\n<li>System prompt: a prompt supplied by the provider, typically used to set the context of the conversation, provide guidance, or establish rules.<\/li>\n<\/ul>\n<p>Typically, the system prompt tells the model about its basic character and what it should and should not do.<\/p>\n<p>Common industry practice<\/p>\n<p>Every generative AI company, from OpenAI to Anthropic, uses system prompts to 
prevent (or at least try to prevent) misbehavior in its models and to guide the overall tone and sentiment of their responses.<\/p>\n<p>For example, a system prompt might tell the model that it should be polite but never apologize, or that it should honestly acknowledge that it cannot know everything.<\/p>\n<p>However, vendors usually keep these system prompts confidential, both for competitive reasons and to prevent malicious users from exploiting the information to bypass safety protections.<\/p>\n<p>Anthropic chooses to publish its system prompts<\/p>\n<p>Anthropic, however, has been working to portray itself as a more ethical and transparent AI provider, and it has published the system prompts for its latest models (Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku) in the Claude iOS and Android apps and on the web.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-18690\" title=\"e840a0e3j00siur2l003zd000i400cbp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/08\/e840a0e3j00siur2l003zd000i400cbp.jpg\" alt=\"e840a0e3j00siur2l003zd000i400cbp\" width=\"652\" height=\"443\" \/><\/p>\n<p>Anthropic plans to release this kind of information regularly as it updates and fine-tunes its system prompts, Alex Albert, head of developer relations at Anthropic, said in a post published on X.<\/p>","protected":false},"excerpt":{"rendered":"<p>TechCrunch reported yesterday (August 26) that Anthropic has made public the \"system prompts\" for the Claude AI model. For the AI model to better understand human commands, prompting actually consists of 2 core layers, the user prompt and the system prompt: User prompt: the user inputs a prompt, and the AI model generates an answer based on the user prompt. System prompt: this is a prompt supplied by the provider, usually used to set the context of a conversation, provide guidance, or specify rules. 
Typically, the system prompt will<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[320,4145],"collection":[],"class_list":["post-18688","post","type-post","status-publish","format-standard","hentry","category-news","tag-anthropic","tag-4145"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/18688","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=18688"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/18688\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=18688"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=18688"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=18688"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=18688"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}