{"id":3159,"date":"2024-01-24T09:55:04","date_gmt":"2024-01-24T01:55:04","guid":{"rendered":"https:\/\/www.1ai.net\/?p=3159"},"modified":"2024-01-24T09:55:04","modified_gmt":"2024-01-24T01:55:04","slug":"%e7%8e%a9%e8%bd%acchatgpt%e4%b8%a8%e6%8f%90%e7%a4%ba%e8%af%8d%e5%b0%8f%e6%8a%80%e5%b7%a7","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/3159.html","title":{"rendered":"Play with ChatGPT\u4e28Prompting Tips"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3160\" title=\"f527fa054a01034c00a807328689e78cd7b2d1fd\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/01\/f527fa054a01034c00a807328689e78cd7b2d1fd.jpg\" alt=\"f527fa054a01034c00a807328689e78cd7b2d1fd\" width=\"1920\" height=\"1080\" \/><\/p>\n<p>Today I'm sharing 6 <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%8f%90%e7%a4%ba%e8%af%8d\" title=\"[View articles tagged [prompt words]]\" target=\"_blank\" >prompt-word<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%b0%8f%e6%8a%80%e5%b7%a7\" title=\"[View articles tagged [tips]]\" target=\"_blank\" >tips<\/a> that generalize to a variety of large language models (LLMs), not just <a>ChatGPT<i class=\"wx_search_keyword\"><\/i><\/a>; they also work with Wenxin Yiyan and various open-source models, and can help improve the quality of the output.<\/p>\n<section>Of course, the following tips apply to person-to-person communication as well.<\/section>\n<blockquote class=\"js_blockquote_wrap\" data-type=\"2\" data-url=\"\" data-author-name=\"\" data-content-utf8-length=\"62\" data-source-title=\"\">\n<section class=\"js_blockquote_digest\">\n<section>\n<section>Tip 1: Write clear and specific instructions<\/section>\n<section>Tip 2: Give the model time to think<\/section>\n<section>Tip 3: Multiple prompts<\/section>\n<section>Tip 4: Guide the model<\/section>\n<section>Tip 5: Break down tasks or prompts<\/section>\n<p>Tip 6: Use external 
tools<\/p>\n<\/section>\n<\/section>\n<\/blockquote>\n<p><strong>Tip 1: Write clear and specific instructions<\/strong><\/p>\n<section><strong>Give detailed background information for the question.<\/strong> Reducing ambiguity reduces the likelihood of irrelevant or incorrect output.<\/section>\n<section>You can also use delimiters to clearly mark the different parts of your input. Examples include: section headings, triple quotes, triple backticks, triple dashes, angle brackets, and \"#####\".<\/section>\n<p><strong>Specify the desired output format or length.<\/strong> One way to do this is to have the model play a role. For example:<\/p>\n<blockquote class=\"js_blockquote_wrap\" data-type=\"2\" data-url=\"\" data-author-name=\"\" data-content-utf8-length=\"52\" data-source-title=\"\">\n<section class=\"js_blockquote_digest\">\n<section>\n<p>\"Pretend you're a tech blogger.\" \"Respond in about two sentences.\"<\/p>\n<p>\"Give me a summary of this paragraph. Here's an example of a summary I like ___\"<\/p>\n<\/section>\n<\/section>\n<\/blockquote>\n<p><strong>Provide examples.<\/strong> These are the steps of few-shot prompting:<\/p>\n<blockquote class=\"js_blockquote_wrap\" data-type=\"2\" data-url=\"\" data-author-name=\"\" data-content-utf8-length=\"79\" data-source-title=\"\">\n<section class=\"js_blockquote_digest\">\n<section>\n<p>First example (first shot): give a prompt and its corresponding output (answer). Second example (second shot): give a second prompt and its output.<\/p>\n<p>Your prompt: give your actual prompt.<\/p>\n<\/section>\n<\/section>\n<\/blockquote>\n<p>The model can now respond according to the pattern established in the first two examples.<\/p>\n<p><strong>Tip 2: Give the model time to think<\/strong><\/p>\n<p>Models are more likely to make reasoning errors when forced to respond immediately.<\/p>\n<p>By <strong>requiring a series of reasoning steps<\/strong>, you prompt the model to think progressively and more carefully. 
You can ask the model to \"think step by step\" or spell out the specific steps yourself. The simple phrase \"Think step by step.\" is a very effective addition to a prompt.<\/p>\n<p>For example, if you ask the model to grade a student's answer to an exam question, you can prompt it like this:<\/p>\n<blockquote class=\"js_blockquote_wrap\" data-type=\"2\" data-url=\"\" data-author-name=\"\" data-content-utf8-length=\"67\" data-source-title=\"\">\n<section class=\"js_blockquote_digest\">\n<section>\n<p>Step 1: Start by solving the problem yourself; Step 2: Compare your solution with the student's solution;<\/p>\n<p>Step 3: Finish working out your own solution before evaluating the student's.<\/p>\n<\/section>\n<\/section>\n<\/blockquote>\n<p><strong>Tip 3: Multiple prompts<\/strong><\/p>\n<p>When accuracy matters most (rather than latency or cost), generate multiple responses with different prompts and then pick the best answer.<\/p>\n<p>Some of the things you can tweak include:<\/p>\n<ul class=\"list-paddingleft-1\">\n<li><strong>Temperature<\/strong>: Controls the randomness or creativity of the model's responses. Higher temperatures give more varied, creative responses; lower temperatures give more conservative, predictable ones.<\/li>\n<li><strong>Samples (shots)<\/strong>: The number of examples given in the prompt. Zero-shot means that no examples are provided, one-shot means that one example is provided, and so on.<\/li>\n<li><strong>Prompt<\/strong>: Phrase the request more directly or indirectly, ask for explanations, make comparisons, and so on.<\/li>\n<\/ul>\n<p><strong>Tip 4: Guide the model<\/strong><\/p>\n<p>Here are some examples:<\/p>\n<ul class=\"list-paddingleft-1\">\n<li><strong>If the input is too long<\/strong>, the model may stop reading early. 
You can guide the model to process long content in chunks and recursively build a complete summary.<\/li>\n<li><strong>Help it correct itself.<\/strong> It's hard for a model to self-correct if its first answer is wrong. For example: \"I received your explanation of quantum physics. Are you sure of your answer? Could you start from the basics of quantum mechanics, re-examine it, and provide a corrected answer?\"<\/li>\n<li><strong>Don't ask questions that have a clear bias.<\/strong>\u00a0The model is eager to \"please\" you, so guide it while keeping the prompt open-ended, and don't presuppose an answer in the question. For example:<\/li>\n<\/ul>\n<blockquote class=\"js_blockquote_wrap\" data-type=\"2\" data-url=\"\" data-author-name=\"\" data-content-utf8-length=\"53\" data-source-title=\"\">\n<section class=\"js_blockquote_digest\">\n<section>\n<p>\"Do games cause violence?\" (Bad question) \"I would like an unbiased overview of the research findings on the relationship between video games and behavior.\" (Good question)<\/p>\n<\/section>\n<\/section>\n<\/blockquote>\n<p><strong>Tip 5: Break down tasks or prompts<\/strong><\/p>\n<p><strong>Break down complex tasks into multiple simple tasks.<\/strong> The reason is that complex tasks have a significantly higher error rate than simple ones.<\/p>\n<section>You can use intent classification to identify the most relevant instructions and then combine the responses to create a coherent output. 
Example:<\/section>\n<blockquote class=\"js_blockquote_wrap\" data-type=\"2\" data-url=\"\" data-author-name=\"\" data-content-utf8-length=\"38\" data-source-title=\"\">\n<section class=\"js_blockquote_digest\">\n<section>\"I'm going to Paris for three days and I need to know what to pack, the best restaurants, and how to use public transportation.\"<\/section>\n<\/section>\n<\/blockquote>\n<section>Breakdown:<\/section>\n<blockquote class=\"js_blockquote_wrap\" data-type=\"2\" data-url=\"\" data-author-name=\"\" data-content-utf8-length=\"47\" data-source-title=\"\">\n<section class=\"js_blockquote_digest\">\n<section>\n<p>Intent 1: What to pack for a trip to Paris; Intent 2: Recommendations for the best restaurants in Paris;<\/p>\n<p>Intent 3: Guidance on how to use public transportation in Paris.<\/p>\n<\/section>\n<\/section>\n<\/blockquote>\n<section>The model's answer:<\/section>\n<blockquote class=\"js_blockquote_wrap\" data-type=\"2\" data-url=\"\" data-author-name=\"\" data-content-utf8-length=\"53\" data-source-title=\"\">\n<section class=\"js_blockquote_digest\">\n<section>The AI processes each intent separately, providing customized suggestions for packing, dining, and getting around Paris, and then integrates these into one comprehensive answer.<\/section>\n<\/section>\n<\/blockquote>\n<p>Or, if the subtasks are interrelated:<\/p>\n<blockquote class=\"js_blockquote_wrap\" data-type=\"2\" data-url=\"\" data-author-name=\"\" data-content-utf8-length=\"38\" data-source-title=\"\">\n<section class=\"js_blockquote_digest\">\n<section>\n<p>Step 1: Decompose the task into queries. 
Step 2: Feed the output of the first query into the next query.<\/p>\n<\/section>\n<\/section>\n<\/blockquote>\n<p>Note: This may also reduce costs, since each individual step is cheaper.<\/p>\n<p><strong>Tip 6: Use external tools<\/strong><\/p>\n<p>In general, if a task can be done more reliably and efficiently by a tool than by a large model, offload it to the tool and get the best of both. (Casual users can skip this tip; the underlying reasoning is simply not to make a large model do something it's not good at.)<\/p>\n<p>Here are some example tools:<\/p>\n<ul class=\"list-paddingleft-1\">\n<li><strong>Calculator<\/strong>: Large models do not perform well at math; they were built to generate tokens\/words, not to compute with numbers. A calculator can significantly improve an LLM's math capabilities.<\/li>\n<li><strong><a>RAG<i class=\"wx_search_keyword\"><\/i><\/a><\/strong> (Retrieval-Augmented Generation): Connect the model to external knowledge (the public web or private knowledge bases) instead of relying only on what fits in the context window.<\/li>\n<li><strong>Code execution:<\/strong> Execute and test model-generated code using a code-execution environment or calls to external APIs.<\/li>\n<li><strong>External functions:<\/strong> Define functions that the model can write calls to, for example send_email(), get_current_weather(), get_customers(). Execute these functions on the client side and return the results to the model.<\/li>\n<\/ul>\n<p>That's all; I hope this helps.<\/p>","protected":false},"excerpt":{"rendered":"<p>Today I'm sharing six prompt tips that can be used with a variety of large language models (LLMs), not just ChatGPT but also Wenxin Yiyan and open-source models, and that can help improve the quality of the output. The following tips, of course, apply equally to person-to-person communication. 
Tip 1: Write clear and specific instructions Tip 2: Give the model time to think Tip 3: Multiple prompts Tip 4: Guide the model Tip 5: Break down tasks or prompts Tip 6: Use external tools Tip 1: Write clear and specific instructions Give detailed background information for the question. Reducing ambiguity reduces the likelihood of irrelevant or incorrect output. You can also use delimiters to clearly mark the different parts of the input. For example: section headings, triple quotes, triple backticks, triple dashes<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[149,144],"tags":[1009,837],"collection":[258,302],"class_list":["post-3159","post","type-post","status-publish","format-standard","hentry","category-jiaocheng","category-baike","tag-1009","tag-837","collection-chatgpt-prompt-guide","collection-prompt"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/3159","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=3159"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/3159\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=3159"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=3159"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=3159"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/
wp\/v2\/collection?post=3159"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}