{"id":51305,"date":"2026-03-21T09:47:37","date_gmt":"2026-03-21T01:47:37","guid":{"rendered":"https:\/\/www.1ai.net\/?p=51305"},"modified":"2026-03-19T15:48:53","modified_gmt":"2026-03-19T07:48:53","slug":"%e5%ae%89%e8%a3%85openclaw%e5%b0%8f%e7%99%bd%e6%95%99%e7%a8%8b%ef%bc%8copenclaw%e4%b8%ad%e7%ba%a7%e7%af%87%e6%95%99%e5%ad%a6","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/51305.html","title":{"rendered":"Installation of OpenClaw Junior White School, OpenClaw Middle School"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-51306\" title=\"da096853j00tc4ymn004hd000iw007kp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/03\/da096853j00tc4ymn004hd000iw007kp.jpg\" alt=\"da096853j00tc4ymn004hd000iw007kp\" width=\"680\" height=\"272\" \/><\/p>\n<p>This is us<a href=\"https:\/\/www.1ai.net\/en\/tag\/openclaw%e6%95%99%e5%ad%a6\" title=\"[sees articles with [openclaw teaching] labels]\" target=\"_blank\" >openclaw teaching<\/a>In the second part, how do you make it work<\/p>\n<p>a lot of people just installed an openclaw's first question: why so stupid? what's the difference<\/p>\n<p>The reason behind it, I think:<\/p>\n<p>it's not the model, it's not the prompt. it's the three things that's missing between being able to use it and being good: remember, find it<\/p>\n<p>I summarised all the pits I stepped on into a set of approaches. 
Let's go through them one by one.<\/p>\n<p><strong>I. Remembering: the Memory System<\/strong><\/p>\n<p>You ask the agent to help you learn React and talk with it all afternoon. It remembers your progress and the pits you stepped in. A few days later you ask it what to learn next, and it has forgotten everything: who you are, how far you got, what you talked about yesterday. You have to reteach it from scratch, and you don't even know why. Ten minutes a day, five hours a month, all wasted.<\/p>\n<p>How do you fix that?<\/p>\n<p>Here is my setup, summarized as a three-tier memory model:<\/p>\n<p>Tier 1, the information layer: raw records, learning notes, conversation transcripts. Lives in the memory\/learning\/ directory; append-only, searched on demand.<\/p>\n<p>Tier 2, the knowledge layer: daily distillation, work logs, key decisions, extracted knowledge points. One document per day at memory\/YYYY-MM-DD.md (this one is OpenClaw's own convention).<\/p>\n<p>Tier 3, the intelligence layer: long-term memory, cross-session insights, underlying rules, core methodology. Lives in MEMORY.md, kept under 100 lines, loaded in every session.<\/p>\n<p>The inspiration comes from how human memory works: you don't remember every word you read each day, you remember the knowledge points you extracted, and eventually they settle into underlying patterns of thought.<\/p>\n<p>On top of that sit seven core documents, one job each, with no overlap:<\/p>\n<p>AGENTS.md \u2192 how do I work<\/p>\n<p>SOUL.md \u2192 who am I<\/p>\n<p>USER.md \u2192 who do I serve<\/p>\n<p>TOOLS.md \u2192 how do I operate (tool manual, configuration notes)<\/p>\n<p>MEMORY.md \u2192 what do I remember<\/p>\n<p>ERRORS.md \u2192 where did I fail<\/p>\n<p>SHARED.md \u2192 team consensus<\/p>\n<p>The core principle is a single one: every piece of information lives in exactly one place. 
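The three tiers above can be sketched in a few lines of Python. The paths (memory/learning/, memory/YYYY-MM-DD.md, MEMORY.md) follow the layout just described; the helper functions themselves are my hypothetical illustration, not OpenClaw APIs.

```python
# Hypothetical sketch of the three-tier memory layout described above.
# Paths follow the article's convention; the helpers are illustrative only.
from datetime import date
from pathlib import Path

ROOT = Path("memory")

def append_note(name, text):
    """Tier 1 (information layer): raw records are append-only.
    Mode 'a' can only add to the file; it can never truncate it."""
    tier1 = ROOT / "learning"
    tier1.mkdir(parents=True, exist_ok=True)
    with open(tier1 / name, "a", encoding="utf-8") as f:
        f.write(text.rstrip() + "\n")

def daily_log_path(day=None):
    """Tier 2 (knowledge layer): one document per day, memory/YYYY-MM-DD.md."""
    return ROOT / f"{(day or date.today()).isoformat()}.md"

def memory_within_budget(max_lines=100):
    """Tier 3 (intelligence layer): MEMORY.md is loaded every session,
    so keep it under roughly 100 lines."""
    memory_md = ROOT / "MEMORY.md"
    if not memory_md.exists():
        return True
    return len(memory_md.read_text(encoding="utf-8").splitlines()) <= max_lines
```

The point of the sketch is the mode choice: tier-1 files are only ever opened with "a" (append), so a careless operation can add to a record but never wipe it out.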
I've made that mistake myself: the same rule repeated in four different files.<\/p>\n<p>Now let me tell you about the biggest pit I stepped in.<\/p>\n<p>My study notes, 345KB and 8,509 lines, were overwritten in one shot by a write operation from a cron task. 9.9KB was left.<\/p>\n<p>It happened because I asked the agent to append to the file, but the model chose write instead of edit.<\/p>\n<p>write = overwrite the whole file; edit = change it at a given location.<\/p>\n<p>A month of study records, almost gone.<\/p>\n<p>From that day on, an iron law: additions to an existing file are always made with edit, never with write.<\/p>\n<p>That rule is now written into the ERRORS.md and SHARED.md of every agent.<\/p>\n<p>While we're at it, a word about <a href=\"https:\/\/www.1ai.net\/en\/tag\/openclaw\" title=\"[See articles with [OpenClaw] label]\" target=\"_blank\" >OpenClaw<\/a>'s heartbeat: a time-trigger mechanism that fires every 30 minutes by default.<\/p>\n<p>Why bring it up? Because so far we have only solved memory storage. Dynamic absorption of memories is something OpenClaw's built-in mechanism is still weak at.<\/p>\n<p>So how do we get dynamic absorption?<\/p>\n<p>The answer is the heartbeat. I set one every six hours to consolidate memory, which covers a working day four times.<\/p>\n<p>The consolidation flow: read the latest log files \u2192 distill into MEMORY.md\/USER.md\/ERRORS.md and the daily memory file \u2192 clear out stale information.<\/p>\n<p>What can your agent do with this memory system?<\/p>\n<p>Learning: you finished chapter three of React Hooks yesterday; today the agent picks up at chapter four.<\/p>\n<p>Work: your code-style preferences and project-structure conventions. The agent always remembers them, so you never repeat yourself.<\/p>\n<p>Team: a new agent automatically reads ERRORS.md and knows every rule and every pit the team has hit from its first second. None of this is the agent being smart. You gave it a brain that doesn't lose things. That's the soul of 
openclaw<\/p>\n<p>OpenClaw itself is genuinely strong, but using it without an agent framework is like a kid holding a laser gun they don't understand.<\/p>\n<p><strong>II. Finding: the Search Decision Tree<\/strong><\/p>\n<p>Memory is set up, but the agent still fumbles every search, trying tools over and over.<\/p>\n<p>Let me start with how confused I was. OpenClaw ships with a pile of search tools: web_fetch, curl, browser, plus assorted third-party ones. Every time it hit a web page it started experimenting: try web_fetch, fail (SSRF intercepted); try a configuration workaround, hit a system error; try curl; finally succeed.<\/p>\n<p>The same pit: agent A stepped in it, then agent B stepped in it again. Every search task burned 5\u201310 minutes on trial and error and wasted 30\u201350% of its tokens.<\/p>\n<p>Worse, I tried to bypass the SSRF protection with an allow-private-network setting in openclaw.json. Not only was it useless, it produced a system error. That's when I realized the SSRF protection is hard-coded; no configuration can change it.<\/p>\n<p>That's the first key idea: don't try to fight the underlying configuration.<\/p>\n<p>The turning point came when I found a free full-web search tool and packaged it as an OpenClaw Skill, so every agent could use it directly: web search, web reading, YouTube subtitles, all free, no API key needed. Which specific tool it is matters less than the pattern.<\/p>\n<p>After testing, I found it covered 80% of search scenarios; the remaining 20% each have their own solution.<\/p>\n<p>So I built a single search decision tree:<\/p>\n<p>Step 1: does the page need JS rendering or a login?<\/p>\n<p>If yes, go straight to the browser.<\/p>\n<p>If no, use the free tool I assigned.<\/p>\n<p>If that tool fails too, branch by error type: SSRF intercepts go to curl, other errors go to web_fetch, and if that fails as well, fall back to the browser.<\/p>\n<p>Special rules have to be written down too: GitHub search only works through the browser; Bilibili blocks the IP, so browser only; your own repositories go through the gh 
CLI.<\/p>\n<p>The effect: search tasks went from 5\u201310 minutes of errors to a direct hit in under a minute, with token usage cut in half.<\/p>\n<p>But the most valuable part is not the decision tree itself; it's the idea behind it. I wrote the tree into SHARED.md.<\/p>\n<p>All agents read it automatically at startup. I step in a pit once, and a new agent knows about it from its first second.<\/p>\n<p>One agent steps in the pit, and the rest never have to. A central control agent maintains SHARED.md and notifies all agents whenever it is updated.<\/p>\n<p>And the idea goes beyond search; it applies to any tool choice:<\/p>\n<p>Spot a recurring problem \u2192 research the tools \u2192 test \u2192 decide \u2192 document \u2192 write it into the shared file \u2192 the whole team benefits.<\/p>\n<p>Turning personal experience into team knowledge is a level up, and in the AI era that is what matters: don't do everything yourself. Agents are your employees, and what you are learning is management.<\/p>\n<p><strong>III. Never Breaking Off: the Plan Document<\/strong><\/p>\n<p>Your agent remembers and can find things, but there is a hidden problem that is hardly ever mentioned.<\/p>\n<p>Have you ever had this happen: the agent is in the middle of a complicated task and suddenly asks you, what was it you wanted me to do?<\/p>\n<p>You think it's a bug, restart the session, and make it start over. Halfway through, it forgets again.<\/p>\n<p>It's not a bug; it's OpenClaw's normal mechanism. The context window is limited; when a conversation gets too long it is compressed, and old dialogue and tool results are dropped to free up tokens.<\/p>\n<p>If your task state lives only in the conversation, one compression and it is all gone. 
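One way to keep task state outside the conversation is a plan file. Below is a minimal Python sketch, assuming a simple "- [ ]" \/ "- [x]" checkbox format; the format and the function names are my illustration, not something OpenClaw prescribes.

```python
# Hypothetical plan-document helpers: the task state lives in the text,
# not in the conversation, so context compression cannot destroy it.

def render_plan(goal, steps, done=0):
    """Write goal + checkboxed step list; 'done' is how many steps are ticked."""
    lines = [f"# Goal: {goal}", ""]
    for i, step in enumerate(steps):
        box = "x" if i < done else " "
        lines.append(f"- [{box}] {step}")
    return "\n".join(lines) + "\n"

def next_step(plan_text):
    """A fresh session reads the file and resumes at the first unticked box."""
    for line in plan_text.splitlines():
        if line.startswith("- [ ] "):
            return line[len("- [ ] "):]
    return None  # every step is ticked: the task is finished
```

A fresh session only needs next_step() on the file's contents to know where to resume; no conversation history is required.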
Most people conclude that OpenClaw's mechanism is badly designed.<\/p>\n<p>But that mechanism is the real reason agents forget mid-task.<\/p>\n<p>My approach: create a plan document for every complex task.<\/p>\n<p>The file structure is simple: goal (one sentence) + step list (with checkboxes) + current progress + problems encountered + next step. After every step, update the file and tick the box.<\/p>\n<p>When the context gets compressed, the file is untouched. A new session starts by reading the plan document and continues from the last recorded progress.<\/p>\n<p>After the task is finished, delete the file or move it into an archive directory as part of the external knowledge base.<\/p>\n<p>And the heartbeat check will notice a stalled task and report it to you.<\/p>\n<p>How far can this go?<\/p>\n<p>You sleep through the night while the agent runs a 20-step task. The context gets compressed three times, across two sessions.<\/p>\n<p>By morning it has re-read the plan document after each compression, picked up again at yesterday's step 15, and lost no progress at all.<\/p>\n<p>You slept, and the agent worked for you all night.<\/p>\n<p>In essence, this is externalizing critical short-term state into long-term storage.<\/p>\n<p>It is the same logic as the three memory tiers: anything important must not live only in a place that can disappear.<\/p>\n<p>The three modules cover three different things, but underneath they are one and the same idea.<\/p>\n<p>That is the framework solution.<\/p>","protected":false},"excerpt":{"rendered":"<p>This is the second part of our openclaw teaching series: how do you make it actually work? The first question many people ask right after installing OpenClaw: why is it so dumb? Where is the difference? 
And the reason behind it, I think, is not the model and not the prompt: three things are missing between \"usable\" and \"good\": remembering, finding, and never breaking off. I have distilled every pit I stepped in into a set of approaches, and we will go through them one by one. First, remembering, the memory system: you ask the agent to help you learn React and talk with it all afternoon; it remembers your progress and the pits you stepped in. Ask it a few days later what to learn next, and it has forgotten everything: who you are, how far you got, what you talked about yesterday. You have to reteach it from scratch<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[149,144],"tags":[8229,8392],"collection":[8384],"class_list":{"0":"post-51305","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"hentry","6":"category-jiaocheng","7":"category-baike","8":"tag-openclaw","10":"collection-openclaw"},"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/51305","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=51305"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/51305\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=51305"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=51305"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=51305"},{"taxonomy":"collection","embeddable":true,"href":"http
s:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=51305"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}