OpenClaw Beginner Tutorial, OpenClaw Advanced Tutorial


This is the second part of our OpenClaw tutorial: how to make it actually work.

A lot of people's first question right after installing OpenClaw is: why is it so dumb? What makes the difference?

Here is the reason, as I see it:

It's not the model, and it's not the prompt. Three things are missing between merely being able to use it and being good at it: remembering, finding, and planning.

I've distilled every pit I stepped into along the way into a set of approaches. Let's go through them one by one.

I. Remembering: The Memory System

You ask the agent to help you learn React and spend a whole afternoon talking with it. It tracks your progress and walks you through the pitfalls. A few days later you ask it what to learn next, and it has forgotten everything: who you are, where you left off, what you talked about yesterday. You have to re-teach it from scratch, and you don't even know why. Ten minutes a day, five hours a month, all wasted.

How do you fix this?

Here is my setup. I summarize it as a three-tier memory model:

Tier 1, Information Layer: raw records, learning notes, conversation transcripts. Lives in memory/learning/, append-only, searched on demand.

Tier 2, Knowledge Layer: daily distillation, work logs, key decisions, extracted knowledge points. One file per day at memory/YYYY-MM-DD.md (this is OpenClaw's own convention).

Tier 3, Insight Layer: long-term memory, cross-domain insights, underlying rules, core methodology. Lives in MEMORY.md, kept under 100 lines, loaded every session.

The inspiration comes from how human memory works: you don't remember every word you read in a day; you remember the knowledge points you distilled, and eventually they form your underlying patterns of thought.
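Mapped onto disk, the three tiers might look like this (a sketch of my own layout, inferred from the tier descriptions above, not an OpenClaw requirement):

```
memory/
├── learning/          # Tier 1: raw notes and transcripts, append-only
├── YYYY-MM-DD.md      # Tier 2: one distilled file per day
└── ...
MEMORY.md              # Tier 3: long-term insights, under 100 lines
```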

Then come seven core files, each with a single responsibility and no overlap:

AGENTS.md → how I work

SOUL.md → who I am

USER.md → who I serve

TOOLS.md → how to operate (tool manual, configuration notes)

MEMORY.md → what I remember

ERRORS.md → where I failed

SHARED.md → team consensus

The core principle is just one: every piece of information lives in exactly one place. I used to make the mistake of repeating the same rule across four files.

Now let me tell you about the biggest pit I stepped in.

My learning-notes file, 345 KB and 8,509 lines, was overwritten by a cron task's write operation and reduced to 9.9 KB.

It happened because I asked the agent to append to the file, but the model chose write instead of edit.

write = overwrite the entire file; edit = modify at a specific location

A month of learning records, almost all gone.

From that point on, an iron law: when changing an existing file, always append with edit, never overwrite with write.

This rule is now written into every agent's ERRORS.md and SHARED.md.
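The distinction is easy to demonstrate. Here is a minimal sketch in plain Python, using file modes as stand-ins for the agent's write and edit tools (the function names are mine, not OpenClaw's API):

```python
from pathlib import Path

def unsafe_write(path: Path, text: str) -> None:
    """Mimics the 'write' tool: truncates and replaces the whole file."""
    path.write_text(text)

def safe_append(path: Path, text: str) -> None:
    """Mimics the append-with-'edit' rule: existing content survives."""
    with path.open("a") as f:
        f.write(text)

notes = Path("learning-notes.md")
notes.write_text("# A month of notes\n")

safe_append(notes, "today's entry\n")
assert "# A month of notes" in notes.read_text()      # history preserved

unsafe_write(notes, "today's entry\n")
assert "# A month of notes" not in notes.read_text()  # history destroyed
```

This is exactly the failure mode above: one write call from a cron task, and the whole file is replaced by the new content.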

Some background: OpenClaw has a built-in time-trigger mechanism, the heartbeat, which by default fires every 30 minutes.

Why bring it up? Because so far we've only achieved memory storage. Dynamic absorption of memory is something OpenClaw's built-in mechanisms are still weak at.

So how do we get dynamic absorption?

The answer is the heartbeat. I set mine to consolidate memory every six hours, four times a day.

The consolidation process: read the latest log files → distill into MEMORY.md / USER.md / ERRORS.md / the daily memory file → clear out outdated information.
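As a sketch, one consolidation pass might look like the following. The paths, the DECISION: marker, and the trimming rule are my assumptions; in a real setup the distillation step would be done by the model, not by a string filter:

```python
from pathlib import Path

MEMORY_DIR = Path("memory")      # daily logs, memory/YYYY-MM-DD.md
MEMORY_FILE = Path("MEMORY.md")  # Tier 3: long-term memory
MAX_LINES = 100                  # the "under 100 lines" rule

def heartbeat_consolidate() -> None:
    """One pass: newest log -> distilled lines -> trim stale information."""
    logs = sorted(MEMORY_DIR.glob("*.md"))
    if not logs:
        return
    # Distill: keep only lines flagged as decisions (a toy stand-in
    # for the model's summarisation step).
    distilled = [line for line in logs[-1].read_text().splitlines()
                 if line.startswith("DECISION:")]
    with MEMORY_FILE.open("a") as f:  # append with edit, never overwrite
        f.writelines(line + "\n" for line in distilled)
    # Clear out outdated information: keep only the newest MAX_LINES.
    lines = MEMORY_FILE.read_text().splitlines()
    MEMORY_FILE.write_text("\n".join(lines[-MAX_LINES:]) + "\n")
```

Scheduled every six hours, for example with a cron entry like `0 */6 * * *`, this gives the four daily consolidation passes described above.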

So what can your agent do with this memory system?

Learning scenario: you finished Chapter 3 of React Hooks yesterday; today the agent knows, and picks up at Chapter 4.

Working scenario: your code-style preferences and project-structure conventions, the agent always remembers them, so you never have to repeat yourself.

Team scenario: a new agent automatically reads ERRORS.md and knows every rule and every pit the team has hit from its first second. The agent didn't get smarter; you gave it a brain that doesn't lose things. That is the soul of OpenClaw.

OpenClaw is genuinely powerful, but using it without an agent framework like this is like being a kid holding a laser gun without understanding it.

II. Finding: The Search Decision Tree

The memory system is set up, but the agent still fumbles the same searches over and over.

Let me start with how confused I was at first. OpenClaw ships with a pile of fetch-and-search tools: web_fetch, curl, the browser, plus assorted third-party ones. Every time it hit a web page it would start experimenting: try web_fetch → fail (SSRF intercepted) → try to reconfigure around it → system error → try curl → finally succeed.

The same pit: agent A stepped in it, then agent B stepped in it again. Every search task took 5-10 minutes of trial and error and wasted 30-50% of its tokens.

Worse, I tried to work around the SSRF protection in openclaw.json with a dangerouslyAllowPrivateNetwork-style flag. Not only was it useless, it threw a system error. That's when I realized the SSRF protection is hard-coded and can't be changed through configuration.

Which is the first key insight: don't try to change the underlying configuration.

The turning point was discovering a free whole-web search tool that installs as an OpenClaw skill, so every agent can directly use web search, web reading, and YouTube subtitles, all free, no API key needed. Which specific tool it is doesn't really matter.

After testing, I found it covered 80% of search scenarios; the remaining 20% each have their own solution.

So I built a unified search decision tree:

Step 1: does the page need JS rendering or a login?

If yes: use the browser.

If no: use the free tool I configured.

Step 2: if the tool fails, switch by error type: SSRF interception → curl; other errors → web_fetch; if that also fails → browser.

Special rules get documented too: GitHub search only works through the browser; sites that block by IP are browser-only; and for your own repositories, use the gh CLI.
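The tree above can be sketched as a small routing function. The tool names are illustrative labels, not OpenClaw's real tool IDs:

```python
def tool_chain(needs_js_or_login: bool, host: str = "") -> list[str]:
    """Ordered list of tools to try, following the decision tree above."""
    # Step 1: JS rendering or login required -> browser only.
    if needs_js_or_login:
        return ["browser"]
    # Special documented rule: GitHub search works only via the browser.
    if host == "github.com":
        return ["browser"]
    # Step 2 fallback order: free tool first; on SSRF interception fall
    # back to curl, on other errors to web_fetch, then browser last.
    return ["free_search_tool", "curl", "web_fetch", "browser"]

print(tool_chain(False))  # → ['free_search_tool', 'curl', 'web_fetch', 'browser']
```

The point is that the fallback order is decided once, up front, instead of being rediscovered by trial and error on every task.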

The effect: search tasks went from 5-10 minutes of trial and error to a direct hit in under a minute, with token usage cut in half.

But the most valuable part isn't the decision tree itself; it's the idea behind it. I wrote the tree into SHARED.md.

All agents read it automatically at startup. I step in a pit once, and every new agent knows about it from its first second.

One agent steps in the pit so the others never have to. A central-control agent maintains SHARED.md and notifies all agents whenever it's updated.

The idea goes well beyond search; it works for any tool-selection problem:

identify a recurring problem → research the tools → make a decision → document it → write it into the shared file → the whole team benefits

Turning personal experience into team knowledge is a real level-up, and in the AI era that's what matters most: don't do everything yourself. Your agents are your employees, and what you're learning is management.

III. Planning: The Plan Document

Remembering and finding are covered, but there's a hidden problem that almost nobody mentions.

Have you ever had this happen: the agent is deep into a complex task and suddenly asks you, what was it you wanted me to do?

You assume it's a bug, restart the session, and make it start over. It was halfway done and simply forgot.

It's not a bug; it's OpenClaw's normal mechanism. The context window is limited, and when a conversation gets too long it is compressed: old messages and tool results are dropped to free up tokens.

If your task state lives only in the conversation, one compression wipes it all out. Most people conclude that OpenClaw's mechanism is badly designed.

But that is the real cause.

My approach: create a plan document for every complex task.

The file structure is simple: goal (one sentence) + step list (with checkboxes) + current progress + problems encountered + next step. The agent updates the file and ticks the boxes as it works.

When the context gets compressed, the file is untouched. A new session starts by reading the plan document and continues from the last recorded progress.

After the task is done, delete the file or move it into an archive folder as part of the external knowledge base.

And the heartbeat check will notice where things stand and report back to you.

How far can this go?

You sleep through the night while the agent runs a 20-step task. Overnight, the context is compressed three times across two sessions.

When you get up in the morning, it's done: each new session read the plan document, resumed from step 15 where yesterday left off, and lost no progress.

You slept, and the agent worked for you all night.

In essence, this externalizes critical short-term memory into long-term storage.

It's the same logic as the three-tier memory model: anything important must never exist only in a place where it can disappear.

The three modules cover three different things, but underneath they are the same idea.

That is the complete framework.
