How an AI Assistant Learns to Work With You Specifically

#yos #ai #productivity

Morning. I say “good morning” — and instead of “Good morning! How can I help?” I get a question about sleep. Not because it’s hardcoded in the prompt (though it is, too). But because the AI remembers: I’ve been tracking sleep for a week, went to bed late last night, and the data hasn’t been logged yet.

Five minutes later it shows a digest from yesterday’s browser history. MacBook Pro M5, Google Analytics, selling digital products — all presented neutrally, without “instead of working, you were browsing…”. That’s not accidental either. It’s a rule that appeared after I corrected the AI.

Sounds like personalization magic? It’s not magic. It’s four layers of text files.

Layer one: firmware — CLAUDE.md

Every Claude Code project starts with a CLAUDE.md file. It’s an instruction set for the AI — tone, structure, commands, rules. In my case, this file is fairly large: it describes YOS as a system, defines the assistant’s role, and lists triggers for actions.

**Tone**: supportive but honest.
Don't flatter. Don't avoid difficult topics.
Speak like a reliable friend who genuinely wants to help.

It also covers folder structure, file formats, which tools to use. This works from the very first session. Open the project — the AI already knows that journal/2026/03/04.md is today’s diary entry and habits/definitions.md is the habit list.
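A condensed sketch of what such a file might look like. The section names and trigger wording here are illustrative assumptions; only the paths and the tone rule come from the article:

```markdown
# YOS — Assistant Instructions

## Role
You are the assistant inside YOS. Tone: supportive but honest.
Don't flatter. Don't avoid difficult topics.

## Structure
- journal/2026/03/04.md: today's diary entry (journal/YYYY/MM/DD.md)
- habits/definitions.md: the habit list
- context/browser/: daily browser-history digests

## Triggers
- "morning": run the morning ritual skill
- mention of fatigue or anxiety: address state before tasks
```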

But CLAUDE.md is a job description. Like hiring someone and giving them a manual. They’ll follow the instructions, but they won’t understand you. That requires the next layer.

Layer two: memory — MEMORY.md

This is where it gets interesting.

Claude Code has an auto-memory mechanism: a MEMORY.md file that loads into every session. The AI can write to it, I can edit it. The contents aren’t facts about me — they’re behavioral rules developed through practice.

Three examples, each a trace of a specific situation:

“State matters more than tasks”

The morning review used to start with tasks. Makes sense — morning, productivity, let’s go. But several times I said “tired” or “slept badly,” and the AI still dumped the full task list.

Now memory holds:

If he mentions fatigue, drowsiness, anxiety — that’s not background noise, that’s the primary signal. Address the state first, then tasks.

And the morning ritual restructured itself: first step is a wellbeing check-in. If the answer is rough — fewer tasks, softer tone, suggestion to start with something simple. Not because someone wrote an if/else, but because the rule is embedded in context.

“Check the journal before asking”

Every morning the AI asked: “What time did you go to bed? What time did you wake up? Did you nap?” Even when this data was already recorded the evening before. I corrected it — and now memory holds:

Before check-in — first check yesterday’s journal for sleep data. Don’t ask about what’s already recorded — only clarify what’s missing.

Small thing? Yes. But it’s exactly these small things that create the feeling that the assistant remembers context rather than starting from scratch.

“Don’t record assumptions as facts”

The most important one. Once the AI saw something in browser history and logged it in my interest profile as a stable topic. I looked at it once — and suddenly it’s a “sustained interest.” No.

Don’t record assumptions as facts. If you see something in history — ask, don’t just write it into the profile.

This rule changed more than just how interests work. It set a general principle: clarify context with questions, don’t fill in the blanks yourself.

How it works technically

The mechanism is disappointingly simple: user corrects → AI writes a rule to MEMORY.md → the rule enters the context of every following session. No machine learning, no fine-tuning. Just a text file that grows.

The power is that rules are written in the language of behavior, not in the language of code. “State matters more than tasks” — the AI doesn’t need to parse an if-statement, it understands this directly.

Layer three: skills and context

Memory says how to behave. But for behavior to be useful, you need data and processes.

Skills are codified action sequences. The morning ritual, for example, is a morning skill that orchestrates four others: check-in → digest → task review → day plan. Each step ends with a pause and waits for a reaction. It’s not a program — it’s a process description in natural language that the AI executes.

What this gives you: I say “morning” — and get the same structured ritual as yesterday. No need to explain what I need each time. A skill is “what works,” captured in a file.
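A skill file for that ritual could look something like this. The step names follow the article; the exact file format is an assumption:

```markdown
# Skill: morning

Run the steps in order. After each step, pause and wait for a reaction.

1. check-in: ask about wellbeing. Read yesterday's journal first and
   only ask about sleep data that is missing.
2. digest: summarize yesterday's browser history, neutrally.
3. task review: if the check-in was rough, show fewer tasks.
4. day plan: propose a plan; if the state is rough, suggest starting
   with something simple.
```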

Context files are fuel for skills. Browser history is collected daily in context/browser/. From it, the AI updates the interest profile — context/interests.md. Not as a flat list, but as a three-tier taxonomy:

  • Stable topics — confirmed by repeated visits and work context
  • Recent spikes — appeared in the last few days, may be temporary
  • Potential interests — observed indirectly, need confirmation

This structure is also the result of calibration. A flat list meant a one-time search ended up on the same level as professional expertise. Three tiers = learned caution. New topics start at the bottom and only rise if confirmed by data.
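The promotion logic behind the tiers can be sketched as a tiny state machine. The threshold and field names here are assumptions for illustration, not the actual YOS rules:

```python
from dataclasses import dataclass

# bottom to top: new topics enter at "potential" and must earn each step
TIERS = ["potential", "recent_spike", "stable"]

@dataclass
class Topic:
    name: str
    tier: str = "potential"
    sightings: int = 0  # confirmed, independent observations

def record_sighting(topic: Topic, promote_after: int = 3) -> Topic:
    """Promote a topic one tier once enough sightings accumulate."""
    topic.sightings += 1
    idx = TIERS.index(topic.tier)
    if topic.sightings >= promote_after and idx < len(TIERS) - 1:
        topic.tier = TIERS[idx + 1]
        topic.sightings = 0  # a higher tier needs fresh confirmation
    return topic
```

A single visit leaves a topic at the bottom; only repeated, separate confirmations move it up, which is exactly the "learned caution" the flat list lacked.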

Honest takeaway

What works:

Memory genuinely accumulates. Each session is slightly better than the last. The AI stops being a generic assistant and starts working with you — remembers your preferences, doesn’t repeat mistakes, adapts tone to your state.

Four layers — instructions, memory, skills, context — provide different kinds of adaptation. Instructions set the frame, memory stores nuances, skills ensure consistency, context provides fresh data. Together it works like an operating system that actually knows you.

What doesn’t:

Memory grows, and old rules start conflicting with new ones. You need periodic revision — like clearing out a drawer of instructions where half are outdated.

It requires effort from the user. The AI won’t learn on its own — you have to correct it, articulate what exactly went wrong. This isn’t passive personalization à la Netflix, where an algorithm figures things out for you. It’s collaborative work.

And the context window ceiling is real. All these files — instructions, memory, skills, today’s context — take up space. The more the system knows, the closer to the limit.
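The ceiling is easy to estimate with back-of-the-envelope arithmetic. The 4-characters-per-token ratio, the window size, and the file sizes below are all rough illustrative assumptions:

```python
# Every always-loaded file eats into the window before the chat starts.
CHARS_PER_TOKEN = 4      # coarse rule of thumb for English text
WINDOW_TOKENS = 200_000  # illustrative window size

# hypothetical sizes of the always-loaded layers, in characters
files = {
    "CLAUDE.md": 12_000,
    "MEMORY.md": 8_000,
    "skills/": 20_000,
    "context/ (today)": 40_000,
}

used = sum(files.values()) // CHARS_PER_TOKEN
print(f"~{used} tokens of {WINDOW_TOKENS} spent before the first message")
```

The uncomfortable part is the trend: `MEMORY.md` and `context/` only grow, so `used` creeps toward the limit session by session.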


This isn’t a feature you can toggle on. It’s a practice you need to maintain. But the result is worth it: instead of an assistant that starts from zero every time, you get one that accumulates understanding. File by file, rule by rule, mistake by mistake.

This post was also written together with that same assistant. It remembers that I don’t like when it fills in the blanks, prefer narrative style over bullet points, and freely mix languages. It didn’t need to be reminded.

Project
YOS — Your Operating System