models capable of automating a variety of tasks, such as research and coding. However, oftentimes you work with an LLM, complete a task, and the next time you interact with the LLM, you start from scratch.
This is a major drawback when working with LLMs. We waste a lot of time simply repeating instructions to LLMs, such as the desired code formatting or how to perform tasks according to your preferences.
This is where agents.md files come in: a way to apply continual learning to LLMs, where the LLM learns your patterns and behaviours by storing generalizable information in a separate file. This file is then read every time you start a new task, preventing the cold-start problem and helping you avoid repeating instructions.
In this article, I'll provide a high-level overview of how I achieve continual learning with LLMs by continually updating the agents.md file.

Why do we need continual learning?
Starting with a fresh agent context takes time. The agent needs to pick up on your preferences, and you have to spend extra time interacting with it to get it to do exactly what you want.
For instance:
- Telling the agent to use Python 3.13 syntax instead of 3.12
- Informing the agent to always use return types on functions
- Ensuring the agent never uses the Any type
I often had to explicitly tell the agent to use Python 3.13 syntax rather than 3.12 syntax, probably because 3.12 syntax is more prevalent in their training datasets.
The whole point of using AI agents is to be fast. You don't want to spend time repeating instructions about which Python version to use, or that the agent should never use the Any type.
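As a minimal sketch, the section of agents.md that captures these preferences could look something like the block below; the exact wording and headings are up to you, not a fixed format:

## Coding preferences
- Use Python 3.13 syntax; do not fall back to 3.12 idioms
- Always annotate functions with explicit return types
- Never use the Any type; prefer precise types or generics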
Furthermore, the AI agent often spends extra time figuring out information that you already have available, for example:
- The name of your documents table
- The names of your CloudWatch logs
- The prefixes for your S3 buckets
If the agent doesn't know the name of your documents table, it has to:
- List all tables
- Find a table that sounds like the documents table (there could be several plausible candidates)
- Either make a lookup against the table to confirm, or ask the user

This takes a lot of time, and it is something we can easily prevent by adding the documents table name, CloudWatch log names, and S3 bucket prefixes to agents.md, as sketched below.
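For illustration, a project-facts section in agents.md could look like the following; every name here is a placeholder, not a real resource:

## Project facts (placeholder values)
- Documents table: documents-prod
- CloudWatch log group: /app/api-service
- S3 bucket prefix for uploads: s3://example-bucket/uploads/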
Thus, the main reason we need continual learning is that repeating instructions is frustrating and time-consuming, and when working with AI agents, we want to be as effective as possible.
How to apply continual learning
There are two main ways I approach continual learning, both involving heavy use of the agents.md file, which you should have in every repository you work on:
- Whenever the agent makes a mistake, I tell the agent how to correct the error, and to remember the correction for later in the agents.md file (an example of such a message is sketched after the prompt below)
- After each thread I've had with the agent, I use the prompt below. This ensures that anything I told the agent throughout the thread, or information it discovered along the way, is stored for later use. This makes later interactions far more effective.
Generalize the knowledge from this thread, and remember it for later.
Anything that could be useful to know for a later interaction,
when doing similar things. Store it in agents.md
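For the first point, the correction message can stay short. Something along these lines works well (the specific mistake here is just an assumed example):

That function is missing a return type.
Always add explicit return type annotations to functions.
Remember this in agents.md.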
Applying these two simple principles will get you 80% of the way to continual learning with LLMs and make you a far more effective engineer.
The most important point is to always keep the agentic memory in agents.md in mind. Whenever the agent does something you don't like, you have to remember to store the correction in agents.md.
You might think you risk bloating the agents.md file, making the agent both slower and more expensive. However, this isn't really the case. LLMs are extremely good at condensing information down into a file. Furthermore, even if you have an agents.md file consisting of thousands of words, it's not really a problem, with regard to either context length or cost.
The context length of frontier LLMs is hundreds of thousands of tokens, so that's no issue at all. As for cost, you'll probably see the cost of using the LLM go down, because the agent will spend fewer tokens figuring out information that is already present in agents.md.
Heavy use of agents.md for agentic memory will both make LLM usage faster and reduce cost.
Some additional tips
I'd also like to add some extra tips that are useful when dealing with agentic memory.
The first tip is that when interacting with Claude Code, you can access the agent's memory using "#", followed by what you want it to remember. For example, write this into the terminal when interacting with Claude Code:
# Always use Python 3.13 syntax, avoid 3.12 syntax
You'll then get an option, as you can see in the image below. The first option saves it to the user memory, which stores the information for all your interactions with Claude Code, no matter the code repository. This is useful for generic information, like always having a return type on functions.
The second and third options are to save it to the current folder you're in or to the root folder of your project. This can be useful either for storing folder-specific information, for example describing only a specific service, or for storing information about the code repository in general.

Furthermore, different coding agents use different memory files:
- Claude Code uses CLAUDE.md
- Warp uses WARP.md
- Cursor uses .cursorrules
However, all agents usually read agents.md, which is why I recommend storing information in that file, so you have access to the agentic memory no matter which coding agent you're using. This is because Claude Code may be the best one day, but we might see another coding agent on top the next.
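If a particular agent does not pick up agents.md on its own, one common workaround (a sketch, not something every tool requires) is to make that agent's own memory file a thin pointer to it, for example a CLAUDE.md containing nothing but:

Read agents.md in the repository root and follow all instructions in it.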
AGI and continual learning
I'd also like to add a note on AGI and continual learning. True continual learning is often said to be one of the last obstacles to achieving AGI.
Currently, LLMs essentially fake continual learning by storing things they learn in files they read later on (such as agents.md). The ideal, however, would be for LLMs to continually update their model weights whenever they learn new information, essentially the way humans learn instincts.
Unfortunately, true continual learning has not been achieved yet, but it's likely a capability we'll see more of in the coming years.
Conclusion
In this article, I've discussed how to become a far more effective engineer by utilizing agents.md for continual learning. With this, your agent will pick up on your habits, the mistakes you make, the information you frequently use, and many other useful pieces of knowledge. This, in turn, makes later interactions with your agent far more effective. I believe heavy use of the agents.md file is essential to becoming a more effective engineer, and is something you should constantly strive for.
👉 My Free Resources
🚀 10x Your Engineering with LLMs (Free 3-Day Email Course)
📚 Get my free Vision Language Models ebook
💻 My webinar on Vision Language Models
👉 Find me on socials:
📩 Subscribe to my newsletter
🧑‍💻 Get in touch
✍️ Medium


