Handling Repeated Mistakes with AI
How I use a mistakes.md file to help AI agents learn from their errors and avoid repeating them.
AI coding agents are incredible tools, but they are not perfect, and they will sometimes get things wrong repeatedly. Often these are simple errors that are easy to correct, or the agent figures them out on its own when the build fails or unit tests don't pass.
But sometimes they get stuck in a loop where they think they are right when they are absolutely not. I have had issues where I had to tell the AI to look up the same piece of documentation multiple times because it kept reverting to a hallucinated API or an outdated pattern.
In some cases, I have just added these corrections to my project documentation file. But sometimes this information gets lost because it applies to only a specific part of the project or becomes noise for other tasks.
My solution for this is a three-fold pattern that has saved me a significant amount of frustration.
First, I add a mistakes.md file to directories when I encounter a specific, tricky error. This file serves as a local memory bank for the agent.
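A mistakes.md file can be as simple as a short list of corrections, one heading per problem area. Here is a sketch of what an entry might look like (the API and library names below are hypothetical, not from a real project):

```markdown
# mistakes.md

## Date handling
- Do NOT import `moment` in this module; it was migrated to `date-fns`.
  The agent kept reintroducing `moment` despite the migration.

## Pagination
- The `/items` endpoint uses cursor-based pagination (`next_cursor`),
  not `page`/`limit`. Passing `page` silently returns the first page.
```

The key is that each entry records both the wrong approach and the correction, so the agent can pattern-match against its own bad habit.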
Second, I add an instruction to my claude.md or system prompt to check the mistakes.md file in a directory while making a plan. This ensures that before the agent writes a single line of code, it is aware of the pitfalls that have trapped it before.
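The planning instruction can be a single standing rule in the agent's prompt file. A hedged sketch of the kind of wording I mean (yours may differ):

```markdown
## Planning rules

Before planning any change, check whether the target directory
(or any of its parent directories) contains a `mistakes.md` file.
If it does, read it first and treat its entries as hard constraints
on your plan.
```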
Third, I instruct the AI to update the mistakes.md file when I encounter a mistake that it made. This is crucial. It turns a negative interaction into a permanent asset.
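The update step pairs a standing rule with an in-the-moment prompt when a correction happens. A rough sketch of the standing rule:

```markdown
## After a correction

When I correct a mistake you made, append an entry to `mistakes.md`
in the affected directory describing: what you did wrong, why it was
wrong, and the correct approach. Keep entries short and specific.
```

In practice I also just say "add that to mistakes.md" after fixing something, which works even without the standing rule.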
Putting this all together, the AI still makes mistakes. But it makes different mistakes. It makes novel mistakes instead of repeating old ones, and I end up with files full of documentation and improvements that I can carry with me from project to project.
This pattern really shines when I start working on a new project. Sometimes I will send an AI agent to look through the mistakes.md files from another project to "learn" from that history. This has saved me a significant amount of time in just the short period I have been using this pattern.
A real-world example is the way I have been building speech-to-text in multiple apps. There is an API on macOS for "speech recognition" which is fine but not great. However, in macOS 26 there is a much better API for transcribing speech.
The problem is that the "speech recognition" service shows up first in nearly every search that AI does for understanding how to build this. In my most recent project, I was able to get the Cursor Auto model to one-shot the implementation correctly by referencing the previous mistakes made in other projects.
It turned what would have been a frustrating hour of debugging into a solved problem.