A couple of days ago I released gurupanguji/simulacrum: a way to give agents persistent memory that compounds on itself while staying local. This is part one of a series describing both the origin of the idea and the details of how it works.

I am sitting in a room finally cleared of five years of paper. There is a specific relief in seeing a clean desk, but it also highlights a hard truth: willpower is a trap. I cannot rely on my own discipline to keep things this way. I decided to outsource my memory to a hidden folder in my home directory. My agents used to be strangers every time I opened a new terminal, but now they remember the exact weight of my skepticism through a few strategic symlinks.

The thud of a five-year stack of documents hitting the recycling bin is a heavy sound. It is the sound of a mess finally addressed. As I sit here chewing gum in the quiet, I realize that my productivity has always been a precarious beauty, held together by the duct tape of temporary focus. I am forty-one, and I have finally admitted that I cannot outrun my own inconsistency. I need systems that compound. I need my tools to remember the "earned frame" of my previous work so I don't have to rebuild the context from scratch every single morning.

The realization hit me while I was deep in a session with Gemini, rewriting the CSS for my website. The code was clean and the logic was tight, but then I hit a wall: out of tokens. I switched to Codex to finish the footer, but the context was gone. It was like waking up with amnesia in the middle of a conversation. Codex didn't know about my preference for vanilla CSS or my creative philosophy. Even a masterpiece of an LLM is effectively a paperweight without my personal context.

The token budgets of these models are fragmented, forcing us to jump between them mid-thought. I found Gemini writing its working memory into ~/.gemini/GEMINI.md while my other agents looked for a similar file at ~/.codex/AGENTS.md. My tools were behaving like strangers in the same house. I decided to stop the fragmentation by unifying everything into a single ~/.agents/ directory, using symlinks to point both .codex and .gemini back to this central source of truth. Now, when I update a skill or a personal preference in one place, every agent sees the change. It is a shared memory that I can version-control on GitHub and carry across my laptop and desktop.
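The unification itself is a few lines of shell. The paths are the ones from this post; the script is a sketch of the idea rather than the released tool, so adapt it to your own layout:

```shell
# Unify agent memory into a single source of truth at ~/.agents.
AGENTS_DIR="$HOME/.agents"
mkdir -p "$AGENTS_DIR" "$HOME/.codex" "$HOME/.gemini"

# If an agent already has a local memory file, adopt it as the shared copy.
if [ -f "$HOME/.codex/AGENTS.md" ] && [ ! -L "$HOME/.codex/AGENTS.md" ]; then
    mv "$HOME/.codex/AGENTS.md" "$AGENTS_DIR/AGENTS.md"
fi
if [ -f "$HOME/.gemini/GEMINI.md" ] && [ ! -L "$HOME/.gemini/GEMINI.md" ]; then
    mv "$HOME/.gemini/GEMINI.md" "$AGENTS_DIR/GEMINI.md"
fi
touch "$AGENTS_DIR/AGENTS.md" "$AGENTS_DIR/GEMINI.md"

# Point each agent's expected location back at the shared copy.
ln -sf "$AGENTS_DIR/AGENTS.md" "$HOME/.codex/AGENTS.md"
ln -sf "$AGENTS_DIR/GEMINI.md" "$HOME/.gemini/GEMINI.md"
```

From here, ~/.agents is an ordinary directory that can be turned into a git repo and pushed to GitHub.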

This is where it gets meta. As I draft this very post, the system is watching and learning. I am using a blog-coauthoring skill that I refined from a doc-coauthoring template. Every time I correct the tone or pick a specific analogy, the agent updates my personal/my-voice.md and working-style.md files. It is capturing the grit of my actual process.

We are building a feedback loop that compounds. Each session isn't a fresh start; it is an iteration. By the time we finish this draft, the agent will have a sharper sense of my skepticism and my preference for short, punchy sentences. I am not just writing a blog post; I am training a cognitive auxiliary to understand the "earned frame" of my creative life. This setup gives me a sense of sovereignty over my tools. If I switch from my laptop to my desktop, or from Gemini to Codex, the memory is already there. The system doesn't just work for me; it grows with me.

# The Sovereign Workshop: A Portable, Compounding Nervous System

Setup(New Machine):
  - Clone(Nervous_System): "git clone context-repo ~/.agents"
  - Link(Shared_Memory):
      - Create_Symlink: "~/.codex/AGENTS.md"  -> "~/.agents/AGENTS.md"
      - Create_Symlink: "~/.gemini/GEMINI.md" -> "~/.agents/GEMINI.md"
  - Verify(Calibration): "Call 'personal-context' to wake the memory"

Session(Start):
  - Sync(Context): "git pull latest from ~/.agents"
  - Calibrate(Persona): "Activate personal-context skill"
  - Observe(Environment): "Scan current workspace for READMEs and existing drafts"
  - Align(Inference): "Connect current goals to the 'earned frame' of previous work"

Session(Execution):
  - Action(Creative_Work): "Apply Illustrative Clarity and Active Voice"
  - Feedback_Loop(Correction): "Update internal voice and working style models"

Session(End):
  - Synthesize(Compounding):
      - Analyze: "What did we learn about voice, identity, or process today?"
      - Propose: "Draft updates for about-me.md, my-voice.md, and working-style.md"
  - Persist(Longevity): "Commit and Push context updates to the Nervous System"
  - Coda(Skeptical): "End with an honest doubt. Avoid the 'AI happy ending'."
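The Persist(Longevity) step is ordinary git plumbing. A minimal sketch, assuming ~/.agents is already a git repository and the memory files live under personal/ (the helper name is mine, not the tool's):

```shell
# Hypothetical end-of-session helper: commit the context updates back
# to the shared memory repo.
persist_context() {
    cd "$HOME/.agents" || return 1
    git add personal/
    # If nothing changed this session, skip the push quietly.
    git commit -m "session: compound voice and working-style updates" || return 0
    # Push may fail offline; the commit still preserves the update locally.
    git push || true
}
```

The `|| true` on the push is deliberate: the compounding happens in the commit, and the sync to GitHub can catch up whenever the network does.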

The real win is portability. By pushing ~/.agents to GitHub, I am no longer tethered to a single machine or a specific local disk. If my laptop dies or I decide to wipe my desktop to clear out the digital cobwebs, I don't lose the years of context I have built. I clone the repo, run a few symlink commands, and my agents are back to knowing exactly how I like to work within minutes.
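Those "few symlink commands" amount to a small restore script. A sketch under the layout described above; the repo argument is a placeholder for your own context repo:

```shell
# Hypothetical fresh-machine restore: clone the shared memory, then
# point each agent's expected file location back at it.
restore_agents() {
    repo="$1"    # e.g. a GitHub clone URL for your context repo
    git clone "$repo" "$HOME/.agents" &&
    mkdir -p "$HOME/.codex" "$HOME/.gemini" &&
    ln -sf "$HOME/.agents/AGENTS.md" "$HOME/.codex/AGENTS.md" &&
    ln -sf "$HOME/.agents/GEMINI.md" "$HOME/.gemini/GEMINI.md"
}
```

From there, a `git pull` at the start of each session keeps every machine in sync.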

I have built a shared nervous system that stays awake even when I switch machines. I hope this is the end of the "fresh start" and the beginning of a truly compounding creative life. But I am skeptical. Systems have a way of decaying, or of becoming their own kind of weight. Or, in the metaphor of a garden: how do I keep culling?