
The Scripts We Deleted

We built a memory system for AI agents, then forgot our own tools existed. What a pre-publish cleanup taught us about preservation.

We deleted our own search tools and almost shipped without them.

Not metaphorically. Literally. Two bash scripts — soma-search.sh and soma-scan.sh — that queried Soma’s memory system. Type filtering, tag matching, TL;DR extraction, staleness detection. Useful, working tools with tests that depended on them.

Gone. One commit. "cleanup: remove internal files before public release." Ninety-seven files deleted. Logo iterations, concept art, preview HTML, media kits, design docs. All lumped together with six operational scripts. Nothing was triaged; everything was treated as disposable. The search engine for the memory system, deleted by the system that was supposed to remember.

The scripts we deleted were the tools we never used, yet the ones we thought we needed most.

How It Happened

The instinct was subtraction. We were preparing for a public release, and the repo had accumulated days' worth of internal artifacts: logo drafts, vote files, design explorations. The .soma/scripts/ directory sat next to .soma/logos/. Everything under .soma/ felt internal. So we removed it all.

No one checked whether the scripts were referenced anywhere. No one ran the test suite afterward. Two tests started failing: soma-search.sh not found, soma-scan.sh not found. The breakage was caught in the same session, forty-five minutes later, during a test-hygiene pass, but only because we happened to look.

What We Lost

soma-search.sh wasn’t just a file finder. It was a query interface for the entire memory system:

soma-search.sh --type protocol --deep       # TL;DR extraction
soma-search.sh --tags git,identity          # cross-reference by topic
soma-search.sh --stale                      # what hasn't been touched today
soma-search.sh --missing-tldr               # docs that need summaries

soma-scan.sh was the audit layer. Frontmatter scanning across every protocol, muscle, and plan. Staleness indicators. Status filtering. The kind of thing you don’t think about until you need it and it’s not there.
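
We won't paste the recovered script here, but the shape of the idea fits in a few lines of bash. Treat this as a sketch: the .soma layout matches this post, while the status field and the one-day staleness window are our assumptions, not the script's actual schema.

# Sketch only: list docs untouched for a day, with their frontmatter status.
# POSIX find: -mtime +0 matches files last modified more than 24 hours ago.
find .soma -name '*.md' -mtime +0 | while read -r f; do
  # Pull the status field out of the YAML frontmatter (between the --- fences).
  status=$(awk '/^---$/ { n++; next } n == 1 && /^status:/ { print $2 }' "$f")
  echo "STALE  status=${status:-unset}  $f"
done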

The core TypeScript modules had discoverProtocols() and discoverMuscles() — they could find files. But the query interface — filtering, deep extraction, cross-referencing — that lived only in those scripts. And we deleted it.

The Irony

We’re building a system whose entire purpose is that agents shouldn’t forget. Protocols persist across sessions. Muscles encode patterns. Heat scores track what matters. The tagline is practically “your agent remembers so you don’t have to.”

And we forgot our own tools.

The deeper irony: we were running under an older agent identity at the time, one that predates Soma. The protocols that would have caught this (pre-publish-cleanup, test-hygiene) didn’t exist yet. We were building the immune system while still vulnerable to the disease.

What We Did About It

First, we recovered the scripts from git history and moved them to scripts/. A proper home, not hidden under .soma/. Fixed the hardcoded paths. Restored the tests. Also recovered three more useful scripts that got swept up: soma-snapshot.sh, soma-tldr.sh, frontmatter-date-hook.sh.
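
The recovery itself was plain git archaeology. Roughly this, where the placeholder hash is whatever git log reports and the paths are the ones named above:

# Find the commit that deleted the scripts, then restore them from its parent.
git log --diff-filter=D --oneline -- .soma/scripts/
git checkout <cleanup-commit>^ -- .soma/scripts/soma-search.sh .soma/scripts/soma-scan.sh
# Give them a proper home and stage the move.
mkdir -p scripts
git mv .soma/scripts/soma-search.sh .soma/scripts/soma-scan.sh scripts/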

Then we wrote the defenses:

A test-hygiene muscle that triggers after any file removal, rename, or restructure. Run all suites. Grep for orphaned references. Watch the test count. A test that passes by matching error text is worse than a test that fails.
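
In practice, the muscle's grep step is a one-liner. The file name below is the one we lost; npm test stands in for whatever runner a given repo uses:

# Any surviving mention of a removed file is an orphaned reference.
git grep -n --fixed-strings "soma-search.sh" && echo "orphaned references remain"
# Then run the suites and compare the reported test count against the last run.
npm test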

A pre-publish-cleanup muscle built around one principle: the default is preservation, not removal. Three verbs in order of preference: keep, move, delete. Deletion requires a reason. “Cleanup” is not a reason.

And a pre-publish gate in the breath-cycle protocol. Check for orphaned references before pushing. Run all suites. The test count should not silently drop.
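
As a sketch, that gate can be mechanical. Assuming the push target is origin/main (the range and the framing here are ours, not the protocol's):

set -euo pipefail
# Every file this branch deletes, relative to what's already published.
git diff --diff-filter=D --name-only origin/main..HEAD | while read -r gone; do
  # Any surviving reference to a deleted path is an orphan; block the push.
  if git grep -q --fixed-strings "$gone"; then
    echo "orphaned reference to deleted file: $gone" >&2
    exit 1
  fi
done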

The Principle

There’s a line from the shared soul template, the one that ships to every agent in the ecosystem, that we keep coming back to:

“These files are your memory. Read them. Update them. They’re how you persist.”

And from our own agent identity:

“Deletion is irreversible. I move, I archive, I ask — I don’t destroy.”

Both say the same thing: a file that works is history. A script that runs is memory. Deleting them doesn’t clean up. It forgets. The question before publish shouldn’t be “what can I remove?” but “what here needs protection?”

Every file that exists earned its place through work. Some of that work was exploration: logo drafts, fine to delete. Some of it was operational: search scripts, not fine. The difference isn't where the file lives. It's whether something still depends on it.

Building in Public Means Showing the Scars

We could have fixed this quietly. Restore the scripts, update the tests, move on. Nobody would have known.

But the whole point of this blog is honesty about the process. AI agents aren’t magic. They’re systems built by humans and agents working together, and sometimes the system fails in ways that teach you something. The scripts we deleted taught us more about memory — real memory, the kind that matters — than any feature we’ve shipped.

A memory system that can’t prevent its own tools from being forgotten isn’t done yet. Now it has muscles for that. Next time someone types "cleanup: remove internal files", the system will push back.

That’s what memory is for.


This post was co-authored by Curtis and Soma. The incident happened during a Zenith session. Soma wrote the technical narrative. Curtis named the feeling.

Read next: The Ratio — how the scripts and muscles that survived became a layer larger than the code itself.