I built a shared brain for my AI coworkers
I have a team of Claude Code/Claude Desktop sessions running in parallel and doing work for me — one on engineering, one on comms, one on social (you probably clicked into this from a post Cindy put out), plus the ad-hoc ones I open whenever I’m in a repo. Each one opens blank. Each one needs the operational state pasted in by hand: which Linear issue, which PR, which Gmail thread, what’s actually shipped versus what someone said they’d ship.
As the number of sessions grows, keeping agents in sync with each other takes real work and real time. Often I was faster solving a problem myself than getting any agent up to speed. The agents were useful alone. Together they were strangers.
The first thing I tried was a file-system-based markdown setup: a folder structure holding most of the “work records”, decisions included, with every session running on top of it. As the documents grew I landed in maintenance hell. First, it only kept the current reality, with no history of how things evolved. Second, I was constantly fighting stale notes and drift. Exactly the job I’d started the agents to avoid.
Throwing the same notes into a vector DB doesn’t fix it either. Better retrieval over a frozen snapshot still goes stale the moment the underlying issue or PR changes: no link back to the source, no history, no way to repair a stale note.
So I looked at the agent-memory tools.
- Mem0 compresses conversations into user-scoped memories for chatbot personalization — wrong substrate for facts that live in operational systems.
- Gbrain is personal memory — one user, one agent, hand-written pages. Wrong scale for a fleet, wrong source for operations.
- Cognee comes closest: broad ingestion, a graph, multi-agent support. But its defaults are agent isolation over document inputs. Different shape.
None of them keeps a fleet on a shared view of operational reality: what’s shipped and true now versus what’s merely been stated.
PearScarf ingests records from the systems where my work actually lives (AI team members, Linear, Gmail) and extracts structured facts into a typed knowledge graph, with the source records preserved alongside and vector embeddings for semantic search.
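Concretely, the shape of what gets stored looks roughly like this. A minimal sketch in Python; the field names are mine, not PearScarf’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class SourceRecord:
    """A raw record from Linear, Gmail, or an agent, preserved verbatim."""
    id: str
    source: str             # "linear" | "gmail" | "agent" | ...
    payload: str            # the original record, untouched
    embedding: list[float]  # vector for semantic search

@dataclass
class Fact:
    """A typed edge in the knowledge graph, extracted from a record."""
    subject: str    # entity name, e.g. a service or a PR
    predicate: str  # typed relationship, e.g. "SHIPPED_IN"
    object: str
    kind: str       # "reality" (shipped, true now) vs "intention" (planned)
    source_id: str  # points back at the SourceRecord it came from
    valid_from: str # timestamp, so the graph keeps history, not just "now"
```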
Every session asks PearScarf instead of asking me. One call returns everything known about an entity: the reality (what’s shipped and true now) and the intentions (milestones, objectives). PearScarf can also return the full evolution of that reality.
Any MCP-compatible client — Claude Code, Claude Desktop, custom agents, Codex — can connect and read from the same graph.
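Because the interface is plain MCP, a client is a few lines. Here is a sketch using the official `mcp` Python SDK; the server command `pearscarf-mcp` and the `entity` argument name are my assumptions, while the tool name `get_entity_context` comes from the tool list below:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the PearScarf MCP server as a subprocess (command is hypothetical).
    server = StdioServerParameters(command="pearscarf-mcp")
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # One call instead of pasting operational state in by hand.
            result = await session.call_tool(
                "get_entity_context", arguments={"entity": "checkout-service"}
            )
            print(result.content)

asyncio.run(main())
```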
How it works:
- Records arrive: pushed via `submit_record` (any MCP agent can use it), or pulled by per-source experts like Linear and Gmail. Add a new source (HubSpot, Slack, your own) by adding an expert.
- Triage classifies each record as relevant or noise.
- Extraction pulls entities and facts out of relevant records.
- Storage: records in Postgres, the graph in Neo4j, vectors in Qdrant. (The whole ingest path is sketched after this list.)
- MCP exposes eight tools: `get_schema`, `search`, `query_facts`, `query_records`, `get_entity_context`, `get_relationship`, `submit_record`, `get_record_status`.
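The ingest path is simple enough to sketch in one function. This is my sketch of the shape, not PearScarf’s code: `triage` and `extract` are stubs where the real system would call an LLM, and the connection strings are local defaults.

```python
import json
import uuid
from typing import Callable

import psycopg                     # Postgres driver (psycopg 3)
from neo4j import GraphDatabase    # Neo4j driver
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

def triage(record: dict) -> bool:
    """Classify a record as relevant or noise; the real step would use an LLM."""
    return bool(record.get("body"))  # placeholder heuristic

def extract(record: dict) -> list[dict]:
    """Pull typed facts out of a record; the real step would use an LLM."""
    return [{"subject": "checkout-service", "predicate": "SHIPPED_IN",
             "object": "v1.4.0", "source_id": record["id"]}]  # placeholder

def ingest(record: dict, embed: Callable[[str], list[float]]) -> None:
    if not triage(record):
        return  # drop noise before it pollutes the graph

    # 1. The source record is preserved verbatim in Postgres.
    with psycopg.connect("dbname=pearscarf") as pg:
        pg.execute(
            "INSERT INTO records (id, source, payload) VALUES (%s, %s, %s)",
            (record["id"], record["source"], json.dumps(record)),
        )

    # 2. Extracted facts land in the Neo4j graph, each edge linking back
    #    to the record it came from.
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    for fact in extract(record):
        driver.execute_query(
            "MERGE (s:Entity {name: $subject}) "
            "MERGE (o:Entity {name: $object}) "
            "MERGE (s)-[r:FACT {predicate: $predicate, "
            "source_id: $source_id}]->(o)",
            **fact,
        )
    driver.close()

    # 3. An embedding goes to Qdrant for semantic search over the same
    #    records (assumes the "records" collection already exists; Qdrant
    #    point IDs must be ints or UUIDs, hence the uuid5 derivation).
    qdrant = QdrantClient(url="http://localhost:6333")
    qdrant.upsert(
        collection_name="records",
        points=[PointStruct(
            id=str(uuid.uuid5(uuid.NAMESPACE_URL, record["id"])),
            vector=embed(record["body"]),
            payload={"source": record["source"]},
        )],
    )
```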
The vision for PearScarf is to be the operational brain for AI workforces: the substrate underneath fleets of AI workers that need to share reality, not just chat history. What exists today is a subset of that; there’s a lot left to build and ship.
- If you’re building substrate-shaped tools for agents that work continuously, or running parallel agent sessions that should be sharing more than they are — let’s chat!
- If you’re trying to employ AI coworkers, or whole teams of them (through Claude, various OpenClaw setups, and others), let’s chat!
PearScarf is on GitHub, MIT-licensed. The roadmap is open; PRs welcome.