A high-level walkthrough of the architecture, data flow, and the retrieval strategies that make Engram different from a vector database.
Architecture overview
Every layer runs locally. No external services required for core functionality.
Where your data lives
Everything lives in a single SQLite database on your machine. Memories, embeddings, entities, edges — all in one portable file.
The core product runs entirely on your machine. No accounts, no API keys for storage, no data leaving your device.
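To make the "one portable file" idea concrete, here is a minimal sketch of what such a schema could look like. All table and column names here are illustrative assumptions, not Engram's actual schema:

```python
import sqlite3

# Hypothetical single-file layout: memories, embeddings, entities, and
# edges all in one SQLite database (names are assumptions for illustration).
conn = sqlite3.connect(":memory:")  # in practice a file such as engram.db
conn.executescript("""
CREATE TABLE memories  (id INTEGER PRIMARY KEY, text TEXT, created_at TEXT);
CREATE TABLE embeddings(memory_id INTEGER REFERENCES memories(id), vector BLOB);
CREATE TABLE entities  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE edges     (src INTEGER, dst INTEGER, kind TEXT, weight REAL);
""")
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
print(sorted(tables))
```

Because everything is plain SQLite, backing up or moving your memory store is just copying one file.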
The memory lifecycle
1. The agent stores a fact, preference, or observation; entities and topics are extracted automatically.
2. The memory is embedded, indexed, and linked to existing entities in the knowledge graph.
3. Queries combine entity matching, topic matching, vector search, and spreading activation.
4. Sleep cycles distill raw episodes into structured semantic knowledge using an LLM.
5. Refined memories surface proactively; contradictions are detected and resolved.
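The first three steps can be sketched in a few lines. This is a toy illustration, not Engram's API; the function names and the naive capitalized-word entity extractor are assumptions:

```python
import re

store = []   # raw episodic memories
graph = {}   # entity name -> set of memory ids

def remember(text):
    """Store a memory and link it to extracted entities (steps 1-2)."""
    mid = len(store)
    store.append(text)
    for entity in re.findall(r"[A-Z][a-z]+", text):  # toy entity extraction
        graph.setdefault(entity, set()).add(mid)
    return mid

def recall(query):
    """Retrieve by entity match (vector, topic, and activation omitted)."""
    hits = set()
    for entity in re.findall(r"[A-Z][a-z]+", query):
        hits |= graph.get(entity, set())
    return [store[i] for i in sorted(hits)]

remember("Thomas prefers morning runs")
print(recall("What does Thomas like?"))  # ['Thomas prefers morning runs']
```

A real pipeline would add embedding, indexing, and the consolidation and surfacing steps on top of this skeleton.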
Three pillars deep dive
Memories aren’t isolated documents — they’re nodes in a graph connected by typed edges. When you store a new memory, Engram links it to existing knowledge automatically.
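A node-plus-typed-edge structure can be sketched as follows. The edge kinds shown ("mentions", "about") are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    src: str
    dst: str
    kind: str     # e.g. "mentions", "about" (hypothetical edge types)
    weight: float

# A memory node linked to an entity and a topic by typed, weighted edges.
edges = [
    Edge("mem:1", "entity:Thomas", "mentions", 1.0),
    Edge("mem:1", "topic:fitness", "about", 0.8),
]

def neighbors(node, kind=None):
    """Follow typed edges out of a node, optionally filtered by edge type."""
    return [e.dst for e in edges
            if e.src == node and (kind is None or e.kind == kind)]

print(neighbors("mem:1"))  # ['entity:Thomas', 'topic:fitness']
```

Typed edges are what let later retrieval distinguish "this memory mentions Thomas" from "this memory is about fitness".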
Much as the brain consolidates memories during sleep, Engram’s consolidation engine uses an LLM to distill raw episodes into structured semantic knowledge.
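In outline, a consolidation pass gathers related episodes and asks a model to generalize them. The prompt wording is an assumption, and `call_llm` is a stand-in stub for whatever model client a real system would use:

```python
episodes = [
    "2024-03-01: Thomas went for a run at 6am",
    "2024-03-03: Thomas ran before breakfast again",
]

PROMPT = ("Distill these raw episodes into one general fact:\n"
          + "\n".join(episodes))

def call_llm(prompt):
    # Stubbed response for illustration; a real sleep cycle would call an LLM.
    return "Thomas prefers morning runs"

semantic_fact = call_llm(PROMPT)
print(semantic_fact)  # Thomas prefers morning runs
```

The distilled fact then replaces or augments the raw episodes, so later recall can return one clean statement instead of scattered observations.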
A query activates matching nodes, then energy spreads through the graph along weighted edges — surfacing context you didn’t know to ask for.
Query “Thomas” → recall cascades through the graph, surfacing “prefers morning runs” without explicitly asking about running.
Retrieval strategy
Every recall query runs four retrieval strategies in parallel, then merges and re-ranks the results. This is why Engram outperforms pure vector search by 15+ points.
Entity matching: finds memories mentioning the same people, projects, or concepts as your query.
Topic matching: filters by topic tags extracted when memories are stored.
Vector search: sqlite-vec embeddings find semantically similar memories even with different wording.
Spreading activation: follows knowledge graph edges to surface context you didn’t know to ask for.
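One common way to merge several ranked result lists is reciprocal rank fusion; whether Engram uses RRF specifically is an assumption, but it illustrates the merge-and-re-rank step:

```python
def rrf(ranked_lists, k=60):
    """Reciprocal rank fusion: score each id by summed 1/(k + rank + 1)."""
    scores = {}
    for ranked in ranked_lists:
        for rank, mem_id in enumerate(ranked):
            scores[mem_id] = scores.get(mem_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs of the four parallel strategies.
entity_hits = ["m1", "m4"]
topic_hits  = ["m4", "m2"]
vector_hits = ["m3", "m4", "m1"]
graph_hits  = ["m5"]

merged = rrf([entity_hits, topic_hits, vector_hits, graph_hits])
print(merged)
```

A memory that several strategies agree on ("m4" above) rises to the top even if no single strategy ranked it first, which is the intuition behind running all four in parallel.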