BM25 for precision, semantic embeddings for recall, PageRank for structural importance — fused via Reciprocal Rank Fusion (RRF). Faster and more accurate than grep.
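As a rough illustration of the fusion step, here is a minimal Reciprocal Rank Fusion sketch. The function name, the toy file lists, and the constant `k=60` (the value used in the original RRF formulation) are illustrative, not the tool's actual internals:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: each ranked list contributes 1/(k + rank)
    per document; summed scores decide the fused order."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: BM25 and embedding search disagree on order;
# RRF rewards documents that rank well in both lists.
bm25 = ["auth.ts", "login.ts", "db.ts"]
semantic = ["login.ts", "auth.ts", "session.ts"]
fused = rrf_fuse([bm25, semantic])
```

Because the formula only uses ranks, not raw scores, BM25 and cosine-similarity results can be combined without any score normalization.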
all-MiniLM-L6-v2 via ONNX runtime. 384-dimensional vectors generated on your machine. No API calls, no data leaves your laptop.
Files that are imported by many others rank higher. Your agent finds the actually-important code first, not just keyword matches.
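The idea can be sketched with a plain power-iteration PageRank over an import graph, where rank flows from importer to imported. The graph shape and file names are hypothetical, not the tool's real data model:

```python
def pagerank(edges, damping=0.85, iters=50):
    """Power-iteration PageRank. `edges` maps each file to the files it
    imports; files imported by many others accumulate more rank."""
    nodes = set(edges)
    for targets in edges.values():
        nodes.update(targets)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in edges.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node (imports nothing): spread its rank evenly.
                for n in nodes:
                    new[n] += damping * rank[src] / len(nodes)
        rank = new
    return rank

graph = {
    "app.py": ["utils.py", "db.py"],
    "api.py": ["utils.py"],
    "utils.py": [],
    "db.py": [],
}
ranks = pagerank(graph)
```

Here `utils.py` is imported by two files and `db.py` by one, so `utils.py` ends up with the higher score — exactly the "widely-imported files matter more" signal described above.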
Save decisions, patterns, and preferences with git-state linking. Stale memories flagged automatically when referenced files change.
File watcher updates the index on every save. Dependency graph and PageRank recompute in real time. Always fresh.
TypeScript, JavaScript, Python, Go, Rust, Java, C, C++, Ruby, PHP. More coming. Structural parsing for each.
Auto-imports memories from CLAUDE.md, .cursorrules, AGENTS.md, CONTRIBUTING.md, and ADRs on init. Your existing project knowledge becomes semantically searchable instantly.
Every memory carries the git SHA and branch from when it was created. Know what the code looked like when the decision was made.
If a memory references a file that no longer exists, it's flagged as stale. No more advice based on deleted code.
Memories are embedded and searched the same way as code. Ask "what did we decide about auth?" and get the relevant memory.
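Retrieval over embedded memories reduces to cosine similarity between the query vector and each stored vector. A toy sketch — the real index uses 384-dimensional MiniLM vectors, while these 3-d vectors and memory strings are made up for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search_memories(query_vec, memories, top_k=3):
    """Rank stored memories by similarity to the embedded query."""
    ranked = sorted(
        memories.items(),
        key=lambda kv: cosine(query_vec, kv[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:top_k]]

memories = {
    "decided on JWT with 15-minute expiry for auth": [1.0, 0.1, 0.0],
    "prefer tabs over spaces in this repo": [0.0, 1.0, 0.2],
}
query = [0.9, 0.2, 0.0]  # stands in for the embedding of "what did we decide about auth?"
results = search_memories(query, memories)
```

Because code and memories share one embedding space, the same query can pull back both the relevant function and the decision that shaped it.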
An MCP resource surfaces top memories to Claude before you type anything. Your decisions travel across sessions automatically.