Code intelligence
for AI agents

Your AI agent wastes 70% of tokens reading irrelevant files. Sverklo indexes your codebase locally, so agents find the right code instantly — fewer tokens, faster results, better answers.

1. Install
$ npm install -g sverklo
2. Connect to your agent
$ claude mcp add sverklo -- sverklo .
3. Ask anything
Found 4 results in 23ms — hybrid search across 1,200 files

AI agents are blind without context

70%
of tokens wasted
AI agents read irrelevant files via grep, burning through your context window and rate limits.
10-30 min
lost per session
Developers rebuild context every session. Decisions, patterns, and preferences vanish after compaction.
<50ms
with Sverklo
Semantic search finds exactly the right code. Memories persist across sessions. Your agent stays smart.

Search that understands your code

Hybrid retrieval with real ONNX embeddings, graph-based ranking, and session memory. Everything runs locally.

Hybrid Search

BM25 text matching combined with vector semantic search, PageRank graph ranking, and reciprocal rank fusion for precise results.
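Reciprocal rank fusion is simple enough to show in a few lines. The sketch below is illustrative, not Sverklo's actual implementation: it merges hypothetical ranked lists (a BM25 list and a vector-search list) using the conventional RRF constant k = 60.

```typescript
// Reciprocal rank fusion (RRF): merge several ranked result lists into
// one score per document. Each list contributes 1 / (k + rank) for every
// document it contains; documents ranked highly in multiple lists win.
function rrfFuse(rankings: string[][], k = 60): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, i) => {
      // i is zero-based, so rank = i + 1
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return scores;
}

// Hypothetical result lists: text matching vs. semantic similarity.
const bm25 = ["auth.ts", "login.ts", "session.ts"];
const vector = ["session.ts", "auth.ts", "token.ts"];
const fused = [...rrfFuse([bm25, vector]).entries()]
  .sort((a, b) => b[1] - a[1])
  .map(([doc]) => doc);
```

Because "auth.ts" appears near the top of both lists, it outranks files that score well on only one signal, which is the point of fusing rather than picking a single retriever.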

Real ONNX Embeddings

all-MiniLM-L6-v2 runs locally via ONNX runtime. No API keys, no network calls, no data leaving your machine.

10 Languages

TypeScript, JavaScript, Python, Go, Rust, Java, C, C++, Ruby, and PHP. Tree-sitter parsing for accurate symbol extraction.

Session Memory

Remember decisions, patterns, and context across sessions. Git-state linking, quality scoring, and automatic staleness detection.

Zero Config

Auto-indexes your codebase, respects .gitignore, and keeps the index fresh with incremental updates via a file watcher. Just run it and search.

Token-Budgeted

Responses are trimmed to fit within LLM context windows. Your agent gets the most relevant code without wasting tokens.
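One way to picture token budgeting: pack the highest-ranked results into the response until the budget is exhausted. This is a minimal sketch under stated assumptions, not Sverklo's real code — the `Result` shape is hypothetical, and token counts are approximated as characters divided by four.

```typescript
// Illustrative token-budget trimming: keep top-ranked results until the
// next one would overflow the budget. Real tokenizers differ; this uses
// the rough heuristic of ~4 characters per token.
interface Result {
  path: string;
  snippet: string;
}

const approxTokens = (text: string): number => Math.ceil(text.length / 4);

function fitToBudget(results: Result[], budget: number): Result[] {
  const kept: Result[] = [];
  let used = 0;
  for (const r of results) {
    const cost = approxTokens(r.snippet);
    if (used + cost > budget) break; // stop before overflowing the window
    kept.push(r);
    used += cost;
  }
  return kept;
}
```

Stopping at the first result that doesn't fit (rather than skipping ahead to smaller ones) preserves the ranking order, so the agent always sees the most relevant code first.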

TypeScript JavaScript Python Go Rust Java C C++ Ruby PHP

From code to answers in milliseconds

30 files indexed in 500ms. Search results in under 50ms. All local.

1

Watch

File watcher detects changes, respects .gitignore

2

Parse

Tree-sitter extracts symbols, types, references

3

Embed

ONNX model creates vector embeddings locally

4

Index

BM25 + vector index + PageRank graph built

5

Search

Hybrid retrieval with RRF fusion, ranked results

10 tools your agent actually needs

Code search and session memory designed for how AI agents work.

Code Search
search
Hybrid code search across your entire codebase. Combines text matching, semantic similarity, and graph ranking.
overview
Get a high-level structural overview of the project: key modules, entry points, architecture patterns.
lookup
Jump directly to a symbol definition. Retrieve function signatures, class definitions, type declarations.
find_references
Find all usages of a symbol across the codebase. Understand impact before refactoring.
dependencies
Map import graphs and dependency chains. See what depends on what.
index_status
Check indexing progress, file counts, and embedding status for the current project.
Session Memory
remember
Save a decision, pattern, or context to persistent memory. Linked to current git state automatically.
recall
Retrieve relevant memories for the current context. Quality-scored and staleness-aware.
forget
Remove outdated or incorrect memories. Keep your memory store clean and accurate.
memories
List and browse all stored memories. Filter by topic, recency, or quality score.

100% free. No limits. No catch.

All 10 tools, all features, MIT licensed. Runs locally on your machine — your code never leaves your computer.

Up and running in 60 seconds

1

Install Sverklo

npm install -g sverklo
2

Add to your AI agent

# Claude Code
claude mcp add sverklo -- sverklo .

// Cursor / Windsurf / VS Code — add to mcp.json
{
  "mcpServers": {
    "sverklo": {
      "command": "sverklo",
      "args": ["."]
    }
  }
}
3

Start coding. The embedding model (~90MB) downloads automatically on first run. After that, startup is instant.

Optional: Open the dashboard to see what's indexed

sverklo ui .