MCP code intelligence comparison matrix

One grid covering 12 AI coding assistants and MCP code-intelligence servers across 9 dimensions. Use it as the high-level scan, then drill into any cell via the per-tool comparison pages.

Last verified: 2026-04-29 · 12 tools · 9 dimensions · 108 cells · published with the per-tool comparisons at /vs/
Legend: yes = supported natively · partial = via integration / not the primary use case · no = explicitly unsupported · — = no data
| Tool | Category | License | Local-first | MCP-native | Symbol graph | Bi-temporal memory | Diff review | Languages | Cost |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sverklo | MCP retrieval layer | MIT | yes (SQLite + ONNX) | yes (37 tools) | yes (tree-sitter + PageRank) | yes (git-SHA pinned) | yes (risk-scored) | 12 | $0 |
| Sourcegraph Cody | Hosted code AI | Source-available | cloud or self-host | — | enterprise | — | — | many | $9–19/dev/mo |
| Greptile | Hosted PR review | Proprietary | cloud only | — | — | — | inline comments | many | $30/dev/mo |
| Cursor @codebase | Editor + retrieval | Proprietary | cloud-indexed | Cursor 0.42+ | — | — | — | many | Cursor sub |
| Claude Context (Zilliz) | RAG over Milvus | MIT | needs Milvus | yes | vector only | — | — | many | $0 + Milvus |
| Serena | LSP-backed retrieval | MIT | yes | yes | via LSP | — | — | LSP-defined | $0 |
| GitNexus | Graph DB retrieval | Polyform NC | yes (KuzuDB) | yes | Cypher graph | — | — | many | noncommercial only |
| codebase-memory-mcp | Memory-only MCP | MIT | yes | yes | — | — | — | n/a | $0 |
| Aider repo-map | Agent + static map | Apache 2.0 | yes | via MCP plugin | static | — | — | many | $0 |
| Continue.dev | Editor assistant | Apache 2.0 | — | — | — | — | — | many | $0 (BYO LLM) |
| Claude Code | CLI agent | Proprietary | agent local, model cloud | — | via MCP only | — | — | many | Claude API |
| OpenAI Codex CLI | CLI agent | Apache 2.0 (CLI) | — | — | via MCP only | — | — | many | OpenAI API |

How to read this: sverklo and the other MCP retrieval servers (Serena, Claude Context, codebase-memory-mcp, GitNexus) sit in the same category as us. Cody and Greptile are hosted services in adjacent categories. Cursor is an editor with built-in retrieval. Aider, Continue, Claude Code, and Codex CLI are agents — sverklo is the retrieval layer they call via MCP, not a competitor. The "diff review" column captures whether the tool itself produces risk-scored PR review output.

The two axes that actually decide the choice

Axis 1: deployment. Local-first (laptop + embedded SQLite + local model) vs cloud-hosted (your code uploaded to the vendor's infrastructure). For finance, healthcare, defence, or any project under data-residency constraints, this axis settles the choice in 5 minutes. Sverklo, Serena, Aider repo-map, codebase-memory-mcp, and GitNexus are local-first; Cody, Greptile, and Cursor's @codebase are cloud-hosted.
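To make "local-first" concrete: the entire index is a file on your disk, and no source ever leaves the machine. This is not sverklo's implementation — just a minimal sketch of the mechanic, using SQLite's built-in FTS5 keyword search as a stand-in for a real embedding index:

```python
# Illustrative sketch of local-first retrieval: the whole index is one
# SQLite database (here an in-memory one); nothing is uploaded anywhere.
import sqlite3

def build_index(db_path, files):
    """Index a {path: source_text} mapping into a local FTS5 table."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS chunks USING fts5(path, body)")
    con.executemany("INSERT INTO chunks VALUES (?, ?)", files.items())
    con.commit()
    return con

def search(con, query, k=5):
    """Return the top-k matching paths, ranked by FTS5's BM25 score."""
    rows = con.execute(
        "SELECT path FROM chunks WHERE chunks MATCH ? ORDER BY rank LIMIT ?",
        (query, k),
    )
    return [path for (path,) in rows]

con = build_index(":memory:", {
    "auth.py": "def verify_token(token): ...",
    "db.py": "def connect(): ...",
})
print(search(con, "token"))  # → ['auth.py']
```

A cloud-hosted tool performs the same two steps, but `build_index` runs on the vendor's servers against an uploaded copy of your repository, which is exactly what data-residency constraints rule out.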
Axis 2: license. MIT (Sverklo, Serena, Claude Context, codebase-memory-mcp), Apache 2.0 (Aider, Continue, Codex CLI), source-available (Cody), Polyform Noncommercial (GitNexus), proprietary (Greptile, Cursor). For commercial use, MIT and Apache are equivalent; Polyform NC blocks commercial use entirely; source-available means "fine for evaluation, restricted for production."

The category-confusion to avoid

Aider, Continue, Claude Code, and Codex CLI are agents, not retrieval engines. They do the editing. Sverklo is the retrieval layer the agent calls before editing. Asking "Aider vs sverklo" is like asking "your IDE vs your file system" — they live at different layers and you usually want both. The only honest comparison is between sverklo and the other MCP retrieval servers (Serena, Claude Context, codebase-memory-mcp, GitNexus) — and that's what the matrix above shows.
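In practice the layering is one config entry: the agent registers the retrieval server as an MCP server and calls its tools before editing. A Claude Code-style `.mcp.json` entry might look like this (the `sverklo` command name and its flags are illustrative assumptions, not a documented CLI):

```json
{
  "mcpServers": {
    "sverklo": {
      "command": "sverklo",
      "args": ["serve", "--repo", "."]
    }
  }
}
```

Once registered, the agent (Claude Code, Codex CLI, or Aider via an MCP plugin) sees the server's tools and can query the symbol graph before it writes a diff.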


How we built this matrix

Each cell was verified against the tool's official docs or repo on 2026-04-29. We don't claim any cell is permanent — tools evolve. If you find a stale cell, file an issue with the corrected information and a citation.

The choice of dimensions reflects the questions sverklo's users actually ask. We chose not to include "speed" or "F1" as columns because they're meaningful only with a specific benchmark — and we've published one of those at /bench/, with sverklo's losses on the same page as its wins.

See also