# MCP code intelligence comparison matrix
One grid: 12 AI coding assistants and MCP code-intelligence servers compared across 9 dimensions. Use it for the high-level scan, then drill into any cell via the per-tool comparison pages. Legend: ● full support, ◐ partial, ○ none.
| Tool | Category | License | Local-first | MCP-native | Symbol graph | Bi-temporal memory | Diff review | Languages | Cost |
|---|---|---|---|---|---|---|---|---|---|
| Sverklo | MCP retrieval layer | MIT | ● SQLite + ONNX | ● 37 tools | ● tree-sitter + PageRank | ● git-SHA pinned | ● risk-scored | 12 | $0 |
| Sourcegraph Cody | Hosted code AI | Source-available | ○ cloud or self-host enterprise | ○ | ● | ○ | ◐ | many | $9-19/dev/mo |
| Greptile | Hosted PR review | Proprietary | ○ cloud only | ○ | ● | ○ | ● inline comments | many | $30/dev/mo |
| Cursor @codebase | Editor + retrieval | Proprietary | ○ cloud-indexed | ◐ Cursor 0.42+ | ◐ | ○ | ○ | many | Cursor sub |
| Claude Context (Zilliz) | RAG over Milvus | MIT | ◐ needs Milvus | ● | ○ vector only | ○ | ○ | many | $0 + Milvus |
| Serena | LSP-backed retrieval | MIT | ● | ● | ● via LSP | ○ | ○ | LSP-defined | $0 |
| GitNexus | Graph DB retrieval | Polyform NC | ● KuzuDB | ● | ● Cypher graph | ○ | ○ | many | noncommercial only |
| codebase-memory-mcp | Memory-only MCP | MIT | ● | ● | ○ | ◐ | ○ | n/a | $0 |
| Aider repo-map | Agent + static map | Apache 2.0 | ● | ◐ via MCP plugin | ◐ static | ○ | ○ | many | $0 |
| Continue.dev | Editor assistant | Apache 2.0 | ◐ | ● | ◐ | ○ | ○ | many | $0 (BYO LLM) |
| Claude Code | CLI agent | Proprietary CLI | ◐ agent local, model cloud | ● | ○ via MCP only | ○ | ○ | many | Claude API |
| OpenAI Codex CLI | CLI agent | Apache 2.0 (CLI) | ◐ | ● | ○ via MCP only | ○ | ○ | many | OpenAI API |
How to read this: sverklo and the other MCP retrieval servers (Serena, Claude Context, codebase-memory-mcp, GitNexus) sit in the same category as us. Cody and Greptile are hosted services in adjacent categories. Cursor is an editor with built-in retrieval. Aider, Continue, Claude Code, and Codex CLI are agents — sverklo is the retrieval layer they call via MCP, not a competitor. The "diff review" column captures whether the tool itself produces risk-scored PR review output.
## The two axes that actually decide

Two axes do most of the work in this comparison: which layer a tool lives at (agent versus retrieval), and which specific use case it is built for. The next two subsections take them in turn.

### The category-confusion to avoid
Aider, Continue, Claude Code, and Codex CLI are agents, not retrieval engines. They do the editing. Sverklo is the retrieval layer the agent calls before editing. Asking "Aider vs sverklo" is like asking "your IDE vs your file system" — they live at different layers and you usually want both. The only honest comparison is between sverklo and the other MCP retrieval servers (Serena, Claude Context, codebase-memory-mcp, GitNexus) — and that's what the matrix above shows.
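Concretely, "the retrieval layer the agent calls" means the agent sends the server an MCP `tools/call` request over JSON-RPC 2.0 and gets structured results back before it touches any file. A minimal sketch of that message shape; the tool name `search_symbols` and its arguments are illustrative placeholders, not sverklo's actual API:

```python
import json

# Hypothetical sketch: the JSON-RPC 2.0 message an MCP client (the agent)
# sends to invoke a tool on a retrieval server. "search_symbols" and its
# arguments are made-up names, not any specific server's real interface.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",          # standard MCP method for tool invocation
    "params": {
        "name": "search_symbols",    # hypothetical retrieval tool
        "arguments": {"query": "parse_config", "limit": 5},
    },
}

wire = json.dumps(request)  # what actually travels over stdio or HTTP
print(wire)
```

The agent edits files itself; the server only answers retrieval questions like this one, which is why the two layers compose rather than compete.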
### What about specific use cases?
- Hosted PR-review-as-a-service for an enterprise team that doesn't run anything locally: Greptile.
- Symbol-graph precision via real language servers, willing to configure per-agent: Serena.
- Drop-in retrieval that auto-detects every agent on the laptop, MIT, no external services: Sverklo.
- The agent itself, not retrieval: Aider, Continue, Claude Code, Codex CLI.
- Pure agent memory layer, no code retrieval: codebase-memory-mcp.
- Cypher graph queries on a code-graph DB, willing to accept noncommercial license: GitNexus.
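For the agents in the list above, wiring in an MCP retrieval server is usually a config entry rather than code. A sketch in the `.mcp.json` format that Claude Code reads; the `sverklo` command and `serve` argument are assumptions about the server's CLI, not documented flags:

```json
{
  "mcpServers": {
    "sverklo": {
      "command": "sverklo",
      "args": ["serve"],
      "env": {}
    }
  }
}
```

Other MCP-native agents use the same server-name-to-command shape under their own config keys, so one local server can back several agents at once.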
## How we built this matrix
Each cell was verified against the tool's official docs or repo on 2026-04-29. We don't claim any cell is permanent — tools evolve. If you find a stale cell, file an issue with the corrected information and a citation.
The choice of dimensions reflects the questions sverklo's users actually ask. We chose not to include "speed" or "F1" as columns because they're meaningful only with a specific benchmark — and we've published one of those at /bench/, with sverklo's losses on the same page as its wins.
## See also
- bench:primitives — 60-task retrieval evaluation, sverklo vs naive grep vs tuned grep
- All comparison pages — per-tool detail rather than the high-level grid
- GitHub: sverklo/sverklo