
Sverklo vs jcodemunch-mcp

Both are MIT-licensed, local-first, MCP-shaped code-intelligence servers. Both appear on the same 90-task benchmark. Both shipped lodash P1 fixes inside 36 hours of the original bench publication. Different retrieval substrates, different wins. The honest version is below.

Side by side

                                  Sverklo                                                   jcodemunch-mcp
License                           MIT                                                       MIT
Runtime                           Node 25, embedded SQLite, ONNX (~90 MB)                   Python (uvx-installed)
Install                           npm install -g sverklo                                    uvx jcodemunch-mcp
Retrieval substrate               Hybrid BM25 + ONNX embeddings + PageRank                  Tree-sitter symbol indexing + call-graph fallback
MCP tools shipped                 37                                                        (see jcodemunch docs)
Bench F1 — overall                0.56                                                      0.32
Bench F1 — P1 (def lookup)        0.73                                                      0.73
Bench F1 — P2 (refs)              0.25                                                      0.00
Bench F1 — P4 (file deps)         0.71                                                      0.46
Bench F1 — P5 (dead code)         0.67                                                      0.00
Avg input tokens / task           469                                                       1,267
Avg tool calls / task             1.0                                                       1.2
Memory (bi-temporal, git-pinned)  Yes                                                       No
Diff-aware PR review              Yes                                                       No
MCP clients supported             Claude Code, Cursor, Windsurf, Zed, VS Code, JetBrains    Claude Code, Cursor, Windsurf, Zed

Bench data from sverklo.com/bench — sverklo v0.20.2 / jcodemunch-mcp v1.80.9, 90 tasks across express + lodash + sverklo. Reproducible: npm run bench:quick.
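For readers who want to sanity-check the scores: F1 here is the standard harmonic mean of precision and recall over each task's expected vs. returned symbol sets. A minimal per-task scorer under that assumption (the function name and set-based scoring are illustrative, not the bench's actual code):

```javascript
// Per-task F1: harmonic mean of precision and recall over symbol sets.
// `expected` and `returned` are arrays of symbol names for one bench task.
function taskF1(expected, returned) {
  const truth = new Set(expected);
  const hits = returned.filter((s) => truth.has(s)).length;
  if (hits === 0) return 0; // avoids 0/0 when nothing overlaps
  const precision = hits / returned.length;
  const recall = hits / truth.size;
  return (2 * precision * recall) / (precision + recall);
}

// Example: a tool returns 3 symbols, 2 correct, out of 4 expected.
// precision = 2/3, recall = 1/2, F1 = 4/7 ≈ 0.571
```

A P5 score of 0.00 therefore means zero overlap on every dead-code task, not merely sloppy answers, which is why the CommonJS bug below was so visible.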

The bench-loop story

2026-04-28 → 2026-05-04 · what happened

The original 60-task bench (Apr 28) had jcodemunch-mcp at P1 0.65 (definition lookup leader) but P5 0.00 on Express — the CommonJS module.exports = X re-export chain wasn't modeled as a use site, so the only export of the entire module appeared to have no callers.

Within 36 hours of the bench going live, jcodemunch-mcp's maintainer @jgravelle shipped v1.80.7, v1.80.8, and v1.80.9. The fixes: CommonJS re-export modeling, lifting the 500 KB per-file size cap (lodash.js is 548 KB), and a monolithic-IIFE call-graph fallback. P5 went 0.00 → 1.00; lodash P1 went 0/10 → 9/10.

Adding lodash to sverklo's bench then exposed the symmetric blind spot in sverklo's own parser: the regex brace counter mis-counted braces inside string literals. Line 6301 of lodash.js contains '{\n/* [wrapped with ' inside a string, and the unbalanced { caused every function declaration after that line to be absorbed into a single ~11 K-line chunk. Sverklo v0.20.2 ships the fix; P1 went 0.30 → 0.73 and overall F1 went 0.45 → 0.56.

Both projects landed lodash P1 fixes inside 36 hours of the original bench publication. Different parsers, different bugs, same effect — and the public benchmark made each side's blind spot visible to the other in a way no internal eval would have. The full timeline lives on sverklo.com/bench.

When to use which

Choose Sverklo

  • You want the F1 leader on the published benchmark (0.56 vs 0.32) at lower input tokens (469 vs 1,267).
  • File-dependency analysis matters to your workflow — sverklo wins P4 at 0.71 vs jcodemunch's 0.46.
  • You want bi-temporal memory pinned to git SHAs (jcodemunch has no memory layer).
  • Your repo has TypeScript / TSX-heavy code where the regex parser plus tree-sitter fallback both matter.
  • You want diff-aware PR review (sverklo review) alongside retrieval.
  • Node ecosystem fits your install constraint better than uvx / Python.

Choose jcodemunch-mcp

  • You only need P1 definition lookup and want a tighter, simpler tool — jcodemunch matches sverklo at 0.73 with a smaller surface.
  • Your stack is Python-native and uvx is the install path you already use.
  • You want the most active single-purpose MCP tool in this category — jcodemunch shipped three releases in 36 hours against bench findings, which is unusually responsive.
  • Tree-sitter call-graph is the substrate you trust over hybrid BM25 + embeddings.

Where neither wins

Both projects' P2 reference-finding scores reflect import-graph-shaped models — they return ~0 on call-site reference questions because they track import sites, not call sites. That's a legitimate design choice for refactor-by-module workflows, not a bug. If your job is "find every caller of X", smart-grep beats both projects at F1 0.40. The bench page's honesty section calls this out at the per-task level. Different retrieval substrates, different jobs.
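The import-site vs. call-site distinction is concrete: an import-graph model records one reference per importing file, while a P2 question wants every invocation. A toy contrast (illustrative only, neither project's actual code, and far simpler than real reference finding):

```javascript
const src = `
const { debounce } = require('lodash');
el.on('input', debounce(sync, 200));
el.on('scroll', debounce(paint, 50));
`;

// Import-graph view: counts destructured require bindings of the symbol,
// i.e. one reference per importing file.
function importSites(symbol, code) {
  const re = new RegExp(
    `\\{[^}]*\\b${symbol}\\b[^}]*\\}\\s*=\\s*require\\(`, "g");
  return [...code.matchAll(re)].length;
}

// Call-site view: counts invocations, which is what "find every caller
// of X" actually asks for. The lookbehind skips method calls like `a.debounce(`.
function callSites(symbol, code) {
  const re = new RegExp(`(?<![.\\w])${symbol}\\s*\\(`, "g");
  return [...code.matchAll(re)].length;
}

// On the snippet above, the import-graph view sees 1 reference;
// the call-site view sees 2.
```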

Try Sverklo

MIT-licensed. Local-first. 37 MCP tools. Single install command.

npm install -g sverklo