/* recipe · custom MCP client */
Wire sverklo into any MCP-compatible agent
Sverklo runs as a Model Context Protocol stdio server. If your agent stack speaks MCP — OpenAI Agents SDK, an in-house JSON-RPC client, the official MCP TypeScript or Python SDK, or anything that can spawn a child process and exchange line-delimited JSON — you can hand it 37 sverklo tools without modifying sverklo itself. This recipe shows the exact wiring.
What you get after 35 lines
One sverklo child process per repo, indexed once on first call (~1.4s cold start, <50ms warm). The agent gains:
- sverklo_search, sverklo_lookup, sverklo_refs — hybrid retrieval grounded in the actual symbol graph
- sverklo_impact, sverklo_test_map, sverklo_review_diff — refactor blast-radius and diff-aware risk scoring
- sverklo_remember, sverklo_recall, sverklo_pin — bi-temporal memory pinned to git SHAs (snapshot, branch, rollback, time-travel)
- ~30 more across audit, dependency graphs, evidence verification, and concept clustering
Full tool list: github.com/sverklo/sverklo#tools.
The recipe (TypeScript)
Uses the official @modelcontextprotocol/sdk client. Same shape works with the Python SDK, custom JSON-RPC code, or any MCP-compatible runtime.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
// 1. Spawn sverklo as a stdio MCP server.
// `npx -y sverklo` resolves to the binary on PATH (or fetches if missing).
// Pass the repo root via SVERKLO_PROJECT — sverklo indexes it on first call.
const transport = new StdioClientTransport({
command: "npx",
args: ["-y", "sverklo"],
env: { ...process.env, SVERKLO_PROJECT: process.cwd() },
});
const client = new Client({ name: "my-agent", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);
// 2. Discover tools. sverklo ships 37; your agent decides which to expose.
const { tools } = await client.listTools();
console.error(`sverklo exposed ${tools.length} tools`);
// 3. Call any tool by name. Arguments shape is documented per-tool in the
// JSON schema sverklo returns from listTools().
const result = await client.callTool({
name: "sverklo_lookup",
arguments: { name: "createApplication" },
});
// `result.content` is an MCP-shaped array of {type, text|image|...} items.
// For sverklo, every content block is text-shaped; concat or stream as needed.
for (const block of result.content) {
if (block.type === "text") console.log(block.text);
}
// 4. When done, close the transport. The sverklo child process exits with it.
await client.close();
What about the agent loop?
The recipe above is the MCP wiring. Your agent loop — model selection, tool-call routing, message history — stays whatever you already have. Three integration shapes the wiring tends to slot into:
- OpenAI Agents SDK: pass the discovered tools array into the agent's tool registry; your existing tool_use handlers route to client.callTool.
- Anthropic SDK / Claude direct: convert sverklo's tools JSON Schema into Claude's tools param shape; pipe tool_use blocks back to client.callTool.
- Custom orchestration: discover tools once at session start, cache the schema, route by name. Sverklo's tool surface is stable across versions — re-discover only on version bumps.
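The Claude-direct shape is mostly a field rename: MCP tool descriptors carry their JSON Schema under `inputSchema`, while Anthropic's Messages API expects `input_schema`. A minimal sketch — the `toClaudeTools` helper and the sample descriptor are illustrative, not part of sverklo or either SDK:

```typescript
// Sketch: map MCP tool descriptors (as returned by client.listTools())
// into the shape Anthropic's Messages API accepts in its `tools` param.
// The JSON Schema payload passes through unchanged; only the key differs.
type McpTool = { name: string; description?: string; inputSchema: object };
type ClaudeTool = { name: string; description: string; input_schema: object };

function toClaudeTools(tools: McpTool[]): ClaudeTool[] {
  return tools.map((t) => ({
    name: t.name,
    description: t.description ?? "",
    input_schema: t.inputSchema,
  }));
}

// Hypothetical descriptor, shaped like what listTools() might return:
const converted = toClaudeTools([
  {
    name: "sverklo_lookup",
    description: "Resolve a symbol by name",
    inputSchema: { type: "object", properties: { name: { type: "string" } } },
  },
]);
console.error(`converted ${converted.length} tool(s)`);
```

On the response side, each `tool_use` block's `name` and `input` map straight onto `client.callTool({ name, arguments })`.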
Common gotchas
Stdio framing — keep it clean
Sverklo speaks line-delimited JSON-RPC over stdio. Don't write to stdout from the parent process while the transport is connected; it corrupts the frame stream and the SDK reports cryptic deserialization errors. Use stderr for your own logging. (We've seen this bug end-to-end inside MCP-shaped servers; an audit and fuzzer write-up lives at /blog/mcp-stdio-command-injection-audit/.)
Cold start vs warm latency
The first callTool blocks ~1.4s on a fresh repo while sverklo builds the index. Subsequent calls are <50ms. For a multi-turn agent session, eagerly call sverklo_status at startup to amortize cold start before the user is waiting on a real query.
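The eager warm-up can be a one-liner wrapped in a timing helper. A sketch against a minimal structural interface — only `callTool` is assumed, which the official SDK client provides, so the same helper works with a real client or a stub:

```typescript
// Sketch: trigger sverklo's indexing at session start so the first real
// query lands on a warm index. `ToolClient` is a minimal structural type;
// the official MCP Client satisfies it via callTool().
interface ToolClient {
  callTool(req: { name: string; arguments: Record<string, unknown> }): Promise<unknown>;
}

async function warmUp(client: ToolClient): Promise<number> {
  const t0 = Date.now();
  // Cheap once warm; on a fresh repo this call absorbs the cold start.
  await client.callTool({ name: "sverklo_status", arguments: {} });
  return Date.now() - t0; // elapsed ms — log to stderr, never stdout
}

// Usage with a stub standing in for the connected SDK client:
const stub: ToolClient = {
  async callTool(req) {
    return { content: [{ type: "text", text: `ok: ${req.name}` }] };
  },
};
const elapsed = await warmUp(stub);
console.error(`warm-up took ${elapsed}ms`);
```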
Repo path resolution
Sverklo defaults to the working directory of the spawned process. Pass SVERKLO_PROJECT=/path/to/repo via the transport env if you want to be explicit. For multi-repo setups, run one transport per repo and route by path.
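For the multi-repo case, longest-prefix matching on file paths is one workable routing policy. A sketch with plain strings standing in for per-repo connected clients (`routeByPath` is illustrative, not a sverklo or SDK API):

```typescript
// Sketch: one sverklo transport per repo root, routed by path prefix.
// In a real setup the map values would be connected MCP clients; strings
// keep the routing logic self-contained here.
const clientsByRepo = new Map<string, string>([
  ["/work/frontend", "frontend-client"],
  ["/work/backend", "backend-client"],
]);

function routeByPath(filePath: string): string | undefined {
  // Pick the repo root with the longest matching prefix, so nested
  // repos resolve to the most specific transport.
  let best: string | undefined;
  let bestLen = -1;
  for (const [root, client] of clientsByRepo) {
    if (filePath.startsWith(root + "/") && root.length > bestLen) {
      best = client;
      bestLen = root.length;
    }
  }
  return best;
}

console.error(routeByPath("/work/backend/src/db.ts")); // → "backend-client"
```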
Memory persistence across sessions
Sverklo's memory layer (sverklo_remember, sverklo_recall) writes to ~/.sverklo/<repo>/ automatically. If your agent runs in an ephemeral container, mount that path or memories evaporate when the container does. Memories are git-SHA-pinned so even if the underlying file moves, the recall still resolves correctly.
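If you need that mount path programmatically, it can be derived from the scheme above. A sketch that assumes `<repo>` is the basename of the project root — verify the exact layout against your sverklo version before relying on it:

```typescript
import os from "node:os";
import path from "node:path";

// Sketch: compute the directory sverklo's memory layer writes to, so an
// ephemeral container can bind-mount it before the agent starts.
// Assumption: <repo> is the basename of the project root.
function sverkloMemoryDir(projectRoot: string): string {
  return path.join(os.homedir(), ".sverklo", path.basename(projectRoot));
}

console.error(sverkloMemoryDir("/work/my-app")); // e.g. /home/me/.sverklo/my-app
```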
Verifying the wiring
One-liner to confirm the agent and sverklo handshake completed:
npx sverklo --version # should print 0.20.x or later
npx sverklo doctor # 8 health checks; expects 8/8 green
If doctor reports "MCP handshake: not detected," the spawned process exited before stdio framing completed — usually a path or PATH issue. Fix with an explicit binary path in StdioClientTransport.command.
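One way to get that explicit binary path is to probe common install locations and fall back to `npx` only when nothing concrete is found. A sketch — the candidate paths are assumptions about a typical npm layout, not documented sverklo behavior:

```typescript
import fs from "node:fs";
import path from "node:path";

// Sketch: prefer a concrete sverklo binary over `npx` resolution, so the
// spawned process never dies to a PATH lookup before the MCP handshake.
function resolveSverklo(projectRoot: string): { command: string; args: string[] } {
  const candidates = [
    path.join(projectRoot, "node_modules", ".bin", "sverklo"),
    "/usr/local/bin/sverklo",
  ];
  for (const c of candidates) {
    if (fs.existsSync(c)) return { command: c, args: [] };
  }
  // No local binary found — fall back to npx resolution.
  return { command: "npx", args: ["-y", "sverklo"] };
}

const spawn = resolveSverklo(process.cwd());
console.error(`spawning: ${spawn.command} ${spawn.args.join(" ")}`);
```

Feed `spawn.command` and `spawn.args` into `StdioClientTransport` in place of the hardcoded `npx` from the recipe.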
The receipts
This wiring is what powers sverklo's appearance on the bench:primitives 90-task benchmark: the bench harness is itself a custom MCP client (one process, batched callTool against the same surface a Cursor or Claude Code session uses). Sverklo scores F1 0.56 at 469 average input tokens and 1.0 tool calls per task — numbers that are reproducible precisely because the harness uses the same wiring a custom agent would.
Install
npm install -g sverklo
cd your-project
sverklo init
sverklo init auto-detects which MCP-aware editors you have installed (Claude Code, Cursor, Windsurf, Zed, VS Code, JetBrains) and writes the right config files. For custom-client integrations, the recipe above is sufficient — you don't need to run init first.
GitHub: sverklo/sverklo · @modelcontextprotocol/sdk on GitHub · 90-task retrieval benchmark · Cursor SDK recipe
Read next
- Recipe with code: Cursor SDK + sverklo — 30 lines, programmatic agent
- Sverklo's stdio audit + fuzzer: MCP STDIO command injection audit
- How sverklo loses honestly: bench:primitives — every task where smart-grep beats us is on the same page as the wins
- The full integration list: /recipes/