
Snapshot

| Feature | Context7 | Ref |
|---|---|---|
| Approach | Batch retrieval | Iterative search + read |
| Content | Code snippets | Any content type (source of truth) |
| Token optimization | Consistent ~3k tokens/query | Adaptive 500-5k tokens/query |
| Tools | resolve-library-id, query-docs | ref_search_documentation, ref_read_url |
| Scrape any URL on the fly | No | Yes |
| Private repos | Paid add-on | Included |
| PDF & file upload | No | Yes |
| Repo indexing | Paid | Free |
| Prompt injection protection | In-house | Centure.ai |
| Paid plan | $10/mo for 500 queries | $9/mo for 1,000 queries |

Search Philosophy

Both Context7 and Ref now use stateful sessions to optimize token usage and avoid duplicate results. Where they differ is in search patterns and feature depth.

Context7’s Approach

Context7 asks your agent to pick a library and then query that library’s docs. Strengths:
  • Pre-processed code snippets
  • One of the most popular MCP servers, demonstrating why MCP is valuable
Limitations:
  • Batch retrieval doesn’t match iterative agent/human search patterns
  • Limited to snippets from public documentation—no private repos, PDFs, or file uploads without paid upgrades
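
Context7's two-step flow can be sketched as follows. The function names mirror its resolve-library-id and query-docs tools, but the bodies here are mocks (including the library ID and catalog), since real calls go through an MCP client session:

```python
# Mock sketch of Context7's batch-retrieval pattern. Real agents invoke the
# resolve-library-id and query-docs MCP tools; the bodies below are stand-ins.

def resolve_library_id(name: str) -> str:
    # Mock: map a human-readable library name to a library ID.
    catalog = {"next.js": "/vercel/next.js"}
    return catalog[name.lower()]

def query_docs(library_id: str, topic: str) -> str:
    # Mock: return one fixed batch of pre-processed snippets (~3k tokens),
    # regardless of how much the agent actually needed.
    return f"code snippets about {topic} from {library_id}"

lib_id = resolve_library_id("Next.js")
snippets = query_docs(lib_id, "app router")
```

The limitation is visible in the shape of the calls: the agent chooses the library and topic, but has no control over what comes back in the batch.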

Ref’s Approach

Ref uses agentic search with MCP sessions: it provides search (ref_search_documentation) and read (ref_read_url) tools, allowing agents to:
  1. Issue queries and get result overviews
  2. Selectively read only relevant documents
  3. Iterate efficiently with session state
Ref emphasizes source of truth access so agents can read any content from docs (explanations, warnings, prose, code), not just pre-extracted code snippets. This prevents information loss from pre-processing while still returning only the relevant chunks needed. Session-powered improvements:
  • Adaptive token usage - agents choose which pages to read, so simple queries return only what’s needed while complex queries can dig deeper
  • Never returns the same link twice - agents can access prior results from context
  • On-the-fly extraction - automatically filter large pages (e.g., 90K token Figma docs → 5K relevant tokens)
  • Pre-fetching - results are cached for faster reads
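
The search-then-read loop looks roughly like this. The tool bodies and URLs are mocks; a real agent would invoke ref_search_documentation and ref_read_url over an MCP session:

```python
# Mock sketch of Ref's iterative search + read pattern.

def ref_search_documentation(query: str) -> list[dict]:
    # Mock: return lightweight result overviews, not full pages.
    return [
        {"url": "https://example.com/docs/auth", "overview": "Auth setup guide"},
        {"url": "https://example.com/docs/faq", "overview": "General FAQ"},
    ]

def ref_read_url(url: str) -> str:
    # Mock: return only the relevant chunks of the page, not the whole thing.
    return f"relevant sections of {url}"

seen: set[str] = set()  # session state: never fetch the same link twice

results = ref_search_documentation("how do I configure auth?")
for result in results:
    # Skip duplicates and irrelevant hits -> adaptive token usage.
    if result["url"] in seen or "auth" not in result["overview"].lower():
        continue
    seen.add(result["url"])
    page = ref_read_url(result["url"])  # read only the pages worth reading
```

The session state is what keeps simple queries cheap: a page is read only when its overview looks relevant, and a link already in context is never fetched again.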

Why Token Efficiency Matters

Both servers optimize for token usage because:
  • Tokens cost money
  • Context rot: irrelevant tokens degrade output quality
  • Agents build context over multiple searches, so session-level metrics matter more than single-query precision

Why Ref Wins

  • 2x the value: $9/month gets you 1,000 queries on Ref vs. 500 queries for $10/month on Context7. That's more than double the queries per dollar.
  • Agent-controlled retrieval: Ref lets your agent decide which pages to read, adapting token usage to the task at hand. Context7 returns fixed batches: the agent only picks a library, then gets whatever the system decides to send back.
  • Source of truth access: Ref can retrieve any content type from documentation (explanatory text, warnings, prose, and code), returning only the relevant chunks needed. Context7 limits results to pre-extracted code snippets, which can miss important context and explanations.
  • More sources, no extra cost: Ref includes private GitHub repos, PDF indexing, and file uploads in the base plan. Context7 charges extra for private repos and doesn't support PDFs or file uploads at all.
  • Free repo indexing: Index your own repositories at no additional cost. Context7 charges for this feature.
  • Matches how agents work: Ref's search + read tools align with how frontier models are trained. OpenAI explicitly requires this pattern for Deep Research integration, signaling this is the future of agentic search.
  • Prompt injection protection: Ref uses Centure.ai to detect and block prompt injection attacks in real time. When your agent scrapes external websites or processes user-uploaded content, Centure's multi-modal analysis protects against malicious instructions embedded in text, images, or other data sources.
  • Enterprise-ready: Built-in GitHub, PDF, and Markdown indexing with team RBAC; no custom pipelines required.
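
The pricing claim is simple arithmetic; this sketch just checks the queries-per-dollar ratio implied by the two plans quoted above:

```python
# Cost per query from the published plans quoted above.
ref_cost_per_query = 9 / 1000       # $9/mo for 1,000 queries
context7_cost_per_query = 10 / 500  # $10/mo for 500 queries

# How many times more queries per dollar Ref gives you.
ratio = context7_cost_per_query / ref_cost_per_query
print(f"{ratio:.1f}x more queries per dollar on Ref")  # prints: 2.2x more queries per dollar on Ref
```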

Learn more

Learn more about how Ref evaluates agentic search and how Ref leverages advanced MCP features from the blog.