Tutorials & Guides

How to connect your docs to Cursor, Claude Code, and other AI coding tools

Your users build with AI coding agents now. If those agents can't read your current docs, they guess — and ship the guess into production. Here's how an MCP server fixes that, and why it should already be part of your docs site.

A code editor with an AI assistant panel querying a documentation site's MCP server and returning exact API reference snippets

The developer integrating your product this week probably hasn't opened your documentation site. They opened Cursor, or Claude Code, or Windsurf, described what they wanted in plain English, and let the agent write the integration. If the agent could reach your current docs, that code is correct. If it couldn't, the agent did what these tools always do when context is missing: it pattern-matched against everything it has ever seen, produced something that looks exactly like a real integration against your API, and shipped it into the developer's editor. Wrong endpoint. Wrong auth header. A parameter that was renamed two releases ago. Nobody notices until it breaks.

You can close that gap with one thing: an MCP server in front of your documentation. Here's what that means and how to set it up.

What MCP is, in one paragraph

The Model Context Protocol is an open standard that lets AI applications call external tools and pull in external data on demand. An MCP server exposes a small set of named operations; an MCP client — Claude Code, Cursor, Windsurf, Claude Desktop, and a growing list of others — connects to it and can invoke those operations mid-conversation. When your documentation runs an MCP server, the agent stops guessing what your API looks like and starts asking. The difference between those two modes is the difference between a correct integration and a plausible one.

What a docs MCP server actually does

A good documentation MCP server exposes three operations, and they map cleanly onto how an agent thinks:

  • list_docs — "what pages exist?" The agent gets the shape of your documentation: every page, its title, its path. This is how it discovers that you have a page on webhooks at all.
  • search_docs — "which pages are about X?" Semantic search across the whole corpus. The agent asks for "rate limiting" and gets back the pages that actually cover it, ranked, not a keyword match on the word "rate."
  • get_doc — "give me that page, in full." The agent pulls the exact content of the page it needs, as clean Markdown, with no navigation chrome, no client-side rendering, no truncation.

Put together, that's a retrieval loop: discover, narrow, fetch. It's the same loop a careful human follows in your docs, except the agent does it in a second and then writes the code.
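To make the loop concrete, here is a toy sketch of those three operations and the discover → narrow → fetch cycle an agent runs over them. The corpus, page paths, and keyword-counting search are all illustrative stand-ins — a real docs MCP server uses semantic (embedding-based) search, not substring matching:

```python
# Toy stand-ins for a docs MCP server's three tools, plus the
# retrieval loop an agent runs over them. The corpus is invented.

DOCS = {
    "/webhooks": ("Webhooks", "Subscribe to events like invoice.paid via POST /v1/webhooks."),
    "/rate-limits": ("Rate limiting", "Requests are limited to 100 per minute per API key."),
    "/auth": ("Authentication", "Send your key in the Authorization: Bearer header."),
}

def list_docs():
    """Discover: every page's path and title."""
    return [{"path": p, "title": t} for p, (t, _) in DOCS.items()]

def search_docs(query):
    """Narrow: rank pages whose text mentions any query term.
    (A real server would rank by embedding similarity instead.)"""
    terms = query.lower().split()
    hits = []
    for path, (title, body) in DOCS.items():
        score = sum((title + " " + body).lower().count(t) for t in terms)
        if score:
            hits.append((score, path))
    return [path for _, path in sorted(hits, reverse=True)]

def get_doc(path):
    """Fetch: the full page content as clean Markdown."""
    title, body = DOCS[path]
    return f"# {title}\n\n{body}"

# The loop, end to end: narrow to the relevant page, then fetch it
# in full before writing any integration code.
pages = search_docs("webhook subscription")   # -> ["/webhooks"]
page = get_doc(pages[0])
print(page.splitlines()[0])                   # "# Webhooks"
```

The point of the sketch is the shape, not the internals: the agent never needs the whole corpus in context, only the one page the loop lands on.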

This is categorically better than the alternatives an agent falls back on. Scraping your rendered HTML gives it a page full of layout it has to strip, and breaks entirely if your docs render on the client. Relying on training data gives it whatever your API looked like whenever the model's corpus was frozen — months ago, at best. An MCP server gives it the version of your docs that's live right now.

Setting it up with Doccupine

If your docs run on Doccupine, the MCP server already exists. Every generated site exposes an endpoint at /api/mcp with list_docs, search_docs, and get_doc wired up — the same content your AI chat assistant uses for retrieval, exposed over the protocol. Semantic search needs an embeddings key configured (the same one you set up for AI chat); discovery and direct fetch work regardless.

To point an AI coding tool at it, you add a server entry to that tool's MCP config. Most MCP-aware editors take a block that looks like this:

{
  "mcpServers": {
    "acme-docs": {
      "type": "http",
      "url": "https://docs.acme.com/api/mcp"
    }
  }
}
1. Find your docs URL

It's whatever your documentation is served from — docs.acme.com, acme.com/docs, a *.doccupine.app subdomain, or your custom domain. The MCP endpoint is that origin plus /api/mcp.

2. Add the server to your AI tool

In Claude Code, drop the block above into a .mcp.json at your project root (or run claude mcp add). In Cursor, it goes in ~/.cursor/mcp.json or the project's .cursor/mcp.json. In Windsurf, it's the MCP section of the editor settings. The shape is the same everywhere — a name and a URL.

3. Ask a question that needs the docs

Restart the tool so it picks up the new server, then prompt it with something your documentation answers — "set up a webhook subscription for the invoice.paid event using the Acme API." Watch it call search_docs, then get_doc, then write the code against what it found.
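For the curious, here is roughly what that search_docs call looks like on the wire. MCP is framed as JSON-RPC 2.0, so the client POSTs a tools/call request to the endpoint; the argument name "query" is an assumption about this particular server's tool schema, not something the protocol mandates:

```python
import json

# The JSON-RPC 2.0 body an MCP client POSTs to invoke a tool.
# "search_docs" and its "query" argument are this server's tool
# schema; the envelope (jsonrpc/method/params) is the protocol's.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",
        "arguments": {"query": "webhook subscription invoice.paid"},
    },
}
print(json.dumps(request, indent=2))
```

You never write this envelope yourself — the editor's MCP client does — but seeing it makes clear how little ceremony sits between the agent and your docs.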

That's the whole setup. The work that's normally hard — building the retrieval index, keeping it in sync with your content, exposing it over a protocol agents understand — is the part that's done for you.

When this matters most

An MCP server in front of your docs pays off hardest in three situations, and it's worth knowing which one you're in:

Your API surface is large or changes often. The more endpoints, parameters, and edge cases you have, the more an agent working from stale memory gets wrong. Live retrieval is the only thing that keeps pace.

Your docs are the integration path. If developers wire up your product mostly by reading docs rather than by installing a maintained SDK, then the docs are the SDK, and an agent that can't read them can't integrate. (We've written about why documentation has become the new SDK — the MCP server is the practical other half of that argument.)

You support an internal platform. If your "users" are other teams inside your company building against your internal services with their own agents, an MCP endpoint on your internal docs is the cheapest possible reduction in "wait, how does auth work on this service again?" Slack threads.

If you're not on Doccupine

The pattern is portable. To stand up your own docs MCP server you need: a place to host an HTTP endpoint that speaks the protocol, an index of your documentation content (titles and paths for list_docs, an embeddings index for search_docs, raw page content for get_doc), and a job that rebuilds that index whenever your docs change. The MCP side is a thin wrapper — the protocol spec and the official SDKs handle most of it. The real work is the index and keeping it fresh, which is exactly the work a docs platform should be doing for you anyway. If yours doesn't, that's worth weighing.
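As a minimal sketch of that indexing job, the list_docs and get_doc halves can be as simple as a walk over your Markdown source — record each page's title, path, and raw content, and rebuild whenever the docs change. The directory layout and title convention here are assumptions; the embeddings index behind search_docs is deliberately elided, since that part depends on your embedding provider:

```python
import pathlib
import re
import tempfile

# A minimal indexing pass for a self-hosted docs MCP server: walk the
# Markdown source and record title + path (list_docs) and raw content
# (get_doc). A real search_docs index would add embeddings on top.

def build_index(docs_root):
    index = {}
    for md in sorted(pathlib.Path(docs_root).rglob("*.md")):
        text = md.read_text(encoding="utf-8")
        # Assume the first "# " heading is the page title.
        m = re.search(r"^#\s+(.+)$", text, re.MULTILINE)
        title = m.group(1) if m else md.stem
        path = "/" + md.relative_to(docs_root).with_suffix("").as_posix()
        index[path] = {"title": title, "content": text}
    return index

# Demo against a throwaway docs tree; in production this runs on every
# docs change, and the MCP layer serves tools straight out of the index.
with tempfile.TemporaryDirectory() as root:
    (pathlib.Path(root) / "webhooks.md").write_text("# Webhooks\n\nSubscribe to events.")
    index = build_index(root)
    print(index["/webhooks"]["title"])  # "Webhooks"
```

The MCP wrapper around an index like this is thin — the official SDKs handle the protocol — which is why the freshness of the index, not the protocol, is where the real maintenance lives.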

Either way, the test is simple: open your AI coding tool, point it at your product, and ask it to build something. If it writes the integration against your current API without you pasting in a single doc page, your retrieval surface is doing its job. If it hallucinates, it isn't — and an MCP server is the fix.


If you wire this up and an agent surprises you — good or bad — I'd like to hear about it. I read every reply at [email protected].

Written by Luan Gjokaj

On the Doccupine team, building the open-source, AI-ready documentation platform.