Migrating an existing WordPress API to MCP: a 4-week playbook
A clean greenfield MCP build is straightforward. Migrating from an existing, working WordPress REST API while it keeps serving production traffic is the harder shape. This playbook is what I use to move from “we have /wp-json/ and one partner consumer” to “we have /wp-json/, that partner, and an LLM-friendly MCP server in front” without breaking anything.
This article anchors to the MCP server development service pillar.
TL;DR
- Four weeks: audit, scaffold, parallel-run, cutover.
- REST stays live for existing consumers; MCP is an addition, not a replacement.
- Zod schemas come from the REST audit, not from a wishlist.
- Parallel-run with one internal agent before any external traffic touches MCP.
- Logpush captures every tool call; mismatches between MCP and REST responses surface as schema failures.
Week 1: audit /wp-json/
The audit is a spreadsheet: one row per endpoint, with these columns:
| Column | What goes in it |
|---|---|
| Endpoint | /wp-json/wp/v2/posts, /wp-json/wc/v3/products, etc. |
| HTTP verbs | GET, POST, PUT, DELETE supported |
| Current consumers | Storefront, partner ERP, webhook receivers |
| Traffic volume | Requests per day from access logs |
| Data sensitivity | Public, customer-only, admin-only |
| Mutates state? | Yes/no |
| Maps to MCP tool? | Proposed tool name and intent |
| Notes | Plugins involved, custom field shapes, gotchas |
For a typical WooCommerce store the spreadsheet has 20 to 60 rows. Most of them map cleanly to MCP tools; a handful (admin internals, plugin-specific endpoints, webhook receivers) stay REST-only.
The output of week 1 is two artefacts:
- The proposed tool inventory: `catalogue.list`, `product.detail`, `order.intent`, `order.status`, `inventory.check`, plus whatever is specific to your build.
- A first draft of the Zod schemas for each tool's input and output, derived from the actual REST responses you captured during the audit.
I capture REST responses with curl plus jq for shape inspection, or a Postman collection for the team to share. The point is empirical truth, not what the README says the response should look like. WordPress plugins are notorious for adding fields that documentation never gets around to mentioning; the audit catches them.
Week 2: scaffold the MCP server
The week-2 deliverable is a working MCP server, deployed to a non-production Cloudflare Workers environment, with the tool inventory from week 1 implemented as thin adapters over the existing REST endpoints.
The server skeleton:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { catalogueListInput, catalogueListOutput, handleCatalogueList } from "./tools/catalogue-list.js";
import { productDetailInput, productDetailOutput, handleProductDetail } from "./tools/product-detail.js";
// ... one import per tool

// Transport wiring (StreamableHTTPServerTransport) lives in the Worker's
// fetch handler; this factory only assembles the server and its tools.
export function createMcpServer(env: Env): McpServer {
  const server = new McpServer({
    name: "wppoland-mcp",
    version: "0.1.0",
  });

  server.registerTool(
    "catalogue.list",
    { inputSchema: catalogueListInput, outputSchema: catalogueListOutput },
    (input) => handleCatalogueList(input, env),
  );

  server.registerTool(
    "product.detail",
    { inputSchema: productDetailInput, outputSchema: productDetailOutput },
    (input) => handleProductDetail(input, env),
  );

  // ... one registerTool() call per tool

  return server;
}
```
Each handler is a thin adapter. It takes the validated input, builds a query string, fetches from /wp-json/, maps the response into the schema.org-aligned output shape, runs the output parse, returns. Business logic stays in WordPress; the handler is mechanical translation.
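As a sketch of what "mechanical translation" means, here is the mapping step of a hypothetical `product.detail`-style handler. The WooCommerce field names match what `/wp-json/wc/v3/products` actually returns; the output field names are illustrative, not from a real build:

```typescript
// Hypothetical mapping step of a thin adapter: a raw WooCommerce product
// is reshaped into a schema.org/Product-aligned output. No business logic,
// just field renames and type coercions.
interface WooProduct {
  id: number;
  name: string;
  sku: string;
  price: string;          // WooCommerce serialises prices as strings
  permalink: string;
  stock_status: string;   // "instock" | "outofstock" | "onbackorder"
}

interface ProductOutput {
  "@type": "Product";
  identifier: string;
  name: string;
  sku: string;
  url: string;
  offers: {
    "@type": "Offer";
    price: number;
    availability: string;
  };
}

function mapWooProduct(p: WooProduct): ProductOutput {
  return {
    "@type": "Product",
    identifier: String(p.id),
    name: p.name,
    sku: p.sku,
    url: p.permalink,
    offers: {
      "@type": "Offer",
      price: Number(p.price), // string -> number; the Zod output parse catches NaN
      availability: p.stock_status === "instock"
        ? "https://schema.org/InStock"
        : "https://schema.org/OutOfStock",
    },
  };
}
```

The output parse runs after this function, so a plugin that changes `price` to something non-numeric fails loudly in the logs rather than silently reaching the agent.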
Week 2 also includes the auth scaffolding from the MCP authentication patterns guide. For the parallel-run in week 3 I use a single test token with all scopes; production scopes get tightened in week 4.
The wrangler.toml for the preview environment points at a staging WordPress install or a copy of production. The MCP server is reachable on a *.mcp-staging.wppoland.workers.dev URL accessible only to the team.
Week 3: parallel-run with synthetic traffic
Week 3 is where bugs surface. The pattern:
Build a comparison harness. A small TypeScript script that takes a tool name and an input, calls the MCP server, then calls the equivalent REST endpoint directly, and diffs the two responses. The diff is logged with the tool name, input, and a JSON-pointer-style path to each mismatch.
```typescript
// Harness helpers (callMcp, callEquivalentRest, jsonDiff, logMismatch)
// are defined elsewhere in the script.
async function compareToolToRest(toolName: string, input: unknown) {
  const mcpResponse = await callMcp(toolName, input);
  const restResponse = await callEquivalentRest(toolName, input);
  const diff = jsonDiff(restResponse, mcpResponse);
  if (diff.length > 0) {
    await logMismatch({ toolName, input, diff });
  }
}
```
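The `jsonDiff` helper does not need a library; a recursive walk that emits one JSON-pointer-style path per mismatch is enough for the harness. A minimal sketch:

```typescript
// Minimal recursive diff: returns one JSON-pointer-style path per mismatch.
// Good enough for the comparison harness; not a full RFC 6902 implementation.
function jsonDiff(a: unknown, b: unknown, path = ""): string[] {
  if (a === b) return [];
  // Primitives (or null) that differ: report this path and stop descending.
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null) {
    return [path || "/"];
  }
  // Walk the union of keys so additions and removals both surface.
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  const diffs: string[] = [];
  for (const key of keys) {
    diffs.push(...jsonDiff(
      (a as Record<string, unknown>)[key],
      (b as Record<string, unknown>)[key],
      `${path}/${key}`,
    ));
  }
  return diffs;
}
```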
Run every tool against a representative input set. For catalogue.list: empty query, a query that returns hundreds of products, a query with category filter, a query with price range, a query that returns nothing. For product.detail: known SKU, unknown SKU, SKU with variations, SKU with custom fields. Cover the edge cases the audit surfaced.
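One way to keep that input set versioned alongside the harness is a plain table of labelled cases. The tool names match the inventory above; the inputs and SKUs are illustrative:

```typescript
// Illustrative test matrix for the comparison harness. Each case is fed to
// compareToolToRest; the labels make mismatch logs readable at a glance.
interface HarnessCase {
  toolName: string;
  label: string;
  input: Record<string, unknown>;
}

const harnessCases: HarnessCase[] = [
  { toolName: "catalogue.list", label: "empty query", input: {} },
  { toolName: "catalogue.list", label: "large result set", input: { query: "shirt" } },
  { toolName: "catalogue.list", label: "category filter", input: { category: "hoodies" } },
  { toolName: "catalogue.list", label: "price range", input: { minPrice: 10, maxPrice: 50 } },
  { toolName: "catalogue.list", label: "no results", input: { query: "zzz-no-such-product" } },
  { toolName: "product.detail", label: "known SKU", input: { sku: "TEE-001" } },
  { toolName: "product.detail", label: "unknown SKU", input: { sku: "NOPE-000" } },
  { toolName: "product.detail", label: "SKU with variations", input: { sku: "TEE-VAR" } },
  { toolName: "product.detail", label: "SKU with custom fields", input: { sku: "TEE-META" } },
];
```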
Point one internal agent at the MCP server. Claude Desktop with the MCP server configured as a remote tool source is the configuration I use. A team member spends 30 minutes per day for the week interacting with their own store via the agent, capturing surprises in a shared doc.
Tighten schemas based on what comes back. Every Zod validation failure in the log is a schema bug or a mapping bug. Fix it; redeploy the preview Worker; rerun the harness.
The output of week 3 is a passing harness with zero diffs and a tool inventory that has survived 30 minutes of real agent interaction per day for five days. If either of those is missing, week 4 does not start.
Week 4: cutover and observability
The cutover is anticlimactic if weeks 1 to 3 went well.
Deploy production Workers. `wrangler deploy --env production`. The production MCP server points at the production WordPress origin, with production-scoped tokens issued through the WordPress admin.
Issue tokens to the first real agent runtime. This is usually one of: an internal agent that staff use, a partner integration that has asked for an MCP surface, or a public-facing assistant on the storefront. Start with the smallest, lowest-risk consumer.
Wire Cloudflare Logpush to a long-term store. The log fields covered in the building an MCP server for WooCommerce article are the right defaults: tool name, input hash, latency, validation outcome, token scope. The store is whatever your team already uses (BigQuery, ClickHouse, S3 + Athena).
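A sketch of what one log record can look like, with the input hashed so raw customer data never lands in the log store. Field names are illustrative; in a Worker you would use the async Web Crypto `crypto.subtle.digest` rather than the synchronous Node form shown here:

```typescript
import { createHash } from "node:crypto";

// Illustrative log record shape for one tool call.
interface ToolCallLog {
  toolName: string;
  inputHash: string;        // sha-256 of the serialised input, not the input itself
  latencyMs: number;
  validation: "ok" | "input_invalid" | "output_invalid";
  tokenScope: string;
}

// JSON.stringify is key-order sensitive; a real implementation should
// canonicalise key order before hashing so equal inputs hash equally.
function hashInput(input: unknown): string {
  return createHash("sha256").update(JSON.stringify(input)).digest("hex");
}
```

The hash still lets you group log rows by identical input (for spotting agent retry loops) without storing the input itself.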
Set up the watch dashboard. Three queries on day one:
- Tool calls per minute, broken down by tool name. Detects burst load.
- Validation failure rate per tool, broken down by code (`input_invalid`, `output_invalid`). Detects regressions.
- Latency p50/p95/p99 per tool. Detects upstream WordPress slowness leaking through.
Keep REST live, untouched. The existing storefront, the existing partner integrations, the existing webhook receivers all keep using /wp-json/ exactly as before. MCP is an addition, not a replacement. This is the single most important rule of the migration.
Failure shapes worth planning for
Six things I have watched go wrong on real migrations:
REST response field that the audit missed. A plugin adds meta_data: [...] to product responses. The MCP output schema does not include it. The Zod parse fails on real data. Fix: rerun the audit on production traffic, expand the schema or explicitly drop the field with a .transform().
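The "explicitly drop the field" half of that fix boils down to an allowlist step. In the real handler this lives in a Zod `.transform()` on the output schema; the standalone function below is an illustrative sketch of the same operation:

```typescript
// Sketch of explicitly dropping unaudited fields before the output parse.
// The allowlist is illustrative; in production this is a Zod .transform().
function pickAllowedFields<T extends Record<string, unknown>>(
  raw: T,
  allowed: readonly string[],
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const key of allowed) {
    if (key in raw) out[key] = raw[key];
  }
  return out;
}
```

Dropping explicitly (rather than passing through and hoping) means the next surprise plugin field is invisible to agents until you decide it belongs in the schema.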
Permalink mismatch between staging and production. The MCP product.detail tool returns the WooCommerce permalink field. Staging permalinks are https://staging.example.com/...; production permalinks are https://example.com/.... Test data passes; production fails the URL validator. Fix: configure the staging Worker with a permalink-rewrite step that mirrors production behaviour.
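The permalink-rewrite step is a small host mapping applied before the URL validator runs. The hostnames below are illustrative:

```typescript
// Sketch of the staging permalink-rewrite step: known staging hosts are
// mapped to the production host so staging responses pass the same URL
// validator as production. Hostnames are illustrative.
const STAGING_TO_PRODUCTION: Record<string, string> = {
  "staging.example.com": "example.com",
};

function rewritePermalink(permalink: string): string {
  const url = new URL(permalink);
  const productionHost = STAGING_TO_PRODUCTION[url.hostname];
  if (productionHost) url.hostname = productionHost;
  return url.toString();
}
```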
WooCommerce variation handling. The audit captured the simple-product response shape. Variation responses are different (the variant SKUs live under /wp-json/wc/v3/products/<id>/variations). Fix: handle variations as a separate fetch in product.detail, mirror to schema.org hasVariant.
Auth token leaks into logs. A handler logs the full Authorization header for debugging. The token ends up in the log store. Fix: redact the header in the log layer; rotate every token issued before the redaction was deployed.
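The redaction belongs in the log layer, not in each handler, so no future debugging statement can reintroduce the leak. A minimal sketch (the header list is illustrative):

```typescript
// Sketch of header redaction in the log layer: sensitive headers are
// replaced before any record is written. Header names are illustrative.
const REDACTED_HEADERS = new Set(["authorization", "cookie", "x-api-key"]);

function redactHeaders(headers: Record<string, string>): Record<string, string> {
  const safe: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    // Case-insensitive match: HTTP header names are case-insensitive.
    safe[name] = REDACTED_HEADERS.has(name.toLowerCase()) ? "[REDACTED]" : value;
  }
  return safe;
}
```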
Plugin update breaks a REST response. WooCommerce 9.x renames a field, the MCP handler still expects the old name, the Zod parse fails. Fix: pin WooCommerce versions in staging, run the comparison harness on every WordPress upgrade, treat the harness as part of the upgrade gate.
Agent loops on a malformed tool call. A buggy agent retries order.intent 100 times in a minute when the input fails validation. Without rate limiting the WordPress origin sees 100 fan-out calls. Fix: rate-limit per principal as documented in the MCP authentication patterns guide, and return a retry_after_seconds hint in the error envelope.
Splitting the migration across more time
Four weeks is the default. Two adjustments:
Smaller surface, two weeks. Three tools, one consumer, no auth complexity. Compress audit and scaffold into week 1, parallel-run and cutover into week 2. The pattern is the same; the calendar is shorter.
Larger surface, six weeks. A dozen tools, multiple auth modes, sensitive mutating actions. Add a week between scaffold and parallel-run for security review. Add a week between parallel-run and cutover for a soft launch with a single OAuth-authorized user before the wider rollout.
The four phases stay in the same order regardless of the time budget.
What stays on the WordPress side
This is worth saying explicitly. After the migration:
- The WordPress admin UI is unchanged.
- The Block Editor is unchanged.
- The user authentication system is unchanged.
- The existing REST endpoints are unchanged and continue to serve their existing consumers.
- The webhook firing logic is unchanged.
- The data layer (`wp_posts`, `wp_postmeta`, WooCommerce tables) is unchanged.
What is added: a Cloudflare Workers MCP server, a token-issuance UI on the WordPress admin side (a small plugin or theme function), a Cloudflare Logpush configuration. Everything else is unchanged.
This is what makes the migration low-risk. If the MCP server has a bad day, you turn off the Worker. The agent surface goes down. The storefront keeps working, the partner integrations keep working, orders keep coming in.
Where this fits in the cluster
This article covers the migration shape. For the implementation walkthrough see building an MCP server for WooCommerce. For the auth strategy see MCP authentication patterns. For typed tool design see writing typed catalogue tools with Zod for MCP. For the protocol-level decision see MCP vs REST. The pillar is MCP server development.
Pricing is individual because the migration scope depends on the number of endpoints in the audit, the auth complexity, and the consumer count.
