Anonymised case study

Confidential MCP server for an editorial workflow

The client cannot be named. The publishable part: how the MCP boundary was drawn, why human approval stayed mandatory, and how cost control was built in from day one rather than bolted on after the first invoice.

Starting constraint

An editorial team produced multilingual long-form content. They wanted AI assistance for drafting and translation, but they refused two things: opaque cost, and unsupervised publishing. Both are reasonable refusals.

The brief was: give us AI tooling that we can audit, that does not surprise us with a five-figure bill, and that never publishes anything a human did not approve.

Why MCP and not direct API calls

Direct calls to Anthropic or OpenAI from WordPress code would have worked. They would also have spread credentials across plugins, hidden the cost of each feature, and made auditing unrealistic.

MCP gave one boundary: the editorial team's WordPress install talks to one MCP server, the MCP server talks to model providers, and every tool call is typed, logged, and scoped.

Tool design

Every tool was typed with Zod and named for the editorial action it represented: suggest_outline, draft_section, translate_to_locale, evaluate_against_voice. Each tool had a clear input schema, output schema, and side-effect declaration.
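A minimal sketch of what one of these tool definitions could look like using the MCP TypeScript SDK and Zod. The schema fields, the locale list, and the callModel stub are illustrative assumptions, not the client's code; only the tool name comes from the case study.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "editorial-mcp", version: "1.0.0" });

// Stand-in for the model-provider call; the real server calls Anthropic or
// OpenAI here and records the cost of the request against the budget ledger.
async function callModel(prompt: string): Promise<string> {
  return `[model output for: ${prompt.slice(0, 40)}]`;
}

// translate_to_locale produces content only; it never writes to wp_posts.
server.tool(
  "translate_to_locale",
  {
    draftId: z.string(),                      // assumed id of a queued draft
    targetLocale: z.enum(["de", "fr", "es"]), // assumed locale matrix
    text: z.string(),                         // source text to translate
  },
  async ({ draftId, targetLocale, text }) => {
    const translated = await callModel(
      `Translate draft ${draftId} into ${targetLocale}:\n${text}`
    );
    return { content: [{ type: "text", text: translated }] };
  }
);
```

The named, single-purpose tools are what make the audit log readable: every entry says which editorial action ran, with which typed inputs.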

Tools that produced content never wrote directly to wp_posts. They wrote to a draft queue. A human approved before publishing.
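A sketch of the draft-queue side of that rule, assuming the queue lives in a table the MCP server owns. The entry shape and helper names are hypothetical.

```typescript
import { z } from "zod";

const DraftQueueEntry = z.object({
  tool: z.string(),          // which MCP tool produced the content
  locale: z.string(),
  title: z.string(),
  body: z.string(),
  status: z.literal("awaiting_review"), // only a human editor changes this
  createdAt: z.string().datetime(),
});
type DraftQueueEntry = z.infer<typeof DraftQueueEntry>;

// Every content-producing tool ends here. Nothing in this module can touch
// wp_posts; publishing happens only after a human approves the queued draft.
async function enqueueDraft(
  entry: DraftQueueEntry,
  db: { insert: (row: DraftQueueEntry) => Promise<void> }
): Promise<void> {
  DraftQueueEntry.parse(entry); // validate before persisting
  await db.insert(entry);
}
```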

Cost control by design

Per-user, per-tool, and per-day budgets were enforced at the MCP boundary, not at the API call. When a user hit a cap, the next call returned a budget-exhausted response and the editorial UI told them so. No silent overruns.
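A sketch of what enforcement at the boundary can look like. The cap value and the BudgetStore interface are assumptions; the real figures are confidential.

```typescript
type BudgetKey = { userId: string; tool: string; day: string };

interface BudgetStore {
  spentUsd(key: BudgetKey): Promise<number>;
  record(key: BudgetKey, usd: number): Promise<void>;
}

const PER_USER_PER_DAY_USD = 10; // illustrative cap, not the client's figure

async function withBudget<T>(
  store: BudgetStore,
  key: BudgetKey,
  estimateUsd: number,
  call: () => Promise<{ result: T; actualUsd: number }>
): Promise<T> {
  const spent = await store.spentUsd(key);
  if (spent + estimateUsd > PER_USER_PER_DAY_USD) {
    // Surfaced to the client as a "budget exhausted" response, which the
    // editorial UI shows to the user. No silent overruns.
    throw new Error(`budget exhausted for ${key.userId} on ${key.day}`);
  }
  const { result, actualUsd } = await call();
  await store.record(key, actualUsd); // the same ledger feeds the cost reports
  return result;
}
```

Per-tool and per-day caps follow the same check against the same ledger; only the key changes.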

The same boundary surfaced cost reports per locale, per topic cluster, and per tool. Finance had a single dashboard.
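Because every call lands in one ledger, the reports are a roll-up over whichever dimension finance asks for. The field names below are assumptions.

```typescript
type CostEntry = { tool: string; locale: string; topicCluster: string; usd: number };

// Sum spend by one dimension of the ledger, e.g. rollUp(log, "locale").
function rollUp(
  entries: CostEntry[],
  dimension: keyof Omit<CostEntry, "usd">
): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of entries) {
    totals.set(e[dimension], (totals.get(e[dimension]) ?? 0) + e.usd);
  }
  return totals;
}
```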

Outcome bands

Exact numbers are confidential. Publishable: editorial throughput improved on translation tasks, time-to-first-draft dropped on long-form briefs, and cost stayed inside the agreed budget every month thanks to the hard caps.

The reusable lesson: AI in editorial works when the boundary is typed and the budget is enforced, not when the prompt is clever. Most failed AI editorial tools fail at the boundary, not at the model.

Frequently asked questions

Could we have used the OpenAI Assistants API instead of MCP?

Yes. The team chose MCP because it is open, vendor-neutral, and lets the same server serve other AI clients later. MCP sits in the Adopt ring of the Tech Radar Q4 2026 tools quadrant for exactly that reason.

Did this replace human editors?

No. It replaced first drafts and translation grunt work. Human editors gained back the time they used to spend on those, and used it on review, judgment, and headline work.

What happens when the model is wrong?

The draft queue catches it. Human review is mandatory. The point of typed MCP tools and the human approval step is exactly to make model errors cheap to catch instead of letting them ship.

Could the same approach work for support or customer service?

Yes, with a different tool design. The pattern is: typed tools, scoped auth, hard budgets, human approval for anything customer-facing. Editorial is one application of the pattern. Support is another.

Want an MCP boundary for your editorial team?

Send the current editorial workflow, the locale matrix, and the budget ceiling. I will tell you whether MCP earns its complexity for your team or whether a smaller integration is enough.

Request an MCP sketch