Cloudflare Workers vs Vercel Edge: TTFB on Astro 5
Status: methodology published, reproducible run scheduled for Q3 2026.
I do not publish numbers I cannot reproduce. This page ships the protocol so peers can audit the design and so the eventual numbers carry full provenance. Until the reproducible run completes, treat this page as the corpus, the test plan, and the infrastructure description, not the result.
Test corpus
- 1 static blog post route (cached at edge)
- 1 ISR product page route (revalidated on webhook)
- 1 SSR personalised dashboard route (auth required, no cache)
- WordPress origin: shared WordPress 6.7+ instance with WooCommerce 9+
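The three route types imply three distinct caching postures, and both platforms must serve identical headers for the comparison to hold. A minimal sketch of the per-route Cache-Control mapping, with illustrative max-age values (the real run pins the exact headers in the methodology file):

```typescript
// The three route kinds from the test corpus.
type RouteKind = "static" | "isr" | "ssr";

// Illustrative Cache-Control values; the benchmark pins the exact
// headers so both platforms serve identical cache semantics.
export function cacheControlFor(kind: RouteKind): string {
  switch (kind) {
    case "static":
      // Blog post: cached at the edge, long-lived.
      return "public, max-age=0, s-maxage=31536000";
    case "isr":
      // Product page: served stale, revalidated on webhook purge.
      return "public, s-maxage=60, stale-while-revalidate=86400";
    case "ssr":
      // Personalised dashboard: never cached.
      return "private, no-store";
  }
}
```

The values themselves are placeholders; what matters for the protocol is that the same function drives both deployments, so header drift cannot explain a TTFB delta.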
Stacks under test
- Astro 5+ with the Cloudflare adapter, deployed to Cloudflare Workers + Pages
- Astro 5+ with the Vercel adapter, deployed to Vercel Edge Functions
- Both fetch from the same WordPress REST API, identical cache headers, identical SSR logic per route
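The "same build, different adapter" constraint can be sketched as a single config switched by environment variable. The adapter package names below are the published `@astrojs/cloudflare` and `@astrojs/vercel` integrations; the `DEPLOY_TARGET` variable and everything else is an assumption, not the pinned config:

```typescript
// astro.config.ts — sketch only; the real config is pinned in the
// methodology file.
import { defineConfig } from "astro/config";
import cloudflare from "@astrojs/cloudflare";
import vercel from "@astrojs/vercel";

// DEPLOY_TARGET is a hypothetical env var selecting the platform;
// routes, cache headers, and SSR logic are identical either way.
const target = process.env.DEPLOY_TARGET ?? "cloudflare";

export default defineConfig({
  output: "server", // static routes opt in via `export const prerender = true`
  adapter: target === "vercel" ? vercel() : cloudflare(),
});
```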
Infrastructure
- Cloudflare Workers: EU region, default plan
- Vercel Edge: CDG region (Paris), default plan
- Synthetic monitor: WebPageTest with Moto G4 device profile and 4G throttle
- Run sites: Frankfurt, Warsaw, Amsterdam, Madrid, Stockholm
- Cold-cache pass and warm-cache pass per run, 5 runs per pass
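Each pass can be driven through WebPageTest's `runtest.php` endpoint. A sketch of the request-URL builder, assuming a placeholder agent location and API key (the real agent IDs live in the methodology file):

```typescript
// Sketch of driving one WebPageTest pass via runtest.php.
// The location ID and API key below are placeholders.
const WPT_BASE = "https://www.webpagetest.org/runtest.php";

export function buildRunUrl(opts: {
  url: string;      // route under test
  location: string; // test-agent ID, e.g. a Frankfurt agent (placeholder)
  runs?: number;    // 5 runs per pass in this protocol
  apiKey: string;
}): string {
  const params = new URLSearchParams({
    url: opts.url,
    location: opts.location,
    runs: String(opts.runs ?? 5),
    f: "json", // machine-readable result
    k: opts.apiKey,
  });
  return `${WPT_BASE}?${params}`;
}

// Usage (not executed here):
//   await fetch(buildRunUrl({ url, location, apiKey }));
```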
Variables measured
- TTFB (time to first byte): primary axis
- LCP (largest contentful paint) and INP (interaction to next paint): secondary
- Cold-start latency on first hit per route
- Cost per request (USD), normalised to 100k monthly pageviews
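The cost normalisation is simple arithmetic, but fixing it in code removes ambiguity about what "per 100k pageviews" means. A sketch with placeholder prices (not published Cloudflare or Vercel rates; the run substitutes the actual billed figures):

```typescript
// Normalise a per-million-request price to 100k monthly pageviews.
// usdPerMillionRequests is a placeholder input, not a published rate.
export function costPer100kPageviews(
  usdPerMillionRequests: number,
  requestsPerPageview = 1, // document routes only; static assets excluded
): number {
  const requests = 100_000 * requestsPerPageview;
  return (requests / 1_000_000) * usdPerMillionRequests;
}

// Example: a hypothetical $0.40/million rate works out to
// roughly $0.04 per 100k pageviews.
```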
Methodology file
The result table lives in Markdown until the reproducible run is complete: open the methodology and result template. The first completed run must keep the same table shape so later runs remain comparable.
The full benchmark methodology cluster is available at /en/benchmarks/methodology/.
Expected findings
Cloudflare Workers has historically won on cold-start latency because Workers run on V8 isolates rather than spinning up containers. Vercel Edge Functions also use V8 isolates and are competitive in 2026; the differentiator may shift to Cloudflare's larger network footprint and per-request pricing model. I am not predicting the result; I am documenting what the test will measure.
Cite this protocol
WPPoland. Cloudflare Workers vs Vercel Edge: TTFB on Astro 5 (preliminary protocol). Published 2026-04-28. URL: https://wppoland.com/en/benchmarks/cloudflare-workers-vs-vercel-edge-ttfb/
Frequently asked questions
What is being measured?
Time to first byte (TTFB), largest contentful paint (LCP), serverless cold-start latency, and per-request cost. Both runtimes serve the same Astro 5 build behind the same WordPress origin.
What is the test corpus?
An Astro 5 build with three route types: static blog post, ISR product page, SSR personalised dashboard. The same build artefact deploys to both Cloudflare Workers and Vercel Edge with the platform-specific adapter, no code changes.
What infrastructure is used?
Both stacks deploy from the same EU region (Cloudflare FRA, Vercel CDG). Synthetic monitoring runs from 5 European city POPs on the same throttled network profile (Moto G4 over 4G). Cold-cache and warm-cache passes are recorded separately.
Why TTFB and not full Web Vitals?
TTFB is the cleanest signal of edge runtime cost; the rest of the page-render budget is dominated by client-side hydration, not the server. I measure LCP and INP for completeness, but the primary axis is TTFB.
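TTFB here follows the Navigation Timing definition: elapsed time from navigation start to the first byte of the response. A minimal sketch, assuming an entry shaped like the browser's PerformanceNavigationTiming:

```typescript
// Subset of PerformanceNavigationTiming fields needed for TTFB.
interface NavTiming {
  startTime: number;     // navigation start (ms)
  responseStart: number; // first byte of the response (ms)
}

// TTFB = first response byte minus navigation start.
export function ttfbMs(t: NavTiming): number {
  return t.responseStart - t.startTime;
}

// In the browser this would read:
//   const [nav] = performance.getEntriesByType("navigation");
//   ttfbMs(nav as PerformanceNavigationTiming);
```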
When will the numbers ship?
The reproducible run is scheduled for Q3 2026. I will refresh this page with the measurements, the Markdown result tables, and the full methodology footnotes. Until then, treat this page as the protocol, not the result.
Want to reproduce this benchmark?
Get in touch. I share the harness with peers under CC BY 4.0.