Default WooCommerce stalls before the catalogue gets interesting. A clean install with 200 SKUs feels fast in staging, then the same template under 30k products and 50k orders hits TTFB in the 1.5 to 3 second range, cart fragments serialise on the shop archive, and the wp_options autoload row count quietly crosses 100k.
The platform itself is fine. WooCommerce powers stores doing eight figures a year. The configuration is what kills it.
This is a field guide for hardening WooCommerce against real load. It assumes you already have shell access, WP-CLI, and a willingness to read slow query logs.
The order storage problem and what HPOS actually changes
For a decade WooCommerce stored orders as shop_order posts in wp_posts, with every line item, address, and metadata field exploded into rows in wp_postmeta. A “show me orders from May” query meant scanning a table mixed with blog revisions, autosaves, and draft pages, then JOINing postmeta three or four times to reconstruct each order.
High Performance Order Storage moves orders into wc_orders, wc_order_addresses, wc_order_operational_data, and wc_orders_meta. Order list queries now hit indexed columns directly rather than reconstructing each row from postmeta.
What HPOS does not fix on its own:
- Custom plugins that still write to postmeta keys instead of the new HPOS API will silently desync once compatibility mode is off
- Third party plugins that read order data with `get_post_meta( $order_id, ... )` instead of `$order->get_meta()` will return nothing once the legacy table stops being authoritative
- Reporting plugins built on `WP_Query` against `shop_order` need a rewrite
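The desync in the first bullet is easy to model. This is a conceptual sketch, not WooCommerce code: two dicts stand in for wp_postmeta and wc_orders_meta, and the `compat_mode` flag stands in for compatibility mode.

```python
# Toy model of the HPOS desync failure mode. The two dicts stand in for
# wp_postmeta (legacy) and wc_orders_meta (HPOS); nothing here is real
# WooCommerce API.

class OrderMeta:
    def __init__(self, compat_mode=True):
        self.postmeta = {}       # legacy store: wp_postmeta
        self.hpos_meta = {}      # new store: wc_orders_meta
        self.compat_mode = compat_mode

    def write_via_hpos_api(self, key, value):
        """Well-behaved plugin: writes through the HPOS API."""
        self.hpos_meta[key] = value
        if self.compat_mode:             # compat mode mirrors writes back
            self.postmeta[key] = value

    def write_via_postmeta(self, key, value):
        """Legacy plugin: writes straight to postmeta, bypassing HPOS."""
        self.postmeta[key] = value       # the HPOS table never sees this

    def read_authoritative(self, key):
        return self.hpos_meta.get(key)   # HPOS table is authoritative

order = OrderMeta(compat_mode=False)
order.write_via_hpos_api("_tracking_number", "ABC123")
order.write_via_postmeta("_gift_note", "Happy birthday")

print(order.read_authoritative("_tracking_number"))  # ABC123
print(order.read_authoritative("_gift_note"))        # None: silently lost
```

This is why the week of compatibility mode matters: with mirroring on, both stores agree and the legacy write path still appears to work, which masks exactly the plugins that will break later.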
Workflow:
- WooCommerce -> Settings -> Advanced -> Features -> turn on High Performance Order Storage with compatibility mode on
- Let the sync run during low-traffic hours; on a store with 200k orders this can run for several hours
- Verify with `wp wc cot verify_cot_data`
- Run your full plugin suite against the new tables for a week with compatibility mode still writing to postmeta
- Disable compatibility mode only after you have confirmed every order-touching plugin reads from HPOS
wp wc cot verify_cot_data
wp wc cot sync --batch-size=500
Cart fragments and the admin-ajax tax
Run any WooCommerce store through GTmetrix or WebPageTest and you will see ?wc-ajax=get_refreshed_fragments showing up on every page, often eating 800 to 1500 ms. This is WooCommerce asking PHP “did the cart icon in the header change?” on the homepage, on the blog, on category archives, on the about page. Even when the cart is empty. Even when the user has never added anything.
A real failure mode: a Stripe-checkout fashion store hit Black Friday with a paid traffic spike around 200 concurrent sessions. The checkout flow itself was fine, but cart fragments serialised through admin-ajax.php on the shop archive page where most of the paid traffic landed. PHP-FPM workers queued, TTFB on the archive crossed 8 seconds, and conversions collapsed for ninety minutes until the engineering team scoped fragments to commerce pages with a hard dequeue.
The simplest scoping rule:
add_action( 'wp_enqueue_scripts', function() {
    if ( is_cart() || is_checkout() || is_product() ) {
        return;
    }
    wp_dequeue_script( 'wc-cart-fragments' );
}, 11 );
Plugins like Perfmatters and Asset CleanUp expose this as a UI toggle. Either path works; the goal is the same. Cart fragments should fire only where a cart icon needs live updating.
If you genuinely need a header cart count on every page, store it in localStorage and update from JavaScript on add-to-cart events rather than polling the server.
Object cache versus database transients
WooCommerce uses transients heavily for variation price ranges, attribute term lookups, and shipping rate calculations. Without an object cache backend, transients live in wp_options with autoload = 'yes', which means every page load deserialises them all into memory.
Two failure modes worth recognising:
- A bookstore with 12k variable products had `wc_var_prices_*` transients accumulating in `wp_options` for two years. Autoload size crossed 80 MB. Every uncached page load did a SELECT on a `wp_options` table with 130k rows just to bootstrap WordPress. TTFB sat at 2.4 seconds before any business logic ran
- A B2B catalogue store ran 30+ plugins, baseline TTFB 1.8 s. After identifying four plugins that wrote autoloaded options on every product save and disabling them, TTFB dropped to around 600 ms with no other change
Pick one cache backend and commit. Either Redis or Memcached as a real object cache, or stay on database transients. Mixing them causes invalidation bugs that are hard to diagnose.
# Audit autoload size in wp_options
wp db query "SELECT SUM(LENGTH(option_value)) AS bytes, COUNT(*) AS row_count FROM wp_options WHERE autoload = 'yes';"
# Find the worst offenders
wp db query "SELECT option_name, LENGTH(option_value) AS bytes FROM wp_options WHERE autoload = 'yes' ORDER BY bytes DESC LIMIT 20;"
With Redis configured at the host level and object-cache.php dropped in, transients move to RAM, autoload pressure drops, and product page render time on a cold cache typically goes from the 500 to 800 ms range to under 100 ms.
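The mechanics are simple enough to model. In this sketch the `redis` dict stands in for the object cache backend and `wp_options` for the autoloaded table; the `_transient_` and `_transient_timeout_` key names match how WordPress actually stores database transients, but the rest is illustration, not core code.

```python
import time

# Conceptual sketch of WordPress transient routing, not actual core code.
# With an object-cache.php drop-in, transients go to RAM; without one,
# each transient lands in wp_options as two rows (value + timeout).

wp_options = {}   # stands in for the wp_options table
redis = {}        # stands in for the Redis object cache

def set_transient(key, value, ttl, object_cache=True):
    expires = time.time() + ttl
    if object_cache:
        redis[key] = (value, expires)
    else:
        # DB path: two rows per transient, both candidates for autoload
        wp_options["_transient_" + key] = value
        wp_options["_transient_timeout_" + key] = expires

def get_transient(key, object_cache=True):
    if object_cache:
        hit = redis.get(key)
        if hit and hit[1] > time.time():
            return hit[0]
        return False
    value = wp_options.get("_transient_" + key, False)
    timeout = wp_options.get("_transient_timeout_" + key, 0)
    return value if timeout > time.time() else False

set_transient("wc_var_prices_123", {"min": "19.00", "max": "49.00"}, 3600)
print(get_transient("wc_var_prices_123"))  # served from RAM
print(len(wp_options))                     # 0: no autoload growth
```

The point of the model: the DB path grows `wp_options` by two rows per variation-price transient, which is exactly how the bookstore above accumulated 130k rows.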
Page caching exceptions and what must never be cached
Static caching helps archives, single product pages, and blog posts. It actively breaks anything that depends on a per-user session.
Hard exclusions for any page cache or Cloudflare Page Rule:
- `/cart/`, `/checkout/`, `/my-account/`
- Any URL containing `?wc-ajax=`
- Any request carrying the `woocommerce_items_in_cart` cookie
- The REST endpoints under `/wp-json/wc/`
Cloudflare APO can cache HTML for logged-out users and respects WooCommerce session cookies if configured correctly, but verify with `curl -I` on cart and checkout that you are getting `cf-cache-status: BYPASS`. A misconfigured cache that serves another shopper’s cart is a GDPR incident, not a performance win.
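The header check is worth scripting into your deploy pipeline. A minimal sketch: in production you would feed it real output from `curl -sI https://yourstore.example/cart/`; the sample headers below are hard-coded for illustration.

```python
# Verify that cart/checkout responses bypass the edge cache.
# In production, capture headers with: curl -sI https://yourstore.example/cart/
# The sample block below is hard-coded for illustration.

def cache_status(raw_headers):
    """Return the cf-cache-status value from a raw header block, or None."""
    for line in raw_headers.splitlines():
        name, _, value = line.partition(":")
        if name.strip().lower() == "cf-cache-status":
            return value.strip().upper()
    return None

cart_headers = """HTTP/2 200
content-type: text/html; charset=UTF-8
set-cookie: woocommerce_items_in_cart=1; path=/
cf-cache-status: BYPASS"""

status = cache_status(cart_headers)
assert status == "BYPASS", f"cart page is cached at the edge: {status}"
print("cart: cf-cache-status =", status)
```

Run the same assertion against `/checkout/` and `/my-account/`. Anything other than BYPASS on those URLs is the GDPR incident described above, waiting to happen.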
Variation product queries and the woocommerce_variable_children_args filter
Variable products with 50+ variations each are a JOIN multiplier. The product archive on a clothing store with 200 variable products and 30 variations each is doing a JOIN against wp_posts for 6000 child products on every load.
Two practical fixes:
- Limit the variations loaded for the dropdown to those that are in stock and visible
- Use `woocommerce_variable_children_args` to exclude private and out-of-stock variations from the initial load on archive pages
add_filter( 'woocommerce_variable_children_args', function( $args ) {
    if ( is_shop() || is_product_category() ) {
        $args['post_status'] = 'publish';
        $args['meta_query'][] = array(
            'key'   => '_stock_status',
            'value' => 'instock',
        );
    }
    return $args;
} );
Action Scheduler queue table bloat
Every WooCommerce extension that does background work uses Action Scheduler. Subscriptions, follow-up emails, abandoned cart recovery, stock sync, ERP integrations, all of it ends up in wp_actionscheduler_actions and wp_actionscheduler_logs.
On a site with active subscriptions and abandoned cart recovery, these tables accumulate millions of completed and failed actions within a year. The tables get queried on every admin page load to render the queue counter.
Maintenance script:
# Trigger action scheduler log purge
wp action-scheduler run --hooks=action_scheduler/purge_logs
# Inspect queue size
wp db query "SELECT status, COUNT(*) FROM wp_actionscheduler_actions GROUP BY status;"
# Verify indexes exist on status and scheduled_date_gmt
wp db query "SHOW INDEX FROM wp_actionscheduler_actions;"
WooCommerce ships an as_purge_logs action scheduled by default, but on stores that disable cron and run external schedulers it can stop firing without anyone noticing.
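If the built-in purge stops firing, the retention rule it should be applying is simple. A toy sketch of that rule: the tuples stand in for rows in `wp_actionscheduler_actions`, and the 30-day window is illustrative (Action Scheduler's default retention is roughly a month; check your own configuration).

```python
import time

DAY = 86400
RETENTION = 30 * DAY  # illustrative; Action Scheduler keeps roughly a month

now = time.time()
# Rows standing in for wp_actionscheduler_actions: (status, finished_at)
actions = [
    ("complete", now - 90 * DAY),
    ("complete", now - 2 * DAY),
    ("failed",   now - 45 * DAY),
    ("pending",  now),
]

# Purge terminal-status rows older than the retention window; never touch
# pending rows, whatever their age.
kept = [a for a in actions
        if a[0] == "pending" or now - a[1] <= RETENTION]

print(f"purged {len(actions) - len(kept)} rows, kept {len(kept)}")
```

If a query like the `GROUP BY status` one above shows millions of complete or failed rows older than the window, the purge is not running and your external scheduler needs to pick it up.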
Image delivery for product galleries
Product images are usually the heaviest payload. Three rules:
- AVIF for hero and gallery images, with a WebP fallback for older Safari versions still in your traffic mix
- Stop letting clients upload 4000px raw exports. Cap the upload pipeline at 2000px on the long edge
- Native `loading="lazy"` is enough; remove any old JavaScript lazy loaders that fight it
Pre-resize before upload rather than relying on WordPress to generate thumbnails on the fly, which writes to wp_postmeta and inflates the database.
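Capping the long edge is simple arithmetic, and doing it before upload keeps the thumbnail pipeline out of `wp_postmeta`. A sketch of the dimension math (the 2000px cap comes from the rule above; pair it with any resizer such as ImageMagick or Pillow):

```python
def cap_long_edge(width, height, max_edge=2000):
    """Scale (width, height) so the longer side is at most max_edge,
    preserving aspect ratio. Returns dimensions unchanged if already small."""
    long_edge = max(width, height)
    if long_edge <= max_edge:
        return width, height
    scale = max_edge / long_edge
    return round(width * scale), round(height * scale)

print(cap_long_edge(4000, 3000))  # (2000, 1500): raw export capped
print(cap_long_edge(1600, 1200))  # (1600, 1200): already under the cap
```

The same function slots into an upload hook or a batch pre-processing script; the key design point is that the cap applies to whichever edge is longer, so portrait and landscape exports both land under the limit.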
Database hygiene checklist
Run these monthly, more often on stores with high cart abandonment volume:
# Truncate expired sessions
wp db query "DELETE FROM wp_woocommerce_sessions WHERE session_expiry < UNIX_TIMESTAMP();"
# Clear all WooCommerce transients
wp wc tool run clear_transients
# Optimize tables
wp db optimize
# Check Action Scheduler queue depth
wp action-scheduler status
Schedule via system cron rather than wp-cron.php, which only fires on traffic.
What to measure
Optimise blind and you will optimise the wrong thing. Pick three signals and track them weekly:
- TTFB on a cold cache for the shop archive and a single product page
- Time to first byte on `/cart/` and `/checkout/` with one item and four items in the cart
- `wp_options` autoload row count and total bytes
- Action Scheduler queue depth and oldest pending action
Query Monitor in staging tells you which queries dominate. New Relic or Datadog APM in production shows the real distribution. PageSpeed Insights is a smoke test, not a diagnosis.
Last updated
2026-04-01. Field notes from production WooCommerce audits over the past year. If your store does not match these patterns, the patterns are not the issue. Read your slow query log first.

