How Next.js 15's Full Route Cache Served Stale Prices at Checkout for 3 Hours
March 14, 2026 · Architecture · 9 min read


The support ticket arrived at 2:14 PM on a Tuesday: "I was charged $49 but the page showed $29." Then another. Then seven more. By the time we traced it, our checkout page had been showing the old promotional pricing to every visitor for three hours — while our database had already switched to full price. Nineteen customers got manual refunds. The culprit wasn't our CDN, wasn't our database, wasn't a bad deploy. It was Next.js 15's own caching layer, doing exactly what it was designed to do.

Production Failure

We'd migrated our SaaS marketing and checkout flow from a PHP/Laravel monolith to Next.js 15 App Router six weeks earlier. The migration felt smooth — Lighthouse scores improved, TTFB dropped from 420 ms to 68 ms, and the team was happy. We had a promotional campaign running: $29/month for the first 3 months, reverting to $49/month on March 11 at noon.

At noon we updated the pricing record in PostgreSQL. No code deploy was needed — the checkout component fetched pricing server-side from our FastAPI service, which read from the database. On Pages Router (our old setup) this would have been instant: every request hit the server, fetched live data, rendered. On App Router with Next.js 15 defaults, something very different happened.

  • 19 manual refunds issued
  • 3 hrs stale cache window
  • $380 revenue exposure
  • 0 errors in logs

Zero errors. Zero alerts. Our monitoring showed healthy response times and a 200 status on every checkout page request. The system was working perfectly — just serving the wrong price.


False Assumptions

Our first instinct: Cloudflare cache. We had aggressive caching rules for static assets, and the team assumed a CDN cache hadn't been busted after the pricing change. We immediately ran cf-cache-status checks on the checkout URL.

terminal
$ curl -I https://app.example.com/checkout
HTTP/2 200
cf-cache-status: DYNAMIC
cache-control: no-store, must-revalidate
x-vercel-cache: HIT
...

Cloudflare showed DYNAMIC — it wasn't caching the route at all. But that x-vercel-cache: HIT was a hint we didn't follow for another 40 minutes. Our second assumption was a Redis cache in our FastAPI pricing service. We checked — Redis TTL on pricing keys was 60 seconds. Ruled out.
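In hindsight, those two headers together told the whole story. A small triage helper captures the heuristic we now use when reading them — a sketch, simplified to the header values we actually encountered (real `cf-cache-status` and `x-vercel-cache` values cover more states than this):

```typescript
// Triage helper: given response headers, guess which layer answered.
// Heuristic only — cf-cache-status: DYNAMIC means Cloudflare passed the
// request through; x-vercel-cache: HIT means Vercel served a cached
// render without ever reaching our code.
function servedBy(headers: Record<string, string>): string {
  const cf = headers['cf-cache-status'];
  const vercel = headers['x-vercel-cache'];
  if (cf === 'HIT') return 'cloudflare-edge';
  if (vercel === 'HIT') return 'vercel-cache'; // ← our incident
  if (vercel === 'MISS' || vercel === 'BYPASS') return 'origin-render';
  return 'unknown';
}

// The headers from the incident curl: Cloudflare passed through,
// but Vercel's cache answered.
console.log(servedBy({ 'cf-cache-status': 'DYNAMIC', 'x-vercel-cache': 'HIT' }));
// → vercel-cache
```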

Third assumption: a stale server-side import or module-level variable holding old pricing data. We redeployed the application. The checkout page immediately showed the correct $49 price. Problem solved — or so we thought. Twenty minutes later, the first support ticket after the redeploy arrived: stale price again.

"We deployed three times trying to fix a cache we didn't know existed."

Profiling / Investigation

After the third failed redeploy, I started reading Next.js 15 internals. App Router introduced a four-layer caching model that Pages Router never had. Most of our team had been trained on Pages Router and assumed App Router behaved the same way for server-rendered routes.

Next.js 15 App Router — 4-Layer Cache Stack
═══════════════════════════════════════════════

  Browser Request
       │
       ▼
┌─────────────────────┐
│   Router Cache      │  ← Client-side, in-memory
│   (5 min TTL        │    Prefetched route segments
│    for static segs) │
└──────────┬──────────┘
           │ miss
           ▼
┌─────────────────────┐
│   Full Route Cache  │  ← Server-side, on-disk
│   (indefinite TTL   │    Entire rendered HTML + RSC payload
│    by default)      │    ⚠️  THIS WAS OUR PROBLEM
└──────────┬──────────┘
           │ miss
           ▼
┌─────────────────────┐
│    Data Cache       │  ← Per fetch() call
│    (indefinite TTL  │    Stored across requests & deploys
│     by default)     │
└──────────┬──────────┘
           │ miss
           ▼
┌─────────────────────┐
│   Upstream / DB     │  ← Actual data source
│   FastAPI + PG      │
└─────────────────────┘
  

The key discovery: in Next.js 15, any route that renders with no dynamic functions (no cookies(), no headers(), no searchParams access) is automatically treated as a static route and stored in the Full Route Cache. Our checkout page used none of those — it fetched pricing via fetch() inside a Server Component and rendered the price directly into HTML.

Next.js had captured that rendered HTML at build time (and again on first request after deploy), stored it in the Full Route Cache, and served it to every subsequent visitor without ever hitting our FastAPI service or database again. A fresh deploy busted the cache — which is why redeploying fixed it temporarily. But after the first post-deploy request, the new stale HTML was cached again.
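The mechanics are easier to see in miniature. This toy model is not Next.js internals — all names are illustrative — but it reproduces the failure mode: a render cache keyed by build ID, populated once, never revalidated:

```typescript
// Toy model of the failure: a "full route cache" keyed by build ID.
// renderCheckout stands in for the Server Component; db.price for Postgres.
class FullRouteCache {
  private store = new Map<string, string>();
  render(buildId: string, renderFn: () => string): string {
    const cached = this.store.get(buildId); // HIT: upstream never called
    if (cached !== undefined) return cached;
    const html = renderFn();                // MISS: render once…
    this.store.set(buildId, html);          // …then serve it indefinitely
    return html;
  }
}

const db = { price: 29 };
const cache = new FullRouteCache();
const renderCheckout = () => `<h1>$${db.price}/mo</h1>`;

console.log(cache.render('build-1', renderCheckout)); // $29 — live render
db.price = 49;                                        // pricing updated in DB
console.log(cache.render('build-1', renderCheckout)); // still $29 — stale HIT
console.log(cache.render('build-2', renderCheckout)); // $49 — deploy busts key
```

Updating the database mutates `db.price`, but the second request never reaches it; only a new build ID (a redeploy) forces a fresh render — exactly what we observed.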

Timeline of the Incident
════════════════════════

12:00 PM  ── Pricing updated in PostgreSQL ($29 → $49)
              No deploy triggered (expected: DB-only change)

12:00 PM  ── Full Route Cache still holds HTML with "$29"
              ↳ Every request served from cache (HIT)
              ↳ FastAPI never called
              ↳ 0 errors in logs

12:47 PM  ── First support ticket: "charged $49, saw $29"

 1:10 PM  ── We redeploy (cache busted by build ID change)
              ↳ First request after deploy: FastAPI called → $49 rendered
              ↳ Full Route Cache repopulated with "$49" HTML ✓

 1:10 PM  ── Fixed? No.
              ↳ 20 min later: Router Cache (client-side, 5 min) expires
              ↳ New visitors hit Full Route Cache: still "$49" ← actually fine now

Wait — why did tickets keep coming after redeploy?

 1:10 PM  ── Redeploy only invalidated one region (IAD)
              ↳ Edge regions CDG, SIN still serving old Full Route Cache
              ↳ Geography-split stale window for another 90 min
  

The redeploys did clear the US East cache, but the Full Route Cache is replicated to each edge region on first request. Our non-US users were still hitting warmed caches in Paris and Singapore that hadn't yet seen a post-deploy request.


Root Cause

Two compounding issues, both stemming from misunderstood Next.js 15 App Router defaults:

  • Full Route Cache with no revalidation: Our checkout Server Component fetched pricing via fetch() without { cache: 'no-store' } or a revalidate option. With no dynamic function access anywhere in the route, Next.js classified it as static and cached the entire rendered HTML indefinitely.
  • Edge region cache warming after deploy: Redeploying busts the build ID and invalidates the server's Full Route Cache, but edge replicas re-warm independently on first request per region. A deploy with no warm-up traffic to non-primary regions left stale caches active for 60–90 minutes per region.
app/checkout/page.tsx — the broken fetch
// ❌ Cached indefinitely by the Next.js 15 Full Route Cache:
// no dynamic signal = route treated as fully static
export default async function CheckoutPage() {
  const pricing = await fetch('https://api.internal/pricing/current')
    .then(r => r.json());

  return (
    <main>
      <PricingCard price={pricing.monthlyPrice} />
      <CheckoutForm priceId={pricing.stripeId} />
    </main>
  );
}

Architecture Fix

The fix required changes at two levels: opting the checkout route out of Full Route Cache, and ensuring our fetch calls were never cached beyond our defined TTL.

app/checkout/page.tsx — fixed
// Force dynamic rendering — opts the route out of the Full Route Cache
export const dynamic = 'force-dynamic';

export default async function CheckoutPage() {
  const pricing = await fetch('https://api.internal/pricing/current', {
    // Bypass the Data Cache too: every request hits the pricing service.
    // Note: next: { tags: [...] } has no effect alongside 'no-store' —
    // tag-based revalidation only applies to cached fetches, so we
    // reserve tags for routes that stay cached.
    cache: 'no-store',
  }).then(r => r.json());

  return (
    <main>
      <PricingCard price={pricing.monthlyPrice} />
      <CheckoutForm priceId={pricing.stripeId} />
    </main>
  );
}

We chose export const dynamic = 'force-dynamic' at the route level rather than cache: 'no-store' per fetch. Why? Because the checkout page has multiple data fetches (pricing, user session, active coupons), and setting no-store on each individually was fragile — a future developer adding a new fetch would silently re-enable caching for that piece. Route-level force-dynamic is a single, unambiguous signal.
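Checkout went fully dynamic; routes that can tolerate brief staleness kept their caches. A sketch of the cached-but-invalidatable pattern, with the route path assumed (the tag name matches the revalidation webhook discussed later in this post):

```typescript
// app/pricing/page.tsx (assumed route) — the cached alternative:
// keep the Full Route Cache, but make it invalidatable on demand.
export const revalidate = 3600; // safety net: re-render at most hourly

export async function getPricing() {
  // A tagged, cached fetch: served from the Data Cache until
  // revalidateTag('pricing') fires from the pricing-update webhook.
  return fetch('https://api.internal/pricing/current', {
    next: { tags: ['pricing'] },
  }).then(r => r.json());
}
```

This is route-segment config plus a tagged fetch, so it trades the hard guarantee of force-dynamic for cached-HTML performance — acceptable on a marketing page, not at checkout.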

For the edge region warming problem, we added a post-deploy step to our GitHub Actions pipeline:

.github/workflows/deploy.yml — post-deploy cache warming
post-deploy:
  runs-on: ubuntu-latest
  needs: deploy
  steps:
    - name: Warm edge regions after deploy
      run: |
        REGIONS=("iad" "cdg" "sin" "syd")
        ROUTES=("/checkout" "/pricing" "/login")

        for region in "${REGIONS[@]}"; do
          for route in "${ROUTES[@]}"; do
            curl -sf \
              -H "x-vercel-deployment-url: $DEPLOYMENT_URL" \
              -H "x-vercel-edge-region: ${region}" \
              "https://$DEPLOYMENT_URL${route}" > /dev/null
            echo "Warmed ${region}: ${route}"
          done
        done

We also added on-demand revalidation to our pricing admin panel. Whenever a pricing record is updated in the database, a webhook fires to /api/revalidate:

app/api/revalidate/route.ts
import { revalidateTag } from 'next/cache';
import { NextRequest, NextResponse } from 'next/server';

export async function POST(req: NextRequest) {
  const secret = req.headers.get('x-revalidate-secret');

  if (secret !== process.env.REVALIDATE_SECRET) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }

  const { tag } = await req.json();
  revalidateTag(tag); // e.g. 'pricing'

  return NextResponse.json({ revalidated: true, tag });
}
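The FastAPI side fires this webhook after every pricing write. For illustration, here is a TypeScript equivalent of the caller — the endpoint host and secret value are hypothetical, while the header name and payload shape match the route handler above; the request-building is split out so it can be tested without a network:

```typescript
// Build the webhook request; separated from fetch() for testability.
function buildRevalidationRequest(tag: string, secret: string) {
  return {
    url: 'https://app.example.com/api/revalidate', // hypothetical host
    init: {
      method: 'POST' as const,
      headers: {
        'content-type': 'application/json',
        'x-revalidate-secret': secret, // checked by the route handler
      },
      body: JSON.stringify({ tag }),
    },
  };
}

// Call this from whatever reacts to a pricing write.
async function firePricingRevalidation(secret: string): Promise<void> {
  const { url, init } = buildRevalidationRequest('pricing', secret);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`revalidation failed: ${res.status}`);
}
```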
Revised Architecture — Pricing Update Flow
═══════════════════════════════════════════

Admin Panel
    │
    ▼
FastAPI /admin/pricing  ──► PostgreSQL (write)
    │
    ▼
Webhook → /api/revalidate
    │
    ▼
revalidateTag('pricing')
    │
    ├─► Full Route Cache: invalidated for tagged routes
    ├─► Data Cache: invalidated for tagged fetches
    │
    ▼
Next request per region
    │
    ▼
FastAPI /pricing/current ──► PostgreSQL (read live price)
    │
    ▼
Fresh HTML rendered & cached with new price
  

Result: pricing changes now propagate in under 2 seconds globally. Before the fix, the propagation window was unbounded — whatever the Full Route Cache held, users got.

  • <2 s pricing propagation (was 3+ hrs)
  • 68 ms TTFB maintained (no perf regression)
  • 4 edge regions warmed on every deploy
  • 0 stale-price tickets since fix

Lessons Learned

  • Next.js 15 App Router caching is opt-out, not opt-in for static-looking routes. If your Server Component fetches data without any dynamic function access, Next.js will cache the entire rendered output by default. Pages Router never did this — migrating teams need an explicit audit of which routes contain live data.
  • Audit every route that displays user-facing pricing, inventory, or session data. Add export const dynamic = 'force-dynamic' or tag-based revalidation before migrating these routes to App Router.
  • Redeploying is not a reliable cache-busting strategy at the edge. Each edge region re-warms independently. A deploy without explicit warming leaves a 60–90 minute stale window per region. Build warming into your CI/CD pipeline.
  • On-demand revalidation (revalidateTag) is the right primitive for data that changes outside of code deploys. Attach tags to your fetches, and fire a revalidation webhook from your backend whenever the data changes. This gives you the CDN-level performance of caching with the correctness of dynamic rendering.
  • Zero errors in logs is not the same as correct behavior. The system was healthy by every metric — latency, error rate, status codes. We needed a business-layer check: does the displayed price match the database price? Add synthetic monitoring that validates content, not just availability.
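That last lesson became a scheduled check. A minimal sketch of a content-level synthetic monitor, with URLs and the price-extraction rule assumed (a real check should target a stable selector in your markup, not a regex over the whole page):

```typescript
// Synthetic content check: does the rendered page show the price the
// pricing API considers current? Alert when it doesn't.
function extractPrice(html: string): number | null {
  // Assumes the price renders as e.g. "$49" or "$29.50" — illustrative;
  // prefer a data-testid element in production.
  const match = html.match(/\$(\d+(?:\.\d{2})?)/);
  return match ? Number(match[1]) : null;
}

async function checkPriceConsistency(): Promise<boolean> {
  // Hypothetical URLs — the public page and the internal source of truth.
  const [pageRes, apiRes] = await Promise.all([
    fetch('https://app.example.com/checkout'),
    fetch('https://api.internal/pricing/current'),
  ]);
  const shown = extractPrice(await pageRes.text());
  const { monthlyPrice } = await apiRes.json();
  return shown === monthlyPrice; // false = stale page → page the on-call
}
```

Run on a few-minute schedule, this would have caught our incident at 12:05 PM instead of the 12:47 PM support ticket.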

— Rey, writing for Darshan Turakhia · March 14, 2026
