INP Field Data Worse Than Lighthouse Lab Score Fix

Lighthouse INP score is green but CrUX field data is failing pages in Search Console? Close the lab-vs-real-user INP gap with focused attribution fixes.
INP · Core Web Vitals · Performance
May 11, 2026 · 6 min read · 1081 words

The Problem

A B2C marketplace client emailed me on Monday morning. PageSpeed Insights showed an INP of 142ms (green by every threshold the spec defines), but their Search Console Core Web Vitals report had flipped 38% of mobile URLs to "Poor INP" two weeks running. The CrUX 75th-percentile INP was 412ms. Lighthouse lab said the site was fine. Real users said it was broken. Their dev team had been chasing the score for a sprint with no movement.

If your Lighthouse INP audit looks healthy but Search Console keeps marking pages as failing on INP, you are looking at the structural gap between lab and field data. Same metric, two different sources, often two different verdicts.

Why It Happens

Lighthouse's INP is a single synthetic interaction in a clean Chrome profile with no extensions, on a throttled but consistent network and CPU. CrUX, the source of Search Console's INP verdict, aggregates real Chrome users at the 75th percentile of the worst interaction across an entire session, on every device, every network, every extension. The gap is structural and it always favours Lighthouse.

Three things widen the gap routinely on client sites:

  1. Long tasks that only fire on the Nth interaction. A hydration hot path or an analytics flush that runs once per session does not show up in Lighthouse's single audited interaction. CrUX captures it because INP tracks the worst interaction in the session, so a once-per-session long task still counts.
  2. Mid-tier and budget Android devices. PageSpeed Insights mobile emulates a Moto G Power with a throttled CPU. The bottom 25% of mobile CrUX traffic is on older Samsung A-series and budget Xiaomi units with four to six times less single-thread performance. Same JavaScript, very different INP.
  3. Browser extensions on desktop. Ad blockers, password managers, and accessibility extensions inject event listeners that delay the next paint after every click. CrUX captures that overhead; Lighthouse does not.

The 75th percentile is the killer. You only need a quarter of your sessions to be slow for Search Console to label your page as failing INP, even if the median user is fine.

The Fix

Step 1: Get real attribution flowing. Stop trusting lab and start recording field data with the Web Vitals attribution build, which tells you the slowest interaction's target element, type, and which sub-component dominated. Add this once at the app root:

// app/vitals-reporter.tsx
'use client';
import { useEffect } from 'react';
import { onINP } from 'web-vitals/attribution';

export function VitalsReporter() {
  useEffect(() => {
    // onINP reports the page's final INP value, typically when the tab is
    // backgrounded or unloaded; sendBeacon survives that unload.
    onINP((m) => {
      navigator.sendBeacon(
        '/api/vitals',
        JSON.stringify({
          value: m.value,
          target: m.attribution.interactionTarget,
          type: m.attribution.interactionType,
          inputDelay: m.attribution.inputDelay,
          processingDuration: m.attribution.processingDuration,
          presentationDelay: m.attribution.presentationDelay,
          url: location.pathname,
        }),
      );
    });
  }, []);
  return null;
}

Log the beacons wherever you already aggregate analytics: Vercel Speed Insights, GA4, a Postgres table on your own backend. After 48 hours you will have a ranked list of which elements cause the worst interactions, broken into the three sub-components that make up an INP measurement.
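If you do not already have a sink for the beacon, a Next.js route handler is enough to start collecting. This is a minimal sketch assuming the App Router; the console.log stands in for whatever storage you actually use:

// app/api/vitals/route.ts
export async function POST(request: Request) {
  // navigator.sendBeacon posts the payload as text/plain, but
  // Request.json() parses the body regardless of content type.
  const vital = await request.json();

  // Placeholder sink: swap for a Postgres insert, a log drain, or a
  // forward to whatever analytics you already aggregate.
  console.log('inp-beacon', vital);

  return new Response(null, { status: 204 });
}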

Step 2: Pick the dominant sub-component. Every INP is the sum of three parts, and the fix is different for each:

  • High inputDelay means the main thread was busy when the user clicked. Hydration, third-party scripts, large React reconciles. Fix by deferring non-critical work or splitting hydration.
  • High processingDuration means your event handler itself is slow. Heavy state updates, synchronous fetches, expensive selectors.
  • High presentationDelay means the browser took too long to paint after your handler finished. Usually a huge layout shift caused by the click (mobile menus, accordion expands, virtualised lists re-rendering).

For most React 19 apps I audit, the dominant sub-component is inputDelay on the first interaction and processingDuration on subsequent ones. Two different fixes, and you need the attribution data to know which one to ship.
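Once the beacon rows are flowing, the triage itself is mechanical. Here is one way to sketch it in plain TypeScript; VitalRow mirrors the beacon payload from Step 1, and p75 and dominantSubpart are hypothetical helpers, not part of web-vitals:

// lib/inp-triage.ts
type VitalRow = {
  target: string;
  inputDelay: number;
  processingDuration: number;
  presentationDelay: number;
};

// 75th percentile of a list of numbers.
function p75(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length * 0.75)] ?? 0;
}

// For each interaction target, report which sub-component dominates at p75.
export function dominantSubpart(rows: VitalRow[]) {
  const byTarget = new Map<string, VitalRow[]>();
  for (const row of rows) {
    byTarget.set(row.target, [...(byTarget.get(row.target) ?? []), row]);
  }

  return [...byTarget.entries()].map(([target, group]) => {
    const parts = {
      inputDelay: p75(group.map((r) => r.inputDelay)),
      processingDuration: p75(group.map((r) => r.processingDuration)),
      presentationDelay: p75(group.map((r) => r.presentationDelay)),
    };
    const [worst] = Object.entries(parts).sort((a, b) => b[1] - a[1]);
    return { target, dominant: worst[0], p75ms: worst[1], samples: group.length };
  });
}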

Step 3: Cut input delay on the first interaction. Move third-party scripts off the critical path and defer non-essential hydration:

// app/layout.tsx
import Script from 'next/script';

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html>
      <body>
        {children}
        <Script
          src="https://cdn.example.com/analytics.js"
          strategy="lazyOnload"
        />
      </body>
    </html>
  );
}

strategy="lazyOnload" keeps analytics out of the first interaction window. For your own client components, push interactive widgets behind <Suspense> and lazy-load anything not visible above the fold. On a recent client site this single change moved p75 inputDelay from 220ms to 95ms.
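For the lazy-loading half of that, one option is to gate a heavy below-the-fold widget on visibility so its chunk never joins the initial hydration. A sketch, where ReviewsWidget and its module path are hypothetical stand-ins for whatever is heavy on your page:

// app/components/deferred-reviews.tsx
'use client';
import { Suspense, lazy, useEffect, useRef, useState } from 'react';

// Hypothetical heavy client component that lives below the fold.
const ReviewsWidget = lazy(() => import('./reviews-widget'));

export function DeferredReviews() {
  const ref = useRef<HTMLDivElement>(null);
  const [visible, setVisible] = useState(false);

  useEffect(() => {
    const el = ref.current;
    if (!el) return;
    // Only request the chunk (and pay its hydration cost) once the
    // placeholder scrolls into the viewport.
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) setVisible(true);
    });
    observer.observe(el);
    return () => observer.disconnect();
  }, []);

  return (
    <div ref={ref} style={{ minHeight: 320 }}>
      {visible && (
        <Suspense fallback={null}>
          <ReviewsWidget />
        </Suspense>
      )}
    </div>
  );
}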

Step 4: Cut processing duration. Wrap heavy handlers in useTransition so React can break up the work:

'use client';
import { useState, useTransition } from 'react';

type Item = { id: string; name: string };

export function FilterPanel({ items }: { items: Item[] }) {
  const [filtered, setFiltered] = useState(items);
  const [isPending, startTransition] = useTransition();

  function onFilter(query: string) {
    // The keystroke stays urgent; the expensive filter is scheduled as a
    // transition so it cannot block the next paint.
    startTransition(() => {
      setFiltered(items.filter((i) => i.name.includes(query)));
    });
  }

  return (
    <>
      <input onChange={(e) => onFilter(e.target.value)} />
      <ul style={{ opacity: isPending ? 0.6 : 1 }}>
        {filtered.map((i) => (
          <li key={i.id}>{i.name}</li>
        ))}
      </ul>
    </>
  );
}

The keystroke handler returns immediately, React schedules the filter work as a transition, and INP attribution moves the cost out of processingDuration and into background work that does not block the next paint.

Step 5: Cut presentation delay. If a click expands a menu or accordion, use content-visibility: auto on the hidden region so the browser does not pay layout cost for the closed state, and avoid synchronous DOM measurements inside the handler. For images that swap on click, set explicit width and height so the swap does not force a fresh layout of the surrounding content. The web.dev INP guide covers presentation delay in more depth.
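A sketch of the accordion case, collapsing by clipping rather than unmounting; the component name and the 400px size estimate are illustrative, and the two style properties assume a reasonably recent csstype so they type-check:

// app/components/expandable-panel.tsx
'use client';
import { useState, type CSSProperties, type ReactNode } from 'react';

const panelStyle: CSSProperties = {
  // Skip rendering work for the subtree while the panel is clipped out of
  // view; reserve an estimated height so expanding it does not reflow blindly.
  contentVisibility: 'auto',
  containIntrinsicSize: '0 400px',
};

export function ExpandablePanel({
  summary,
  children,
}: {
  summary: string;
  children: ReactNode;
}) {
  const [open, setOpen] = useState(false);

  return (
    <section>
      <button onClick={() => setOpen((o) => !o)}>{summary}</button>
      {/* Collapse by clipping rather than unmounting so the browser keeps
          whatever rendering state it has already computed for the subtree. */}
      <div style={{ overflow: 'hidden', maxHeight: open ? 'none' : 0 }}>
        <div style={panelStyle}>{children}</div>
      </div>
    </section>
  );
}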

Lesson Learned

INP regressions hide where Lighthouse cannot see them: the second interaction, the slow device, the extension-loaded browser. Treat field data as the source of truth and use lab as a development convenience, not a verdict. Once attribution beacons are flowing you stop guessing about which click is slow and which sub-component is to blame.

If your Search Console is flipping mobile URLs to "Poor INP" while your Lighthouse stays green and you want someone who closes the gap permanently rather than chasing the lab number, this is the work I do — see my services. For a related INP-on-mobile pattern I have written about, see INP regression react hydration mobile fix.
