The Problem
A client emailed me on Wednesday with a screenshot: PageSpeed Insights showed a 1.4s LCP in the lab section, but their CrUX field data was reporting a 4.1s LCP for 75th-percentile mobile users. The Lighthouse score said 96; the Core Web Vitals assessment said "Failed". Search Console was reporting Core Web Vitals: Poor for the same URL, and rankings were drifting down.
If your PageSpeed lab score is great but the field data tells a different story, and your search rankings are slipping with it, you are dealing with a lab-versus-field mismatch. This is one of the most misunderstood signals in technical SEO right now and developers usually optimize for the wrong number.
Why It Happens
Lab data is one synthetic Lighthouse run from a Google data center using a simulated Moto G Power on Slow 4G. Field data (CrUX) is a 28-day rolling aggregate of real Chrome users on real devices, real networks, real cache states. They will never match exactly, but a 2x or 3x gap means real users are having an experience your lab test is not catching.
The four causes I see on most client audits:
First, the hero LCP element changes by viewport. Your lab test runs at 412x823 (Moto G). Real mobile users are on iPhones (390x844), Galaxy S24 (393x852), even Pixel Folds (412x892 unfolded). If your hero image's srcset or sizes returns a different file at 412 versus 390, your lab is loading the optimized variant but real users are loading the larger one.
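You can sanity-check which variant each device class pulls by modeling the selection rule: the browser resolves a slot width from `sizes`, multiplies it by the device pixel ratio, and takes the smallest `srcset` candidate at least that wide. A simplified sketch (the file names and widths are invented, and real browsers may choose differently based on cache and network conditions):

```typescript
// Simplified model of srcset w-descriptor selection.
type Candidate = { url: string; width: number };

function pickCandidate(
  candidates: Candidate[],
  slotCssWidth: number, // the width resolved from the `sizes` attribute
  dpr: number,
): string {
  const target = slotCssWidth * dpr;
  const sorted = [...candidates].sort((a, b) => a.width - b.width);
  // Smallest candidate that covers the target, else the largest available.
  const match = sorted.find((c) => c.width >= target) ?? sorted[sorted.length - 1];
  return match.url;
}

const heroSrcset: Candidate[] = [
  { url: '/hero-768.webp', width: 768 },
  { url: '/hero-828.webp', width: 828 },
  { url: '/hero-1200.webp', width: 1200 },
];

// Lighthouse's Moto G emulation uses a DPR of roughly 1.75; an iPhone is DPR 3.
console.log(pickCandidate(heroSrcset, 412, 1.75)); // lab run: '/hero-768.webp'
console.log(pickCandidate(heroSrcset, 390, 3));    // real iPhone: '/hero-1200.webp'
```

Same markup, two very different image payloads, which is exactly the kind of gap the lab run never sees.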
Second, third-party scripts vary by region. Google Tag Manager loads different containers based on geo. PageSpeed runs from a fixed Google IP. Real users from the EU hit a heavier consent banner, more tags, slower scripts. Lab LCP excludes the consent banner cost; field LCP includes it.
Third, the LCP element is below the fold on small screens but inside the lab viewport. I see this constantly. The lab's 412x823 viewport happens to fit the hero image as the LCP candidate. Real users on shorter viewports (iPhone SE at 375x667, older budget Androids) see a different element as LCP, usually a larger image lower on the page that takes longer to fetch.
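The viewport effect is easy to model: LCP only credits the part of an element that is visible in the initial viewport, so shrinking the viewport can flip which element is largest. A toy version (element geometry invented for illustration; the real LCP algorithm has extra rules around opacity, background images, and text blocks):

```typescript
// Simplified LCP-candidate model: largest visible area within the viewport.
type El = { selector: string; top: number; width: number; height: number };

function lcpCandidate(elements: El[], viewportHeight: number): string {
  let best = { selector: 'none', area: 0 };
  for (const el of elements) {
    // Clip the element's height to the viewport before computing its area.
    const visibleHeight =
      Math.max(0, Math.min(el.top + el.height, viewportHeight) - Math.max(el.top, 0));
    const area = visibleHeight * el.width;
    if (area > best.area) best = { selector: el.selector, area };
  }
  return best.selector;
}

const page: El[] = [
  { selector: 'h1.hero-title', top: 80, width: 400, height: 120 },
  { selector: 'img.hero', top: 560, width: 400, height: 400 },
];

console.log(lcpCandidate(page, 823)); // lab viewport: 'img.hero'
console.log(lcpCandidate(page, 667)); // iPhone SE: 'h1.hero-title'
```

Two viewports, two different LCP elements, and only one of them is the element your lab run optimized.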
Fourth, back/forward cache hits skew field data favorably until a deploy invalidates them. If you ship a new JS bundle, every existing user loses their bfcache entry. Their next visit pays the full LCP cost. CrUX picks this up. PageSpeed lab does not.
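A toy p75 calculation shows how hard a deploy can hit the field number (the visit mix and timings are invented, and p75 here is a simple nearest-rank approximation):

```typescript
// Nearest-rank p75, the percentile CrUX scores against (approximation).
function p75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length * 0.75)];
}

// Invented visit mix: 80% bfcache restores (near-instant LCP), 20% cold loads.
const beforeDeploy = [...Array(80).fill(300), ...Array(20).fill(3800)];
// After shipping a new bundle, every return visit is a cold load again.
const afterDeploy = Array(100).fill(3800);

console.log(p75(beforeDeploy)); // 300, bfcache hits mask the cold-load cost
console.log(p75(afterDeploy));  // 3800, the deploy exposes it in the field p75
```

Nothing about the page itself changed; the cache population did, and the 28-day CrUX window slowly absorbs the shift.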
The Fix
Step 1: Identify the actual LCP element from real users, not lab. Add the web-vitals library to your site and log LCP element details to your analytics:
```tsx
'use client';

import { useEffect } from 'react';
import { onLCP } from 'web-vitals/attribution';

export function VitalsReporter() {
  useEffect(() => {
    onLCP((metric) => {
      // attribution.element is a CSS selector for the LCP element;
      // attribution.url is the resource it loaded (empty for text nodes).
      const target = metric.attribution.element ?? 'unknown';
      const url = metric.attribution.url ?? '';
      navigator.sendBeacon('/api/vitals', JSON.stringify({
        name: 'LCP',
        value: metric.value,
        rating: metric.rating,
        element: target,
        resource: url,
        viewport: `${window.innerWidth}x${window.innerHeight}`,
      }));
    });
  }, []);
  return null;
}
```
Drop this into your root layout. After 24 hours you will know which element is LCP for which viewport. Stop guessing.
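On the server side, the useful view of those beacons is the 75th percentile per LCP element and viewport bucket, since p75 is what CrUX scores against. A sketch of the aggregation (the `/api/vitals` payload shape comes from the reporter above; the key format and nearest-rank p75 are my own choices, not part of web-vitals):

```typescript
// Hypothetical aggregation for the /api/vitals beacons: group by LCP element
// and viewport, then take nearest-rank p75 per group.
type Beacon = { element: string; value: number; viewport: string };

function p75ByElement(beacons: Beacon[]): Map<string, number> {
  const groups = new Map<string, number[]>();
  for (const b of beacons) {
    const key = `${b.element} @ ${b.viewport}`;
    const values = groups.get(key) ?? [];
    values.push(b.value);
    groups.set(key, values);
  }
  const out = new Map<string, number>();
  for (const [key, values] of groups) {
    const sorted = [...values].sort((a, b) => a - b);
    out.set(key, sorted[Math.floor(sorted.length * 0.75)]);
  }
  return out;
}

const beacons: Beacon[] = [
  { element: 'img.hero', value: 1200, viewport: '390x844' },
  { element: 'img.hero', value: 4100, viewport: '390x844' },
  { element: 'img.hero', value: 2600, viewport: '390x844' },
  { element: 'img.hero', value: 3300, viewport: '390x844' },
];

console.log(p75ByElement(beacons).get('img.hero @ 390x844')); // 4100
```

If the p75 differs sharply between viewport buckets, you have found your lab-versus-field gap.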
Step 2: Ship explicit fetchPriority="high" and preload the LCP image. If your LCP is consistently a hero image, hint it in Next.js:
```tsx
import Image from 'next/image';

<Image
  src="/hero.webp"
  alt="Product hero"
  width={1200}
  height={600}
  priority
  fetchPriority="high"
  sizes="(max-width: 768px) 100vw, 1200px"
/>
```
priority disables lazy loading and adds a <link rel="preload"> for the image automatically. fetchPriority="high" upgrades the request priority in Chromium browsers; recent Next.js versions already set it when priority is true, so the explicit prop is a safe redundancy for older versions. This alone closes a 600-1200ms LCP gap on most sites I audit.
Step 3: Defer or self-host third-party scripts that delay LCP. Move GTM behind a strategy that loads after LCP completes:
```tsx
import Script from 'next/script';

<Script
  id="gtm"
  strategy="lazyOnload"
  dangerouslySetInnerHTML={{
    __html: `(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
      new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
      j=d.createElement(s);j.async=true;j.src='https://www.googletagmanager.com/gtm.js?id='+i;
      f.parentNode.insertBefore(j,f);})(window,document,'script','dataLayer','GTM-XXXXXX');`,
  }}
/>
```
lazyOnload loads the script during browser idle time after the window load event, so it never competes with the LCP request. LCP is unaffected. Conversions are still tracked.
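The reason conversions survive the deferral: before gtm.js executes, `dataLayer` is a plain array, so any events your site pushes simply queue up, and GTM processes the backlog when it finally loads. A minimal model of that queue (the event payloads are hypothetical):

```typescript
// Before gtm.js runs, dataLayer is just an array; pushes accumulate in order.
// When GTM loads, it replaces push with its own handler and drains the backlog.
const dataLayer: Array<Record<string, unknown>> = [];

// Events fired while GTM is still deferred by lazyOnload:
dataLayer.push({ event: 'add_to_cart', value: 29.99 });
dataLayer.push({ event: 'page_view' });

// Deferring the loader delays reporting, but drops nothing already queued.
console.log(dataLayer.length); // 2
```

The one prerequisite is that `window.dataLayer` is initialized before any code pushes to it, which the standard GTM snippet already guarantees.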
Step 4: Match your test viewport to your actual mobile traffic. Open Chrome DevTools, switch the device toolbar to your most-used mobile device (check GA4 under Reports > Tech > Tech details), and run Lighthouse there. If LCP is 4s on a Pixel 8 but 1.4s on the Moto G the lab uses, the Moto G viewport is hiding the issue.
Step 5: Read the field tab in PageSpeed Insights, not just the lab tab. PageSpeed Insights splits LCP into four sub-parts: TTFB, load delay, load duration, and render delay. A high render delay means render-blocking resources; a high load duration means the image is too big or fetched at the wrong priority. Each sub-part points to a different fix.
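You can run the same triage against your own field beacons by checking which sub-part dominates. A sketch (the sub-part names follow the web-vitals v4 attribution build, so check your version; the hint strings are mine):

```typescript
// The four LCP sub-parts sum to (approximately) the LCP value.
// Map the dominant one to a likely fix.
type LcpParts = {
  timeToFirstByte: number;
  resourceLoadDelay: number;
  resourceLoadDuration: number;
  elementRenderDelay: number;
};

function dominantFix(parts: LcpParts): string {
  const hints: Array<[number, string]> = [
    [parts.timeToFirstByte, 'slow origin: cache at the edge or fix the backend'],
    [parts.resourceLoadDelay, 'late discovery: preload the image'],
    [parts.resourceLoadDuration, 'heavy resource: compress, resize, or raise priority'],
    [parts.elementRenderDelay, 'render-blocking work: trim blocking CSS/JS'],
  ];
  hints.sort((a, b) => b[0] - a[0]); // largest sub-part first
  return hints[0][1];
}

console.log(dominantFix({
  timeToFirstByte: 300,
  resourceLoadDelay: 1800,
  resourceLoadDuration: 600,
  elementRenderDelay: 200,
})); // 'late discovery: preload the image'
```

In that example the image itself is fine; the browser just found it too late, which is exactly what Step 2's preload fixes.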
The Lesson
When lab and field disagree, trust field. CrUX is real users; Lighthouse is one machine on a fixed network. Use web-vitals attribution to find the actual LCP element your real visitors see, then optimize for that, not for the lab's 412px viewport.
If Search Console is showing a Core Web Vitals failure right now and you cannot find the cause, I run this audit quickly; see my services page. For a related issue, I recently covered the INP regression caused by GTM third-party tags.