Core Web Vitals Guide
Core Web Vitals are Google's user experience signals tied to search ranking. They measure real user experience across loading performance (LCP), interaction responsiveness (INP), and visual stability (CLS). Understanding them requires going beyond PageSpeed scores into the measurement infrastructure behind them.
Core Web Vitals are not a speed score. They are field metrics — measurements taken from real users on real devices in real network conditions, aggregated via the Chrome User Experience Report (CrUX). A page can achieve a high lab score in PageSpeed Insights while still failing field thresholds for a significant portion of its actual audience. The gap between lab performance and field performance is where most CWV failures live, and it is the gap that matters for ranking.
What Core Web Vitals actually measure
Core Web Vitals replaced older performance proxies — First Contentful Paint, First Meaningful Paint, First Input Delay — because those metrics did not reliably capture whether a user actually experienced the page as fast, responsive, and stable. The current set of three metrics addresses distinct dimensions of perceived experience.
Largest Contentful Paint (LCP) measures loading performance — specifically, when the largest visible content element in the viewport has finished rendering. It is the metric that most closely correlates with a user's perception that the page has loaded.
Interaction to Next Paint (INP) measures interaction responsiveness — specifically, the latency between a user interaction and the next visual update. INP replaced First Input Delay (FID) as a Core Web Vital in March 2024 because FID only measured the first interaction, while INP captures the full interaction lifecycle throughout a page visit.
Cumulative Layout Shift (CLS) measures visual stability — specifically, the cumulative score of all unexpected layout shifts during a page's lifetime. A layout shift occurs when a visible element moves position without a user-initiated trigger. CLS captures whether the visual layout behaves predictably as the page loads and as content changes.
The thresholds: LCP should be 2.5 seconds or less for a "Good" rating; INP should be 200 milliseconds or less; CLS should be 0.1 or less. These thresholds are assessed at the 75th percentile of page loads for a given URL: at least 75% of real user visits must meet the threshold, which is a stricter bar than the median.
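The threshold logic can be expressed as a small rating helper. A minimal sketch in JavaScript: the "Poor" boundaries used below (4 seconds for LCP, 500 milliseconds for INP, 0.25 for CLS) are the published values separating "Needs Improvement" from "Poor".

```javascript
// Rate a Core Web Vitals field value against the published thresholds.
// LCP and INP values are in milliseconds; CLS is unitless.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  INP: { good: 200, poor: 500 },
  CLS: { good: 0.1, poor: 0.25 },
};

// The value passed in should be the 75th-percentile field value for the URL.
function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}
```

For example, `rate('INP', 350)` returns `'needs-improvement'`, matching how Search Console would classify the URL group.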
Largest Contentful Paint (LCP)
LCP measures when the largest content element in the viewport — typically a hero image, a large text block, or a video poster — has finished rendering. The metric is designed to correlate with a user's perception that the primary content of the page is visible and usable.
LCP depends on four sequential sub-processes, each of which can be a bottleneck:
- Time to First Byte (TTFB): how long the server takes to start returning the HTML document.
- Resource load delay: the gap between TTFB and the moment the browser begins fetching the LCP resource. Late discovery (for example, an image referenced only from CSS or JavaScript) inflates this.
- Resource load duration: how long the LCP resource itself takes to download.
- Element render delay: the gap between the resource finishing and the element actually painting, often caused by render-blocking scripts or client-side rendering.
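Because the four sub-parts sum to the LCP value, a small breakdown helper makes bottleneck attribution concrete. A sketch, assuming the timestamps (milliseconds relative to navigation start) have already been collected from the Navigation Timing, Resource Timing, and LargestContentfulPaint APIs; the field names here are illustrative, not a real API.

```javascript
// Split an LCP measurement into its four sequential sub-parts.
// All inputs are millisecond timestamps relative to navigation start.
function lcpSubParts({ ttfb, resourceStart, resourceEnd, lcpTime }) {
  return {
    timeToFirstByte: ttfb,
    resourceLoadDelay: Math.max(0, resourceStart - ttfb),
    resourceLoadDuration: Math.max(0, resourceEnd - resourceStart),
    elementRenderDelay: Math.max(0, lcpTime - resourceEnd),
  };
}
```

For a page with TTFB at 200 ms, the hero image fetched between 400 ms and 900 ms, and LCP at 1100 ms, the breakdown shows a 200 ms load delay and a 200 ms render delay, pointing at resource discovery and render-blocking work rather than the download itself.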
Optimisation approaches depend on which sub-process is the bottleneck. Common interventions: use fetchpriority="high" on the LCP image to prioritise discovery; eliminate render-blocking scripts above the fold; use a CDN for TTFB reduction; serve images at correct dimensions with next-gen formats (WebP, AVIF); remove unnecessary third-party scripts that contend for main thread time during early page load.
Interaction to Next Paint (INP)
INP replaced First Input Delay as a Core Web Vital in March 2024. The distinction is significant: FID measured the delay before the browser began processing the first user interaction. INP measures the full latency — from interaction to the next visual update — for every interaction during a page visit, then reports the worst one (with some outlier exclusion).
This change makes INP substantially harder to pass than FID for pages with complex client-side behaviour. A page could pass FID with a clean initial load while having severe responsiveness problems on subsequent interactions — those problems were invisible to FID, but are captured fully by INP.
The common causes of poor INP are:
- Long tasks: JavaScript that blocks the main thread for more than 50 ms, so input events queue behind it.
- Expensive event handlers: heavy synchronous work performed inside input callbacks before the next paint can happen.
- Rendering cost: large DOM sizes and forced synchronous layout (layout thrashing) that make the post-interaction paint itself slow.
- Third-party scripts contending for main-thread time throughout the page lifecycle.
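The standard mitigation for long tasks is to chunk the work and yield the main thread between chunks so queued input can be handled. A minimal sketch: in browsers that support it, `scheduler.yield()` is the purpose-built primitive, while the `setTimeout` fallback below works everywhere.

```javascript
// Yield the main thread so queued input events can run before the next chunk.
// (In supporting browsers, scheduler.yield() is the preferred primitive.)
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large batch in chunks, yielding between chunks so interactions
// stay responsive (low INP) while the work is in flight.
async function processInChunks(items, processItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    if (i + chunkSize < items.length) await yieldToMain();
  }
  return results;
}
```

The trade-off is total throughput for responsiveness: the batch finishes slightly later, but no single task blocks input long enough to register as a slow interaction.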
INP improvements typically require JavaScript profiling and refactoring — the fixes are more technical than LCP optimisation, which often reduces to infrastructure and asset delivery changes.
Cumulative Layout Shift (CLS)
CLS measures visual instability — the degree to which visible page elements move unexpectedly during the page lifecycle. A score of 0 means no layout shifts occurred. A score above 0.1 is a "Needs Improvement" rating; above 0.25 is "Poor".
The most common causes of layout shift:
- Images and videos without explicit width and height attributes (or a CSS aspect-ratio), so no space is reserved before they load.
- Ads, embeds, and iframes injected without a reserved slot of known dimensions.
- Dynamically injected content (banners, cookie notices, personalisation) inserted above existing content.
- Web fonts whose metrics differ from the fallback font, causing text to reflow when the font swaps in.
CLS debugging in Chrome DevTools: the Layout Shift regions overlay in the Rendering panel shows which elements shifted and when. The Performance panel's Experience track shows layout shift events with impact fractions and distance fractions, which together determine the shift contribution to the CLS score.
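The impact-fraction-times-distance-fraction rule can be made concrete. A deliberately simplified sketch of a single shift's score: it handles only a vertical shift of one full-width element and approximates the impacted region with a bounding box, whereas the Layout Instability spec computes the exact union of the element's old and new visible areas.

```javascript
// Score a single layout shift: impact fraction x distance fraction.
// Simplified: assumes a vertical shift of a full-width element and
// approximates the impacted region with a bounding box.
function shiftScore(before, after, viewport) {
  // Impact fraction: impacted region as a fraction of the viewport area.
  const top = Math.min(before.top, after.top);
  const bottom = Math.max(before.top + before.height, after.top + after.height);
  const impactedArea = (bottom - top) * before.width;
  const impactFraction = Math.min(1, impactedArea / (viewport.width * viewport.height));
  // Distance fraction: move distance over the viewport's largest dimension.
  const distanceFraction =
    Math.abs(after.top - before.top) / Math.max(viewport.width, viewport.height);
  return impactFraction * distanceFraction;
}
```

A 300 px-tall banner pushed down 100 px in a 400x600 viewport scores (2/3) x (1/6), roughly 0.11: past the 0.1 "Good" threshold from a single shift.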
Measurement infrastructure
Passing Core Web Vitals requires two separate measurement systems: lab tools for diagnosis and field tools for the ground truth. Optimising based on lab data alone leads to improvements that do not necessarily improve field scores — the metrics that determine ranking.
Chrome User Experience Report (CrUX). CrUX is the source of field data for Core Web Vitals in Google Search Console and PageSpeed Insights. It aggregates real user measurements from Chrome users who have opted in to usage statistics. CrUX data is URL-level (specific pages) and origin-level (the whole domain). It requires a minimum traffic threshold — low-traffic URLs may not have CrUX data.
PageSpeed Insights. PageSpeed Insights combines two data sources: CrUX field data (the "Field Data" section at the top) and a Lighthouse lab simulation (the "Lab Data" section). The lab score at the top of PageSpeed Insights is from the Lighthouse simulation, which uses a controlled network throttling profile and emulated mobile device. It is not a prediction of field performance.
Google Search Console Core Web Vitals report. The Search Console CWV report groups URLs by status (Good, Needs Improvement, Poor) based on their CrUX field data. It shows which URL groups are failing and the metric responsible. This is the most actionable field data view for systematic CWV remediation.
Real User Monitoring (RUM). For sites that need more granular field data than CrUX provides — per-segment performance, custom dimensions, immediate feedback rather than 28-day rolling averages — proprietary RUM implementations using the Web Vitals JavaScript library (available as an npm package or CDN snippet) capture CWV measurements from every user session and send them to an analytics platform.
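A minimal first-party RUM sketch using the web-vitals library. The `/analytics` endpoint is a placeholder, and the payload shape is an assumption; the metric object fields used here (name, value, id, rating) are what the library passes to its onLCP/onINP/onCLS callbacks.

```javascript
// Serialise a web-vitals metric into the payload we report.
function serializeMetric(metric) {
  return JSON.stringify({
    name: metric.name,     // 'LCP', 'INP', or 'CLS'
    value: metric.value,   // ms for LCP/INP, unitless for CLS
    id: metric.id,         // unique per page load, for deduplication
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
  });
}

// Wire up reporting in the browser. sendBeacon survives page unload,
// which matters because INP and CLS are finalised late in the page lifecycle.
function reportWebVitals() {
  if (typeof window === 'undefined') return; // browser-only
  import('web-vitals')
    .then(({ onCLS, onINP, onLCP }) => {
      const send = (metric) =>
        navigator.sendBeacon('/analytics', serializeMetric(metric));
      onCLS(send);
      onINP(send);
      onLCP(send);
    })
    .catch(console.error);
}
```

Call `reportWebVitals()` once at application startup; unlike CrUX's 28-day rolling window, data arrives per session and can carry whatever custom dimensions the analytics platform supports.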
Lighthouse. Lighthouse is a lab tool. It provides the diagnostic detail needed to identify and fix performance issues, but its scores are not directly predictive of CrUX field data. Lighthouse is the right tool for finding what to fix; CrUX and Search Console are the right tools for verifying that a fix improved field performance.
Logic Grid Studio's technical SEO and GEO visibility work includes Core Web Vitals audit and remediation — establishing field baselines, identifying the metric and sub-metric responsible for failure, and scoping the engineering changes required to reach "Good" status across the URL groups that matter for organic visibility. Services overview.