Core Web Vitals Explained: What Every Score Means and How to Fix It
LCP, CLS, INP and TTFB — what each metric really means, how Google uses Core Web Vitals as ranking signals, and exactly how to fix poor scores in 2026.
Core Web Vitals are Google's user experience metrics — three specific measurements that capture real-world loading performance, visual stability, and interactivity. Since 2021, they have been confirmed ranking signals for Google Search, and they directly affect how efficiently AI crawlers process your content.
Understanding what each metric actually measures — not just the name — is the foundation of fixing them. This guide explains every metric in depth, covers how to measure them properly, and clarifies how they influence both Google rankings and overall site quality.
What Are Core Web Vitals?
Core Web Vitals are three page experience signals Google uses to evaluate the real-world performance of web pages as experienced by actual users in Chrome. Unlike lab tests that simulate performance, Core Web Vitals use Chrome User Experience Report (CrUX) data — real measurements from real Chrome users visiting your pages.
The three Core Web Vitals are:
- LCP — Largest Contentful Paint (loading)
- CLS — Cumulative Layout Shift (visual stability)
- INP — Interaction to Next Paint (interactivity)
A fourth metric — TTFB (Time to First Byte) — is not an official Core Web Vital but is a diagnostic metric that underlies LCP and overall perceived performance. Understanding it helps diagnose root causes.
Google uses Core Web Vitals as a ranking tiebreaker: when pages are otherwise equal in quality and relevance, better Core Web Vitals scores provide a ranking advantage. The impact is not dramatic on most queries — content quality and relevance dominate — but for competitive keyword clusters where pages are closely matched, Core Web Vitals can be the deciding factor.
LCP — Largest Contentful Paint
What it measures: The time from when the page starts loading to when the largest content element visible in the viewport — typically a hero image, large text block, or above-the-fold image — finishes rendering.
Target: Good < 2.5 seconds | Needs improvement 2.5–4.0s | Poor > 4.0s
What "largest content element" means: Google identifies the largest image, video poster, or text block visible in the initial viewport. For most websites, this is the hero image or the main H1 heading. For pages where an image is present, it is almost always the image.
Why LCP matters: LCP measures when the page "looks loaded" to the user — the moment the most visually dominant element appears. Users perceive a page as loaded when the main content is visible, even if background resources are still loading. A poor LCP score means users wait and wait before seeing meaningful content.
Common causes of poor LCP scores:
- Unoptimised hero images (too large, wrong format, not preloaded)
- Render-blocking CSS or JavaScript that delays page rendering
- Slow server response times (high TTFB affects LCP directly)
- No CDN — content served from a single server geographically distant from users
- Third-party scripts blocking the main thread
The most impactful LCP fix: Preload your LCP image with <link rel="preload" as="image" href="/hero.webp"> in your page <head>. This single change frequently drops LCP by 0.5–1.5 seconds. Combine with WebP or AVIF format (20–50% smaller than JPEG), explicit width and height attributes on the <img> tag, and a CDN for maximum effect.
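Putting those fixes together, a minimal sketch of the relevant markup (the /hero.webp path comes from above; the dimensions and alt text are placeholder values):

```html
<head>
  <!-- Preload the LCP image so the browser starts fetching it immediately -->
  <link rel="preload" as="image" href="/hero.webp" fetchpriority="high">
</head>
<body>
  <!-- Explicit width/height let the browser reserve layout space before load -->
  <img src="/hero.webp" width="1200" height="630" alt="Product screenshot">
</body>
```

The `fetchpriority="high"` hint tells the browser to prioritise this fetch over other images; it is supported in Chromium-based browsers and Safari.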
CLS — Cumulative Layout Shift
What it measures: The total unexpected visual movement of page content during the page's lifespan. Measured as a score (not seconds) representing the total impact of all unexpected layout shifts.
Target: Good < 0.1 | Needs improvement 0.1–0.25 | Poor > 0.25
What causes layout shifts: Anything that changes the dimensions of an element after it has been rendered — an image loading without explicit dimensions and pushing content down, a web font loading and changing text reflow, a dynamic banner appearing at the top of the page, or an ad slot loading and displacing content.
Why CLS matters: A page that shifts content while the user is reading or about to click is frustrating and error-prone. The accidental click phenomenon — users clicking the wrong element because the layout shifted at the moment of click — is the direct user experience harm CLS measures.
Most common CLS causes and fixes:
Images without dimensions: Every <img> tag must have explicit width and height attributes. The browser reserves the space before the image loads, preventing the shift. In Next.js, the <Image> component handles this automatically.
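In plain HTML this is a one-attribute change per image; in Next.js the component requires the dimensions up front. A sketch (the file name and dimensions are placeholder values):

```jsx
import Image from 'next/image';

// width/height are required props, so layout space is always reserved;
// `priority` additionally preloads the image when it is the LCP element.
export function Hero() {
  return <Image src="/hero.webp" width={1200} height={630} alt="Hero" priority />;
}
```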
Web fonts (FOUT/FOIT): Fonts loading after initial text render cause reflow. Use font-display: swap in your @font-face declarations, and preload your primary font file with <link rel="preload">.
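A minimal sketch of both font fixes together (the font name and file path are placeholder values):

```html
<!-- Preload the primary font so it is available by first text paint -->
<link rel="preload" as="font" type="font/woff2" href="/fonts/inter.woff2" crossorigin>

<style>
  @font-face {
    font-family: 'Inter';
    src: url('/fonts/inter.woff2') format('woff2');
    /* swap: show fallback text immediately, swap in the web font when ready */
    font-display: swap;
  }
</style>
```

Note the `crossorigin` attribute: font preloads require it even for same-origin files, or the preloaded response is discarded and fetched twice.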
Dynamic content injection: If you inject banners, cookie notices, or ads into the visible viewport after initial render, reserve the space for them explicitly before the content loads. Use a min-height on the container.
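For example, if a cookie notice is injected after initial render, ship an empty container in the initial HTML and reserve its height (the 72px value is an assumed height for illustration):

```css
/* Container is present in the initial HTML; the notice is injected into it
   later, so the surrounding content never moves. */
.cookie-notice-slot {
  min-height: 72px;
}
```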
INP — Interaction to Next Paint
What it measures: The latency between a user interaction (click, tap, key press) and the next visual update (paint) the browser produces in response. INP reports the single longest interaction observed during the page visit (on pages with many interactions, the highest outliers are discounted); the 75th percentile applies across page visits, not within one: for ranking, Google assesses the 75th percentile of those per-visit values across real page loads.
Target: Good < 200ms | Needs improvement 200–500ms | Poor > 500ms
Why INP replaced FID: In March 2024, INP replaced First Input Delay (FID) as the responsiveness Core Web Vital. FID measured only the delay before the browser started processing the first interaction. INP measures the full duration of any interaction — start to next paint — and covers all interactions, not just the first. It is a more accurate representation of interactive responsiveness.
What causes poor INP:
- Long JavaScript tasks blocking the main thread (tasks > 50ms)
- Heavy third-party scripts executing on user interactions
- Synchronous event handlers doing expensive computations
- Unoptimised React renders triggered by state changes
INP improvement strategies: Break long JavaScript tasks using scheduler.postTask() or setTimeout to yield to the main thread. Defer non-critical third-party scripts using the next/script <Script> component with strategy="lazyOnload" (Next.js). Optimise React component renders with useMemo, useCallback, and code splitting.
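A sketch of the yielding pattern, assuming a hypothetical `yieldToMain` helper: `scheduler.yield()` is currently available only in Chromium-based browsers, so the helper falls back to `setTimeout` elsewhere:

```javascript
// Yield control back to the event loop so pending input can be handled.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && scheduler.yield) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large array in chunks, yielding between chunks so no single
// task blocks the main thread for the whole duration of the work.
async function processInChunks(items, processItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    await yieldToMain(); // input events queued during the chunk run here
  }
  return results;
}
```

The chunk size is a tuning knob: the goal is for each synchronous chunk to finish well under the 50ms long-task threshold on your slowest target devices.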
TTFB — Time to First Byte
What it measures: The time from when the browser sends an HTTP request to when it receives the first byte of the response from the server.
Target: Good < 800ms | Needs improvement 800ms–1800ms | Poor > 1800ms
Why TTFB matters: TTFB is the foundation of all other performance metrics. A server that takes 2 seconds to respond means LCP cannot be under 2 seconds regardless of other optimisations. High TTFB is typically a server or infrastructure problem, not a frontend problem.
Common TTFB causes: Slow server processing (database queries, API calls), no CDN (serving from a geographically distant origin), no response caching (every request hitting the origin), or an underpowered hosting plan.
TTFB fixes: Implement server-side caching (Redis, Varnish, CDN edge caching), use a CDN with global edge nodes (Cloudflare, Fastly, Vercel Edge), optimise slow database queries identified in server logs, and use a hosting tier appropriate for your traffic volume.
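As one example of CDN edge caching, a small helper (hypothetical; the durations are illustrative starting points, not universal recommendations) builds a Cache-Control header that lets the edge serve cached HTML and refresh it in the background, so most requests never reach the origin:

```javascript
// Build a Cache-Control header for CDN edge caching:
//   s-maxage: how long shared caches (the CDN) may serve the cached copy,
//   stale-while-revalidate: how long they may serve a stale copy while
//   refetching from the origin in the background.
function edgeCacheControl(sMaxAgeSeconds, staleWhileRevalidateSeconds) {
  return `public, s-maxage=${sMaxAgeSeconds}, stale-while-revalidate=${staleWhileRevalidateSeconds}`;
}

// Example: cache for 5 minutes at the edge, tolerate 1 hour of staleness
const header = edgeCacheControl(300, 3600);
```

With this pattern, TTFB for cache hits is the round trip to the nearest edge node rather than a full origin render.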
How to Measure Your Core Web Vitals
PageSpeed Insights — pagespeed.web.dev — measures both lab data (simulated) and field data (real CrUX data). Run it on your 5 highest-traffic pages. The field data section shows your actual Core Web Vitals status as Google uses them for ranking.
Google Search Console (Core Web Vitals report) — Shows field data aggregated across all your pages, segmented by Desktop and Mobile. The "Poor URLs" section lists specific pages failing each metric.
Chrome DevTools (Performance panel) — For developer-level diagnosis of specific interactions and LCP elements. The Performance Insights panel surfaces LCP timing, CLS events, and long tasks visually.
CrUX Dashboard — A free Looker Studio template that visualises CrUX field data for any public domain over time. Use it to track Core Web Vitals trends month-over-month.
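You can also collect your own field data with Google's open-source web-vitals library, which reports the same metrics CrUX collects. A browser-side sketch, assuming a hypothetical /analytics endpoint on your own backend:

```javascript
import { onLCP, onCLS, onINP, onTTFB } from 'web-vitals';

// Send each metric to your analytics endpoint once its value is final.
function report(metric) {
  navigator.sendBeacon('/analytics', JSON.stringify({
    name: metric.name,      // 'LCP', 'CLS', 'INP' or 'TTFB'
    value: metric.value,    // milliseconds (unitless for CLS)
    rating: metric.rating,  // 'good' | 'needs-improvement' | 'poor'
  }));
}

onLCP(report);
onCLS(report);
onINP(report);
onTTFB(report);
```

Unlike CrUX, this captures every visitor and every browser session you serve, so it surfaces regressions days before the 28-day CrUX aggregate moves.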
Field Data vs Lab Data: The Key Difference
Field data (CrUX) is collected from real Chrome users visiting your pages. This is what Google uses for rankings. It represents 28 days of accumulated measurements and reflects real network conditions, devices, and user behaviour.
Lab data (PageSpeed Insights, Lighthouse) simulates a page load on a controlled device with controlled network conditions. Useful for diagnosis and identifying specific issues, but does not directly reflect ranking inputs.
The discrepancy can be significant: a page might score 75 in Lighthouse lab testing but show "Good" in field data because your actual users are on fast connections and modern devices. Conversely, a page might score 95 in lab but "Poor" in field data because many of your users are on slower mobile connections.
Always use field data as the source of truth for ranking impact. Use lab data to diagnose and fix issues, then verify the fix in field data over the following 28-day window.
Frequently Asked Questions
Do Core Web Vitals affect all pages equally?
No. Pages with insufficient CrUX data (typically low-traffic pages) are grouped with similar pages for assessment. Very low-traffic pages may not have individual field data and are measured at the origin level (all pages aggregated). High-traffic pages are assessed individually and have more direct ranking impact.
How long does it take for Core Web Vitals improvements to affect rankings?
CrUX field data is a rolling 28-day aggregate. Changes you make today will only be reflected in the data after 28 days of accumulated measurements from real users. Expect 4–6 weeks between implementing fixes and seeing changes in GSC's Core Web Vitals report.
Is mobile or desktop more important for Core Web Vitals ranking?
Google uses mobile-first indexing for the vast majority of sites — mobile field data is the primary ranking input. However, both desktop and mobile scores are shown in GSC. Prioritise fixing mobile scores, particularly for pages where a significant portion of your audience arrives on mobile devices.
What is the fastest Core Web Vitals win for most SaaS sites?
Adding explicit width and height to all images (fixes CLS), preloading the LCP image (improves LCP), and deferring non-essential third-party scripts (improves INP and LCP). These three changes, implementable in a single afternoon, address the most common CWV failures.
Improve Your Scores Systematically
Core Web Vitals represent a direct line between technical performance and both rankings and user experience. Each metric has clear causes and clear fixes. Prioritise your highest-traffic pages, use field data as your benchmark, and verify improvements after the 28-day CrUX window.
OmniRank's technical audit checks Core Web Vitals status for all your key pages and surfaces the specific elements causing poor scores — or read the guide to fixing Core Web Vitals for the implementation details.
OmniRank Editorial Team
SEO & AI Research Team
The OmniRank team combines expertise in AI, SEO, and SaaS growth to deliver actionable insights that help websites rank across Google, AI search engines, and LLM citation networks.