Learning Web Performance

First Time

  1. Go to pagespeed.web.dev, type in your website, and click Analyze.
  2. Look at the Core Web Vitals Assessment, then scroll down to Diagnose performance issues.
  3. Ask yourself, “What the heck do all these numbers mean?”

Getting Started

  • User-Centric Performance Metrics – Start here. What does “performance” mean? Learn about different types of metrics, why and how they’re measured.
  • Getting Started With Measuring Web Vitals – Collect data. Learn the difference between field data (real users) and lab data (tests). “RUM” tools collect field data from real users. “Synthetic” tools collect lab data by running tests. You’ll need both on your journey to manage performance.
  • Core Web Vitals Workflows With Google Tools – Know your tools. Develop a workflow to find opportunities in your field data, choose specific pages to optimize, and use lab tools to debug and validate improvements.

Intro to Tools

  • PageSpeed Insights – Easiest tool to start with. Pulls field data from CrUX (if your site has enough traffic) and lab data from Lighthouse (tested on controlled servers). Read PSI’s documentation for more details on CrUX distributions, Lighthouse scores, and other FAQs.
  • Lighthouse – Simulation tool. Good for gut checks and quick advice. But be careful! Scores can be confusing and vary by environment (e.g., my laptop will show different scores than yours; see Lighthouse Variability). Best to compare the median of 5 or more test runs on the same machine before and after code changes to validate improvements (see the sketch after this list). Also, opportunities are only automated tips, not absolute truth. Treat them as friendly suggestions.
  • Chrome DevTools – The Swiss Army knife of development. Network, Performance, and Sources are essential tools for debugging and optimizing performance. Mastery takes time and practice, but is well worth the investment.
  • WebPageTest – Real devices. Waterfall charts. Shareable tests, video comparisons, one-click experiments, and more. “The most effective, repeatable, and all-around useful performance investigation tool.” —Alex Russell
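
If you want to script those before-and-after comparisons, here's a rough sketch using the lighthouse and chrome-launcher npm packages (the URL and run count are placeholders; adjust for your own setup):

```js
// Sketch: run Lighthouse several times and report the median performance score.
// Assumes the `lighthouse` and `chrome-launcher` npm packages (Node, ESM).
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const URL = 'https://example.com'; // placeholder test page
const RUNS = 5;

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const scores = [];

for (let i = 0; i < RUNS; i++) {
  const result = await lighthouse(URL, {
    port: chrome.port,
    onlyCategories: ['performance'],
  });
  scores.push(result.lhr.categories.performance.score * 100);
}

chrome.kill();

scores.sort((a, b) => a - b);
console.log(`Median performance score over ${RUNS} runs:`, scores[Math.floor(RUNS / 2)]);
```

The same caveats apply: scores from my machine still won't match scores from yours, so compare medians from the same environment before and after a change.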

Intro to Optimization

Take your time here. Half of web performance is just understanding how the technology works. The better you understand your system, the better equipped you are to optimize it.

For even more jumping off points, bookmark Smashing Magazine’s Front-End Performance Checklist.

For a distilled version of Udacity's web performance courses, see Web Performance Crash Course by Ilya Grigorik. There are four videos in the playlist, all very in-depth but well worth your time.

Intro to RUM

Lab data is stable. Field data is another world.

This is what real user data looks like: Wikimedia performance for the past week.

Statistics are essential. At minimum: learn how to read histograms and how to think in percentiles (not averages).

Web performance data is not normally distributed. Averages are misleading. The most confusing thing about RUM is that it is real and random: any user, anywhere, on any device, at any time, can visit your site. This makes field data noisy—much noisier than controlled lab tests.

Your site speed will change even if your code doesn’t (e.g., daily and seasonal trends; see Pat Meenan’s presentation, How Fast Is It?, at 27:17).

But that’s okay! Variation is normal. Web vitals vary just like human vitals. While analyzing real user data can be challenging, a little statistical knowledge goes a long way.

Take a deep breath, and start with the basics:

  • How to Read a RUM Histogram – Histograms visualize distributions. They’re basically bar charts that show how often results appeared across a range of values.
  • Averages, Medians & Percentiles – Averages misrepresent long-tail distributions. Instead, percentiles focus on how many people saw what. For example, the 75th percentile (P75) says, “75% of page views loaded in X seconds or less.”
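
To make those ideas concrete, here's a small sketch of computing a nearest-rank P75 and a simple histogram from a hypothetical array of LCP samples (values in milliseconds):

```js
// Sketch: compute a percentile and histogram buckets from raw metric values.
// `lcpValues` is a hypothetical array of LCP samples in milliseconds.
const lcpValues = [900, 1200, 1400, 1800, 2100, 2600, 3200, 4800, 9500];

// Nearest-rank percentile: the value at or below which p% of samples fall.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

console.log('P75 LCP:', percentile(lcpValues, 75), 'ms');

// Histogram: count how many samples fall into each 1,000ms bucket.
const buckets = {};
for (const value of lcpValues) {
  const start = Math.floor(value / 1000) * 1000;
  const label = `${start}-${start + 999}ms`;
  buckets[label] = (buckets[label] || 0) + 1;
}
console.log(buckets);
```

Try adding a few very slow samples and compare how the average and the percentile respond. That's the long tail at work.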

Chrome User Experience Report

When you’re ready to start analyzing field data, the easiest place to start is the Chrome User Experience Report.

For a quick introduction, see The Chrome UX Report – The State of the Web.

Several tools and services report field data from CrUX:

  • PageSpeed Insights shows you data from CrUX in its Core Web Vitals Assessment. Each metric shows a distribution and its P75 for the last 28 days. Click Expand view to see the full breakdown for each metric. Which metrics need the most improvement?
  • Treo Sitespeed shows you historical CrUX data. While PageSpeed Insights focuses on P75 over the last 28 days, Treo (and other services) can show you month-over-month trends. Is your site getting faster or slower over time?

For more details on how to access CrUX data, see Using the Chrome UX Report to look at performance in the field.
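
When you're ready to go beyond the dashboards, you can also query CrUX yourself. Here's a minimal sketch against the CrUX API (it assumes you've created an API key; the origin is a placeholder):

```js
// Sketch: query the Chrome UX Report API for an origin's field data.
// Assumes you have a CrUX API key; the origin below is a placeholder.
const API_KEY = 'YOUR_API_KEY';

const response = await fetch(
  `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ origin: 'https://example.com', formFactor: 'PHONE' }),
  }
);

const { record } = await response.json();

// Each metric comes back as a histogram plus a P75 value.
console.log('P75 LCP (ms):', record.metrics.largest_contentful_paint.percentiles.p75);
```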

RUM Tools and Services

CrUX isn’t the only source of field data.

You can collect your own field data using standard performance APIs (e.g., see Measure LCP in JavaScript and Custom Metrics).
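
For example, here's a quick sketch of watching LCP with the standard PerformanceObserver API (logging to the console stands in for whatever reporting you'd actually do):

```js
// Sketch: observe LCP candidates with the standard PerformanceObserver API.
const observer = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  // The last entry is the current LCP candidate; it can keep changing
  // until the user interacts with the page.
  const lcp = entries[entries.length - 1];
  console.log('LCP candidate (ms):', lcp.startTime);
});

observer.observe({ type: 'largest-contentful-paint', buffered: true });
```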

Open source libraries, like Google's web-vitals, help make this easier.
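
For instance, here's a small sketch using the web-vitals package (v3 or later; the /analytics endpoint is a placeholder for your own RUM beacon):

```js
// Sketch: collect Core Web Vitals with the open-source web-vitals library (v3+).
import { onCLS, onINP, onLCP } from 'web-vitals';

// Placeholder reporting function: beacon each metric to your own endpoint.
function sendToAnalytics(metric) {
  navigator.sendBeacon('/analytics', JSON.stringify(metric));
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```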

There are also many premium RUM services (e.g., SpeedCurve, Datadog, and New Relic, to name a few). For an example of what features RUM services can provide, see SpeedCurve RUM – Sessions Dashboard Walkthrough.

Bookmark How to Make Sense of Real User Performance Metrics and watch it later when you really start digging in and asking critical questions about your RUM data.

I’m still searching for long-form, comprehensive resources on how to analyze RUM data. Please share recommendations! Tweet @atannerhodges or email [email protected].

Rethinking Performance

“The wisest man is he who understands that he understands nothing.” —Pajama Sam

  • Page Weight Matters – Chris Zacharias made YouTube 10x faster in the lab, but when project “Feather” launched, the field data said it was actually slower. Why? How? (Hint: geography.)
  • Survivorship Bias in Web Performance – Not everyone shows up in your data. How many people leave before your analytics load? What’s your data not telling you?
  • Progressive Performance – “Your laptop is a filthy liar.” Mobile performance is harder than you think. Why is mobile so different? What are your dev tools not telling you? (Power = heat.)
  • The Mobile Performance Inequality Gap – Bookmark. Read. Your iPhone is nothing like the average Android, and the gap is growing. We need budgets, real devices, lab tests and field data to face the challenge.

Reaching Maturity

  • A Management Maturity Model for Performance – Performance is a journey, for both individuals and organizations. How we manage dictates performance. From firefighting to strategic performance, Alex Russell’s maturity model is the best framework I’ve found so far for outlining management’s path of progress.
  • Wikimedia Performance Team – Wikipedia is the gold standard for web performance. Their mission clearly aligns with the organization’s goals, and their culture sustains it. Their focus on outreach, monitoring, knowledge, and improvement is a model for all performance teams.

Frankly, I haven’t found many comprehensive resources on web performance maturity. Most discussions focus on tooling, tips, and tricks. Nearly all focus on technology. Few cover business and management. Even fewer cover statistics.

But we’re an active, ever-growing community. Our discipline is maturing too.

Hopefully this brief list helps you get started on your journey.

To continue the journey, follow the links below…

Keeping Up

Bonus: Diving Deeper

To keep this post short, I’ve had to leave out so, so many resources. If there are links you think I should add, please let me know. Eventually, I’ll follow up with a more complete list in a separate post.

Until then, here are just a few more links for those looking to dive deeper into the world of web performance:

  • My Challenge to the Web Performance Community – Lighthouse is a useful testing tool but a poor predictor of field results: “Almost half of all pages that scored 100 on Lighthouse didn’t meet the recommended Core Web Vitals thresholds.” How we talk about performance matters. Philip Walton’s recommendation: when you share performance wins, don’t just share Lighthouse scores. Share results in context: as distributions from real user data over a period of time.
  • Humans Can (Also) Measure Your Performance – Even RUM metrics can’t tell you everything. How satisfied were users with their experience? Gilles Dubuc shares how Wikipedia has explored complementing synthetic and RUM data with survey responses to measure their site performance.
  • Perceived Performance: The Only Kind That Really Matters – We tend to optimize for objective time because it’s easy to measure, but what if users don’t notice? “Time differences of 20% or less are imperceptible.” Eli Fitch shows how optimizing for perceived performance can make an even bigger impact.
  • Understanding Emotion for Happy Users – “How a user feels when using our site affects whether they’ll come back and ‘convert’ into customers.” Philip Tellis shares work the mPulse team has been doing to measure user emotions with metrics like rage clicks, wild mouse, “scrandom”, and backtracking, and how they correlate to business outcomes.
  • Relating Site Speed and Business Metrics – Ultimately, performance should always tie back to business (e.g., improving quality to meet customer needs to sustain business). Measuring performance ROI is a delicate task and, frankly, demands more rigor than we often achieve. This article provides helpful guidelines for A/B tests: run both versions of the page at the same time, prefer a server-side split, avoid other changes during the test, and measure with RUM. Tests like this require careful setup and analysis, but are one of the best ways to measure real performance impact.
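
As a rough illustration of the server-side split idea, here's a minimal sketch of a sticky 50/50 bucket in an Express-style middleware (Express, the cookie name, and the placeholder pages are all assumptions; adapt it to your own stack and feed the bucket into your RUM data):

```js
// Sketch: a sticky server-side 50/50 split for a performance A/B test.
// Assumes Express; the cookie name and responses are placeholders.
import express from 'express';

const app = express();

app.use((req, res, next) => {
  // Reuse the visitor's existing bucket, or assign one at random.
  const match = /ab_bucket=(control|variant)/.exec(req.headers.cookie || '');
  const bucket = match ? match[1] : (Math.random() < 0.5 ? 'control' : 'variant');
  res.setHeader('Set-Cookie', `ab_bucket=${bucket}; Path=/; Max-Age=2592000`);
  req.bucket = bucket; // downstream handlers serve the matching version
  next();
});

app.get('/', (req, res) => {
  // Placeholder responses: serve whichever version matches the bucket.
  res.send(req.bucket === 'variant' ? 'optimized page' : 'current page');
});

app.listen(3000);
```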


Credits

Open Graph image by Lindsay Henwood on Unsplash