We don’t like testing the patience of our customers. So, like most e-commerce businesses, the technical performance of our site matters.
Measuring performance has become a core part of our development process. But occasionally it feels like we focus too much on one specific metric: our Google PageSpeed score.
Within the team, Google PageSpeed Insights has become the first tool we reach for when measuring the performance of a page. It’s also one of the few glimpses into Google’s SEO system, revealing how they rate an aspect of your site.
An absolute, numeric score and red-amber-green rating system is also very motivating for us developers; beating your colleague’s score is often a cause for bragging rights :)
So, why is it important to augment this ranking with other measures? Here are a few (Perhaps obvious? Certainly debatable!) reasons:
A 10-point PageSpeed gain may not improve your users’ experience by 10%
So you’ve followed Insights’ advice and flipped from amber to green – pushing your score from 75 to 85. The big red deploy button has been pressed and the celebratory drinks are free-flowing.
But the next day you spot that you’ve barely nudged the needle on your Real User Metrics (RUM) dashboard. Millions of page requests don’t lie; the site is only loading a few milliseconds faster.
So why has all that effort barely registered?
Insights is a pretty smart tool; for example, it can determine that certain styles would be better off inlined (rather than downloaded) to render the initial viewport. But as with any generic tool, it can only infer so much about what your page is trying to do.
So I guess Google have to assume we know what we’re doing :P
Take, for instance, those images we pre-loaded but only displayed on user action because we wanted to avoid a noticeable flicker. Or that unavoidably expensive DOM operation we’re performing to deliver a personalised experience. Each is, perhaps, a valid page operation in the eyes of a static(-ish) analysis tool, but possibly among the biggest wins if optimised.
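That first pattern is roughly the sketch below. Everything here is hypothetical – the `makePreloader` helper isn’t from any library, and the `createImage` factory is injected only so the snippet runs outside a browser, where you’d simply pass `() => new Image()`:

```javascript
// Sketch: pre-load images into a cache so showing them later causes no flicker.
// `createImage` is injectable purely so this runs outside a browser (assumption).
function makePreloader(createImage) {
  const cache = new Map();
  return function preload(src) {
    if (!cache.has(src)) {
      const img = createImage();
      img.src = src; // in a browser, assigning src kicks off the download
      cache.set(src, img);
    }
    return cache.get(src); // repeat calls reuse the cached image
  };
}
```

Insights can’t know the second, cached call is the whole point – to a static analyser the pre-load just looks like extra bytes on the initial load.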
Key takeaways: identify the poor performers in your code using tools and your (human) understanding of its goals. Chrome’s Timeline view in DevTools is great for finding performance offenders. Teach and peer review to prevent them in the first place. Capture performance metrics as part of your test suite.
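Capturing performance metrics in a test suite can be as simple as a budget check. This is only a sketch – the metric names and thresholds below are made up for illustration, not taken from any real report:

```javascript
// Hypothetical performance budget: names and thresholds (ms) are examples.
const budget = {
  timeToFirstByte: 500,
  domContentLoaded: 2000,
  fullyLoaded: 5000,
};

// Return the metrics that blew their budget, so a test can fail on them.
function checkBudget(metrics, budget) {
  return Object.keys(budget)
    .filter((name) => metrics[name] > budget[name])
    .map((name) => `${name}: ${metrics[name]}ms (budget ${budget[name]}ms)`);
}

// In a test runner you'd assert the returned list is empty:
const failures = checkBudget(
  { timeToFirstByte: 320, domContentLoaded: 2400, fullyLoaded: 4800 },
  budget
);
console.log(failures); // ["domContentLoaded: 2400ms (budget 2000ms)"]
```

The nice side effect is that a regression shows up in CI with the offending metric named, rather than as a vague drop in a score.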
Not all your users are co-located with Google bots
Based on the screenshot Insights renders and the locale information we display on site, I’ll assume the tests are run from a limited number of geographical locations. With customers all over the globe, we cannot assume they will receive the same quality of service as, say, an east-coast Google data centre.
For example, to aid our marketing and personalisation efforts, we rely on third-party services and tags. The latency of these services can vary wildly from territory to territory, typically based on how much they’ve invested in distributing their infrastructure. In fact, we’ve seen scripts that never finish downloading in some territories!
Quick out of the blocks… but limps to the finish line
I’m still a big fan of progressive enhancement, so I like to defer my script execution as late as possible. Insights will probably love me for it.
But once the page has initially rendered and Insights has decided it’s seen enough to rate your site, what happens when my incredibly inefficient JS kicks in?
We’ve seen poorly written scripts max-out CPU after page load. On lower-powered devices – such as mobile handsets – this has resulted in an unresponsive UI.
Key takeaways: performance shouldn’t just be measured by the initial sprint. Configure your tools to run the page for a few seconds post-load. WebPagetest, for example, includes a CPU utilisation chart in its reports. Look out for sustained peaks.
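Spotting those sustained peaks can even be automated. A minimal sketch, assuming you’ve exported a series of CPU utilisation samples (one per second, WebPagetest-style) – the helper name and sample rate are my own inventions:

```javascript
// Find runs of consecutive CPU samples (%) at or above `threshold`
// lasting at least `minSeconds`. A 1-sample-per-second rate is assumed.
function sustainedPeaks(samples, threshold, minSeconds) {
  const peaks = [];
  let start = null;
  samples.forEach((value, i) => {
    if (value >= threshold) {
      if (start === null) start = i; // a peak begins
    } else {
      if (start !== null && i - start >= minSeconds) {
        peaks.push({ start, length: i - start }); // long enough to flag
      }
      start = null;
    }
  });
  // Handle a peak still running at the end of the trace.
  if (start !== null && samples.length - start >= minSeconds) {
    peaks.push({ start, length: samples.length - start });
  }
  return peaks;
}

// Five seconds of near-100% CPU after load gets flagged:
const cpu = [40, 55, 98, 99, 97, 100, 96, 30, 20];
console.log(sustainedPeaks(cpu, 95, 3)); // [{ start: 2, length: 5 }]
```

A check like this could sit alongside your other post-load assertions, so a script that pins the CPU for seconds after render fails the build rather than an unlucky user’s phone.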
Like any game, “Who’s got the best score?” can be gamed
Boy, I could write a page that gets a 95+ score. My boss would think I’m a performance guru!
But to achieve that I’d follow Google’s insights to the letter. Perhaps I’d programmatically defer loading a set of assets so Insights would think the page completed faster. Or I’d introduce a plain-as-vanilla, featureless but high-scoring interstitial view.
I love my shiny new score. But ultimately my users won’t love the unhelpful new experience.
Key takeaways: ensure you analyse multiple measures of performance, including Real User Metrics. Peer review performance.
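Those Real User Metrics only earn their keep when you look at the distribution, not just an average. A sketch of the aggregation – the sample values are invented, and in practice the load times would come from your RUM pipeline rather than a literal array:

```javascript
// Summarise a batch of RUM page-load samples (in milliseconds).
// Uses the nearest-rank method: sort, then pick the value at rank ceil(p% * n).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// Hypothetical load times: mostly fine, with a slow tail a mean would hide.
const loadTimes = [1200, 1350, 1100, 4200, 1300, 1250, 3900, 1400];
console.log(percentile(loadTimes, 50)); // 1300 – the typical experience
console.log(percentile(loadTimes, 95)); // 4200 – the tail your score ignores
```

A lab score can look great while that 95th percentile tells you a chunk of real users are waiting four seconds.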
PageSpeed Insights is a fantastic tool that encourages some worthwhile best practices. I think its most important role is encouraging developers to discover more about performance and understand how the browser processes a page. I can strongly recommend Udacity’s Website Performance Optimization course to get a firm understanding of the browser’s critical rendering path.