Our website performance commandments
Website speed and performance might be one of the most-repeated subjects in front-end land. Our front-end magician Sander walks you through his (not quite ten) performance commandments.
Before we dive into the finer details of web performance, it’s important to remind ourselves why we should be concerned about it. Our own Strategy and Service Design team has got my back.
The most important factor is the reign of mobile technology: the pervasiveness of mobile devices means we're all empowered to act on any impulse, at any given moment, and to address our specific needs right when they arise.
A few numbers to make that more tangible: 29% of smartphone users will immediately switch to another site or app if the first one doesn’t satisfy their needs (for example, they can’t find information or it’s too slow) and nearly half of web users expect a site to load in 2 seconds or less – and they tend to abandon a site that isn’t loaded within 3 seconds.
And then there’s that stat that everyone likes to throw around: Amazon’s revenue increased by 1% for every 100ms of page load time improvement. It’s not a recent fact, but that doesn’t make it any less impressive. In fact, a more recent report from Alibaba indicated that by upgrading their site to a Progressive Web App (PWA), they increased their conversions by 76% across browsers.
In short, we have high expectations and very little patience.
When it comes to our online behaviour, we all expect zero-waiting experiences. And to successfully deliver on those, we as developers need to build platforms that are fast and performant.
The first step: Measure
Above all else, measure! Whether you need to look into performance gains now, or in a month, you need data to start with.
We’ve learned that just reviewing performance on release is not enough. A commit that is functional but decimates performance can wreak havoc on your conversion rate. Your setup needs to fulfill two requirements: it should generate daily reports of the website, and it shouldn’t rely on manual check-ups (read: it should send alert emails).
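The alerting half of that can be sketched in a few lines of shell. This is a hypothetical daily job, not our actual setup: assume some monitoring tool has measured the day's speed index in milliseconds, and the 2000ms budget is an illustrative number, not a recommendation.

```shell
#!/bin/sh
# Hypothetical daily check: compare the measured speed index against a
# performance budget, and only bother a human (e.g. via an alert mail)
# when the budget is blown.
BUDGET_MS=2000

check_speed_index() {
  measured_ms=$1
  if [ "$measured_ms" -gt "$BUDGET_MS" ]; then
    echo "ALERT: speed index ${measured_ms}ms exceeds budget of ${BUDGET_MS}ms"
  else
    echo "OK: speed index ${measured_ms}ms within budget"
  fi
}

check_speed_index 1700   # within budget
check_speed_index 2500   # over budget, triggers the alert path
```

Wiring the ALERT branch to an actual mail or chat notification is then a one-liner in whatever scheduler runs the job.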
And while you’re at it: don’t just measure your own website: take a look at your competition, too.
Measure during your entire development cycle, and communicate performance as a KPI to your team – because as we all know, everyone *loves* KPIs.
Using SpeedCurve as part of our development infrastructure has very tangible benefits – it proved its worth again just the other day. The image above is an example of a staging environment we are monitoring for a project under active development. Overnight, we suddenly went from a 1.7s speed index to 2.5s.
Thanks to SpeedCurve, we immediately noticed that we had negatively impacted performance, after which we could easily figure out where we had introduced new code or assets that slowed down the initial load time. As a result, we could remediate the problem and bring the load time back down to 1.8s.
Keep your content compact
And if you’re working in a team, don’t forget your most important design asset – yes, your actual designers. Introduce a size quota for your designers – they’ll love the constraints – and never worry about resizing again!
Apart from still images, animated gifs have also become an internet staple: we just can’t get enough of them. Unfortunately, a nice-quality animated gif also tends to be pretty big in file size. That’s why browsers have recently shifted towards allowing muted videos to autoplay on mobile devices. You can read up on these mobile autoplay policies for Chrome and Safari.
To do so, try using this format instead of a gif:
```html
<video controls autoplay loop muted playsinline>
  <source src="video.mp4" type="video/mp4">
  <source src="video.webm" type="video/webm">
</video>
```
And then there’s Guetzli, Google’s newest attempt to speed up the web. Guetzli, a perceptual JPEG encoder, promises to shrink images by between 29 and 45 percent, although it uses a large amount of memory and CPU: its documentation suggests budgeting about 300MB of memory and about a minute of CPU time per megapixel of input image. However, some have voiced doubts about those numbers. The tests we’ve run so far suggest it’s worth adopting for static images on performance-critical projects.
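Those resource numbers are easy to sanity-check before you commit a build machine to the job. A small sketch – the 300MB-per-megapixel figure comes from Guetzli's own documentation, while the helper function is purely illustrative:

```shell
#!/bin/sh
# Back-of-the-envelope check for Guetzli's documented appetite:
# roughly 300 MB of RAM per megapixel of input image.
estimate_guetzli_mem_mb() {
  megapixels=$1
  echo $(( megapixels * 300 ))
}

estimate_guetzli_mem_mb 12    # a 12 MP photo needs roughly 3.6 GB of RAM

# The encode itself (assumes the guetzli binary is on your PATH;
# 84 is the lowest quality setting the tool accepts by default):
# guetzli --quality 84 input.png output.jpg
```

For a CI pipeline, that arithmetic tells you quickly whether large hero images should be encoded on the build agents or offloaded elsewhere.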
Automate, automate, automate!
If you’ve read some of our previous blog posts, you’re probably not surprised to read this. We’re big proponents of far-reaching automation throughout our ranks.
Don’t be silly, don’t do one-offs.
The reason why you should automate is not just for consistency and efficiency. You might think that if it takes you a few hours to automate something that takes you 5 seconds to do manually, it’s just not worth the effort. However, you shouldn’t underestimate the impact that those 5 seconds of frustrating, manual, clicky-copy-pasty stuff has on your working day and general productivity. And as a plus, you can pick up some valuable automation skills through these reasonably small exercises.
We also automate the process of resizing and optimising our design assets. The only human interaction should be quality control – plus, in rare cases, manually disabling or tweaking the optimisation rules for a specific image.
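A sketch of what such a pipeline step can look like, assuming ImageMagick is available; the 200KB quota and the mogrify settings are placeholder values for illustration, not our actual rules:

```shell
#!/bin/sh
# Hypothetical asset-pipeline step: flag every image that exceeds the
# size quota so it gets resized/recompressed; the rest pass through
# untouched, and a human only does quality control afterwards.
QUOTA_KB=200

file_size_kb() {
  echo $(( $(wc -c < "$1") / 1024 ))
}

needs_optimising() {
  if [ "$(file_size_kb "$1")" -gt "$QUOTA_KB" ]; then
    echo yes
  else
    echo no
  fi
}

# The automated pass over flagged files might then be (ImageMagick):
#   mogrify -path dist/ -resize '1600x1600>' -quality 82 oversized/*.jpg
```

The `1600x1600>` geometry only shrinks images that are larger than that bounding box, which keeps already-small assets from being touched at all.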
Critical Rendering Path
When optimising, the aim is to serve a usable view to the user as soon as possible. Use things like critical CSS inlining, the script async/defer attributes, etc. to minimise render-blocking resources. You’ll find plenty of coverage on how to do this – the only thing I can add is to use your performance monitoring tools to spot performance opportunities and regressions.
The big improvement in the last few years has been Ahead-of-Time (AoT) compilation.
However, doing this will only get us so far in retaining our users: they still have to wait for the app to become functional. In other words, it does nothing to improve the Time to Interactive.
So Time to Interactive is the next hurdle to take – and for ways to tackle it, I recommend Addy Osmani’s very comprehensive article on the subject.
We’re very happy with all the learnings so far and the performance we’ve got in return. The next steps for our team will be to evaluate these learnings as we enable HTTP/2, weigh preloading against service worker caching, and optimise the JS payload with tree shaking.