Performance Testing With Lighthouse

Performance is one of the hottest topics and, in my experience, one of the hardest to get right. There are tools that will help developers measure and improve a site’s performance.

The idea behind Lighthouse is that by running it periodically against your site, either staging or production, you can get a good overview of your site’s performance in the categories that Lighthouse measures.

It will give you a baseline to measure against, help you find areas to improve, and track any regressions that may have snuck past your testing.


Lighthouse is an automated tool for improving the quality of web pages. It has audits for performance, accessibility, best practices, SEO, and progressive web apps.

You can run Lighthouse in Chrome DevTools, as an extension, from the command line, or as a Node module. You give Lighthouse a URL to audit, it runs a series of audits against the page, and then it generates a report on how well the page did. Use the failing audits as indicators on how to improve the page… think of it as performance-driven development.

DevTools Audit Menu

To open DevTools press Command+Option+I on macOS or Control+Shift+I on Windows and Linux.

Lighthouse is now part of the Chrome DevTools suite of developer productivity tools. If you look at the Audits menu you will see the Lighthouse logo and the configuration options for the tool.

DevTools under Audits Menu

If you haven’t already, navigate to the page you want to test and click the Audits menu in DevTools. For the purpose of these examples, we’ll leave the default settings active; they may not all be necessary for every site, but they give a good idea of how well your app or site will work on a mid-level mobile device under less-than-optimal conditions.


The source code for Lighthouse is available in its GitHub repository and, from my perspective, the CLI is the best way to run Lighthouse without having a browser open; it also gives you access to the latest and greatest features.

To get the CLI running you first have to install Yarn. You can do so with your favorite package manager (Homebrew on macOS or Chocolatey on Windows) or by following the instructions on Yarn’s installation page.

Once Yarn is installed, download the Lighthouse code from GitHub. The easiest way is to use the Git command line tool:

git clone https://github.com/GoogleChrome/lighthouse.git

Once the code has downloaded, install the dependencies by running yarn in the repository’s root, then build the tools by running yarn build-all in the terminal.

Lighthouse runs a series of tasks in its own instance of Chrome. The results are provided in an HTML page that you can open in your default browser automatically.

The example below uses the Lighthouse CLI we just built to test a site (replace example.com with your own URL) and displays the resulting report in the user’s default browser.

node lighthouse-cli https://example.com --view

NPM Module

If you need to build Lighthouse into your build or CI processes, or don’t want to be bothered with another package manager, there is an npm module to handle your needs.

Install it like any other package. I prefer a global installation in this case:

npm i -g lighthouse

Then run it against an example site, using the same site from the CLI example:

lighthouse https://example.com --view


AV1 video in browsers

While at the Chrome Dev Summit I learned several things that made me really happy. One of them is about video: you can play AV1 video in browsers (currently Chrome, Opera, Firefox, and Edge) using the video element.

Firefox and Bitmovin have had a demo of AV1 playback for a while, but it’s hardcoded to play only in Firefox Nightly, so it wasn’t a good way to test playback capabilities.

The supported browsers each come with caveats.

  • As of this writing, AV1 support only works in Firefox Nightly
    • Must enable media.av1.enabled in about:config
  • Works in Chrome 70 and newer and Opera 57 and newer for Desktop only
    • Must enable the #enable-av1-decoder flag in chrome://flags or opera://flags
  • Supported in Edge but not IE
    • Must install the AV1 Video Extension (Beta) from the Microsoft Store
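Because support sits behind flags and extensions, it’s worth feature-detecting at runtime before offering an AV1 source. Below is a minimal sketch; the function name is my own, and the element is passed in so the logic can be exercised outside a browser (in a page you would call it with document.createElement('video')). The codec string 'av01.0.05M.08' is one example AV1 profile/level combination.

```javascript
// Sketch of a runtime check for AV1 playback support.
// canPlayType returns '', 'maybe', or 'probably'.
function supportsAv1(videoElement) {
  return videoElement.canPlayType('video/mp4; codecs="av01.0.05M.08"') !== '';
}
```

In the browser you would then decide whether to append an AV1 source element or fall back to H.264/VP9.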

Compressing the video

I took an HEVC/H.265 video in an MP4 container and converted it using ffWorks, an FFmpeg front-end.

I’ve also created the AV1 video file using FFmpeg from the command line to validate the command line pipeline but, for the purpose of this article, how we get the video is secondary to actually having it.

Using the video element

The first way to test how AV1 works in the browser is to load it directly into the page using the video element.

The example below uses a single source element to illustrate usage. In production, you will want multiple sources with different formats that the browser can select from.

The one thing I did differently for this example, and that I will do for most videos using AV1, is to fully specify the type attribute for the source, including the codecs portion. I do this because there are other formats available in MP4 containers, and we want to give browsers as much information as possible so they only download the AV1 video when they can play it.

<video controls>
  <source src="footloose.mp4" type='video/mp4; codecs="av01.0.05M.08"'>
</video>

Uploading to YouTube

YouTube allows content creators to upload AV1 videos (in an MP4 container) without erroring out but, sadly, it seems to convert them to H.264 as part of the upload process. I’ve asked YouTube Developers on Twitter if users are allowed to upload AV1 video to the platform and I’m still waiting to hear back.

So, instead of using the same video, I’ve chosen to work with a video from YouTube’s AV1 Beta Launch Playlist to see how well it works.

<iframe width="560" height="315"
 src="https://www.youtube.com/embed/VIDEO_ID"
 frameborder="0"
 allow="accelerometer; autoplay;
 encrypted-media; gyroscope; picture-in-picture"
 allowfullscreen></iframe>

(Replace VIDEO_ID with the ID of a video from the playlist.)

As you can see, the iframe embed for an AV1 video is no different from any other YouTube embedded video. The main advantage is that the files tend to be smaller than H.264/H.265 files and slightly smaller than VP9 videos.

My video experiment

When I first started working with this I thought that my video had been transcoded to H.264 on upload, but I wanted to make sure. The embed below plays as AV1 video and MP4A audio, exactly as encoded.

Once again, the embed is no different from the embed we use to play other formats supported in YouTube.

<iframe width="560" height="315"
 src="https://www.youtube.com/embed/VIDEO_ID"
 frameborder="0"
 allow="accelerometer; autoplay; encrypted-media; gyroscope;
 picture-in-picture"
 allowfullscreen></iframe>

(VIDEO_ID is a placeholder; the original embed’s video ID is not preserved here.)

And, if your browser supports it, you can play the video below:

Additional Syntax for @font-face

During Chrome Dev Summit I learned an interesting trick when working with variable fonts: rather than rely on the default values for the font properties, you should specify the boundaries (upper and lower) for each font property: weight, stretch (width), and style.

Using Roboto and its values as an example, the @font-face declaration looks like this:

@font-face {
  font-family: 'Roboto';
  src:  url('../../fonts/Roboto-min-VF.woff2') format('woff2'),
        url('../../fonts/Roboto-min-VF.woff') format('woff');
  font-weight: 250 900;
  font-stretch: 75% 100%;
  font-style: oblique -12deg 0deg;
}

We can then use the attributes as we would normally do but using the values in the range we defined in the declaration:

.semibold {
  font-weight: 575.25;
}

Make sure that you test the font-style declarations as there is a mismatch between the Open Type and CSS specs regarding the direction of negative numbers.

Future versions of browsers will change this behavior by respecting default values but, even then, it is better to explicitly set the values when creating the @font-face declaration… just to be on the safe side.
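With those ranges registered, any value inside them is fair game. A short sketch of how the other axes get used (the class names are my own; note that the CSS property for width is font-stretch):

```css
/* Values must fall inside the ranges declared in @font-face */
.condensed-light {
  font-family: 'Roboto';
  font-weight: 300;          /* anywhere in 250–900 */
  font-stretch: 80%;         /* anywhere in 75%–100% */
}

.slanted {
  font-family: 'Roboto';
  font-style: oblique -6deg; /* within the declared slant range */
}
```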

Loading multiple styles of the same font using Font Face Observer

For most of my web work, I use Font Face Observer to handle checking that the fonts have loaded.

Using the following @font-face declaration:

@font-face {
  font-family: 'Roboto';
  src: url('../../fonts/Roboto-min-VF.woff2') format('woff2');
  font-weight: normal;
  font-style: normal;
  font-display: swap;
}

Assuming that fontfaceobserver.js is already loaded, I use the following script to make sure the font loaded, adding classes to the root element based on the result so the CSS can provide a fallback when it doesn’t:

const roboto = new FontFaceObserver('Roboto');

let html = document.documentElement;

Promise.all([
  roboto.load()
]).then(() => {
  html.classList.add('fonts-loaded');
  console.log('All fonts have loaded.');
}).catch(() => {
  html.classList.add('fonts-failed');
  console.log('One or more fonts failed to load');
});
When I use multiple fonts I add new FontFaceObserver objects as variables and to the Promise.all array.

But what happens when you load variants of the same font, like so:

@font-face {
  font-family: 'Work Sans';
  src: url('../../fonts/WorkSans-Regular.woff2') format('woff2'),
    url('../../fonts/WorkSans-Regular.ttf') format('truetype');
  font-weight: normal;
  font-style: normal;
  font-display: swap;
}

@font-face {
  font-family: 'Work Sans';
  src: url('../../fonts/WorkSans-Bold.woff2') format('woff2'),
    url('../../fonts/WorkSans-Bold.ttf') format('truetype');
  font-weight: bold;
  font-style: normal;
  font-display: swap;
}
Until recently I had not realized that the FontFaceObserver constructor takes a second parameter that lists the attributes of the font we want to observe.

In the example below, the workBold definition includes the second parameter with the weight of the font we’re using in the second declaration.

The second parameter is an object with one or more of weight, style, and stretch, and it must match one of the @font-face declarations you use to load the fonts.

const work = new FontFaceObserver('Work Sans');
const workBold = new FontFaceObserver('Work Sans', {
  weight: 'bold'
});

let html = document.documentElement;

Promise.all([
  work.load(),
  workBold.load()
]).then(() => {
  html.classList.add('fonts-loaded');
  console.log('All fonts have loaded.');
}).catch(() => {
  html.classList.add('fonts-failed');
  console.log('One or more fonts failed to load');
});

Using this technique you can use Font Face Observer to load multiple instances of the same font without having to give them different names.
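The pattern generalizes: collect one load() promise per observer and flip a class on the root element based on the combined result. Here is a small sketch with the loader function injected so the logic can be tested outside a browser; the names are my own, and in a real page loadFont would be something like f => new FontFaceObserver(f.family, f.opts).load().

```javascript
// Load several fonts in parallel and report a single class name
// describing the outcome. `loadFont` maps a descriptor to a Promise.
function loadFonts(fonts, loadFont) {
  return Promise.all(fonts.map(loadFont))
    .then(() => 'fonts-loaded')
    .catch(() => 'fonts-failed');
}

// In the browser you would then do:
// loadFonts(list, f => new FontFaceObserver(f.family, f.opts).load())
//   .then(cls => document.documentElement.classList.add(cls));
```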


Default font stack

Using the default fonts for the operating system saves bandwidth (we don’t have to download the font since it’s already installed on the system) and improves performance (fewer assets to download), but it requires testing on all the platforms you’re targeting.

Browsers may also have their own issues when working with system fonts (particularly Chrome on macOS, which needs a special declaration for the system font).

The browser looks for each successive font and will use the first font that it finds either on the system or defined in CSS.

The font-family declaration looks like this:

body {
  font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto,
    Oxygen-Sans, Ubuntu, Cantarell, 'Helvetica Neue', Helvetica, Arial, sans-serif;
}

Each of the fonts is explained below:

  • system-ui is the new family defined in Fonts Module level 4 to represent the native OS font family
  • -apple-system is San Francisco, used on iOS and macOS (not Chrome, however)
  • BlinkMacSystemFont is San Francisco, used on Chrome for macOS
  • Segoe UI is used on Windows 10
  • Roboto is used on Android
  • Oxygen-Sans is used on GNU+Linux
  • Ubuntu is used on Linux
  • "Helvetica Neue" and Helvetica are used on macOS 10.10 and below (Helvetica Neue is wrapped in quotes because the name contains spaces)
  • Arial is a font widely supported by all operating systems
  • sans-serif is the fallback sans-serif font if none of the other fonts are supported
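The first-match behavior described above can be sketched as a simple lookup. This is only an illustration: the set of installed families is hypothetical, and real matching happens inside the browser’s font system (generic families like sans-serif always resolve).

```javascript
// Walk a font stack and return the first family available on the
// (hypothetical) system, treating generic families as always present.
function resolveFont(stack, installedFamilies) {
  const generic = new Set(['serif', 'sans-serif', 'monospace', 'cursive', 'fantasy']);
  for (const family of stack) {
    if (generic.has(family) || installedFamilies.has(family)) {
      return family;
    }
  }
  return null; // the browser would fall back to its default font
}
```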

Caveats and warnings

If you’ve installed any of these fonts yourself (particularly Roboto and Ubuntu), browsers that don’t use them as the system default may produce unexpected results.

If working with Oxygen Sans you need to pay special attention: Google Fonts offers a different font named Oxygen (both serif and sans) so, if you download it, you may get unexpected results.

System Fonts

OS          Version                               System Font
Mac OS X    El Capitan, Sierra and High Sierra    San Francisco
Mac OS X    Yosemite                              Helvetica Neue
Mac OS X    Mavericks                             Lucida Grande
Windows     Vista                                 Segoe UI
Windows     XP                                    Tahoma
Windows     3.1 to ME                             Microsoft Sans Serif
Android     Ice Cream Sandwich (4.0)+             Roboto
Android     Cupcake (1.5) to Honeycomb (3.2.6)    Droid Sans
Ubuntu      All versions                          Ubuntu

Links and Resources