Relative Time in JavaScript

I wasn’t aware that there is a completely separate standard for ECMAScript internationalization (ECMA-402 or the ECMAScript 2019 Internationalization API) that goes beyond the core specification and covers internationalization in a lot more detail than the main specification can.

I came across this spec through a post from Mathias Bynens about a new internationalization API available in Chrome 71. The API makes creating relative time strings like ‘1 week ago’ or ‘in 2 weeks’ easier and faster since it’s a part of the proposed internationalization spec.

I’ve used moment.js but it’s a beast in terms of file size (16.4K for the basic package and 66.4K for the full package including locales) and most of the time you will only use a fraction of the locales provided.

The relative time API, as implemented in Chrome, starts by creating a formatter object:

const rtf = new Intl.RelativeTimeFormat('en', {
  localeMatcher: 'best fit',
  style: 'long',
  numeric: 'auto',
});

And then use it like this:

rtf.format(3.14, 'second'); // → 'in 3.14 seconds'

rtf.format(-15, 'minute'); // → '15 minutes ago'

rtf.format(8, 'hour'); // → 'in 8 hours'

rtf.format(-2, 'day'); // → '2 days ago'

You can use positive and negative values to indicate time in the future or the past.
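The `numeric` option is worth calling out: with `'auto'` the formatter can use idiomatic phrases instead of numbers. A quick sketch comparing the two settings:

```javascript
// With numeric: 'auto', the formatter may use words like
// 'yesterday' and 'tomorrow' instead of '1 day ago' / 'in 1 day'.
const auto = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
const always = new Intl.RelativeTimeFormat('en', { numeric: 'always' });

console.log(auto.format(-1, 'day'));   // → 'yesterday'
console.log(always.format(-1, 'day')); // → '1 day ago'
```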

This is an interesting API: it provides a smaller, built-in way to work with relative times on your page.

Links and Resources

Performance Testing With Lighthouse

Performance is one of the hottest topics and, in my experience, one of the hardest to get right. There are tools that will help developers measure and improve a site’s performance.

The idea behind Lighthouse is that by running it against your site periodically, either in staging or in production, you can get a good overview of your site’s performance in the categories that Lighthouse measures.

It will give you a baseline to measure against, find areas to improve on and track any regression that may have snuck by your testing.

Lighthouse

Lighthouse is an automated tool for improving the quality of web pages. It has audits for performance, accessibility, best practices, SEO, and progressive web apps.

You can run Lighthouse in Chrome DevTools, as an extension, from the command line, or as a Node module. You give Lighthouse a URL to audit, it runs a series of audits against the page, and then it generates a report on how well the page did. Use the failing audits as indicators on how to improve the page… think of it as performance-driven development.

DevTools Audit Menu

To open DevTools, press Command+Option+I on macOS or Control+Shift+I on Windows and Linux.

Lighthouse is now part of the Chrome DevTools suite of developer productivity tools. If you look at the Audits menu you will see the Lighthouse logo and the configuration for the tool.

DevTools under Audits Menu

If you haven’t already, navigate to the page you want to test and click the Audits menu in DevTools. For these examples we’ll leave the default settings active; they may not all be necessary for every site, but they give a good idea of how your app or site will work on a mid-level mobile device under less-than-optimal conditions.

CLI

The source code for Lighthouse is available in its GitHub repository and, from my perspective, it’s the best way to run Lighthouse without having a browser open; it also gives you access to the latest and greatest features.

To get the CLI running you have to install Yarn. You can do so with your favorite package manager (Homebrew on Mac or Chocolatey on Windows) or by following the instructions on Yarn’s installation page.

Once Yarn is installed, download the Lighthouse code from Github. The easiest way is to use the Git command line tool:

git clone --recursive https://github.com/googlechrome/lighthouse.git

Once the repository and its submodules have downloaded, install the dependencies with yarn and then build the tools by running yarn build-all in the terminal.

Lighthouse runs a series of tasks in its own instance of Chrome. The results are provided in an HTML page that you can open in your default browser automatically.

The example below will use the Lighthouse CLI we just built to test a site and display the result document in the user’s default browser.

node lighthouse-cli/ https://web-layout-experiments.firebaseapp.com --view

NPM Module

If you need to build Lighthouse into your build or CI processes, or don’t want to be bothered with another package manager, there is an NPM module to handle your needs.

Install it like any other package. I prefer a global installation in this case:

npm i -g lighthouse

Then run it against an example site. Using the site from the CLI example:

lighthouse https://web-layout-experiments.firebaseapp.com
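For CI use, the JSON output (`lighthouse --output=json`) is handy because you can track scores over time. The sketch below pulls the 0–100 category scores out of a report; the `report` object here is a trimmed, hypothetical stand-in for a real Lighthouse report:

```javascript
// Extract rounded 0–100 scores from a Lighthouse JSON report.
// Real reports store each category under `categories` with a
// fractional `score` between 0 and 1.
function categoryScores(report) {
  return Object.fromEntries(
    Object.values(report.categories).map((c) => [c.id, Math.round(c.score * 100)])
  );
}

// Trimmed stand-in for a real report, for illustration only.
const report = {
  categories: {
    performance: { id: 'performance', score: 0.92 },
    accessibility: { id: 'accessibility', score: 0.87 },
  },
};

console.log(categoryScores(report)); // → { performance: 92, accessibility: 87 }
```

A script like this can fail the build when a score drops below a threshold you choose.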

AV1 video in browsers

While at the Chrome Dev Summit I learned several things that made me really happy. One of them is about video: you can play AV1 video in browsers (currently Chrome, Opera, Firefox, and Edge) using the video element.

Firefox and Bitmovin have had a demo for AV1 playback for a while but it’s hardcoded to play in Firefox nightly so it wasn’t a good way to test playback capabilities.

The supported browsers each come with caveats.

  • As of this writing, AV1 support only works in Firefox Nightly
    • Must enable media.av1.enabled in about:config
  • Works in Chrome 70 and newer and Opera 57 and newer for Desktop only
    • Must enable the #enable-av1-decoder flag in chrome://flags or opera://flags
  • Supported in Edge but not IE
    • Must install the AV1 Video Extension (Beta) from the Microsoft Store

Compressing the video

I took an HEVC/H.265 video in an MP4 container and converted it using ffWorks, an FFmpeg front-end.

I’ve also created the AV1 video file using FFmpeg from the command line to validate the command-line pipeline but, for the purpose of this article, how we get the video is secondary to actually having it.

Using the video element

The first way to test how AV1 works in the browser is to load it directly into the page using the video element.

The example below uses a single source element to illustrate usage. In production, you will want multiple sources with different formats that the browser can select from.

The one thing I did differently for this example, and that I will do for most videos using AV1, is to fully specify the type attribute for the source, including the codecs portion. I do this because there are other formats available in MP4 containers and we want to give browsers as much information as possible to make sure they only download the AV1 video when they can play it.

<video  controls
        playsinline
        class="video"
        poster="images/poster.jpg">
  <source src="footloose.mp4" type='video/mp4; codecs="av01.0.05M.08"'>
</video>
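On the script side you can make the same decision with `canPlayType`. The helper below is a hypothetical sketch; the `canPlayType` function is injected (on a page you would pass `video.canPlayType.bind(video)`) so the selection logic can run outside a browser:

```javascript
// Pick the first source whose MIME type the browser reports it can
// play. canPlayType returns '', 'maybe', or 'probably'; an empty
// string means definitely unsupported.
function pickPlayableSource(sources, canPlayType) {
  return sources.find((s) => canPlayType(s.type) !== '') || null;
}

// Hypothetical source list mirroring the markup above.
const sources = [
  { src: 'footloose-av1.mp4', type: 'video/mp4; codecs="av01.0.05M.08"' },
  { src: 'footloose-h264.mp4', type: 'video/mp4; codecs="avc1.4d401f"' },
];

// Simulate a browser that only plays H.264 in MP4.
const fakeCanPlayType = (type) => (type.includes('avc1') ? 'probably' : '');

const chosen = pickPlayableSource(sources, fakeCanPlayType);
console.log(chosen.src); // → 'footloose-h264.mp4'
```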

Uploading to YouTube

YouTube allows content creators to upload AV1 videos (in an MP4 container) without erroring out but, sadly, it seems to convert them to H.264 as part of the upload process. I’ve asked YouTube Developers on Twitter whether users are allowed to upload AV1 video to the platform and I’m still waiting to hear back.

So, instead of using the same video, I’ve chosen to work with a video from YouTube’s AV1 Beta Launch Playlist to see how well it works.

<iframe width="560" height="315"
 src="https://www.youtube.com/embed/Fmdb-KmlzD8"
 frameborder="0"
 allow="accelerometer; autoplay;
 encrypted-media; gyroscope; picture-in-picture"
  allowfullscreen></iframe>

As you can see, the iframe embed for an AV1 video is no different from any other YouTube embedded video. The main advantage is that the files tend to be smaller than H.264/H.265 and slightly smaller than VP9 videos.

My video experiment

When I first started working with this I thought my video had been transcoded to MP4 on upload, but I wanted to make sure: the embed below plays AV1 video and mp4a audio, exactly as encoded.

Once again, the embed is no different from the embed we use to play other formats supported on YouTube.

<iframe
allow="accelerometer; autoplay; encrypted-media; gyroscope;
picture-in-picture"
  allowfullscreen
  frameborder="0"
  height="315"
  src="https://www.youtube.com/embed/ZYidbf5Jtfc"
  width="560"></iframe>

And, if your browser supports it, you can play the video below:

Additional Syntax for @font-face

During Chrome Dev Summit I learned an interesting trick for working with variable fonts. Rather than rely on the default values for the font properties, you need to specify the boundaries (upper and lower) of each font property: weight, width, and style.

Using Roboto and its values as an example, the @font-face declaration looks like this:

@font-face {
  font-family: 'Roboto';
  src:  url('../../fonts/Roboto-min-VF.woff2') format('woff2'),
        url('../../fonts/Roboto-min-VF.woff') format('woff');
  font-weight: 250 900;
  font-stretch: 75% 100%;
  font-style: oblique -12deg 0deg;
}

We can then use the attributes as we would normally do but using the values in the range we defined in the declaration:

.semibold {
  font-weight: 575.25;
}

Make sure that you test the font-style declarations, as there is a mismatch between the OpenType and CSS specs regarding the direction of negative angles.

Future versions of browsers will change this behavior by respecting default values but, even then, it is better to explicitly set the values when creating the @font-face declaration… just to be on the safe side.
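As a mental model, values outside the registered range render as if clamped to the nearest boundary. A tiny sketch, using the weight range from the Roboto example above (250–900); the helper name is mine, not part of any API:

```javascript
// Mimic how a browser treats out-of-range weights for a variable
// font registered with `font-weight: 250 900`: requests outside the
// range are clamped to the nearest boundary.
function clampWeight(requested, min = 250, max = 900) {
  return Math.min(Math.max(requested, min), max);
}

console.log(clampWeight(575.25)); // → 575.25 (in range, used as-is)
console.log(clampWeight(1000));   // → 900 (clamped to the upper bound)
```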

Loading multiple styles of the same font using Font Face Observer

For most of my web work, I use Font Face Observer to handle checking that the fonts have loaded.

Using the following @font-face declaration:

@font-face {
  font-family: 'Roboto';
  src: url('../../fonts/Roboto-min-VF.woff2') format('woff2');
  font-weight: normal;
  font-style: normal;
  font-display: swap;
}

Assuming that fontfaceobserver.js is already loaded, I use the following script to add classes based on whether the font loaded, providing a fallback when it doesn’t:

    const roboto = new FontFaceObserver('Roboto');

    let html = document.documentElement;

    html.classList.add('fonts-loading');

    Promise.all([
      roboto.load(),
    ]).then(() => {
      html.classList.remove('fonts-loading');
      html.classList.add('fonts-loaded');
      console.log('All fonts have loaded.');
    }).catch(() => {
      html.classList.remove('fonts-loading');
      html.classList.add('fonts-failed');
      console.log('One or more fonts failed to load');
    });

When I use multiple fonts I add new FontFaceObserver objects as variables and to the Promise.all array.

But what happens when you load variants of the same font, like so:

@font-face {
  font-family: 'Work Sans';
  src: url('../../fonts/WorkSans-Regular.woff2') format('woff2'),
    url('../../fonts/WorkSans-Regular.ttf') format('truetype');
  font-weight: normal;
  font-style: normal;
  font-display: swap;
}

@font-face {
  font-family: 'Work Sans';
  src: url('../../fonts/WorkSans-Bold.woff2') format('woff2'),
    url('../../fonts/WorkSans-Bold.ttf') format('truetype');
  font-weight: bold;
  font-style: normal;
  font-display: swap;
}

Until recently I had not realized that there was a second parameter that lists the attributes of the font variant we want to observe.

In the example below, the workBold definition includes the second parameter with the weight of the font we’re using in the second declaration.

The second parameter is an object with one or more of weight, style, and stretch and it must match one of the font declarations you use to load the fonts.

    const work = new FontFaceObserver('Work Sans');
    const workBold = new FontFaceObserver('Work Sans', {
      weight: 'bold'
    });

    let html = document.documentElement;

    html.classList.add('fonts-loading');

    Promise.all([
      work.load(),
      workBold.load(),
    ]).then(() => {
      html.classList.remove('fonts-loading');
      html.classList.add('fonts-loaded');
      console.log('All fonts have loaded.');
    }).catch(() => {
      html.classList.remove('fonts-loading');
      html.classList.add('fonts-failed');
      console.log('One or more fonts failed to load');
    });

Using this technique you can use Font Face Observer to load multiple instances of the same font without having to give them different names.
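As the list of variants grows, one way to keep the script manageable is to drive the observers from a data structure. This is a sketch of my own, not part of the Font Face Observer API; the constructor is injected so the wiring can run without a browser (on a page you would pass the real `FontFaceObserver`):

```javascript
// Build one observer per descriptor and wait for all of them.
// `Observer` is injected so the wiring can be exercised with a stub.
function loadFonts(descriptors, Observer) {
  return Promise.all(
    descriptors.map(({ family, options }) =>
      new Observer(family, options).load()
    )
  );
}

// Stub standing in for FontFaceObserver in this sketch.
class StubObserver {
  constructor(family, options) {
    this.family = family;
    this.options = options;
  }
  load() {
    const weight = this.options ? this.options.weight : 'normal';
    return Promise.resolve(`${this.family} ${weight}`);
  }
}

loadFonts(
  [
    { family: 'Work Sans' },
    { family: 'Work Sans', options: { weight: 'bold' } },
  ],
  StubObserver
).then((loaded) => console.log(loaded)); // → ['Work Sans normal', 'Work Sans bold']
```

On the page, the `.then()`/`.catch()` handlers would add the `fonts-loaded` or `fonts-failed` classes exactly as in the scripts above.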

Links