Rolling your own Node CLI tools

Most modern libraries and frameworks have a command line interface that makes it easier to build your own applications with that framework. Look at Angular CLI or Polymer CLI for an idea of what I’m talking about.

The idea is that we can create a tool like a Yeoman generator without having to learn the way Yeoman works.

Getting started

To start, create a package.json file.

npm init

We’ll install the application’s dependencies next. Note that as of NPM 5.0 we don’t need to indicate we’re saving them as dependencies (--save) as this is assumed when you install a package.

npm install caporal colors prompt shelljs

Edit the package.json file so that it looks like the example below.

Pay special attention to the bin section of the package file. We’ve created a scaffold script that will run index.js.

  "name": "scaffold",
  "version": "1.0.0",
  "main": "index.js",
  "bin": {
    "scaffold": "index.js"
  "dependencies": {
    "caporal": "^0.3.0",
    "colors": "^1.1.2",
    "prompt": "^1.0.0",
    "shelljs": "^0.7.7"

Building the app

At a minimum, Caporal requires:

  • a command
  • one or more arguments
  • an action to execute when we use the command

The example below will take a template and, optionally, a variant of the template we want to create. It will log the arguments and options to the console.

#!/usr/bin/env node

const prog = require('caporal');

prog
  .version('1.0.0')
  .command('create', 'Create a new application')
  .argument('<template>', 'Template to use')
  .option('--variant <variant>', 'Which <variant> of the template we\'ll create')
  .action((args, options, logger) => {
    // Log the arguments and options we received
    console.log({
      args: args,
      options: options
    });
  });

prog.parse(process.argv);


To test the program, use npm link to add the project to our global npm space without having to publish it to the NPM registry and download it back to our development workstation.

npm link

This will put your scripts in your path and allow you to run your script without prepending the full path.

scaffold --help

If we run the full command:

scaffold create node --variant mvc

We should get this in response:

{ args: { template: 'node' }, options: { variant: 'mvc' } }

I know, we don’t really need to log the data we’ve just entered, so we can change the action to point to a separate file (discussed below).

#!/usr/bin/env node

const prog = require('caporal');
const createCommand = require('./lib/create');

prog
  .version('1.0.0')
  .command('create', 'Create a new element')
  .argument('<element>', 'Element template to use')
  .option('--element-name <name>', 'What element we want to create')
  .action(createCommand);

prog.parse(process.argv);


The create.js file hosts all the application logic for the create command and is isolated from other commands and the Caporal logic itself. We’re using Shelljs to run Unix commands in our project and prompt to get user input.

The commands will prompt the user, replace placeholder elements, and copy content from our template to the elements we create.

const prompt = require('prompt');
const shell = require('shelljs');
const fs = require('fs');
const colors = require("colors/safe");

// Set the prompt message and color it blue
prompt.message ="Replace");

// Command function
module.exports = (args, options, logger) => {
  const variant = options.variant || 'default';
  const elementPath = `${__dirname}/../elements/${args.element}/${variant}`;
  const localPath = process.cwd();
  // File variables
  const variables = require(`${elementPath}/_variables`);

  // Copy Element
  // If the element path doesn't exist, bail
  if (!fs.existsSync(elementPath)) {
    logger.error(`The requested element for ${args.element} wasn't found.`);
  } else {
    // Otherwise copy the files'Copying files…');
    shell.cp('-R', `${elementPath}/*`, localPath);'✔ The files have been copied!');

    // Remove the variables file from the current directory
    // since we only need it in the template directory
    if (fs.existsSync(`${localPath}/_variables.js`)) {
      shell.rm(`${localPath}/_variables.js`);
    }'Please fill the following values…');

    // Ask for variable values
    prompt.start().get(variables, (err, result) => {

      // Remove the MIT license file if another license is selected
      if (result.license !== 'MIT') {
        shell.rm(`${localPath}/LICENSE`);
      }

      // Replace variable values in all files'-Rl', '.').forEach(entry => {
        if (entry.isFile()) {
          // Replace '[VARIABLE]' with the variable value
          // from the prompt
          variables.forEach(variable => {
            shell.sed('-i', `\\[${variable.toUpperCase()}\\]`, result[variable],;
          });

          // Insert the current year in files
          shell.sed('-i', '\\[YEAR\\]', new Date().getFullYear(),;
        }
      });'✔ Success!');
    });
  }
};

Building templates

Now that we have the logic for creating elements, let’s look at creating the templates we’ll use as the original for each element.

Each element has its own template and we may have more than one set of templates for our application. The structure may look like this tree.

├── elements
│   ├── default
│   ├── publish-video
│   ├── publish-content
│   └── publish-grid
└── application

We’ll look at the default element that will act as our default template. Each element will mirror the structure of this default element. The directory has the following structure:

├── LICENSE
├── _variables.js
├── lib
├── package.json
└── myElement.js

  • LICENSE is our element’s license, MIT by default
  • _variables.js contains the variables we want to replace in the templates
  • lib is a directory to hold any additional files we need
  • package.json is a template for the element’s package file
  • myElement.js is the element’s main script

_variables.js contains the variables we want to replace in our elements. The replacement and processing are handled in the create script so we’ll just present the file as is.

//  Variables to replace
//  They are asked to the user as they appear here.
//  User input will replace the placeholder values
//  in the template files

module.exports = [
  'name',
  'version',
  'description',
  'author',
  'license'
];

The package.json file is an example of what the template looks like. Variables to be replaced match the names in _variables.js.

  "name": "[NAME]",
  "version": "[VERSION]",
  "description": "[DESCRIPTION]",
  "main": "server.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node server.js",
    "start:dev": "nodemon server.js"
  "author": "[AUTHOR]",
  "license": "[LICENSE]",
  "dependencies": {
    "dotenv": "^2.0.0",
    "hapi": "^16.1.0",
    "hoek": "^4.1.0"
  "devDependencies": {
    "nodemon": "^1.11.0"

myElement.js is the core of the template. We create a V1 Custom Element and take advantage of features in the specification, working with observed attributes and reflecting changes in the attributes to the code and vice versa.

Custom Elements V1 use ES2015 classes exclusively.

class myElement extends HTMLElement {

  static get observedAttributes() {
    return ['disabled'];
  }

  // A getter/setter for a disabled property.
  get disabled() {
    return this.hasAttribute('disabled');
  }

  set disabled(val) {
    if (val) {
      this.setAttribute('disabled', '');
    } else {
      this.removeAttribute('disabled');
    }
  }

  // Can define constructor arguments if you wish.
  constructor() {
    // If you define a constructor, always call super() first!
    // This is specific to CE and required by the spec.
    super();
  }

  attributeChangedCallback(name, oldValue, newValue) {
    // When the element is disabled, update keyboard/screen reader behavior.
    if (this.disabled) {
      this.setAttribute('tabindex', '-1');
      this.setAttribute('aria-disabled', 'true');
    } else {
      this.setAttribute('tabindex', '0');
      this.setAttribute('aria-disabled', 'false');
    }
  }
}

// Associate the custom element with the class we just created
customElements.define('my-element', myElement);

So now that we have it all together we can publish the tool to NPM if we so choose. Because we linked the CLI tool using npm link, we can run it without uploading it.

PWA Starter: The PWA Checklist

Text from Google’s PWA Checklist used under a Creative Commons Attribution 3.0 License.

Google’s PWA checklist presents both a basic and an advanced set of requirements for Progressive Web Applications. As I mentioned earlier, not all these requirements are technical; some deal with the performance of your application/site and with best practices in responsive web design.

I’ve grouped them into two categories: Basic and Advanced requirements. I will comment on individual entries as needed.

Basic Requirements

Serving through HTTPS is a requirement for service workers and most, if not all, modern features available in browsers. Whether you run a PWA or not, you should consider moving your site to secure hosting.

If cost is an issue there are ways to obtain low cost or free SSL certificates. Tools like Let’s Encrypt make it trivial to obtain certificates and install them on your server.

Site is served over HTTPS
To Test Use Lighthouse to verify Served over HTTPS
To Fix Implement HTTPS and check out to get started

You’re already creating responsive content, right? As designers and developers we know that it’s not enough to do a desktop-first design and then tweak it for other form factors.

Even if we look at browser popularity we might be surprised at the results. According to StatCounter ( the most popular browsers worldwide (08/16 to 08/17) are:

Browser Market Share
Chrome 54.89%
Safari 14.88%
UC Browser 7.43%
Firefox 5.9%
Opera 4%
IE 3.69%

If you’re looking at working in emerging markets, the figures change. According to StatCounter, the browser market share for mobile devices in Asia breaks down like this (same time period as before).

Browser Market Share
Chrome 50.46%
UC Browser 22.23%
Safari 10.27%
Opera 6.38%
Samsung Internet 5.97%
Android 3.31%

So any way you look at it, it pays to be responsive.

Pages are responsive on tablets & mobile devices
To Test Use Lighthouse to verify Yes to all of Design is mobile-friendly, although manually checking can also be helpful.
Check the Mobile Friendly Test
To Fix Look at implementing a responsive design, or adaptively serving a viewport-friendly site.

Use the service worker to make sure that at least the entry page to your site works while offline. This may be just a matter of caching the assets for the starting URL when you install the service worker (as we saw earlier).

Runtime caching should take care of caching the rest of your PWA.

The start URL (at least) loads while offline
To Test Use Lighthouse to verify URL responds with a 200 when offline.
To Fix Use a Service Worker.


Metadata provided for Add to Home screen
To Test Use Lighthouse to verify User can be prompted to Add to Home screen is all Yes.
To Fix Add a Web App Manifest to your project.
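For reference, a minimal manifest might look like the sketch below (the names, colors and icon paths are placeholders); link it from every page with <link rel="manifest" href="/manifest.json">.

{
  "name": "My Application",
  "short_name": "MyApp",
  "start_url": "/index.html",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    {
      "src": "/images/icons/icon-192x192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}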

Working with a service worker will smooth out network issues and increase the perceived performance of your site or app. That said, we still want the content to load fast, and we should take all the steps we can to make sure that happens, even for older browsers that don’t support service workers or the polyfill.

First load fast even on 3G
To Test Use Lighthouse on a Nexus 5 (or similar) to verify time to interactive <10s for first visit on a simulated 3G network.
To Fix There are many ways to improve performance.
You can understand your performance better by using Pagespeed Insights (aim for score >85) and SpeedIndex on WebPageTest (aim for <4000 first view on Mobile 3G Nexus 5 Chrome)
A few tips are to focus on loading less script, make sure as much script is loaded asynchronously as possible using <script async> and make sure render blocking CSS is marked as such.
You can also look into using the PRPL pattern and tools like PageSpeed Module on the server.
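As an illustration, a page head following these tips might look like the sketch below; the file names are placeholders, and the rel="preload" trick for stylesheets assumes a loadCSS-style fallback for browsers without support.

<head>
  <!-- Inline the small amount of CSS needed for the first render -->
  <style>/* critical styles here */</style>

  <!-- Load the rest of the CSS without blocking render -->
  <link rel="preload" href="/css/site.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">

  <!-- Load scripts asynchronously so they don't block parsing -->
  <script async src="/js/app.js"></script>
</head>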

Browsers are getting there in interoperability but they are not there quite yet. Make sure that it works on all your target browsers for mobile and desktop. Fix whatever doesn’t work (duh).

Site works cross-browser
To Test Test site in Chrome, Edge, Firefox and Safari
To Fix Fix issues that occur when running the app cross-browser


Page transitions don’t feel like they block on the network
Transitions should feel snappy as you tap around, even on a slow network, a key to perceived performance.
To Test Open the app on a simulated very slow network. Every time you tap a link/button in the app the page should respond immediately, either by:

  • Transitioning immediately to the next screen and showing a placeholder loading screen while waiting for content from the network.
  • Showing a loading indicator while the app waits for a response from the network
To Fix If using a single-page-app (client rendered), transition the user to the next page immediately and show a skeleton screen and use any content such as title or thumbnail already available while content loads.


Each page has a URL
To Test Ensure individual pages are deep linkable via their URLs, and that URLs are unique for the purpose of shareability on social media, by testing that individual pages can be opened and directly accessed via new browser windows.
To Fix If building a single-page app, make sure the client-side router can re-construct app state from a given URL.

Advanced Requirements (Exemplary PWAs)

The basic requirements for a PWA represent an “MVP” of what a PWA can be. These advanced requirements point to additional resources, concepts and ideas that will make the app even better.

These fall under the “nice to have” or “these are optional but strongly encouraged” parts of a PWA.

Indexability & Social

Whether you build a PWA or a regular site, you want to make sure that search engines can find your app and show it to potential users. Because the concept of PWA is new, there hasn’t been much done in the search engine and discoverability spaces. Microsoft’s Bing search engine has taken the lead in this area.

We [Microsoft] are already using the Bing Crawler to identify PWAs on the Web for our PWA research. The Web App Manifest is a proactive signal from developers that a given website should be considered an app; we’re listening to that signal and evaluating those sites as candidates for the Store. Once we identify quality PWAs, we’ll automatically generate the APPX wrapper format used by the Windows Store and assemble a Store entry based on metadata about the app provided in the Web App Manifest.

Aaron Gustafson – Progressive Web Apps and the Windows Ecosystem

For more information, see Google’s guide to social optimization and social discovery.

Site’s content is indexed by Google
To Test Use the Fetch as Google tool to preview how Google will see your site when it is crawled.
To Fix Google’s indexing system does run JavaScript but some issues may need to be fixed to make content accessible. For example, if you are using new browser features like the Fetch API, ensure that they are polyfilled in browsers without support.

Just like regular pages, the pages of a PWA benefit from metadata. This will help Google and other search engine crawlers to better index your content. See the examples under the To Fix section for ideas of what you can do with metadata. Also check James Williams’ course on HTML5 structured data on LinkedIn Learning.

Note that this refers to generic metadata, not what you’d use for Facebook or Twitter; that comes in the next section. metadata is provided where appropriate metadata can help improve the appearance of your site in search engines.
To Test Use the testing tool to ensure title, image, description etc. are available.
To Fix Markup the content. For example:
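A minimal JSON-LD sketch, with placeholder values, might look like this:

<script type="application/ld+json">
{
  "@context": "",
  "@type": "Article",
  "headline": "Article headline",
  "image": "",
  "author": {
    "@type": "Person",
    "name": "Author Name"
  },
  "datePublished": "2017-08-01"
}
</script>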

Social metadata is the other part of making your application known, and includes Facebook, Twitter, Google+ and others.

Social metadata is provided where appropriate
To Test
  • Open a representative page in Facebook’s crawler and ensure it looks reasonable
  • Check that Twitter Cards metadata is present (for example <meta name="twitter:card" content="summary" />) if you feel it would be appropriate
To Fix Mark up content with Open Graph tags and as advised by Twitter.
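As an illustration, the relevant tags might look like this sketch (all values are placeholders):

<!-- Open Graph tags, read by Facebook and others -->
<meta property="og:title" content="Page title" />
<meta property="og:description" content="Page description" />
<meta property="og:image" content="" />
<meta property="og:url" content="" />

<!-- Twitter Card tags -->
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="Page title" />
<meta name="twitter:description" content="Page description" />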

Some sites still use two URLs for the same content: one is the standard www site and the other a specialized mobile site, usually called an m-dot site ( To help search engine crawlers, use the rel="canonical" attribute of the link tag to tell the crawler which one is the canonical version.

Canonical URLs are provided when necessary
This is only necessary if your content is available at multiple URLs.
To Test Determine whether any piece of content is available at two different URLs.
Open both of these pages and ensure they use <link rel=canonical> tags in the head to indicate the canonical version
To Fix Add a canonical link tag to the <head> of each page, pointing to the canonical source document. See Use canonical URLs for more information.
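For example, the m-dot version of a page might point back to the desktop version like this (domains are hypothetical):

<link rel="canonical" href="" />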

The History API lets you programmatically control the navigation of your app. Using it means that your users will be able to navigate throughout the application without having to worry about whether the browser will remember the pages they visited.

Pages use the History API
To Test For single page apps, ensure the site doesn’t use fragment identifiers. For example everything after the #! in!user/26601.
To Fix Use the History API instead of page fragments.
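A minimal sketch of fragment-free navigation, assuming a hypothetical renderView function that draws the requested view:

// Navigate to a new view without a page reload or a #! fragment
function navigateTo(path) {
  history.pushState({ path: path }, '', path);
  renderView(path); // hypothetical: draw the view for this path
}

// Re-render the right view when the user presses back/forward
window.addEventListener('popstate', function(event) {
  if (event.state && event.state.path) {
    renderView(event.state.path);
  }
});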

User experience

A PWA is no different than any other app in how users will react to poor UX and UI. We need to present as great a user experience as we can.

Ensure all content, especially images and ads, has fixed sizing in CSS or inline on the element. Consider ways to present a preview of the content until it’s downloaded.

Content doesn’t jump as the page loads
To Test Load various pages in the PWA and ensure content or UI doesn’t “jump” as the page loads
To Fix Ensure all content, especially images and ads, have fixed sizing in CSS or inline on the element. Before the image loads you may want to show a grey square or blurred/small version (if available) as a placeholder

This is an interesting case. I seldom see it enforced, but when it is, the user experience works consistently and doesn’t make you scroll up or down to get back to the section of the content you were looking at.

Pressing back from a detail page retains scroll position on the previous list page
To Test Find a list view in the app. Scroll down. Tap an item to enter the detail page. Scroll around on the detail page. Press back and ensure the list view is scrolled to the same place it was at before the detail link/button was tapped.
To Fix Restore the scroll position in lists when the user presses ‘back’. Some routing libraries have a feature to do this for you.

On mobile, most applications will use the system’s virtual keyboard. When the keyboard appears, make sure the content is not obscured under the keyboard.

When tapped, inputs aren’t obscured by the on screen keyboard
To Test Find a page with text inputs. Scroll to put the text input as low on the screen as you can make it. Tap the input and verify it is not covered when the keyboard appears.
To Fix Explore using features like Element.scrollIntoView() and Element.scrollIntoViewIfNeeded() to ensure the input is visible when tapped.
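A sketch of one way to use them, assuming standard focus events on your form fields:

// When an input receives focus, make sure it stays visible
// above the on-screen keyboard
document.querySelectorAll('input, textarea').forEach(function(field) {
  field.addEventListener('focus', function() {
    // scrollIntoViewIfNeeded is non-standard (Blink/WebKit),
    // so fall back to scrollIntoView elsewhere
    if (field.scrollIntoViewIfNeeded) {
      field.scrollIntoViewIfNeeded();
    } else {
      field.scrollIntoView(false);
    }
  });
});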


Content is easily shareable from standalone or full screen mode
To Test Ensure from standalone mode (after adding the app to your home screen) that you are able to share content, if appropriate, from within the app’s UI.
To Fix Provide social share buttons, or a generic share button within your UI. If a generic button, you may want to directly copy the URL to the user’s clipboard when tapped, offer them social networks to share to, or try out the new Web Share API to integrate with the native sharing system on Android.


Site is responsive across phone, tablet and desktop screen sizes
To Test View the PWA on small, medium and large screens and ensure it works reasonably on all.
To Fix Review our guide on implementing responsive UIs.


Any app install prompts are not used excessively
To Test Check the PWA doesn’t use an app install interstitial when loaded
To Fix There should only be one top or bottom app install banner
After the PWA is added to the user’s home screen, any top/bottom banners should be removed.

Whenever you use Add to Home Screen you’re responsible for not annoying your users by presenting A2HS before the user has had the chance to evaluate your site and make a decision, or at inopportune times.

The Add to Home Screen prompt is intercepted
To Test Check the browser doesn’t display the A2HS at an inopportune moment, such as when the user is in the middle of a flow that shouldn’t be interrupted, or when another prompt is already displayed on the screen.
To Fix Intercept the beforeinstallprompt event and prompt later
Chrome manages when to trigger the prompt, but in some situations this might not be ideal. You can defer the prompt to a later time in the app’s usage.
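A minimal sketch of deferring the prompt; installButton is a hypothetical element in your UI:

let deferredPrompt;

window.addEventListener('beforeinstallprompt', function(event) {
  // Stop the browser from showing the prompt immediately
  event.preventDefault();
  // Stash the event so we can trigger it later
  deferredPrompt = event;
});

// Later, at a moment that makes sense in your app
installButton.addEventListener('click', function() {
  if (!deferredPrompt) return;;
  deferredPrompt.userChoice.then(function(choice) {
    console.log('A2HS prompt outcome:', choice.outcome);
    deferredPrompt = null;
  });
});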


Performance

Users are pushy and expect more from our apps. They want them fast, they want them responsive, and they want high quality. This section will look at the performance requirements for great PWAs.

First load very fast even on 3G
To Test Use Lighthouse on a Nexus 5 (or similar) to verify time to interactive < 5s for first visit on a simulated 3G network (as opposed to the 10s goal for baseline PWAs)
To Fix Review the performance section of WebFundamentals and ensure you’re following the best practices.
You can understand your performance better by using Pagespeed Insights (aim for a score >85) and SpeedIndex on WebPageTest (aim for a score <4000 on the first view on Mobile 3G Nexus 5 Chrome).
A few tips are to focus on loading less script, make sure as much script is loaded asynchronously as possible using <script async> and make sure render blocking CSS is marked as such.


One of the great things about PWAs is that they smooth out network issues, making PWAs feel faster.

We can accomplish this caching performance with service workers that use cache-first strategies. This will pull the data from the cache and, only if it’s not cached, fetch it from the network and store it in the cache.

There are exceptions to the rule. You may not want to cache large videos, audio or other large files. If so, present the user with cues that the content is not available offline.

Site uses cache-first networking
To Test Set the network emulation to the slowest setting and browse around the app.
Then, set the network emulation to offline and browse around. The app should not feel faster when offline than on a slow connection.
To Fix Use cache-first responses wherever possible.

There are times when it’s important for the user to know if they are offline. For example: you should tell the user they are offline if they are trying to complete an ecommerce transaction or trying to view videos that are only available online.

Site appropriately informs the user when they’re offline
To Test Emulate an offline network and verify the PWA provides an indication that you are offline.
To Fix Use the Network Information API to show the user an indication when they’re offline.

Push notifications

This checklist only applies if notifications are implemented. Adding push notifications is not a requirement for an exemplary progressive web app.

Always provide context for the user to decide if they should enable notifications. Tell them what types of messages your app will push and be explicit as to what permissions you want. If you’re asking for permission to use Push Notifications then do just that.

If your app is requesting permission for push notifications on first visit, you’d better have a good explanation right up front as to why you’re asking before users have a chance to experience your app.

Provide context to the user about how notifications will be used
To Test
  • Visit the site, and find the push notifications opt-in flow
  • When you are shown the permission request by the browser, ensure that context has been provided explaining what the site wants the permission for
  • If the site is requesting the permission on page load, ensure it provides very clear context simultaneously for why the user should enable push notifications
To Fix See our guide to creating user-friendly Notifications permissions flows.
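One way to provide that context is to gate the browser prompt behind your own explanatory UI. A sketch, assuming a hypothetical enableNotificationsButton that sits next to your explanation:

// Only ask the browser for permission after the user has
// explicitly opted in through our own UI
enableNotificationsButton.addEventListener('click', function() {
  Notification.requestPermission().then(function(permission) {
    if (permission === 'granted') {
      // Safe to subscribe the user to push messages here
      console.log('Notifications enabled');
    }
  });
});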

Please, please, please don’t annoy your users by presenting them with push notification signups on every page of your site. Remember that if they block your application, it will stay blocked until they decide to unblock it and allow notifications from your site.

As the test below indicates, if the user doesn’t opt in to your application’s notification workflow you shouldn’t prompt them again in the same session.

UI encouraging users to turn on Push Notifications must not be overly aggressive.
To Test Visit the site and find the push notifications opt in flow. Ensure that if you dismiss push notification, the site does not re-prompt in the same way within the same session.
To Fix If users say they don’t want a certain kind of notification, do not reprompt for at least a few days (for example, one week).


Site dims the screen when permission request is showing
To Test Visit the site and find the push notifications opt-in flow. When Chrome is showing the permission request, ensure that the page is “dimming” (placing a dark overlay over) all content not relevant to explaining why the site needs push notifications.
To Fix When calling Notification.requestPermission dim the screen. Undim it when the promise resolves.

If you decide to use push notifications in your site make sure they are relevant and timely. By this I mean that the user only gets notifications when they perform an action, there is new or updated information, or something happens that requires the user to take action, and it’s about the site they are visiting.

Push notifications must be timely, precise and relevant
To Test Enable push notifications from the site and ensure the use cases they’re using the push notifications for are:

  • Timely — A timely notification is one that appears when users want it and when it matters to them
  • Precise — A precise notification is one that has specific information that can be acted on immediately
  • Relevant — A relevant message is one about people or subjects that the user cares about
To Fix See our guide on creating great push notifications for advice. If your content is not timely and relevant to this user, consider using email instead.

Make sure the user has a way to disable and, hopefully, reenable notifications without going to the browser’s UI to do so.

Provides controls to enable and disable notifications
To Test Enable push notifications from the site. Ensure there is some place on the site that allows you to manage your notifications permissions or disable them.
To Fix Create a UI that allows users to manage their notification preferences.
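The disable side of that UI might look like this sketch, which unsubscribes the user from push messages; telling your server to stop sending messages is application-specific:

// Called from a "disable notifications" control in the settings UI
function disableNotifications() {
  return navigator.serviceWorker.ready
    .then(function(registration) {
      return registration.pushManager.getSubscription();
    })
    .then(function(subscription) {
      if (subscription) {
        return subscription.unsubscribe();
      }
    });
}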

Additional features

These are additional features that make PWAs easier to work with. They only apply if you’re using the API on your site/app.

User is logged in across devices via Credential Management API
This only applies if your site has a sign in flow.
To Test Create an account for a service and ensure you see the save password/account dialog show up. Click “Save”.
Clear cookies for the site (via clicking on the padlock or Chrome settings) and refresh the site. Ensure that you either see an account picker (e.g. if there are multiple accounts saved) or are automatically signed back in.

Sign out and refresh the site. Ensure that you see the account picker.

To Fix Follow our Credential Management API Integration Guide.
User can pay easily via native UI from Payment Request API
This check only applies if your site accepts payments.
To Test Enter the payment flow. Instead of filling out a conventional form, verify the user is able to pay easily via the native UI triggered by the Payment Request API.
To Fix Follow our Payment Request API Integration Guide.

PWA Starter: More APIs to make an even better PWA

Before we jump into the PWA checklist we’ll talk about APIs we can use in PWAs to enhance performance beyond the basic functionality of PWAs we’ve discussed so far.

Background Sync

The Background Sync API provides tools to create one-off and periodic data synchronization after the content was initially fetched for the application. It accomplishes these tasks by adding a sync event and additional registration functions to the service worker.

In the code below we register a service worker and, when it’s ready, we register a sync event. The tag name we register here is important; we’ll use it as the name of the event when we actually do the sync.

if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    // 'sw.js' is an assumed path to the service worker script
    navigator.serviceWorker.register('sw.js')
    .then(function(registration) {
      // Registration was successful
      console.log('ServiceWorker registration successful with scope: ', registration.scope);
    })
    .catch(function(err) {
      // registration failed :(
      console.log('ServiceWorker registration failed: ', err);
    });
  });
}

// Register a one-off sync event
navigator.serviceWorker.ready.then(function(swRegistration) {
  return swRegistration.sync.register('image-fetch');
});

When we handle the actual sync event we use the name of the tag we registered at sync registration time. In this case, we wait until the function executes. This function is a wrapper for fetching the image; if we’re working with more complex content the function can build more elaborate content.

self.addEventListener('sync', function (event) {
  if (event.tag === 'image-fetch') {
    // Keep the service worker alive until the fetch completes
    event.waitUntil(fetchDogImage());
  }
});

function fetchDogImage () {
  // 'doge.png' is an assumed image URL
  return fetch('doge.png')
    .then(function (response) {
      return response;
    })
    .then(function (text) {
      console.log('Request successful', text);
    })
    .catch(function (error) {
      console.log('Request failed', error);
    });
}

Links and Resources: Background Sync

Push Notifications

Push notifications allow the server hosting your application to push information to be displayed even when the tab with your application is in the background or the entire browser is closed.

This is a complex topic and requires several moving pieces. Rather than try and condense the topic here I’ll refer you to Matt Gaunt’s Web Push Book, a thorough discussion of Web Push and Push Notifications.

Links and Resources: Push notifications

Credential Management API

The Credential Management API lets websites interact with a user agent’s password system so that websites can deal in a uniform way with site credentials and user agents can provide better assistance with the management of their credentials. For example, user agents have a particularly hard time dealing with federated identity providers or esoteric sign-in mechanisms that use more than just a username and password. To address these problems, the Credential Management API provides ways for a website to store and retrieve different types of password credentials. This gives users capabilities such as seeing the federated account they used to sign on to a site, or resuming a session without the explicit sign-in flow of an expired session.
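For example, silently resuming a session on page load might look like this sketch; signIn is a hypothetical helper that posts the credential to your server:

// Try to sign the user back in without showing any UI
navigator.credentials.get({
  password: true,       // ask for stored password credentials
  mediation: 'silent'   // fail quietly rather than show account UI
}).then(function(credential) {
  if (credential) {
    return signIn(credential); // hypothetical backend call
  }
});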

There is a working example as part of Google Codelabs.

Links and Resources: Credential Management

Payment Request API

Many problems related to online purchase abandonment can be traced to checkout forms, which are user-intensive, difficult to use, slow to load and refresh, and require multiple steps to complete. The Payment Request API is a system that is meant to eliminate checkout forms. It vastly improves user workflow during the purchase process, providing a more consistent user experience and enabling web merchants to easily leverage disparate payment methods.
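At its simplest, the flow looks like this sketch; the supported payment methods and totals are placeholders:

const methods = [{ supportedMethods: ['basic-card'] }];
const details = {
  total: {
    label: 'Order total',
    amount: { currency: 'USD', value: '19.99' }
  }
};

const request = new PaymentRequest(methods, details);

  .then(response => {
    // Send response.details to your server for processing,
    // then close the native payment UI
    return response.complete('success');
  })
  .catch(err => console.log('Payment failed or was cancelled', err));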

Links and Resources: Payment Request API

Offline Analytics

When working with offline analytics we need to make sure that whatever events happen offline are captured to replay later when the user regains connectivity. Both Google offline analytics libraries discussed below work the same way: they set up a new fetch event handler in your service worker, which responds only to requests made to the Google Analytics domain.

The analytics fetch handler uses the network to send analytics events to the Google Analytics servers. If the user is online, this network request will succeed and the analytics servers will store the data; everything is fine.

If the network request fails, the library will store information about the request to IndexedDB, along with a timestamp indicating when the request was initially made. Each time your service worker starts up, the library will check for queued requests and attempt to resend them, along with some additional Google Analytics parameters:

  • A qt parameter, set to the amount of time that has passed since the request was initially attempted, to ensure that the original time is properly attributed
  • Any additional parameters and values supplied in the parameterOverrides property of the configuration object passed to goog.offlineGoogleAnalytics.initialize(). For example, you could include a custom dimension to distinguish requests that were resent from the service worker from those that were sent immediately.

If the resend succeeds, the request is uploaded to the analytics servers and removed from IndexedDB.

If the retry fails, and the initial request was made less than 24 hours ago, it will be kept in IndexedDB to be retried the next time the service worker starts. Note that Google Analytics hits older than four hours are not guaranteed to be processed, but resending these older hits “just in case” shouldn’t hurt.

Standalone Offline Analytics

There is a standalone library for offline analytics if you’re writing your service worker by hand. To use it, install the plugin using npm:

npm install --save-dev sw-offline-google-analytics

And then use the following code in your service worker, before any fetch event:

// Import the library into the service worker global scope
// (adjust the path to wherever npm placed the file):
importScripts('path/to/offline-google-analytics-import.js');

// Then, call goog.offlineGoogleAnalytics.initialize():
goog.offlineGoogleAnalytics.initialize();

// At this point, implement any other service worker caching strategies
// appropriate for your web app.

Workbox Offline Analytics

If you’re using workbox.js you can use the following code:

// This code should be placed before 'fetch' event handlers are defined.
// Import the library into the service worker global scope
// (the path depends on your Workbox version and setup):
importScripts('path/to/workbox-sw.js');

// Then, call workbox.googleAnalytics.initialize():
workbox.googleAnalytics.initialize();

Links and Resources: Offline Analytics

Online and offline events

The last API I wanted to discuss is online/offline. We need a way to communicate the online status to our users. Online/Offline events provide a solution to this communication need.

Browsers implement this property differently.

In Chrome and Safari, if the browser is not able to connect to a local area network (LAN) or a router, it is offline; all other conditions return true. So while you can assume that the browser is offline when it returns a false value, you cannot assume that a true value necessarily means that the browser has a working internet connection. You could be getting false positives, such as in cases where the computer is running a virtualization software that has virtual ethernet adapters that are always “connected.”

In Firefox and Internet Explorer, switching the browser to offline mode sends a false value. Until Firefox 41, all other conditions return a true value; since Firefox 41, in OS X and Windows, the value will follow the actual network connectivity.

An example script, taken from MDN, looks like this:

window.addEventListener('load', function() {
  var status = document.getElementById("status");
  var log = document.getElementById("log");

  function updateOnlineStatus(event) {
    var condition = navigator.onLine ? "online" : "offline";

    status.className = condition;
    status.innerHTML = condition.toUpperCase();

    log.insertAdjacentHTML("beforeend", "Event: " + event.type + "; Status: " + condition);
  }

  window.addEventListener('online',  updateOnlineStatus);
  window.addEventListener('offline', updateOnlineStatus);
});

The full example is available in Codepen.

Links and Resources: Online/Offline Detection

PWA Starter: Automating Service Worker Creation with Workbox.js

SW-precache and SW-toolbox made creating service workers with dynamic caching much easier, but they were two separate libraries, and SW-precache required a separate file with all the SW-toolbox routes in it. This made it very error prone to edit and update.

Workbox.js is the evolution of Google’s Service Worker Libraries. It consolidates all Service Worker build steps into one task (Gulp, Webpack and NPM Script versions available) and abstracts a lot of the writing and configuration behind the scenes so developers don’t need to see the process, only the result.

If you want to create specialized routes manually, Workbox will help you there too.

Workbox is a Node package, so I always install it as a development dependency with the command below.

npm install workbox-build --save-dev

At the top of the gulpfile.js file, place the following constant declaration to bring workbox-build into the file’s scope.

const wbBuild = require('workbox-build');

Copy the task below to your gulpfile. Note that this task uses ES6 arrow functions and promises so it’ll work on newer versions of Node (5.x and newer).

gulp.task('bundle-sw', () => {
  return wbBuild.generateSW({
    globDirectory: './_site/', // 1
    swDest: './_site/sw.js', // 2
    staticFileGlobs: ['**\/*.{html,js,css}'], // 3
    globIgnores: ['sw.js'], // 4
    skipWaiting: true, // optional
    clientsClaim: true, // optional
  })
  .then(() => {
    console.log('Service worker generated.');
  })
  .catch((err) => {
    console.log('[ERROR] ', err);
  });
});

The task tells workbox-build:

  1. Where to search for content
  2. Where to write the resulting Service Worker
  3. What files to add to the Service Worker (all HTML, CSS and Javascript files)
  4. What files to ignore. In this example, we don’t want to cache the service worker itself as caching would defeat the purpose
  5. Optionally set skipWaiting and clientsClaim to true. This will take over clients immediately after installing the service worker regardless of having tabs/windows open to the site

If it succeeds we log a success message to the console, and if it fails we log the error as well. It’s important to note that this task must run after all your other build steps to make sure it picks up all the changes made during the build process.

The default Workbox task described above produces a basic precaching service worker where we indicate the files that we want to precache, but it does not provide routing or special cases for specific routes. This is good enough in most cases, but sometimes it’s not.

Jeff Posnick pointed me to a solution to integrate workbox-routing and workbox-build on the same Service Worker and still use the Gulp task to populate the data.

// workbox-sw must be imported into the service worker before this
// code runs, e.g. with importScripts('path/to/workbox-sw.prod.js')
const workboxSW = new self.WorkboxSW();

// Pass in an empty array for our dev environment service worker.
// As part of the production build process, the `service-worker`
// gulp task will automatically replace the empty array with the
// current precache manifest.
workboxSW.precache([]);

// Use a cache-first strategy for files from
  new RegExp(''),
  workboxSW.strategies.cacheFirst({
    cacheName: 'googleapis',
    cacheExpiration: {
      // Expire after 30 days (expressed in seconds)
      maxAgeSeconds: 30 * 24 * 60 * 60,
    },
  })

// Note to self, woff regexp will also match woff2 :P
  new RegExp('.(?:ttf|otf|woff)$'),
  workboxSW.strategies.cacheFirst({
    cacheName: 'fonts',
    cacheExpiration: {
      // Expire after 24 hours (expressed in seconds)
      maxAgeSeconds: 1 * 24 * 60 * 60,
    },
  })

  new RegExp('.(css)$'),
  workboxSW.strategies.cacheFirst({
    cacheName: 'css',
    cacheExpiration: {
      maxAgeSeconds: 1 * 24 * 60 * 60,
    },
  })

// Use a cache-first strategy for the images
  new RegExp('.(?:png|gif|jpg|svg)$'),
  workboxSW.strategies.cacheFirst({
    cacheName: 'images',
    cacheExpiration: {
      // maximum 50 entries
      maxEntries: 50,
      // Expire after 30 days (expressed in seconds)
      maxAgeSeconds: 30 * 24 * 60 * 60,
    },
    // The images are returned as opaque responses, with a status of 0.
    // Normally these wouldn't be cached; here we opt in to caching them.
    // If the image returns a status 200 we cache it too
    cacheableResponse: {statuses: [0, 200]},
  })

// Match all .htm and .html files, use cacheFirst
  new RegExp('.(html?)$'),
  workboxSW.strategies.cacheFirst({
    cacheName: 'content',
    cacheExpiration: {
      maxAgeSeconds: 1 * 24 * 60 * 60,
    },
  })

// For video we use a network-only strategy. We don't want to clog
// the cache with large video files
  new RegExp('.(?:youtube|vimeo).com$'),
  workboxSW.strategies.networkOnly()

// Local videos get the same treatment, only pull from the network
  new RegExp('.(?:mp4|webm|ogg)$'),
  workboxSW.strategies.networkOnly()

// The default route uses a cache-first strategy
  handler: workboxSW.strategies.cacheFirst()
});

The solution is a two-step process. We first write our service worker as shown above: we pass an empty array as the parameter to workboxSW.precache and we populate the empty array from the service-worker task in Gulp, shown below.

We’ve also created custom routes using workboxSW.router.registerRoute to register the route and workboxSW.strategies to use one of the following strategies:

  • CacheFirst
  • CacheOnly
  • NetworkFirst
  • NetworkOnly
  • StaleWhileRevalidate

We can further customize each caching strategy. Let’s take the route below as an example. We register a route using a regular expression that will match all png, jpg and gif images and use the cache first strategy.

We refine the caching strategy by giving the cache a name and expiration parameters. In the expiration, we add a maximum number of entries (after which the oldest images will be purged from the cache) and a duration in seconds equal to 30 days.

The images are returned as opaque responses, with a status of 0. Normally these wouldn’t be cached; here we opt-in to caching them. If the image returns a status 200 we cache it too.

If you want more details about the parameters we can pass to cacheExpiration look at the source code on GitHub.

  new RegExp('.(?:png|gif|jpg)$'),
  workboxSW.strategies.cacheFirst({
    cacheName: 'images',
    cacheExpiration: {
      maxEntries: 50,
      maxAgeSeconds: 30 * 24 * 60 * 60
    },
    cacheableResponse: {statuses: [0, 200]}
  })
);

The routes we define in the service worker will not change, or will not change too frequently, so doing it this way makes sure we get the best of both worlds.

The modified build task takes a different approach than what we saw before using workbox-build. Instead of generating the whole service worker, it injects the list of files to precache into the service worker we wrote by hand. Remember that we put an empty array in the precache section of the service worker; this is the task that will populate the empty array with the files we need to precache.

gulp.task('service-worker', () => {
  // assumes const workboxBuild = require('workbox-build');
  return workboxBuild.injectManifest({
    swSrc: 'src/service-worker.js',
    swDest: '_site/service-worker.js',
    globDirectory: '_site',
    globIgnores: ['sw.js'],
    // example globs; list the files you want precached
    staticFileGlobs: [
      '**\/*.{html,js,css}'
    ],
  });
});

And the best part is that, if we missed anything, the files will be cached at runtime and we make sure that we still cache the content.

In static or content heavy sites we may want to change the values for staticFileGlobs to specific file names that we should cache rather than wildcard paths. The default value may cache too many files and make the initial caching and subsequent loading take longer than we’d like.

PWA Starter: Caching Strategies

Jake Archibald’s Offline Cookbook provides additional ideas of when to cache data and different caching strategies. We’ll concentrate on the latter and talk about caching strategies.

Cache only

Use this strategy for resources that are static to the site and that were cached during installation.

self.addEventListener('fetch', event => {
  // If a match isn't found in the cache, the response
  // will look like a connection error
  event.respondWith(caches.match(event.request));
});

Network Only

I use this strategy for resources that I don’t want in the cache like videos, non-GET requests, and others.

self.addEventListener('fetch', event => {
  // or simply don't call event.respondWith, which
  // will result in default browser behaviour
  event.respondWith(fetch(event.request));
});

Cache, falling back to network (or Cache First)

Check if the resource is in the cache; if it is, respond with it. If it’s not in the cache, fetch it from the network, store a copy in the cache, and serve it to the client. This is, most likely, the default case; everything else is special-cased.

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request)
    .then(response => {
      return response || fetch(event.request);
    })
  );
});
This is a simplified version of the fetch event example provided earlier.

Cache & network race

There are a few situations, particularly with older mobile devices and slow hard drives, where network connectivity will be faster than cache access. In that case, we can race the network request against the cache access and return whatever response comes back first.

// Promise.race is no good to us because it rejects if
// a promise rejects before fulfilling. Let's make a proper
// race function:
function promiseAny(promises) {
  return new Promise((resolve, reject) => {
    // make sure promises are all promises
    promises = => Promise.resolve(p));
    // resolve this promise as soon as one resolves
    promises.forEach(p => p.then(resolve));
    // reject if all promises reject
    promises.reduce((a, b) => a.catch(() => b))
      .catch(() => reject(Error("All failed")));
  });
}

self.addEventListener('fetch', event => {
    caches.match(event.request),
    fetch(event.request)
  ]));
});

Network falling back to cache

If the fetch request succeeds users get the newest content and if it doesn’t then they get the latest version of the content available in the cache. Remember that if the fetch request succeeds you should update the cached content.

There is one thing to consider. If the user has an intermittent or slow connection they’ll have to wait for the network to fail before they get any content already on their device. This is a very bad user experience.

self.addEventListener('fetch', event => {
  event.respondWith(
    fetch(event.request)
      .catch(function() {
        return caches.match(event.request);
      })
  );
});

Cache then network

This is an alternative to network falling back to cache, and it requires the page to make two requests: one to the cache, one to the network. The idea is to show the cached data first, then update the page when/if the network data arrives.

Sometimes you can just replace the current data when new data arrives (e.g. game leaderboard), but that can be disruptive with larger pieces of content. Basically, don’t “disappear” something the user may be reading or interacting with.

Code in the page:

// startSpinner, stopSpinner, updatePage and showErrorMessage
// are page-specific helpers
var networkDataReceived = false;

startSpinner();

// fetch fresh data
var networkUpdate = fetch('/data.json')
.then(function(response) {
  return response.json();
}).then(function(data) {
  networkDataReceived = true;
  updatePage(data);
});

// fetch cached data
caches.match('/data.json').then(function(response) {
  if (!response) throw Error("No data");
  return response.json();
}).then(function(data) {
  // don't overwrite newer network data
  if (!networkDataReceived) {
    updatePage(data);
  }
}).catch(function() {
  // we didn't get cached data, the network is our last hope:
  return networkUpdate;
}).catch(showErrorMessage).then(stopSpinner);

Code in the ServiceWorker:

We always go to the network & update a cache as we go.

self.addEventListener('fetch', function(event) {
  event.respondWith('mysite-dynamic').then(function(cache) {
      return fetch(event.request).then(function(response) {
        cache.put(event.request, response.clone());
        return response;
      });
    })
  );
});

Generic Response

If the content is not in the cache and you’re not online to fetch it, or the fetch request times out, it would be nice to have a fallback to show the user. It can be as simple as returning a cached offline page, like Jake does in the example below. Make sure you cache offline.html when you install the service worker.

self.addEventListener('fetch', function(event) {
  event.respondWith(
    // Try the cache
    caches.match(event.request).then(function(response) {
      // Fall back to network
      return response || fetch(event.request);
    }).catch(function() {
      // If both fail, show a generic fallback:
      return caches.match('/offline.html');
    })
  );
});

Or returning an inline SVG image for images that can’t be displayed because they are not cached and not available online.

This example from Jeremy Keith’s Adactio runs a cache falling back to network strategy and then, if the resource is an image, stores a cloned copy on the cache and returns the image to the user.

If both the cache and fetch fail to return the image the catch portion of the promise is triggered and, if the request was for an image, we return a new Response object containing data to make an inline SVG image rather than provide a broken image and a suboptimal user experience.

self.addEventListener('fetch', event => {
  let request = event.request;
  let url = new URL(request.url);

  // For non-HTML requests, look in the cache first, fall back to the network
  event.respondWith(
    caches.match(request)
    .then(response => {
      // CACHE
      return response || fetch(request)
        .then(response => {
          // NETWORK
          // stashInCache and imagesCacheName are defined
          // elsewhere in Jeremy's service worker
          if (request.headers.get('Accept').includes('image')) {
            let copy = response.clone();
            stashInCache(imagesCacheName, request, copy);
          }
          return response;
        })
        .catch(() => {
          // OFFLINE
          if (request.headers.get('Accept').includes('image')) {
            return new Response('<svg role="img" aria-labelledby="offline-title" viewBox="0 0 400 300" xmlns=""><title id="offline-title">Offline</title><g fill="none" fill-rule="evenodd"><path fill="#D8D8D8" d="M0 0h400v300H0z"/><text fill="#9B9B9B" font-family="Helvetica Neue,Arial,Helvetica,sans-serif" font-size="72" font-weight="bold"><tspan x="93" y="172">offline</tspan></text></g></svg>', {headers: {'Content-Type': 'image/svg+xml', 'Cache-Control': 'no-store'}});
          }
        });
    })
  );
});

Jeremy does it for images and Jake does it for HTML; there is no reason why we can’t combine the catch statements from both service workers into one that looks like this:

self.addEventListener('fetch', function(event) {
  let request = event.request;
  event.respondWith(
    // Try the cache
    caches.match(request).then(function(response) {
      // Fall back to network
      return response || fetch(request);
    }).catch(function() {
      if (request.headers.get('Accept').includes('html')) {
        return caches.match('/offline.html');
      }
      if (request.headers.get('Accept').includes('image')) {
        return new Response('<svg role="img" aria-labelledby="offline-title" viewBox="0 0 400 300" xmlns=""><title id="offline-title">Offline</title><g fill="none" fill-rule="evenodd"><path fill="#D8D8D8" d="M0 0h400v300H0z"/><text fill="#9B9B9B" font-family="Helvetica Neue,Arial,Helvetica,sans-serif" font-size="72" font-weight="bold"><tspan x="93" y="172">offline</tspan></text></g></svg>', {headers: {'Content-Type': 'image/svg+xml', 'Cache-Control': 'no-store'}});
      }
    })
  );
});


We could add additional fallbacks matching the content type we want to test and provide different types of fallbacks based on the content.

Handling multiple content types in a fetch request

So far we’ve used a single strategy for fetching the content of our pages. We can combine these strategies to create a flexible service worker that will cache different types of content differently based on the headers it accepts.

For each special case we want to create, check that the Accept header includes the type we want to handle (for example HTML); we test using if (request.headers.get('Accept').includes('html')). If the headers include the content type, then we carry on with caching and providing offline fallbacks.

If the request doesn’t match any of our special cases it’ll fall through to a default cache first strategy.

self.addEventListener('fetch', event => {
  const request = event.request;
  if (request.headers.get('Accept').includes('html')) {
    event.respondWith(
      caches.match(request)
      .then(response => {
        // Fall back to network
        return response || fetch(request)
          .then(response => {
            // 'content' is an assumed cache name
            return'content')
              .then(cache => {
                cache.put(request, response.clone());
                return response;
              });
          });
      })
      .catch(() => {
        return caches.match('/offline.html');
      })
    );
  }
});

This service worker is simple and provides a core set of functionality for working with service workers. We can do other things, like providing an offline page if the content is not in the cache and the network is down, and many other things that we explicitly code.

The service worker script is available in this Gist.


Headers

The Headers interface of the Fetch API allows you to perform various actions on HTTP request and response headers. These actions include retrieving, setting, adding to, and removing elements and values. A Headers object has an associated header list, which is initially empty and consists of zero or more name and value pairs. You can add to this using methods like append(). In all methods of this interface, header names are matched by case-insensitive byte sequence.

For security reasons, only the user agent can control some headers. These headers include the forbidden header names and forbidden response header names.

You can retrieve a Headers object via the Request.headers and Response.headers properties, and create a new Headers object using the Headers.Headers() constructor.
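A quick sketch of the interface in action:

const headers = new Headers();
headers.append('Content-Type', 'text/plain');
headers.append('X-Custom-Header', 'ProcessThisImmediately');

console.log(headers.get('content-type'));    // 'text/plain' (case-insensitive)
console.log(headers.has('X-Custom-Header')); // true

headers.set('Content-Type', 'text/html');    // replaces the value
headers.delete('X-Custom-Header');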

We’ve seen examples of headers when we test to see if a request is of a given MIME type. The snippet below tests if the request is for an HTML document (MIME type text/html) and returns a document to notify the user that the application is offline.

if (request.headers.get('Accept').includes('html')) {
  return caches.match('/offline.html');
}

The Response constructor takes two arguments, the first being the body of the response (the content we get back from fetch). An optional second argument is an object specifying the status code, status text, and headers of the response. We can use these elements to modify the response we return to the client.

In the following example, we add a header to include in the returned content. You’ll notice a few tricks:

  • The response is read-only so we have to make the request again in order to process the changes
  • We use the technique discussed earlier to add a custom header that we can check if we need to
self.addEventListener('fetch', function(event){
  console.log('Caught request for ' + event.request.url);
  event.respondWith(
    fetch(event.request).then(function(response) {
      var init = {
        status:     response.status,
        statusText: response.statusText,
        headers:    {
          'X-Foo': 'My Custom Header'
        }
      };

      // Copy the original headers over to the new response
      response.headers.forEach(function(v, k) {
        init.headers[k] = v;
      });

      return response.text().then(function(body){
        return new Response(body, init);
      });
    })
  );
});

CORS, NO-CORS and why it matters

One of the reasons why we discuss headers is to introduce the concept of CORS, what it is and how to use it to make requests to an origin different than where the application lives.

Cross-Origin Resource Sharing (CORS) is a W3C spec that allows cross-domain communication from the browser. By building on top of the XMLHttpRequest object, CORS allows developers to work with the same idioms as same-domain requests.

The use-case for CORS is simple. Imagine the site has some data that the site wants to access. The web’s same origin policy forbids this type of request. However, by supporting CORS requests, the owner of can add a few special response headers that allow to access the data.

CORS is a two-step process. The server tells you which domains other than the origin can access the resource, and the client must make explicit that it is asking for a CORS resource.

Setting up CORS requests varies by server and the type of request you’re making. Note that the examples below use a wildcard pattern, meaning we don’t care who accesses the resources. This is not a safe configuration; make sure you only allow access to the hosts you want, not everyone.

To set up blanket CORS permissions in an Apache HTTP server use:

<IfModule mod_headers.c>
  Header set Access-Control-Allow-Origin "*"
</IfModule>

To enable CORS in Nginx use the Headers core module which is compiled into the server by default. Then add the following line to your configuration file.

add_header Access-Control-Allow-Origin *;

When working with Express.js make sure you do the following to enable CORS:

app.all('/', function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "X-Requested-With");
  next();
});

Information for other servers can be found in the W3C wiki.

The client side of the equation means we have to make the fetch request a CORS request by adding custom headers to the request.

// Assumes an image element on the page, e.g.:
// var myImage = document.querySelector('img');
var myHeaders = new Headers(); // 1

var myInit = { method: 'GET',
               headers: myHeaders,
               mode: 'cors', // 2
               cache: 'default' };

var myRequest = new Request('racecar.png', myInit);

fetch(myRequest)
  .then(function(response) {
    return response.blob();
  })
  .then(function(myBlob) {
    var objectURL = URL.createObjectURL(myBlob);
    myImage.src = objectURL;
  });

The example works as follows:

  1. Create a new Headers()
  2. Create an init object that will be added to the request as its second parameter. The important part is the mode child that will tell fetch how to process the request. Some of the possible values are:
    • same-origin — If a request is made to another origin with this mode set, the result is simply an error. You could use this to ensure that a request is always being made to your origin
    • no-cors — Prevents the method from being anything other than HEAD, GET or POST. If any ServiceWorkers intercept these requests, they may not add or override any headers except for these. In addition, JavaScript may not access any properties of the resulting Response. This ensures that ServiceWorkers do not affect the semantics of the Web and prevents security and privacy issues arising from leaking data across domains
    • cors — Allows cross-origin requests, for example, to access various APIs offered by 3rd party vendors. These are expected to adhere to the HTTP access control (CORS) protocol. Only a limited set of headers are exposed in the Response, but the body is readable

We then make a blob of the response and build a URL to display to the user.

If mode is not defined in step 2, the default value of no-cors is assumed.

Links and Resources: Headers

Intercepting Responses

Fetch events allow developers to intercept and replace responses with our own content. We’ve seen this before when we discussed offline fallbacks and providing alternative content when the user is offline.

But we can also intercept requests while we are online. The example below shows how to provide a different response to a request: if the URL to fetch includes the string cat.jpg, it replaces the image with dogs.png.

self.addEventListener('fetch', function(event) {
  console.log('Caught request for ' + event.request.url);
  if (event.request.url.includes('cat.jpg')) {
    // Respond with a different image instead
    event.respondWith(fetch('dogs.png'));
  }
});