Working With Opaque Responses

While working with Workbox I discovered a hidden danger of opaque responses. This post addresses those shortcomings and provides an imperfect workaround for caching them.

Normally, opaque responses are not cacheable, and they come with additional restrictions. See Jeff Posnick’s Stack Overflow answer for details, but it boils down to the following items:

  • Because opaque responses are meant to be a black box, you won’t get meaningful information from most of the properties of the Response class, nor can you call the various methods that make up the Body interface, like json() or text()
  • Browsers pad the opaque resource size. In the case of Chrome the minimum size that any single cached opaque response contributes to the overall storage usage is approximately 7 megabytes
  • The status property of an opaque response is always set to 0, regardless of success or failure
  • The add()/addAll() methods of the Cache API will reject if any response is outside the 2XX range
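
These restrictions can be summed up in a small helper. The sketch below is my own, not part of any library: it decides whether a response should go in the cache, treating the always-zero status of an opaque response as an explicit opt-in.

```javascript
// Sketch: decide whether a fetched response should be cached.
// `response` only needs `type` and `status`, like the Fetch API Response.
function isCacheable(response, allowOpaque = false) {
  if (response.type === 'opaque') {
    // Opaque responses always report status 0, so we can't tell
    // success from failure; caching one is an explicit opt-in.
    return allowOpaque;
  }
  return response.status >= 200 && response.status < 300;
}
```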

Are opaque responses useful?

Yes, there are places where opaque responses can be used without problems. Based on MDN’s Cross-origin network access article:

  • <script src="…"></script>. Error details for syntax errors are only available for same-origin scripts
  • <link rel="stylesheet" href="…"/> as long as it’s served with the correct MIME type
  • Images displayed by <img>
  • <video> and <audio>
  • Plugins embedded with <object>, <embed />, and <applet>
  • Fonts applied with @font-face. Some browsers allow cross-origin fonts, others require same-origin
  • Anything embedded by <frame> and <iframe>

Caching opaque responses

The example below creates a new Request that will produce an opaque response.

We then use the Fetch API to retrieve the requested object and put it in the cache.

The idea is that by putting the resource in the cache we’re opting into accepting whatever the resource is, even an error.

Because of the padding, we need to be extra careful when deciding how many items we want to cache before our origin runs out of quota.

const request = new Request('', {
  mode: 'no-cors',
});

// Assume `cache` is an open instance of the Cache class.
fetch(request).then((response) => cache.put(request, response));
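
Given Chrome’s roughly 7 MB minimum per opaque entry, a back-of-the-envelope helper can tell us how many such entries we can afford. This is my own sketch, not part of any API; in a real service worker you would feed it the quota and usage figures reported by navigator.storage.estimate().

```javascript
// Chrome pads each cached opaque response to roughly 7 MB (the figure
// quoted earlier; other browsers use different padding).
const OPAQUE_PADDING_BYTES = 7 * 1024 * 1024;

// Worst-case number of opaque entries that fit in the remaining quota.
function maxOpaqueEntries(quotaBytes, usedBytes = 0) {
  const available = Math.max(0, quotaBytes - usedBytes);
  return Math.floor(available / OPAQUE_PADDING_BYTES);
}
```

For example, with a 120 MB quota and nothing stored yet, maxOpaqueEntries(120 * 1024 * 1024) comes out to 17 entries.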

Workbox.js provides additional functionality to make it easier to work with opaque responses by using plugins.

The handler below uses two plugins: expiration and cacheableResponse.

The expiration plugin (workbox.expiration.Plugin()) dictates how long Workbox will keep the resources in the cache.

The cacheableResponse plugin (workbox.cacheableResponse.Plugin()) changes the behavior of Workbox regarding opaque responses.

statuses indicates what HTTP status codes we want to accept for this handler. I’ve chosen to automatically accept 0 in addition to 200 as valid status codes for this request.

purgeOnQuotaError: true tells Workbox that it’s ok to delete this cache when we hit the quota limit for this domain. We do this because we’re accepting opaque responses and they are padded (at least 7 MB each in Chrome).

const extFontHandler = workbox.strategies.staleWhileRevalidate({
  cacheName: 'external-fonts',
  plugins: [
    new workbox.expiration.Plugin({
      maxAgeSeconds: 30 * 24 * 60 * 60,
      // maxEntries: 20,
    }),
    new workbox.cacheableResponse.Plugin({
      statuses: [0, 200],
      // Automatically clean up if quota is exceeded.
      purgeOnQuotaError: true,
    }),
  ],
});

Each handler is associated with one or more routes. My project works only with Adobe Fonts (Typekit) and Google Fonts, so I split the domains into two routes, one for each provider.

The first route uses a regular expression to match use.typekit.net.

The second one is more complicated: the regular expression matches either fonts.googleapis.com or fonts.gstatic.com. There may be other font domains, and that will mean we need additional routes for them.

// Third party fonts from Typekit
workbox.routing.registerRoute(/https:\/\/use\.typekit\.net/, (args) => {
  return extFontHandler.handle(args);
});

// Third party fonts from Google Fonts
workbox.routing.registerRoute(/https:\/\/fonts\.(googleapis|gstatic)\.com/, (args) => {
  return extFontHandler.handle(args);
});

We’ve accepted opaque responses and have configured the Service Worker to expire the requests after a given period of time and to delete the cache when we hit the quota limit.

So this provides an imperfect solution to caching opaque responses. I don’t know whether this is better than hosting all resources locally. It is what I have now and it works as intended.

Progressive Enhancement Matters

Progressive Enhancement is a powerful methodology that allows Web developers to concentrate on building the best possible websites while balancing the issues inherent in those websites being accessed by multiple unknown user-agents. Progressive Enhancement (PE) is the principle of starting with a rock-solid foundation and then adding enhancements to it if you know certain visiting user-agents can handle the improved experience.

PE differs from Graceful Degradation (GD) in that GD is the journey from complexity to simplicity, whereas PE is the journey from simplicity to complexity. PE is considered a better methodology than GD because it tends to cover a greater range of potential issues as a baseline. PE is the whitelist to GD’s blacklist.

Part of the appeal of PE is the strength of the end result. PE forces you to initially plan out your project as a functional system using only the most basic of Web technologies. This means that you know you’ll always have a strong foundation to fall back on as complexity is introduced to the project.

Progressive Enhancement: What It Is, And How To Use It?

Progressive enhancement (PE) is the principle of starting with a rock-solid foundation and then adding enhancements to it. Every so often the discussion between PE, its cousin Graceful Degradation (GD), and the need to provide such features rears its head in debates about JavaScript not being available, or about not needing to think about PE or GD in a world where browsers are evergreen.

This is not a post on what Progressive Enhancement is. If you’re interested in that area, Aaron Gustafson wrote three articles on the subject, the first of which is a good introduction. The articles are:

  1. Understanding Progressive Enhancement
  2. Progressive Enhancement with CSS
  3. Progressive Enhancement with JavaScript

How do we build a base experience?

One of the hardest things to decide when it comes to progressive enhancement is what makes for a core experience. How do we decide on the baseline we want to provide to all our users?

It would be tempting to make modules and Grid the baseline, but that wouldn’t be as useful as we think.

CSS Grid is supported in all modern browsers so it’s not an issue moving forward. However, there are three cases to be made for providing grid as a progressive enhancement:

  • Earlier versions of our evergreen browsers did not support the feature
  • Browsers like IE and Opera Mini don’t support Grid at all (and save yourself the comment on how not even Microsoft supports IE… there are plenty of people still using it)
  • Having to work around interoperability bugs makes development harder

If you want to write defensive CSS you can use feature queries like this:

div {
  float: right;
}

@supports (display: grid) {
  div {
    display: grid;
  }
}

If you’re using JavaScript, the way to check for CSS feature support looks like this:

const hasGrid = CSS.supports('display: grid');

if (hasGrid) {
  console.log('We support grid');
  // Do something for browsers that support grid
} else {
  console.log('Grid is not supported');
  // Do something for browsers that don't support grid
}

I’m not saying don’t use Grid… quite the opposite. I’m saying provide a base experience that supports as many browsers as possible, and then use Grid as an enhancement for those browsers that can work with it.

In JavaScript, take modules and async/await as an example. All versions of modern browsers that support modules support async/await, but not all versions that support async/await support modules. So you get to decide which supported features are more important for your application.
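
One hedged way to probe for syntax support at runtime is to parse a snippet inside a try/catch. The sketch below is my own and detects async/await; in a browser, module support is usually checked instead via the noModule property on a script element.

```javascript
// Returns true if the JavaScript engine can parse async functions.
// eval() keeps older parsers from choking on the syntax at load time.
function supportsAsyncAwait() {
  try {
    eval('async () => {}');
    return true;
  } catch (err) {
    return false;
  }
}
```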

Another question that you need to ask is whether transpilation is needed for your target browsers. Tools like Babel will convert your modern JavaScript (ES2015 and later) into an older version of JavaScript for browsers that don’t support it. Using the env preset and a list of the oldest browser versions you want to support, you can write your code once and let Babel deal with making it work in your older supported browsers.

The biggest challenge is how to make the transpiled bundles as performant as the original code.

How to enhance it?

As with many things in web development and design, it depends. It depends on the type of site you’re building, how much data it needs to load from the server and how you’re caching the content for future re-use.

If we are building a content site we may want to populate the base content first and then run CSS and JavaScript to enhance the base content or add additional material.

If we build a catalog page for a store, the most expedient way may be to create templates that get populated with data from the server. But we need to remain sensitive to network hiccups and the many other reasons JavaScript may time out or otherwise fail to load, particularly on older or lower-end devices.

Once we have our core experience, one that works without CSS and whether or not JavaScript is enabled (or with as little JavaScript as possible), we can start thinking about how to enhance it, and how to do so in an accessible way.


I know that the experience doesn’t have to be identical for all devices but, to me, that doesn’t mean we should provide a subpar experience to those browsers that “don’t cut the mustard”, particularly when we don’t have to.

I like an escalator because an escalator can never break, it can only become stairs. There would never be an escalator temporarily out of order sign, only an escalator temporarily stairs. Sorry for the convenience.

Mitch Hedberg, Comedy Central Presents

We should make our apps into escalators, not part of the wealthy western web.

Links and Resources

PWAs: Don’t build it all at once

One of the things that has been tempting when working on a PWA for my layout experiments has been doing everything at once, working on all the features in parallel.

Reading Jason Grigsby’s Progressive Web Applications book from A Book Apart reminded me that we need to take it easy, plan ahead and do incremental deployment of PWA features.

The basics

What do I mean by this? There are three technical items that we know we need to turn a site into a PWA:

  • Host the site using HTTPS
  • A Web Manifest
  • A Service Worker

The basics are easy. We can add an SSL certificate using free tools like Let’s Encrypt.

The (basic) Manifest is also fairly straightforward, you can generate it manually using instructions like those on Google developers or generate it automatically using one of many Web Manifest Generators. I suggest PWA Builder, mostly because it will help you generate both the manifest and a basic service worker that you can later tinker with.
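
A bare-bones manifest might look like this; every value below is a placeholder of my own, not taken from any real project.

```json
{
  "name": "My Layout Experiments",
  "short_name": "Layouts",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2f3d58",
  "icons": [
    {
      "src": "/images/icon-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}
```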

The Service Worker is also fairly easy. One thing that we need to remember is that we don’t need to do everything the first time. PWA Builder will give you options for creating your Service Worker and the code to insert into your site’s entry page to use it.
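
The code for the entry page usually boils down to a feature-checked call to navigator.serviceWorker.register(). This is a generic sketch, and the file name /sw.js is an assumption.

```javascript
// Register a service worker only where the API exists; returns null
// in environments (or browsers) without support.
function registerServiceWorker(scriptURL = '/sw.js') {
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return null;
  }
  return navigator.serviceWorker.register(scriptURL);
}
```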

Thinking things through

These are not all the items worth reviewing and analyzing before you implement your site as a PWA, but it’s a good starting point.
For a more thorough discussion of how to gradually roll out a PWA, check Progressive Web Applications.

There is more to PWAs and web applications in general than the basics. Yes, we can slap a manifest and a service worker onto a mediocre website and make it into a mediocre PWA.

But there are a lot of other things we need to consider when turning our sites into applications even before we write our first line of code.

Some of these considerations have to do with our site as it exists before we implement PWA technologies.


The first item to consider, for me, is the site’s navigation. If we make the site into a full-screen application then we need to make sure that users can navigate without the browser’s chrome available.

Performance matters

Another aspect to consider is how your site performs before you implement a PWA. My favorite tool is Lighthouse, available as a CLI that you can integrate into your existing workflows and as part of DevTools audits.

To channel my inner Alex Russell, any performance test you run must run on real devices, using chrome://inspect to debug them remotely. The results from DevTools are good approximations but will never match the results of running on a real device.

The reason we run performance tests is to make sure we’re not using the Service Worker as an excuse for slow loading content.

Service Worker: what part of the Swiss army knife do we need?

When planning a Service Worker you have to decide how complex or how simple you want to build it. Do you want a site that automatically caches all assets on the first load? Do we want a Service Worker that will pre-cache certain assets (the shell of an app, or the assets needed to load an index page)? Push notifications? User-activated background fetch?

The cool thing is that we can do all these things over time. The idea behind service workers is that you can build them out over time and users will only see the new features.

We can start with a basic service worker that will cache all resources to a single cache. We can progress from there. Every time you change the service worker, browsers will treat it as a brand-new worker and will update whatever needs to change.
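
A sketch of that “everything in one cache” strategy is below. It’s my own illustration, not the only way to do it; `cacheStorage` stands in for the worker-global `caches` and `fetchFn` for `fetch`, so the logic is testable outside a worker.

```javascript
// Sketch of a cache-first strategy with a single named cache.
const CACHE_NAME = 'site-cache-v1';

async function cacheFirst(request, cacheStorage, fetchFn) {
  const cache = await cacheStorage.open(CACHE_NAME);
  const cached = await cache.match(request);
  if (cached) return cached;
  const response = await fetchFn(request);
  // Clone before caching: a Response body can only be read once.
  await cache.put(request, response.clone());
  return response;
}

// Inside a real service worker you would wire it up roughly like:
// self.addEventListener('fetch', (e) =>
//   e.respondWith(cacheFirst(e.request, caches, fetch)));
```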

There are also tools like Workbox.js that automate service worker creation and usage. It makes it easier to create multiple caches using different caching strategies, and it gives you access to newer technologies built on top of service workers.

It also gives you more time to develop your strategy for how you will implement the worker.

Frameworks and PWAs

If you’re using a framework, you can still evaluate PWAs. Angular and React both provide PWA implementations for new apps/sites… Angular through the CLI and the @angular/pwa package, and React through the create-react-app tool. In my limited research, I wasn’t able to figure out whether this only works for new applications or whether we can update an existing one to make it a PWA, but if you’re familiar with the tools, you should also be familiar with the communities where you can find additional information.

Links and resources

Archiving and Storing Content

A Journalism Now episode on archival issues raised some interesting questions when it comes to archiving content and the longevity of the web.

In this post, I will cover some of the issues I think are important for archiving the web and provide some ideas (at least the beginning of some ideas) for automating the archival of content and applications.

It is important to note that not all these techniques will allow you to view the content right away and, in some cases, may only provide the data you can use to restore the content to a viewable state in some way, shape, or form.

I use WordPress, which is why most of these techniques are geared towards that CMS. But there’s no reason some of them wouldn’t work with other CMS systems.

Archiving old WordPress pages and posts

Some techniques for archiving WordPress content.

Archiving the site. Option 1: generating a static site with plugins

Since we’re not adding new content it may be a good idea to create a static site. The first option is to use a WordPress plugin to generate a static version of the site, doing minor site tweaks, and then moving that to the archival location.

Plugins like Simply Static by Code of Conduct will generate a fully configurable static version of your WordPress site, ready to upload to an archival or backup server.

Archiving the site. Option 2: generating a static site with Puppeteer

Starting from the code in Stefan Baumgartner’s Saving and scraping a website with Puppeteer, we can get a basic scraping system in a few lines of JavaScript.

const puppeteer = require('puppeteer');
const {URL} = require('url');
const fse = require('fs-extra');
const path = require('path');

async function start(urlToFetch) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Save every response to disk, mirroring the URL's path structure.
  page.on('response', async (response) => {
    try {
      const url = new URL(response.url());
      let filePath = path.resolve(`./output${url.pathname}`);
      if (path.extname(url.pathname).trim() === '') {
        filePath = `${filePath}/index.html`;
      }
      await fse.outputFile(filePath, await response.buffer());
    } catch (error) {
      console.error(error);
    }
  });

  await page.goto(urlToFetch, {waitUntil: 'networkidle2'});

  setTimeout(async () => {
    await browser.close();
  }, 60000 * 4);
}


There are two things left to add to this script to make it really useful:

  • Make it recursive: Right now it captures a single URL. For it to be really useful we need to make sure it captures all the local URLs in a page
  • Make it configurable: The base URL to crawl is hardcoded into the script. To add flexibility we may want to create a CLI around it
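
The recursion could start from a helper like this hypothetical one (the name and shape are my own): given every href found on a page, it keeps only the unique same-origin URLs the crawler hasn’t captured yet.

```javascript
// Hypothetical helper for the scraper above: filter the hrefs found on
// a page down to unvisited, same-origin URLs still needing a crawl.
function localLinksToVisit(hrefs, origin, visited = new Set()) {
  return [...new Set(hrefs)]
    .filter((href) => href.startsWith(origin))
    .filter((href) => !visited.has(href));
}
```

In Puppeteer, the hrefs themselves could come from something like page.$$eval('a[href]', (anchors) => anchors.map((a) => a.href)).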

Option 3: Generating a WXR backup file

One of the things I like a lot about WordPress is how easy it is to create a full data backup that can then be imported. This will only work if you have access to the WordPress administrator backend.

If you’re accessing the exporter for the first time you will be prompted to download the exporter plugin. The following steps assume that you’ve already downloaded it.

Under the Tools > Export menu you can choose which parts of your site you want to back up. Unless the site is large, I usually pick all content.

Once you’ve downloaded the export file you can go to your new installation, use the Tools > Import menu, and select WordPress.

If the site is too large we may hit an upload size limitation, and we will have to come back to the exporter, create multiple, smaller backup files, and import them individually. This is a limitation of PHP and can be changed; how to increase the upload limit is beyond the scope of this post.

If there are no errors you will have equivalent content in both blogs. Now you have to worry about the presentation.

Making sure the content matches the presentation

Backing up the content is easy but how do you make sure your theme and plugins are the same in both instances? How do you decide if full parity is needed?

For example, if you do comment moderation or allow comments in your site at all, you may not want to do so in the archive site to ensure that the content is not polluted by spam.

Likewise, you’ll have to decide whether each of the features in the original site needs to be ported to the archived version.

Archiving interactive content

I hear many people talk about the Space Jam website as a sign of the web’s resilience. What people don’t realize is that the original site was modified when it was moved to the WB Archive site.

Archiving older content means that the content must be playable as close to the original as possible.

In the Space Jam site, this means having a properly configured Apache server that can handle server-side include directives, which may also require a virtual host or a VM configured with Apache.

With WordPress, it becomes more complex, as the version of PHP and the number of external modules you must configure are highly dependent on what plugins and functionality you want to enable. Not installing the appropriate PHP modules will, at best, render plugins unusable and, at worst, stop WordPress from working altogether.


Creating high fidelity archives of web content is not a trivial undertaking and must be carefully thought out and planned.

But, unless the site uses proprietary features to a given web server (like IIS or the old Netscape web servers) it shouldn’t be too difficult to implement.

Thinking about the text’s voice

I’ve been playing more and more with fonts, both in terms of trying to find good pairings and looking at different serif fonts for body copy.

Reading issue 61 of Coffee Table Typography brought some display fonts for titles to my attention. It also raised some interesting questions about the voice of a text and how we convey that voice through typography and other elements specific to the web when crafting type and reading experiences online.

Two covers from Alfred Bester Novels using Serif Gothic

There are fonts that immediately evoke a certain mood or feeling.

Look at the figure above of two book covers for works from Alfred Bester using Serif Gothic and other fonts in Science Fiction to see how you can create a “brand” or “identity” for your work.

Another excellent example of how a typeface can affect the way we perceive or react to a visual display is Marvin. The page contains a wealth of information and shows you what you can do with the font.

An example postcard from the Marvin website

What is the text trying to say?

I think I finally understand why editors, graphic designers, and people involved with layout and typesetting are strongly encouraged to read the material. It helps you understand the voice of the text: what is it trying to tell the reader?

Take Handlee and Pacifico as the first examples. They are both handwritten fonts that give a more personal feel to the content.

Handlee is more playful and informal.

Example of Handlee Regular.

Pacifico is the opposite, more formal and what I would expect to see in a diploma or other formal documents.

Example of Pacifico font.

Looking for matching fonts

Google Design’s Choosing Web Fonts: A Beginner’s Guide gives good guidance on how to select fonts and font pairings. It touches on a wide variety of topics regarding the fonts and the content you’re using them with.

Sites like Typewolf and Fontpair will show you what pairs of fonts look like working together, while Delta Fonts will tell you what font they think a given item was made with.

Next steps

This is a bare first pass at working with fonts. To me, it’s an interesting and intriguing first pass and it has given me plenty of ideas about where to go next 🙂