Don’t cross the streams

Streams are an interesting concept and a new set of tools for the web. The idea is that, depending on the type of stream we're using, we can read or write chunks of content: write them to a destination or read them from a source. This improves perceived performance because we can start showing things to the user before the content has finished loading.

The example below shows how we can asynchronously download content and display it to the user. The problem with this, if you can call it that, is that response.text() waits for the entire file to download before settling its promise, and only then do we populate the content into the page.

const url = 'https://jsonplaceholder.typicode.com/photos';
const response = await fetch(url);
document.body.innerHTML = await response.text();

Streams seek to provide a better way to fetch content and display it to the user. The content reaches the browser in chunks and we can render each chunk as it arrives, rather than having to wait for all of the content before displaying anything.

The example below does the following:

  1. Fetches the specified resource
  2. Creates a reader from the body of the response object
  3. Creates a readable stream
  4. In the stream's start() method we create a push() function to do the work and read the first chunk of the stream
  5. If done is true there are no more chunks to read, so we close the controller and return
  6. We use a TextDecoder to convert the value of the chunk from a Uint8Array to text
  7. We enqueue the chunk we read onto the stream and then append the decoded string to the page
  8. We call push() again to continue processing the stream until there are no more chunks to read and done returns true
  9. We return a new Response with the stream as its body and a Content-Type header to make sure it's served as HTML

fetch("https://jsonplaceholder.typicode.com/photos").then((response) => { // 1
const reader = response.body.getReader(); // 2
const stream = new ReadableStream({ //3
  start(controller) {
    function push() {
      reader.read().then(({ done, value }) => { // 4

        let string = new TextDecoder("utf-8").decode(value); // 5

        if (done) { // 6
          controller.close();
          return;
        }
        controller.enqueue(value); // 7
        document.body.innerHTML += string;
        push()
      });
    };
    push(); // 8
  }
});

  return new Response(stream, { // 9
    headers: {
      "Content-Type": "text/html"
    }
  });
});

This approach becomes more valuable the larger the document we feed it: the user sees the first chunks while the rest are still downloading.

Creating my own streams

The example above also illustrates some of the methods of ReadableStream and its controller. The constructor syntax looks like this, and we're not required to use any of the methods:

let stream = new ReadableStream({
  start(controller) {},
  pull(controller) {},
  cancel(reason) {}
}, queuingStrategy);
  • start is called immediately. Use this to set up any underlying data sources (meaning, wherever you get your data from, which could be events, another stream, or just a variable like a string). If you return a promise from this and it rejects, it will signal an error through the stream
  • pull is called when your stream's buffer isn't full, and is called repeatedly until it's full. Again, if you return a promise from this and it rejects, it will signal an error through the stream. pull will not be called again until the returned promise fulfills
  • cancel is called if the stream is canceled. Use this to cancel any underlying data sources
  • queuingStrategy defines how much this stream should ideally buffer, defaulting to one item. Check the spec for more information

And the controller has the following methods and properties, which the sketch after this list puts together:

  • controller.enqueue(whatever) – queue data in the stream’s buffer.
  • controller.close() – signal the end of the stream.
  • controller.error(e) – signal a terminal error.
  • controller.desiredSize – the amount of buffer remaining, which may be negative if the buffer is over-full. This number is calculated using the queuingStrategy.
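
Putting the pieces together, here's a minimal sketch of a hand-made stream, assuming we just want to turn a string into Uint8Array chunks like the ones fetch produces; the message and the one-character chunking are only for illustration:

const message = 'Hello, streams!';

const stream = new ReadableStream({
  start(controller) {
    const encoder = new TextEncoder();
    // Enqueue the string one character at a time as Uint8Array chunks
    for (const char of message) {
      controller.enqueue(encoder.encode(char));
    }
    // No more data, so signal the end of the stream
    controller.close();
  }
});

// A Response accepts a ReadableStream as its body,
// which gives us an easy way to consume the stream
new Response(stream).text().then((text) => console.log(text));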

Playing with typography

In case you didn't know, I keep a ton of different type and layout experiments on their own website. I'll start sharing some of the demos I've been working on via Twitter and explain the code for some of them here.

The full demo is available on Codepen.

The HTML

The HTML is as simple as it comes: an h1 heading element inside a container div. We will use the container to place the title, and the myTitle class on the heading as the target for Lettering.js.

<div class='container'>
  <h1 class='myTitle'>Nightfall</h1>
</div>

This example uses only the heading. We could add more text and assume that this is the title for a document.

Javascript

Unlike most of my projects, Lettering.js is a jQuery plugin. While I don't normally use or recommend jQuery for production (it's not a value judgment on jQuery; it's just an additional dependency that is usually not needed), I'll make an exception for this demo, but I will also illustrate an alternative without jQuery and some of the problems I encountered when using it.

The first part of this section is to add jQuery. To do so, I use a technique I learned from the HTML5 Boilerplate that provides a local fallback when the CDN copy of jQuery fails to load for whatever reason.

<script src="http://code.jquery.com/jquery-1.12.4.min.js"
  integrity="sha256-ZosEbRLbNQzLpnKIkEdrPv7lOy9C27hHQ+Xp8a4MxAQ="
  crossorigin="anonymous"></script>
<script>window.jQuery ||
  document.write('<script src="/js/jquery-1.12.4.min.js"><\/script>')
</script>

We first load jQuery from a CDN as normal. In this case, I’ve chosen jQuery’s own CDN.

As soon as it loads we check for the global window.jQuery object. If it exists we use it; otherwise we use document.write to dynamically write a script tag pointing to a local copy of the same version we requested from the CDN.

Since jQuery is still popular, we will seldom encounter this issue in existing projects, but it can come up in brand-new projects, particularly when working on your local workstation without a network connection.

Next, we load Lettering.js and initialize it.

<script src='js/jquery.lettering.js'></script>
<script>
  $(document).ready(function() {
    $('.myTitle').lettering();
  });
</script>

The rest of the work is done in CSS.

We first import the Typekit project that we want to use. Typekit recommends using the link element to load the stylesheet, but I want to make sure that the font is available before we do all the manipulation.

When defining the body element, I set the overall background color and the default font for the document, which is not the font we’ll be using for the heading; this is on purpose.

@import url("https://use.typekit.net/aet8yjj.css");

body {
  background-color: #fbfbf6;
  font-family: Raleway, sans-serif;
}

The container element is where the magic starts. We set up a linear gradient for the background, the height and width for the element, the font size, and the breaking behavior.

Because we will treat each letter as its own container we want to break whenever we need to.

One last item regarding the container: I've omitted the vendor-prefixed syntax. Depending on what browsers you must support, I recommend testing to make sure they support the gradient syntax you provide.

.container {
  margin-top: -1.25em;
  background-color: rgb(33, 35, 66);
  background: linear-gradient(to bottom, #212342 0%, #fff 100%);
  color: rgb(255, 255, 255);
  height: 100%;
  width: 65%;
  font-size: 18em;
  word-break: break-all;
  overflow-wrap: break-word;
}

For the h1 element we do a few things: we set up the font we want to use, make it all uppercase, tighten the line height, and finish by adding padding so the text won't be flush against the margins and lose some of the text-shadow effects.

All span elements that Lettering.js generates will get position: relative so we can play with moving them around.

h1 {
  font-family: 'bebas-neue', sans-serif;
  text-transform: uppercase;
  line-height: .65em;
  padding: .05em;
}

span {
  position: relative;
}

Lettering.js will dynamically wrap each letter in a span element with a class equal to char plus a number indicating the position of the letter in the word we initialized; the result looks roughly like the markup below.
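
For the word Nightfall, the generated markup is roughly this (abbreviated, with the middle spans elided):

<h1 class="myTitle">
  <span class="char1">N</span>
  <span class="char2">i</span>
  <span class="char3">g</span>
  …
  <span class="char9">l</span>
</h1>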

The .charN rules all have three properties in common:

  • z-index to indicate the stacking order among the letters; larger positive numbers indicate a higher position in the stack, closer to the viewer and negative numbers indicate lower positions in the stack, away from the viewer
  • text-shadow produces a shadow from the source element. Parameters are: offset-x (the horizontal distance of the shadow from the text), offset-y (the vertical distance of the shadow from the text), blur-radius (the larger the radius, the wider and lighter the shadow becomes) and color (the color of the shadow)
  • margin-left to indicate how close letters are to each other

We can add other properties to individual characters as needed to get the effect we want. One idea I've been playing with is to use Sass to generate random colors for each letter; a sketch of that follows the per-character rules below.

.char1 {
  z-index: 4;
  text-shadow: -0.02em 0.02em 0.2em rgba(10, 10, 10, .8);
  margin-left: -0.05em;
}

.char2 {
  z-index: 3;
  text-shadow: -0.02em 0.02em 0.2em rgba(10, 10, 10, .8);
  margin-left: -0.025em;
  top: 0.05em;
}

.char3 {
  z-index: 9;
  text-shadow: -0.02em 0.02em 0.05em rgba(10, 10, 10, .8);
  margin-left: -0.05em;
}

.char4 {
  z-index: 5;
  text-shadow: 0.01em -0.01em 0.14em rgba(10, 10, 10, .8);
  margin-left: -0.05em;
  top: -0.01em;
}

.char5 {
  z-index: 2;
  text-shadow: -0.02em -0.02em 0.14em rgba(10, 10, 10, .8);
  margin-left: -0.06em;
  top: 0.02em;
}

.char6 {
  z-index: 10;
  text-shadow: -0.02em -0.02em 0.14em rgba(10, 10, 10, .8);
  margin-left: -0.06em;
  top: -0.02em;
}

.char7 {
  z-index: 8;
  text-shadow: -0.02em -0.02em 0.14em rgba(10, 10, 10, .8);
  margin-left: -0.05em;
}

.char8 {
  z-index: 6;
  text-shadow: -0.02em -0.02em 0.14em rgba(10, 10, 10, .8);
  margin-left: -0.08em;
  top: -0.02em;
}

.char9 {
  z-index: 7;
  text-shadow: -0.02em -0.02em 0.14em rgba(10, 10, 10, .8);
  margin-left: -0.08em;
}
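
Here's that Sass idea as a minimal sketch, assuming the nine generated spans above; the saturation and lightness values are arbitrary, and the colors are picked once at compile time rather than in the browser:

@for $i from 1 through 9 {
  .char#{$i} {
    // random(360) returns an integer between 1 and 360
    color: hsl(random(360), 70%, 50%);
  }
}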

One last aspect is to make sure that it looks decent on our target devices and browsers. I still have to check it on an iPad and an iPhone to make sure.

Non-jQuery Alternative

Based on Jeremy Keith's gist, this is a quick way to do some of the slicing and span/class addition without having to use jQuery.

The HTML and CSS remain the same, although we may have to tweak the CSS to make it look identical. The Javascript changes to the code shown below:

function sliceString(selector) {
  if (!document.querySelector) return;
  var element = document.querySelector(selector),
      string = element.innerText,
      total = string.length,
      html = '';
  for (var i = 0; i < total; i++) {
    // Wrap each character in a numbered span,
    // mimicking what Lettering.js generates
    html += `<span class="char${i + 1}">${string.charAt(i)}</span>`;
  }
  element.innerHTML = html;
}
sliceString('.myTitle');

This needs further testing, particularly in Firefox, where some users of Jeremy's code reported problems.

Working With Opaque Responses

While working with Workbox I discovered a hidden danger of opaque responses. This post will address these shortcomings and provide an imperfect solution for caching them.

Opaque responses normally come with restrictions that make them hard to cache. See Jeff Posnick's Stack Overflow answer for details, but it boils down to the following items (illustrated in the sketch after the list):

  • Because opaque responses are meant to be a black-box you won’t get meaningful information from most of the properties of the Response class, or call the various methods that make up the Body interface, like json() or text()
  • Browsers pad the opaque resource size. In the case of Chrome the minimum size that any single cached opaque response contributes to the overall storage usage is approximately 7 megabytes
  • The status property of an opaque response is always set to 0, regardless of success or failure
  • The add()/addAll() methods of the Cache API will reject if any response is outside the 2XX range, which means they will always reject opaque responses, whose status is 0
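
A quick sketch of those restrictions in practice (assuming an async context; the URL stands in for any cross-origin resource requested without CORS):

const response = await fetch('https://example.com/styles.css', {
  mode: 'no-cors'
});

console.log(response.type);    // "opaque"
console.log(response.status);  // 0, regardless of success or failure
console.log(response.headers.get('Content-Type')); // null; headers are filtered out

// The body is opaque too: text() resolves, but with an empty string
console.log(await response.text()); // ""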

Are opaque responses useful?

Yes, there are places where opaque responses can be used without problems. Based on MDN's Cross-origin network access article:

  • <script src="…"></script>. Error details for syntax errors are only available for same-origin scripts
  • <link rel="stylesheet" href="…"/> as long as it's served with the correct MIME type
  • Images displayed by <img>
  • <video> and <audio>
  • Plugins embedded with <object>, <embed />, and <applet>
  • Fonts applied with @font-face. Some browsers allow cross-origin fonts, others require same-origin
  • Anything embedded by <frame> and <iframe>

Caching opaque responses

The example below creates a new Request for an opaque response.

We then use the Fetch API to retrieve the requested object and we put it in the cache.

The idea is that by putting the resource in the cache we’re opting into accepting whatever the resource is, even an error.

Because of the padding, we need to be extra careful when we decide how many items we want to cache before our origin runs out of quota.

const request = new Request('https://third-party-no-cors.com/', {
  mode: 'no-cors'
});
// Assume `cache` is an open instance of the Cache class.
fetch(request).then(response => cache.put(request, response));

Workbox.js provides additional functionality to make it easier to work with opaque responses by using plugins.

The handler below uses two plugins: expiration and cacheableResponse.

The expiration plugin (workbox.expiration.Plugin()) dictates how long Workbox will keep the resources in the cache.

The cacheableResponse plugin (workbox.cacheableResponse.Plugin()) changes the behavior of Workbox regarding opaque responses.

statuses indicates which HTTP status codes we want to accept for this handler. I've chosen to accept 0 in addition to 200 as valid status codes for this request.

purgeOnQuotaError: true tells Workbox that it’s ok to delete this cache when we hit the quota limit for this domain. We do this because we’re accepting opaque responses and they are padded (at least 7 MB each in Chrome).

const extFontHandler = workbox.strategies.staleWhileRevalidate({
  cacheName: 'external-fonts',
  plugins: [
    new workbox.expiration.Plugin({
      maxAgeSeconds: 30 * 24 * 60 * 60,
      // maxEntries: 20,
    }),
    new workbox.cacheableResponse.Plugin({
      statuses: [0, 200],
      // Automatically cleanup if quota is exceeded.
      purgeOnQuotaError: true,
    }),
  ],
});

Each handler is associated with one or more routes. My project is working only with Adobe Fonts (Typekit) and Google Fonts. I split the domains into two, one for each provider I’m working with.

The first route uses a regular expression to match use.typekit.net.

The second one is more complicated: the regular expression matches either fonts.googleapis.com or fonts.gstatic.com. There may be other domains, and that would mean additional routes to cover them.

// Third party fonts from typekit
workbox.routing.registerRoute(/https:\/\/use\.typekit\.net/, (args) => {
  return extFontHandler.handle(args);
});

// Third party fonts from google fonts
workbox.routing.registerRoute(/https:\/\/fonts\.(googleapis|gstatic)\.com/, (args) => {
  return extFontHandler.handle(args);
});

We’ve accepted opaque responses and have configured the Service Worker to expire the requests after a given period of time and to delete the cache when we hit the quota limit.

So this provides an imperfect solution to caching opaque responses. I don't know if this is better than hosting all the resources locally or not, but it is what I have now and it works as intended.

Progressive Enhancement Matters

Progressive Enhancement is a powerful methodology that allows Web developers to concentrate on building the best possible websites while balancing the issues inherent in those websites being accessed by multiple unknown user-agents.

Progressive Enhancement (PE) is the principle of starting with a rock-solid foundation and then adding enhancements to it if you know certain visiting user-agents can handle the improved experience. PE differs from Graceful Degradation (GD) in that GD is the journey from complexity to simplicity, whereas PE is the journey from simplicity to complexity. PE is considered a better methodology than GD because it tends to cover a greater range of potential issues as a baseline. PE is the whitelist to GD's blacklist.

Part of the appeal of PE is the strength of the end result. PE forces you to initially plan out your project as a functional system using only the most basic of Web technologies. This means that you know you'll always have a strong foundation to fall back on as complexity is introduced to the project.

Progressive Enhancement: What It Is, And How To Use It?

Progressive enhancement (PE) is the principle of starting with a rock-solid foundation and then adding enhancements to it. Every so often the discussion about PE, its cousin Graceful Degradation (GD), and whether we still need either will rear its head: debates about JavaScript not being available, or about not needing to think about PE or GD in a world where browsers are evergreen.

This is not a post on what Progressive Enhancement is. If you're interested in that area, Aaron Gustafson wrote three articles on the subject, the first of which is a good introduction. The articles are:

  1. Understanding Progressive Enhancement
  2. Progressive Enhancement with CSS
  3. Progressive Enhancement with JavaScript

How do we build a base experience?

One of the hardest things to decide when it comes to progressive enhancement is what makes for a core experience. How do we decide on the baseline we want to provide to all our users?

It would be tempting to make modules and Grid part of the baseline, but that wouldn't be as useful as we think.

CSS Grid is supported in all modern browsers, so it's not an issue moving forward. However, there are three cases to be made for providing Grid as a progressive enhancement:

  • Earlier versions of our evergreen browsers did not support the feature
  • Browsers like IE and Opera Mini don’t support Grid at all (and save yourself the comment on how not even Microsoft supports IE… there are plenty of people still using it)
  • Having to work around interoperability bugs makes relying on Grid as a baseline more expensive than treating it as an enhancement

If you want to write defensive CSS, you can use feature queries like this:

div {
  float: right;
}

@supports (display: grid) {
  div {
    display: grid;
  }
}

If you’re using Javascript then the way to check for CSS feature support looks like this:

const hasGrid = CSS.supports("display: grid");

if (hasGrid) {
  console.log('We support grid');
  // Do something for browsers that support grid
} else {
  console.log('Grid is not supported');
  // Do something for browsers that don't support grid
}

I’m not saying don’t use Grids… quite the opposite. I’m saying to provide a base experience that will support as many browsers as possible and then use Grids as an enhancement for those browsers that can work with them.

In JavaScript, take modules and async/await as an example. All versions of modern browsers that support modules also support async/await, but not all versions that support async/await support modules, so you get to decide which supported features are more important for your application. A quick capability check is sketched below.
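
A minimal sketch of that kind of check, assuming we only care about module support; the noModule property only exists in browsers that understand <script type="module">:

// Browsers that understand <script type="module"> also implement the
// noModule attribute on script elements, so its presence doubles as
// a feature check for module support
const supportsModules = 'noModule' in document.createElement('script');

if (supportsModules) {
  console.log('Modules (and therefore async/await) are supported');
} else {
  console.log('Serve the transpiled, non-module bundle instead');
}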

Another question you need to ask is whether transpilation is needed for your target browsers. Tools like Babel will convert your modern JavaScript (ES2015 and later) into an older version of JavaScript for browsers that don't support the newer features. Using the env preset and a list of the oldest browser versions you want to support, you can write your code once and let Babel deal with making it work in your older supported browsers.
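
As a sketch, a minimal babel.config.js using the env preset might look like this; the browser list is an example, not a recommendation:

// babel.config.js
module.exports = {
  presets: [
    ['@babel/preset-env', {
      // Compile for the oldest browsers we've decided to support
      targets: 'last 2 versions, not dead'
    }]
  ]
};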

The biggest challenge is how to make the transpiled bundles as performant as the original code.

How to enhance it?

As with many things in web development and design, it depends. It depends on the type of site you’re building, how much data it needs to load from the server and how you’re caching the content for future re-use.

If we are building a content site we may want to populate the base content first and then run CSS and JavaScript to enhance the base content or add additional material.

If we build a catalog page for a store, the most expedient way may be to create templates that get populated with data from the server. But we have to stay sensitive to network hiccups and the many other reasons why JavaScript may time out or otherwise fail to load, particularly on older or lower-end devices, and make sure the page still shows something useful when that happens.

Once we have our core experience, one that works without CSS and with JavaScript either disabled or kept to a minimum, we can start thinking about how to enhance it and how to do so in an accessible way.

Conclusion

I know that we don't have to make the experience identical for all devices but, to me, that doesn't mean we should provide a subpar experience to those browsers that "don't cut the mustard", particularly when we don't have to.

I like an escalator because an escalator can never break, it can only become stairs. There would never be an escalator temporarily out of order sign, only an escalator temporarily stairs. Sorry for the convenience.

Mitch Hedberg, Comedy Central Presents

We should make our apps into escalators, not part of the wealthy western web.

PWAs: Don’t build it all at once

One of the things that has been tempting when working on a PWA for my layout experiments is to do everything at once, working on all the pieces in parallel.

Reading Jason Grigsby’s Progressive Web Applications book from A Book Apart reminded me that we need to take it easy, plan ahead and do incremental deployment of PWA features.

The basics

What do I mean by this? There are three technical items that we know we need to turn a site into a PWA:

  • Host the site using HTTPS
  • A Web Manifest
  • A Service Worker

The basics are easy. We can add an SSL certificate using free tools like Let's Encrypt.

The (basic) manifest is also fairly straightforward: you can generate it manually using instructions like those on Google Developers, or generate it automatically using one of many Web Manifest generators. I suggest PWA Builder, mostly because it will help you generate both the manifest and a basic service worker that you can later tinker with. A bare-bones manifest is sketched below.
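
For illustration, a bare-bones manifest might look like this; every value here is a placeholder for your own site's data:

{
  "name": "Layout Experiments",
  "short_name": "Layouts",
  "start_url": "/index.html",
  "display": "standalone",
  "background_color": "#fbfbf6",
  "theme_color": "#212342",
  "icons": [{
    "src": "/images/icon-512.png",
    "sizes": "512x512",
    "type": "image/png"
  }]
}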

The Service Worker is also fairly easy. One thing to remember is that we don't need to do everything the first time. PWA Builder will give you options for creating your service worker and the code to insert into your site's entry page to use it.
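
The registration code that goes in the entry page typically looks something like this (the /sw.js path is an assumption; use whatever path your worker lives at):

// Register the service worker only in browsers that support it,
// so unsupported browsers simply keep the non-PWA experience
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker.register('/sw.js')
      .then((registration) => console.log('SW registered:', registration.scope))
      .catch((err) => console.error('SW registration failed:', err));
  });
}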

Thinking things through

These are not all the items worth reviewing and analyzing before you implement your site as a PWA, but they're a good starting point. For a more thorough discussion of how to gradually roll out a PWA, check Progressive Web Applications.

There is more to PWAs, and web applications in general, than the basics. Yes, we can slap a manifest and a service worker onto a mediocre website and make it into a mediocre PWA.

But there are a lot of other things we need to consider when turning our sites into applications even before we write our first line of code.

Some of these considerations have to do with our site as it exists before we implement PWA technologies.

Navigation

The first item to consider, for me, is the site’s navigation. If we make the site into a full-screen application then we need to make sure that users can navigate without the browser’s chrome available.

Performance matters

Another aspect to consider is how your site performs before you implement PWA features. My favorite tool is Lighthouse, available as a CLI that you can integrate into your existing workflows and as part of the DevTools audits panel.
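
Lighthouse also ships as a Node module. Here's a sketch of a programmatic run, following the pattern in the Lighthouse documentation; the URL, flags, and options are placeholders:

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function audit(url) {
  // Launch a headless Chrome instance for Lighthouse to drive
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(url, { output: 'html', port: chrome.port });

  // Category scores come back in the 0-1 range
  console.log('Performance:', result.lhr.categories.performance.score * 100);

  await chrome.kill();
  return result.report; // the HTML report as a string
}

audit('https://example.com');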

To channel my inner Alex Russell: any performance test you run must run on real devices, using chrome://inspect to debug them remotely. The results from DevTools emulation are good approximations but will never match the results of running on a real device.

The reason we run performance tests is to make sure we’re not using the Service Worker as an excuse for slow loading content.

Service Worker: what part of the Swiss army knife do we need?

When planning a service worker you have to decide how simple or how elaborate you want it to be. Do you want a worker that automatically caches all assets on the first load? One that pre-caches certain assets (the shell of an app, or the assets needed to load an index page)? Push notifications? User-activated background fetch?

The cool thing is that we don't have to do all of this at once. The idea behind service workers is that you build them out over time and users automatically pick up the new features.

We can start with a basic service worker that caches all resources to a single cache, like the sketch below, and progress from there. Every time you change the service worker file, browsers will treat it as a brand-new worker and will update whatever needs to change.
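
A minimal sketch of that starting point; the cache name is arbitrary, and a production worker would want more nuance:

// sw.js — cache every successful response in a single cache
const CACHE_NAME = 'site-cache-v1';

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open(CACHE_NAME).then(async (cache) => {
      // Serve from the cache when we can
      const cached = await cache.match(event.request);
      if (cached) return cached;

      // Otherwise go to the network and cache the result for next time
      const response = await fetch(event.request);
      if (response.ok) {
        cache.put(event.request, response.clone());
      }
      return response;
    })
  );
});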

There are also tools like Workbox.js that automate service worker creation and usage. Workbox makes it easier to create multiple caches with different caching strategies, and it gives you access to newer technologies built on top of service workers.

It also gives you more time to develop your strategy for how you will implement the worker.

Frameworks and PWAs

If you're using a framework, you can still evaluate PWAs. Angular and React both provide PWA implementations for new apps and sites: Angular through the CLI and the @angular/pwa package, and React through the create-react-app tool. In my limited research I wasn't able to figure out whether this works only for new applications or whether we can update an existing one to make it a PWA, but if you're familiar with the tools, you'll also be familiar with the communities where you can find additional information.
