Pandoc, Multiformat Publishing

Pandoc is a converter to and from different text formats. What attracted me to the tool is that it allows me to work with Microsoft Word documents and convert them to Markdown or any other format.

Figure 1 shows the formats that Pandoc converts to and from.

Pandoc Input and Output Format Visualization

Since I already work with Markdown, this is a value-added tool: it allows me to convert Markdown to formats that would be very difficult or impossible to produce without a tool.

We’ll explore converting Markdown to epub3 (an ebook standard and the starting point for Kindle conversion using Kindlegen), convert the same Markdown document to LaTeX, and then explore an alternative to my CSS for Paged Media and PrinceXML way of creating PDF documents.

Are these solutions perfect? No, definitely not. They are good starting points for future work.

  • Using Pandoc to create epub books saves me from having to do the grunt work of manually creating the XML files required for epub.
  • The LaTeX conversion gives me a working LaTeX file that I can then further customize by adding packages and additional environments.
  • The PDF conversion is a backup in case PrinceXML changes its current practice of not charging for development, only for production work.

Markdown to epub

Epub, and more specifically epub 3, is an ebook format created by the IDPF and now being submitted to the W3C as part of the merger of the two institutions.

The format itself is a zipped file with an application/epub+zip mimetype. The contents of an example ebook are shown in the following listing. We’ll dissect it below.

├── META-INF
│   ├── com.apple.ibooks.display-options.xml
│   └── container.xml
├── OEBPS
│   ├── ch01.xhtml
│   ├── ch02.xhtml
│   ├── cover.xhtml
│   ├── css
│   ├── images
│   ├── notes.xhtml
│   ├── package.opf
│   ├── toc.ncx
│   └── toc.xhtml
└── mimetype

The META-INF directory

The META-INF directory contains information about the book.

The proprietary iBooks file, com.apple.ibooks.display-options.xml, tells iBooks about special characteristics for one or more versions of the application (macOS, iPad or iPhone).

In this case we tell it that for all platforms we want to use custom fonts and we don’t want to make this an interactive book. The code to do this is this:

  <?xml version="1.0" encoding="UTF-8"?>
  <display_options>
    <!-- all devices -->
    <platform name="*">
      <!-- set to "true" when embedding fonts -->
      <option name="specified-fonts">true</option>
      <!-- set to "true" when using javascript or canvas -->
      <option name="interactive">false</option>
    </platform>
  </display_options>

In container.xml we tell the epub reader where to find the root of the book. This is the package.opf file, not an index.html or similar file. In our example content, the file looks like this and it points to the package.opf file inside the OEBPS directory (discussed in the next section):

<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/package.opf"
              media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>

If you’re not targeting iBooks you can remove the META-INF directory, but then iBooks will always use system fonts, even if you’ve packaged custom fonts in your book.

The OEBPS directory

The OEBPS directory contains the actual book content plus a few XML files used to describe and define the structure of the book.

It’s important to note that the content is written in XHTML, either 1.1 or the XHTML version of HTML5. This poses additional restrictions and requirements.

  1. All XHTML documents must have specific elements
    • DOCTYPE declaration
    • HTML element
    • HEAD element
    • TITLE element
    • BODY element
  2. All XHTML tag names & attribute names must be in lowercase
  3. All XHTML elements must close. If it doesn’t have a closing element then it must be closed within the opening tag like this: <br />
  4. All XHTML elements must be properly nested
  5. All XHTML attribute values must be quoted
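A minimal XHTML content document that satisfies all five rules might look like this (the chapter title is illustrative):

```html
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Chapter 1</title>
  </head>
  <body>
    <p>Every tag is lowercase, quoted, properly nested and closed.<br /></p>
  </body>
</html>
```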

The images and css directories contain the associated resources for the content.

The package.opf file is the core of the book. It provides the ebook reader with metadata for the publication as well as the navigation and table of contents structure.

The final file in this section, toc.ncx, acts as a backwards-compatible bridge to epub 2, the previous version of the specification, still used by many publishers around the world.

The mimetype file

At the root of the book directory we must place a mimetype file. It has no extension and its only content is the string application/epub+zip without a carriage return.
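Because the file must not end in a newline, printf (which adds none) is safer than echo when creating the file by hand. A small sketch; the zip commands in the comments assume the Info-ZIP zip tool and are shown for reference only:

```shell
# Create the mimetype file with no trailing newline (printf adds none; echo would)
printf 'application/epub+zip' > mimetype

# If you later zip the book by hand, the mimetype entry must be the first
# file in the archive and must be stored uncompressed, e.g.:
#   zip -X0 book.epub mimetype
#   zip -rg9 book.epub META-INF OEBPS
```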

Why Pandoc? How to create an epub

I’ve worked on creating epub and Kindle ebooks from scratch. Pandoc doesn’t produce the cleanest ebook on the market, but it creates all the XML files; it’s just a matter of deciding how much cleanup you want to do.

The basic command is simple. Using a Markdown file as the source we use the following command:

pandoc sample.md -o sample.epub

We can add metadata using a syntax similar to YAML:

---
title:
- type: main
  text: My Book
- type: subtitle
  text: An investigation of metadata
creator:
- role: author
  text: John Smith
- role: editor
  text: Sarah Jones
identifier:
- scheme: DOI
  text: doi:10.234234.234/33
publisher: My Press
rights: © 2007 John Smith, CC BY-NC
---

Pandoc supports the following data types:

  • identifier Either a string value or an object with fields text and scheme. Valid values for scheme are ISBN-10, GTIN-13, UPC, ISMN-10, DOI, LCCN, GTIN-14, ISBN-13, Legal deposit number, URN, OCLC, ISMN-13, ISBN-A, JP, OLCC
  • title Either a string value, or an object with fields file-as and type, or a list of such objects. Valid values for type are main, subtitle, short, collection, edition, extended
  • creator Either a string value, or an object with fields role, file-as, and text, or a list of such objects. Valid values for role are MARC relators, but pandoc will attempt to translate the human-readable versions (like “author” and “editor”) to the appropriate MARC relators
  • contributor Same format as creator
  • date A string value in YYYY-MM-DD format. (Only the year is necessary.) Pandoc will attempt to convert other common date formats
  • lang (or legacy: language) A string value in BCP 47 format. Pandoc will default to the local language if nothing is specified
  • subject A string value or a list of such values
  • description A string value
  • type A string value
  • format A string value
  • relation A string value
  • coverage A string value
  • rights A string value
  • cover-image The path to the cover image
  • stylesheet The path to the CSS stylesheet
  • page-progression-direction Either ltr or rtl. Specifies the page-progression-direction attribute for the spine element

By default, pandoc will download linked media (including audio and video) and include it in the EPUB container, providing a complete epub package that will work regardless of network connectivity and other external factors.

If you want to link to external media resources instead, use raw HTML in your source and add data-external="1" to the tag with the src attribute.

For example:

<audio controls="1">
  <source src="http://example.com/music/toccata.mp3"
    data-external="1" type="audio/mpeg">
  </source>
</audio>

I recommend against linking to external resources unless you provide alternative content, as this will make your book dependent on network connectivity, and that is far from reliable.

Markdown to Latex

LaTeX is a document preparation system. When writing, the writer uses plain text as opposed to the formatted text found in WYSIWYG word processors like Microsoft Word, LibreOffice Writer and Apple Pages. The writer uses markup tagging conventions to define the general structure of a document (such as article, book, and letter), to stylise text throughout a document (such as bold and italics), and to add citations and cross-references. A TeX distribution such as TeX Live or MikTeX is used to produce an output file (such as PDF or DVI) suitable for printing or digital distribution. Within the typesetting system, its name is stylised as LATEX.

I’ve always been interested in ways to move away from word processors to formats more convenient and easier to use than the bloated binary files produced by Word, Pages and other word processors. My current favorite is Markdown because it’s easy to read and I’ve worked on toolchains to convert the Markdown to HTML and PDF.

LaTeX is a good backup option that allows me to create PDF (and will be the intermediary step when we convert Markdown to PDF) and HTML from LaTeX sources.

The command to convert Markdown to LaTeX is simple:

pandoc -s sample.md -o sample.tex

The -s flag makes sure that we generate a complete document rather than a fragment. Otherwise the LaTeX content will not work with other items in the toolchain.

An alternative: Markdown to PDF

The final task I want to discuss is converting Markdown to PDF with a toolchain other than what I currently use (Markdown to HTML, and HTML through CSS Paged Media to PDF using PrinceXML). This process provides an alternative toolchain going from Markdown to LaTeX and from LaTeX to PDF.

The format of the PDF looks too much like a LaTeX document and I’ve never been a fan. But the toolchain is open source (even though it’s my least favorite license, GPL) so I don’t have to worry about the vendor changing its mind about the licensing for the tool.

pandoc -s sample.md -o sample.pdf

Further thoughts

We’ve just scratched the surface of what Pandoc can do. One interesting idea is to convert Markdown to ICML (InCopy Markup Language), which we can then import into an InDesign template that we’ve set up in advance.

The possibilities look promising 🙂

PostCSS and crazy things you can do with it

PostCSS is an interesting project. In a nutshell, it takes CSS and turns it into an Abstract Syntax Tree, a form of data that JavaScript can manipulate. JavaScript-based plugins for PostCSS then perform different code manipulations. PostCSS itself doesn’t change your CSS; it allows plugins to perform the transformations they’ve been designed to make.

There are essentially no limitations on the kind of manipulation PostCSS plugins can apply to CSS. If you can think of it, you can probably write a PostCSS plugin to make it happen.

It’s also important to know what PostCSS is not. This material is adapted from the Envato Tuts+ PostCSS Deep Dive: What You Need to Know.

PostCSS is Not a Pre-processor

Yes, you can absolutely use it as a pre-processor, but you can also use PostCSS without any pre-processor functionality. I only use Autoprefixer and, sometimes, CSS Nano. Neither of these tools is a pre-processor.

PostCSS is Not a Post-processor

Post-processing is typically seen as taking a finished stylesheet comprising valid/standard CSS syntax and processing it to do things like adding vendor prefixes. However, PostCSS can do more than just post-process a file; it’s limited only by the plugins you use and create.

PostCSS is Not “Future Syntax”

There are some excellent and very well known PostCSS plugins which allow you to write in future syntax, i.e. using CSS that will be available in the future but is not yet widely supported. However, PostCSS is not inherently about supporting future syntax.

Using future syntax is your choice, not a requirement. Because I come from SCSS I do all my experimental development there and use PostCSS in a much more limited capacity. If I so choose, I can turn to PostCSS and use future-looking features without being afraid that my target browsers will not support them.

PostCSS is Not a Clean Up / Optimization Tool

The success of the Autoprefixer plugin has led to the common perception of PostCSS as something you run on your completed CSS to clean it up and optimize it for speed and cross-browser compatibility.

Yes, there are many fantastic plugins that offer great clean up and optimization processes, but these are just a few of the available plugins.

Why I picked PostCSS and what we’ll do with it

I initially decided not to use PostCSS until I discovered that Autoprefixer and CSS Nano, some of my favorite tools, are actually PostCSS plugins. That made me research PostCSS itself and see what it’s all about. What I found is a basic tool and a rich plugin ecosystem that can do a lot of the things you may want to do with your CSS, from adding vendor prefixes based on what browsers you expect your users to have, to analyzing your code for compliance with a given methodology like BEM.

I also like how PostCSS advocates for the single responsibility principle as outlined by Robert Martin:

The single responsibility principle is a computer programming principle that states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class.

Wikipedia Entry: Single responsibility principle

Basically each PostCSS plugin should handle only one task and do it well. We should not create classes that do more than one thing and we shouldn’t duplicate functionality that is already available through another PostCSS plugin.

In this post we’ll explore how to build a PostCSS workflow using Gulp, how to build a plugin, and how to add plugins to the PostCSS workflow we created.

Running PostCSS

I work primarily in a Gulp environment so I built this task to work with PostCSS plugins, and Autoprefixer in particular. Assuming you haven’t done so before, install Gulp globally:

npm i -g gulp

Then install the plugins we need, gulp, gulp-postcss and autoprefixer, into the project you’re working in. The -D flag saves the plugins as development dependencies:

npm i -D gulp gulp-postcss autoprefixer

The task itself is currently made of two parts:

  • The list of processors to use
  • The task itself

The task pipes the input through sourcemaps, then runs PostCSS and Autoprefixer, which has already been configured with what versions of browsers to prefix for. It then writes the sourcemap and the output to the destination directory.

gulp.task("processCSS", () => {
  // What processors/plugins to use with PostCSS
  const PROCESSORS = [
    autoprefixer({browsers: ['last 3 versions']})
  ];
  return gulp
    .src('src/css/**/*.css')
    .pipe(sourcemaps.init())
    .pipe(postcss(PROCESSORS))
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('src/css'));
});
If we run this task last in our CSS handling process we can change the destination from src/css to dest/css. But in the process where this task was first used there was an additional compression step beyond what SASS gave me; I wasn’t using CSSNano, so I had to keep the files in the source directory for further processing. We’ll revisit this when we discuss other plugins we can use.

Adding a second plugin

Even though the CSS for this task is compressed using SASS’s compressed format, I want more compression, so we’ll use CSS Nano to do further compression.

To use it we first need to install the plugin:

npm i -D cssnano

Next we need to modify our build script to require CSS Nano:

const cssnano = require('cssnano');

And, finally, we need to modify the task to incorporate CSS Nano. We do this by adding CSS Nano to our PROCESSORS array. The modified task now looks like this:

gulp.task("processCSS", () => {
  // What processors/plugins to use with PostCSS
  const PROCESSORS = [
    autoprefixer({browsers: ['last 3 versions']}),
    cssnano()
  ];
  return gulp
    .src('src/css/**/*.css')
    .pipe(sourcemaps.init())
    .pipe(postcss(PROCESSORS))
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('src/css'));
});
We can add further processors following the same formula: install and require the plugin, add the plugin (and any configuration) to the PROCESSORS array, and test to make sure it does what you want it to.

Building a plugin

The code for this section originally appeared in Tuts+ PostCSS Deep Dive: Create Your Own Plugin.

What I find the most intriguing about PostCSS is the API and how easy it makes it for developers to create plugins to address specific needs.

What the CSS code will look like

Let’s assume, for example, that Marketing has decided we should use a given set of fonts in our content. Rather than typing the full string of all the fonts in the stack, you can do something like this instead:

html {
  font-family: fontstack("Arial");
  font-weight: normal;
  font-style: normal;
}

And the resulting CSS will appear like this:

html {
  font-family: Arial, "Helvetica Neue", Helvetica, sans-serif;
  font-weight: normal;
  font-style: normal;
}

Configuring the project

To initialize the plugin project we have to create a folder and initialize the package with NPM and accept the defaults automatically. We do this with the following commands:

mkdir local-stacks # Creates the directory
cd local-stacks # Changes to the directory we just created
npm init --yes # Inits NPM accepting all defaults automatically

Now we must create the file we’ll use as our plugin’s entry point, index.js. We can create this with:

touch index.js

Or create the file in your text editor. I normally use Visual Studio Code.

Writing the code

To get the plugin going we need to install and require two packages: the PostCSS core engine (postcss) and Underscore (underscore), which we will use to merge local and plugin configurations. I am not using ES6 module import syntax (although it would make the code simpler) because I want to use the module with older versions of Node.

We then define an array of the font stacks that we want to use. The name we want to use for the stack is the key and the stack itself is the value for the key.

const postcss = require('postcss');
const _ = require('underscore');

// Font stacks from
const fontstacks_config = {
  'Arial': 'Arial, "Helvetica Neue", Helvetica, sans-serif',
  'Times New Roman': 'TimesNewRoman, "Times New Roman", Times, Baskerville, Georgia, serif',
  'Lucida Grande': '"Lucida Grande", "Lucida Sans Unicode", "Lucida Sans", Geneva, Verdana, sans-serif'
};

toTitleCase will convert the string passed to it so that the first letter of each word is capitalized. The regular expression that we use to capture the string to title case is a little complicated (it was for me when I first saw it) so I’ve unpacked it below:

  • \w matches any word character (equal to [a-zA-Z0-9_])
  • \S* matches any non-whitespace character (equal to [^\r\n\t\f ])
  • * Quantifier — Matches between zero and unlimited times, as many times as possible, giving back as needed (greedy)
  • g modifier – Return all matches (don’t return after first match)

// Credit for this function to an answer on SO
function toTitleCase(str) {
  return str.replace(/\w\S*/g, function(txt) {
    return txt.charAt(0).toUpperCase() + txt.substr(1).toLowerCase();
  });
}
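To see the regular expression at work, here is the function applied to a couple of lowercase stack names (a standalone copy for illustration):

```javascript
// Title-case every word in the string: \w\S* grabs each word,
// then we uppercase its first character and lowercase the rest.
function toTitleCase(str) {
  return str.replace(/\w\S*/g, function (txt) {
    return txt.charAt(0).toUpperCase() + txt.substr(1).toLowerCase();
  });
}

console.log(toTitleCase('lucida grande'));   // Lucida Grande
console.log(toTitleCase('times new roman')); // Times New Roman
```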

The module we’re exporting is the actual plugin. We give it a name, local-stacks, and we define it as a function. In the function:

  • We walk through all the rules in the stylesheet using walkRules, part of the PostCSS API
  • For each rule we walk through all the declarations using walkDecls, also part of the PostCSS API
  • We test if there is a fontstack call in the declaration. If there is one we:
    1. Get the name of the fontstack requested by matching the value inside the parentheses and then removing any quotation marks
    2. Title case the resulting string in case the user didn’t
    3. Look up the name of the font stack in the fontstacks_config object
    4. Capture any value that was in the string before the fontstack call
    5. Create a new string with both the first font and the value of our font stack
    6. Return the new value as the value of our declaration

module.exports = postcss.plugin('local-stacks', function (options) {

  return function (css) {

    options = options || {};

    _.extend(fontstacks_config, options.fontstacks);

    css.walkRules(function (rule) {

      rule.walkDecls(function (decl, i) {
        var value = decl.value;

        if (value.indexOf('fontstack(') !== -1) {
          var fontstack_requested = value.match(/\(([^)]+)\)/)[1]
            .replace(/["']/g, '');

          fontstack_requested = toTitleCase(fontstack_requested);

          var fontstack = fontstacks_config[fontstack_requested];

          var first_font = value.substr(0, value.indexOf('fontstack('));

          var new_value = first_font + fontstack;

          decl.value = new_value;
        }
      });
    });
  };
});

Next Steps and Closing

This is a very simple plugin as far as plugins go. You can look at Autoprefixer and CSS Nano for more complex examples and ideas of what you can do with PostCSS. If you’re interested in exploring the API, it is fully documented on the PostCSS website.

An important reminder: you don’t have to reinvent the wheel. There is a large plugin ecosystem available for PostCSS; we can use these plugins to get the results we want. This makes writing your own plugin a fun but not always necessary exercise.

Once you’ve scoped your CSS project you can decide how much of PostCSS you need and how your needs can be translated into existing plugins and custom code.


Page Visibility

There is nothing more annoying than having audio or video playing in a tab when it’s in the background or when the browser is minimized.

The Page Visibility API gives us control of what to do with media or any other element of a page when the tab is hidden or not visible. Some of the things we can do with this API:

  • Pausing a video when the page has lost user focus.
  • Stop an HTML5 canvas animation from running when a user is not on the page.
  • Show a notification to the user only when the page is active.
  • Stop movement of a slider carousel when the page has lost focus.

The API introduces two new attributes to the document element: document.visibilityState and document.hidden.

document.visibilityState holds one of four different values:

  • hidden: Page is not visible on any screen
  • prerender: Page is loaded off-screen and not visible to user
  • visible: Page is visible
  • unloaded: Page is about to unload (user is navigating away from current page)

document.hidden is a boolean property that is set to false if the page is visible and true if the page is hidden.

The first example pauses a video if the container tab is hidden or not visible and plays it otherwise.

We start by adding a visibilitychange event listener to the document. Inside the listener we check if the document is hidden and pause the video if it is; otherwise we play it.

const video = document.getElementById('myVideo');

document.addEventListener('visibilitychange', function () {
  if (document.hidden) {
    video.pause();
  } else {
    video.play();
  }
});

The most obvious use is to control video playback when the tab is not visible. When I wrote about creating custom controls for HTML5 video I used a click event handler like this one to control video play/pause status:

play.addEventListener('click', e => {
  // Prevent Default Click Action
  e.preventDefault();
  if (video.paused) {
    video.play();
    playIcon.src = 'images/icons/pause.svg';
  } else {
    video.pause();
    playIcon.src = 'images/icons/play-button.svg';
  }
});

We can further enhance it so that it will only play the video if the document is visible.

We wrap the if block that controls playback in another if block that tests the page’s visibility state and only moves forward if we can see the page. If the page is not visible then we pause the video, regardless of whether it’s currently playing.

The code now looks like this:

play.addEventListener('click', e => {
  // Prevent Default Click Action
  e.preventDefault();
  if (document.visibilityState === 'visible') {
    if (video.paused) {
      video.play();
      playIcon.src = 'images/icons/pause.svg';
    } else {
      video.pause();
      playIcon.src = 'images/icons/play-button.svg';
    }
  } else {
    video.pause();
  }
});

With those simple changes we’ve ensured that the video will not play in the background and that there will be no other distractions when we work on other tabs. We should then do something similar for our keyboard video controls.
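The decision logic is the same for both handlers, so one option is to pull it into a small helper both can share. This is a sketch; the helper name and the keydown wiring are my own, not part of the original controls:

```javascript
// Decide what a play/pause toggle should do, given the page's visibility
// state and whether the video is currently paused.
function nextPlaybackAction(visibilityState, paused) {
  if (visibilityState !== 'visible') {
    return 'pause'; // never start playback while the page is hidden
  }
  return paused ? 'play' : 'pause';
}

// Hypothetical keyboard wiring (browser-only, shown as a comment):
// document.addEventListener('keydown', (e) => {
//   if (e.key === ' ') {
//     const action = nextPlaybackAction(document.visibilityState, video.paused);
//     if (action === 'play') { video.play(); } else { video.pause(); }
//   }
// });
```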

HTTP/2 Server Push, Link Preload And Resource Hints

We’ve become performance obsessed. It’s important, and the obsession shows. Fortunately we’re also given tools to accommodate the need for speed. We’ll look at three ways of helping our servers give us the resources we need before we actually need them.

We’re not covering service workers in this post. Even though they are very much a performance feature, they are client-side, and we want to discuss server-side performance improvements or improvements that are used directly from the HTML code, not JavaScript.

What is Server Push

Accessing websites has always followed a request and response pattern. The user sends a request to a remote server, and with some delay, the server responds with the requested content.

The initial request to a web server is usually for an HTML document. The server returns the requested HTML resource. The browser parses the HTML and discovers references to other assets (style sheets, scripts, fonts and images). The browser requests these new assets, which runs the process again (a stylesheet may have links to fonts or to images being used as background).

The problem with this mechanism is that users must wait until the HTML document has been downloaded and parsed before the browser can discover and retrieve critical assets. This delays rendering and increases load times.

What problem does server push solve?

With server push we now have a way to preemptively “push” assets to the browser before the browser explicitly requests them. If we do this carefully we can increase perceived performance by sending things we know users are going to need.

Let’s say that our site uses the same fonts throughout and one common stylesheet named main.css. When the user requests our site’s main page, index.html, we can push these files we know we’ll need right after we send the response for index.html.

This push will increase perceived speed because it enables the browser to render the page faster than waiting for the server to respond with the HTML file and then parsing it to discover the additional resources it must request and parse.

Server push also acts as a suitable alternative for a number of HTTP/1-specific optimizations, such as inlining CSS and JavaScript directly into HTML, as well as using the data URI scheme to embed binary data into CSS and HTML.

If you inline CSS into an HTML document within <style> tags, the browser can begin applying styles immediately to the HTML without waiting to fetch them from an external source. This concept holds true with inlining scripts and inlining binary data with the data URI scheme.

However, the big pitfall of these techniques is that the embedded assets can’t be cached separately from the page they are embedded in and, if the same code is used in more than one page, we end up with duplicated code in our pages.

Say, for example, that we want to push a CSS stylesheet and a JavaScript file for all requests for HTML pages. In an Apache HTTP Server (version 2.4.17 and later) you can configure the server to push with something like this:

<If "%{DOCUMENT_URI} =~ /\.html$/">
  H2PushResource add css/site.css
  H2PushResource add js/site.js
</If>

We can then configure push resources that are specific to each section of our site, for example:

<If "%{DOCUMENT_URI} == '/portfolio/index.html'">
  H2PushResource add /css/dist/critical-portfolio.css?01042017
</If>

<If "%{DOCUMENT_URI} == '/code/index.html'">
  H2PushResource add /css/dist/critical-code.css?01042017
</If>

Yes, this means we have to play with server configurations, either the default configuration, a virtual host or on a directory basis using .htaccess. I still consider this an effort worth making.

It’s not all positive. Some things to be aware of:


Just like when building an application shell with a service worker, we need to be extremely careful about the size of the assets we choose to push. Too many files, or files that are too large, will defeat the purpose of pushing assets to improve performance as they’ll delay rendering.


Pushing assets that aren’t needed on the current page is not necessarily a bad thing if you have visitor analytics to back up this strategy or you know that the asset will be used elsewhere on your site. When in doubt, don’t push. Remember that you may be costing people real money when pushing unnecessary resources to people on restricted mobile data plans.


Some servers give you a lot of server push-related configuration options. Apache’s mod_http2 has some options for configuring how assets are pushed. Check your server’s configuration for details about what options can be configured and how to do it.


There have been some questions about whether server push could cause assets to be unnecessarily pushed to users when they return to our site. One way to control this is to push assets only when a cookie indicating the assets were pushed is not present.

An example of this technique is in Jeremy Wagner’s article Creating a Cache-aware HTTP/2 Server Push Mechanism at CSS Tricks. It provides an example of a way to create cookies to check against when pushing files from the server.
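The general shape of that cache-aware approach, sketched as Apache configuration. The cookie name h2pushed is made up for illustration, and the sketch assumes mod_http2 and mod_headers are enabled; see Wagner’s article for a complete implementation:

```apacheconf
# Push only when our marker cookie is absent, then set the cookie so
# returning visitors don't get the assets pushed again.
<If "%{HTTP_COOKIE} !~ /h2pushed/">
  H2PushResource add /css/site.css
  H2PushResource add /js/site.js
  Header add Set-Cookie "h2pushed=1; Path=/; Max-Age=2592000"
</If>
```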

Link Preload

The Preload specification aims at improving performance and providing more granular loading control to web developers. We can customize loading behavior in a way that doesn’t incur the penalties of loader scripts.

With preload the browser can do things that are just not possible with H2 Push:

  • The browser can set a resource’s priority, so that it will not delay more important resources, or lag behind less important resources
  • The browser can enforce the right Content-Security-Policy directives, and not go out to the server if it shouldn’t
  • The browser can send the appropriate Accept headers based on the resource type
  • The browser knows the resource type so it can determine if the resource could be reused

Preload has a functional onload event that we can leverage for additional functionality. It will not block the window.onload event unless the resource is blocked by a resource that blocks the event elsewhere in your content.

Loading late-loading resources

The basic way you could use preload is to load late-discovered resources early. Not all resources that make a web page are visible in the initial markup. For example, an image or font can be hidden inside a style sheet or a script. The browser can’t know about these resources until it parses the containing style sheet or script and that may end up delaying rendering or loading entire sections of your page.

Preload is basically telling the browser “hey, you’re going to need this later so please start loading it now”.

Preload works as a new rel attribute of the link element. It has three attributes:

  • rel indicates the type of link, for preload links we use the preload value
  • href is the relative URL for the asset we’re preloading.
  • as indicates the kind of resource we’re preloading. It can be one of the following:
    • "script"
    • "style"
    • "image"
    • "media"
    • "document"

Knowing what the attributes are, we can look at how to use preload responsibly.

<link rel="preload" href="late_discovered_thing.js" as="script">

Early loading fonts and the crossorigin attribute

Loading fonts is just the same as preloading other types of resources, with some additional constraints:

<link rel="preload"
      href="font.woff2"
      as="font"
      type="font/woff2"
      crossorigin>

You must add a crossorigin attribute when fetching fonts, since they are fetched using anonymous mode CORS. Yes, even if your fonts are on the same origin as the page.

The type attribute is there to make sure that this resource will only get preloaded on browsers that support that file type. Only Chrome supports preload and it also supports WOFF2, but not all browsers that will support preload in the future may support the specific font type. The same is true for any resource type you’re preloading and which browser support isn’t ubiquitous.

Markup-based async loader

Another thing you can do is to use the onload handler in order to create some sort of a markup-based async loader. Scott Jehl was the first to experiment with that, as part of his loadCSS library. In short, you can do something like:

<link rel="preload"
      href="async_style.css"
      as="style"
      onload="this.rel='stylesheet'">

The same can also work for async scripts.

We already have <script async> you say? Well, <script async> is great, but it blocks the window’s onload event. In some cases, that’s exactly what you want it to do, but in other cases it might not be.
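A markup-only sketch of that idea, using preload’s onload handler to inject the script so it doesn’t hold up window.onload (the file name is illustrative):

```html
<link rel="preload" href="async_script.js" as="script"
      onload="var s = document.createElement('script'); s.src = this.href; document.body.appendChild(s);">
```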

Responsive Loading Links

Preload links have a media attribute that we can use to conditionally load resources based on a media query condition.

What’s the use case? Let’s say your site’s large viewport uses an interactive map, but you only show a static map for the smaller viewports.

You want to load only one of those resources. The only way to do that would be to load them dynamically using Javascript. If you use a script to do this you hide those resources from the preloader, and they may be loaded later than necessary, which can impact your users’ visual experience, and negatively impact your SpeedIndex score.

Fortunately you can use preload to load them ahead of time, and use its media attribute so that only the required script will be preloaded:

<link rel="preload" href="map.png" as="image"
      media="(max-width: 600px)">

<link rel="preload" href="map.js" as="script"
      media="(min-width: 601px)">

Resource Hints

In addition to preload and server push we can also ask the browser to help by providing hints and instructions on how to interact with resources.

In this section we’ll discuss:

  • DNS Prefetching
  • Preconnect
  • Prefetch
  • Prerender
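All four hints share the same link markup shape and differ only in the rel keyword. A minimal sketch that builds them as strings for comparison; the domain is a hypothetical example (prefetch and prerender would normally point at a specific resource or page rather than a bare origin):

```javascript
// Sketch: the four resource hints differ only in the rel keyword.
// The domain below is a hypothetical example origin.
function resourceHint(rel, href) {
  return `<link rel="${rel}" href="${href}">`;
}

const hints = ['dns-prefetch', 'preconnect', 'prefetch', 'prerender']
  .map((rel) => resourceHint(rel, ''));

console.log(hints.join('\n'));
// → <link rel="dns-prefetch" href="">
//   <link rel="preconnect" href="">
//   ...
```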

DNS prefetch

This hint tells the browser that we’ll need assets from a domain, so it should resolve the DNS for that domain as quickly as possible. If we know we’ll need assets from a given domain (here the example ``) we can write the following in the head of the document:

<link rel="dns-prefetch" href="//">

Then, when we request a file from that domain, we’ll no longer have to wait for the DNS lookup. This is particularly useful if we’re using code from third parties or resources from social networks where we might be loading a widget from a <script> tag.


Preconnect

Preconnect is a more complete version of DNS prefetch. In addition to resolving the DNS it will also do the TCP handshake and, if necessary, the TLS negotiation. It looks like this:

<link rel="preconnect" href="//">

For more information, Ilya Grigorik wrote a great post about this handy resource hint.


Prefetch

This is an older version of preload and it works the same way. If you know you’ll be using a given resource you can request it ahead of time using the prefetch hint; for example an image, a script, or anything that’s cacheable by the browser:

<link rel="prefetch" href="image.png">

Unlike DNS prefetching, we’re actually requesting and downloading that asset and storing it in the cache. However, this is dependent on a number of conditions, as prefetching can be ignored by the browser. For example, a client might abandon the request of a large font file on a slow network. Firefox will only prefetch resources when “the browser is idle”.

Since we now have the preload API, I would recommend using it (discussed earlier) instead.


Prerender

Prerender is the nuclear option, since it will load all of the assets for a given document, like so:

<link rel="prerender" href="">

Steve Souders wrote a great explanation about this technique:

This is like opening the URL in a hidden tab – all the resources are downloaded, the DOM is created, the page is laid out, the CSS is applied, the JavaScript is executed, etc. If the user navigates to the specified href, then the hidden page is swapped into view making it appear to load instantly. Google Search has had this feature for years under the name Instant Pages. Microsoft recently announced they’re going to similarly use prerender in Bing on IE11.

But beware! You should probably be certain that the user will click that link, otherwise the client will download all of the assets necessary to render the page for no reason at all. It is hard to guess what will be loaded but we can make some fairly educated guesses as to what comes next:

  • If the user has done a search with an obvious result, that result page is likely to be loaded next.
  • If the user navigated to a login page, the logged-in page is probably coming next.
  • If the user is reading a multi-page article or paginated set of results, the page after the current page is likely to be next.

Combining h2 push and client side technologies

Please make sure you test the code in the sections below in your own setup. It may improve your site’s performance, or it may degrade it beyond acceptable levels. You’ve been warned.

We can combine server- and client-side technologies to further increase performance. Some of the things we can do include:

Gzip the content you serve

One way we can further reduce the size of our payloads is to compress them while in transit. This makes our files smaller on the wire; the browser expands them when it receives them.

How we compress data depends on the server we’re using. The first example works with Apache’s mod_gzip module; the configuration goes in the global Apache server configuration, inside a virtual host directive, or in an .htaccess file.

We’re not compressing images as I’m not 100% certain that they’ll survive the trip as well as other resources will, and I already compress them before uploading them to the server.

We also skip files that already have a Content-Encoding header. We don’t need to compress them if they are already compressed 🙂

<ifModule mod_gzip.c>
  mod_gzip_on Yes
  mod_gzip_dechunk Yes
  mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
  mod_gzip_item_include handler ^cgi-script$
  mod_gzip_item_include mime ^text/.*
  mod_gzip_item_include mime ^application/x-javascript.*
  mod_gzip_item_exclude mime ^image/.*
  mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
</ifModule>

Using the Nginx HTTP Gzip module, the code looks like this.

Nginx compression will not work with versions of IE before 6 (but, honestly, if you’re still serving browsers that old you have more serious issues).

We also add a Vary header to stop proxy servers from sending gzipped files to IE6 and older.

gzip on;
gzip_comp_level 2;
gzip_http_version 1.1;
gzip_proxied any;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_types text/plain text/html text/css
  application/x-javascript text/xml application/xml
  application/xml+rss text/javascript;

# Disable for IE < 6 because there are some known problems
gzip_disable "MSIE [1-6]\.(?!.*SV1)";

# Add a vary header for downstream proxies
# to avoid sending cached gzipped files to IE6
gzip_vary on;

Preload resources and cache them with a service worker cache

The code below is written in PHP. I’m working on converting it to JavaScript/Node. If you have such an example, please share it 🙂

There have been some questions about whether server push could cause assets to be unnecessarily pushed to users when they return to our site. One way to control this is to only push assets when a cookie indicating that the assets were pushed is not present; we then store those assets in the service worker.

An example of this technique is in Jeremy Wagner’s article Creating a Cache-aware HTTP/2 Server Push Mechanism at CSS Tricks. It provides an example of a way to create cookies to check against when pushing files from the server.

function pushAssets() {
  $pushes = array(
    "/css/styles.css" => substr(md5_file("/var/www/css/styles.css"), 0, 8),
    "/js/scripts.js" => substr(md5_file("/var/www/js/scripts.js"), 0, 8)
  );

  if (!isset($_COOKIE["h2pushes"])) {
    $pushString = buildPushString($pushes);
    header($pushString);
    // Expire in 30 days (2592000 seconds)
    setcookie("h2pushes", json_encode($pushes), time() + 2592000, "/", "", false, true);
  } else {
    $serializedPushes = json_encode($pushes);

    if ($serializedPushes !== $_COOKIE["h2pushes"]) {
      $oldPushes = json_decode($_COOKIE["h2pushes"], true);
      $diff = array_diff_assoc($pushes, $oldPushes);
      $pushString = buildPushString($diff);
      header($pushString);
      setcookie("h2pushes", json_encode($pushes), time() + 2592000, "/", "", false, true);
    }
  }
}

function buildPushString($pushes) {
  $pushString = "Link: ";
  $assets = array_keys($pushes);
  $last = end($assets);

  foreach ($pushes as $asset => $version) {
    $pushString .= "<" . $asset . ">; rel=preload";

    if ($asset !== $last) {
      $pushString .= ",";
    }
  }
  return $pushString;
}
// Push those assets!
pushAssets();

This function (taken from Jeremy’s article) checks to see if there is an h2pushes cookie stored in the user’s browser. If there isn’t one, it uses the buildPushString helper function to generate preload links for the resources specified in the $pushes array, sends them as headers for the page, and adds the h2pushes cookie to the browser with a representation of the paths that were pushed.

If the user has been here before we need to decide if there’s anything to preload. Because we want to re-push assets if they change, we need to fingerprint them for comparison later on. For example, if you’re serving a styles.css file, but you change it, you’ll use a cache-busting strategy, like adding a random string to the file name at build time or appending a value to the query string, to ensure that the browser won’t serve a stale version of the resource.

The function will decode the values stored on the cookie and compare the values with what you want to preload. If they are the same then it does nothing and moves forward, if the values are different then the function will take the new values, create preload links and update the cookie with the new values.

If you preload too many files this function may have detrimental effects on performance. As we discussed earlier, you need to be mindful of what you preload and how large the files you preload are. But with this method at least we can be confident we’re not pushing duplicate assets to the page.
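The cookie-diff idea from Jeremy Wagner’s PHP example can be sketched as plain Node functions. This is only the decision logic; wiring it to real request/response cookies is left to your server framework of choice, and the asset paths and fingerprints below are made up:

```javascript
// Sketch: build a Link header for a set of assets, pushing only the ones
// whose fingerprints changed since the value stored in the cookie.
function buildPushString(pushes) {
  return 'Link: ' + Object.keys(pushes)
    .map((asset) => `<${asset}>; rel=preload`)
    .join(',');
}

// Return only the assets whose fingerprints differ from the stored cookie.
function diffPushes(pushes, cookieValue) {
  if (!cookieValue) return pushes; // first visit: push everything
  const old = JSON.parse(cookieValue);
  const changed = {};
  for (const [asset, version] of Object.entries(pushes)) {
    if (old[asset] !== version) changed[asset] = version;
  }
  return changed;
}

const pushes = { '/css/styles.css': 'a1b2c3d4', '/js/scripts.js': 'e5f6a7b8' };
// Return visit where only the stylesheet changed:
const cookie = JSON.stringify({
  '/css/styles.css': '00000000',
  '/js/scripts.js': 'e5f6a7b8'
});
console.log(buildPushString(diffPushes(pushes, cookie)));
// → Link: </css/styles.css>; rel=preload
```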

Links and resources

Using modules in browser

Browsers are beginning to support es6 modules without polyfills! This means that we can take modules and use them as is without having to transpile if we’re only supporting modern browsers.

We’ll revisit modules: what they are and how they work. Unlike the cursory look we took when we discussed modules in the context of Rollup and Webpack, we’ll take a deeper look at how modules work in the browser and look at examples of how we can best leverage them today by using new syntax on the script tag.

ES6/ES2015 Modules

Modules allow you to package related variables and functions in a single module file. The data and functions in your modules are invisible to the outside world unless you explicitly make them available.

Browser support for ES2015 modules

Modules in browsers are mostly supported behind flags. The currently supported browsers are:

  • Safari 10.1.
  • Chrome Canary 60 – behind the Experimental Web Platform flag in chrome:flags
  • Firefox 54 – behind the dom.moduleScripts.enabled setting in about:config
  • Edge 15 – behind the Experimental JavaScript Features setting in about:flags

Why use modules

Just like Shadow DOM allows you to encapsulate HTML, CSS and Javascript, modules allow you to encapsulate your scripts. You have full control over what gets exposed to outside scripts and can keep your implementation details hidden by simply not exposing them.
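The encapsulation modules give you natively can be sketched with a closure, which is what JavaScript developers reached for before modules existed. Here the internal count variable is invisible to callers; only what we deliberately return is exposed (the counter object is made up for illustration):

```javascript
// Sketch: module-style encapsulation expressed as a closure. `count` is
// an implementation detail; only increment() and value() are exposed.
const counter = (() => {
  let count = 0; // hidden from the outside world
  return {
    increment: () => ++count,
    value: () => count,
  };
})();

counter.increment();
counter.increment();
console.log(counter.value()); // → 2
```

With ES modules, every top-level binding you don’t export is hidden in exactly the same way, without the wrapper function.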

Creating modules

There are two ways to create a module, external and internal, each of which can export and import multiple named functions from other modules.

Multiple exports and imports

Take the following utils.js external module that exports two text manipulation functions: one to add text to a div in the body of a page and one to create an h1 element.

// utils.js
export function addTextToBody(text) {
  const div = document.createElement('div');
  div.textContent = text;
  document.body.appendChild(div);
}

export function createHeader(text) {
  const header = document.createElement('h1');
  header.textContent = text;
  document.body.appendChild(header);
}

This internal module imports addTextToBody and createHeader from utils.js and uses them as local functions without namespacing.

<script type="module">
  import {addTextToBody, createHeader} from './utils.js';

  createHeader('Modules in action');
  addTextToBody('Modules are pretty cool.');
</script>

You can rename your imports to make them easier to work with. Working with the same example, we can shorten the name of our addTextToBody import by using the as keyword and the name we want to give it. We then use the name we chose rather than the original function name.

<script type="module">
  import {
    addTextToBody as addText,
    createHeader} from './utils.js';

  createHeader('Hello World');
  addText('Modules are pretty cool.');
</script>

Importing the complete module

When we have multiple imports we can also import the complete module rather than specifying items to import. The module is written as normal.

When it comes to import and use the module, however, we use a different syntax.

import * as util from './utils.js';

util.createHeader('wassup, doc');

util.addTextToBody('I\'m hunting wabbits');

Unlike when we imported specific functions, we must qualify the imported functions with the name we gave the wildcard import. This may be useful when working with multiple modules as it helps avoid name collisions.

Exporting a default function or class

We can also define a single function or class to export by adding the default keyword to a class or function. In this example we export addTextToBody as the default function.

// utils.js
export default function addTextToBody(text) {
  const div = document.createElement('div');
  div.textContent = text;
  document.body.appendChild(div);
}

You can also use anonymous functions when working with default exports; we can make addTextToBody an anonymous function and use it as the default export.

// utils.js
export default function (text) {
  const div = document.createElement('div');
  div.textContent = text;
  document.body.appendChild(div);
}

When it comes time to import it, we give it a name and use the same syntax we used with multiple imports. The name of the function we’re importing is less important, because we’ve identified the default function we want to import.

//------ main1.js ------
import addText from './utils.js';

addText('Modules are pretty cool.');

We can do the same thing with classes. We declare a default export of an anonymous class.

// utilsClass.js
export default class { ··· } // no semicolon!

When it comes time to import the class we use the same syntax, but we then instantiate the class using a constant or variable, as shown below:

//------ main2.js ------
import MyClass from './utilsClass.js';
const inst = new MyClass();

Mix and match

You can also mix and match named and default exports. Doing this is perfectly legal:

export default function addTextToBody(text) {
  const div = document.createElement('div');
  div.textContent = text;
  document.body.appendChild(div);
}

export function createHeader(text) {
  const header = document.createElement('h1');
  header.textContent = text;
  document.body.appendChild(header);
}

and then use the following import statement:

import {default as addText, createHeader} from './utils.js';

// do work with the functions

It is advisable to mix the different export strategies in a single module only when you have a good reason; mixing them makes the code harder to reason about.

Fallbacks for older browsers

The last concern when working with native module implementations is how to handle older browsers. Most modern browsers have repurposed the type attribute of the script element: if its value is module, the JS engine will treat the content as a module, with different rules than those for normal scripts.

To target older browsers use the nomodule attribute.

<script type="module" src="module.js"></script>
<script nomodule src="fallback.js"></script>

Differences between regular scripts and module scripts when used in browsers (taken from Exploring ES6):

                                          Scripts            Modules
HTML element                              <script>           <script type="module">
Default mode                              non-strict         strict
Top-level variables are                   global             local to module
Value of this at top level                window             undefined
Executed                                  synchronously      asynchronously
Declarative imports (import statement)    no                 yes
Programmatic imports (Promise-based API)  yes                yes
File extension                            .js                .js

Things to consider

Imports and exports must be at the topmost level of your script; otherwise they’ll throw errors.

Imports are hoisted to the top of the script so it doesn’t matter where they are in the script and you can use an imported function before you actually import it.

Imports are read-only views on exports, meaning that you can’t change an imported function. If you need to change it, make a local version of it and use that instead.

Modules only run once per page, no matter how many times you load them.

Modules run in strict mode by default. There are several implications of this:

  • Variables can’t be left undeclared
  • Function parameters must have unique names (or are considered syntax errors)
  • with is forbidden
  • Errors are thrown on assignment to read-only properties
  • Octal literals like 010 are syntax errors
  • Attempts to delete undeletable properties throw an error
    delete prop is a syntax error, instead of assuming delete global[prop]
  • eval doesn’t introduce new variables into its surrounding scope
  • eval and arguments can’t be bound or assigned to
  • arguments doesn’t magically track changes to method parameters
  • arguments.callee throws a TypeError, no longer supported
  • arguments.caller throws a TypeError, no longer supported
  • Context passed as this in method invocations is not “boxed” (forced) into becoming an Object
  • No longer able to use fn.caller and fn.arguments to access the JavaScript stack
  • Reserved words (e.g. protected, static, interface, etc.) cannot be bound
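Two of the rules above can be observed directly. Module code is always strict; the sketch below opts in explicitly with 'use strict' so it runs anywhere as an ordinary script:

```javascript
// Sketch: observing strict-mode behavior directly.
'use strict';

// 1. Assigning to an undeclared variable throws a ReferenceError
//    instead of silently creating a global.
let threwOnUndeclared = false;
try {
  undeclaredVariable = 1;
} catch (e) {
  threwOnUndeclared = e instanceof ReferenceError;
}

// 2. Assigning to a read-only property throws a TypeError
//    instead of failing silently.
const frozen = Object.freeze({ answer: 42 });
let threwOnReadOnly = false;
try {
  frozen.answer = 0;
} catch (e) {
  threwOnReadOnly = e instanceof TypeError;
}

console.log(threwOnUndeclared, threwOnReadOnly); // → true true
```

In sloppy mode both assignments would have failed silently, which is exactly the class of bug strict mode exists to surface.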

Modules and module scripts never block rendering. They always run as if the defer attribute was set on the calling script tag. The defer attribute means that the script will execute after the content is downloaded but before the DOMContentLoaded event is fired.

Modules and inline script modules can use the async attribute, meaning that they can be made to load without blocking HTML rendering, but it also means that you can no longer guarantee execution order. If the order your scripts run in is important, rely on the defer behavior discussed earlier.

Modules must use a valid JavaScript MIME type or they will not execute. In this context, a valid JavaScript MIME type is one of those listed in the HTML Standard:

  • application/ecmascript
  • application/javascript
  • application/x-ecmascript
  • application/x-javascript
  • text/ecmascript
  • text/javascript
  • text/javascript1.0
  • text/javascript1.1
  • text/javascript1.2
  • text/javascript1.3
  • text/javascript1.4
  • text/javascript1.5
  • text/jscript
  • text/livescript
  • text/x-ecmascript
  • text/x-javascript

Performance may not be as good as we’d like.

Because module support in browsers is new, it may not perform as well as bundled modules. Just as I was getting ready to publish this article I found a post on module performance versus bundled content.

In Browser module loading – can we stop bundling yet?, Sérgio Gomes walks through how he tested the performance of bundled versus unbundled modules. His results are interesting and worth trying to reproduce.

I expect things will improve as browsers fix bugs and improve performance. The best solution was, is, and will continue to be to use HTTP/2 and preload.

Links and resources