Designing a service worker

Since I decided to use an app shell architecture for the project, we’ll have to split the way we cache content with the service worker. We’ll use sw-precache to cache the app shell and sw-toolbox to cache the other pages of the application and any associated resources.

Yes, we could build the service worker manually, but keeping it up to date becomes increasingly complex. You have to remember to update the worker whenever you make a change, and that makes the process error prone.

I don’t particularly like using third-party libraries to build my code, but in this case the advantages far outweigh the potential problems I may run into moving forward.

Jeff Posnick has presented on the App Shell architecture, the sw-precache and sw-toolbox libraries, and how to make them work together, which I think is a good starting point for our work.

Gulp based build system

We could use sw-precache and sw-toolbox from the command line but why go through the hassle when we’re already using Gulp for other optimization tasks on the project?

The code for this project uses a Gulp-based build system to programmatically build the service worker. At a later time we may explore what it takes to write the same code manually. It shouldn’t be that different.

sw-precache and the application shell

To cache the shell of the application we’ll use sw-precache. This will also generate the cache-busting logic we need to get a fully functioning service worker for our reader.

Right now the task used to generate the service worker uses a greedy glob that takes everything in. We definitely want to shrink the number of files the glob matches to only the basic elements we need for a fast first render.

We also import two scripts into the service worker we are creating.

sw-toolbox.js is the main file for sw-toolbox, which will make our life much easier when it comes to caching dynamic content.

The second file, toolbox-scripts.js, will be discussed in more detail in the next section.

/*jshint esversion: 6*/
/*jshint -W097*/
'use strict';

import gulp from 'gulp';
import gulpLoadPlugins from 'gulp-load-plugins';

// Imports required for sw-precache
import path from 'path';
import swPrecache from 'sw-precache';

// Aliases $ to the gulp-load-plugins entry point
// so that $.function will work
const $ = gulpLoadPlugins({lazy: true});

const paths = {
  src: 'app/',
  dest: 'dest/'
};

gulp.task('generate-service-worker', (callback) => {
  swPrecache.write(path.join(paths.src, 'service-worker.js'), {
    staticFileGlobs: [
      paths.src + '**/*.{js,html,css,png,jpg,gif}'
    ],
    importScripts: [
      paths.src + 'js/sw-toolbox.js',
      paths.src + 'js/toolbox-scripts.js'
    ],
    stripPrefix: paths.src
  }, callback);
});

toolbox-scripts.js is our implementation of dynamic caching using sw-toolbox. The content of the shell of our application is not supposed to change often and, when it does, it’s just a matter of adding new files to the Gulp task and running it again to pick up the changes.

Active content, on the other hand, needs different handling. It is impossible for me to predict how often content will change or how many new files we will add to the reader at a given time. So instead of manually typing the names of the files we want to cache, we use sw-toolbox to help us with the task.

We’ve defined six routes:

  • The first route matches anything from an origin ending in googleapis.com. It uses the cacheFirst strategy (check the cache first and only go to the network if the item is not found in the cache) and will store up to 20 items in a cache called googleapis
  • The second route matches image requests (anything ending in .png, .gif or .jpg) and, using the cacheFirst strategy, will store up to 50 items in the images-cache cache
  • The third route matches HTML content and handles it with the networkFirst strategy. We use network first because we want the freshest content possible and only fall back to the cache when the network is not available
  • The fourth route matches anything in the /video/ path and uses the networkOnly strategy. If we’re not online we don’t want to store potentially very large files in the cache
  • The fifth route matches anything coming from youtube.com or vimeo.com and only does something with it if the network is active. Same rationale as before: we don’t want to store potentially hundreds of megabytes of video in our caches
  • The last route is the default. Anything that doesn’t match the previous routes uses the cacheFirst strategy

(function(global) {
  'use strict';

  // The route for any requests from the googleapis origin
  global.toolbox.router.get('/(.*)', global.toolbox.cacheFirst, {
    cache: {
      name: 'googleapis',
      maxEntries: 20,
    },
    origin: /\.googleapis\.com$/
  });

  // We want no more than 50 images in the cache. We check using a cache first strategy
  global.toolbox.router.get(/\.(?:png|gif|jpg)$/, global.toolbox.cacheFirst, {
    cache: {
      name: 'images-cache',
      maxEntries: 50
    }
  });

  // pull html content using network first
  global.addEventListener('fetch', function(event) {
    if (event.request.headers.get('accept').includes('text/html')) {
      event.respondWith(toolbox.networkFirst(event.request));
    }
    // you can add additional synchronous checks based on event.request.
  });

  // pull video using network only. We don't want such large files in the cache
  global.toolbox.router.get('/video/(.+)', global.toolbox.networkOnly);
  // If the video comes from youtube or vimeo still use networkOnly
  global.toolbox.router.get('(.+)', global.toolbox.networkOnly, {
    origin: /\.(?:youtube|vimeo)\.com$/
  });

  // the default route is global and uses cacheFirst
  global.toolbox.router.get('/*', global.toolbox.cacheFirst);
})(self);

What’s next

There is still a little bit more research to do before declaring this code production ready.

  • How do we add a fallback to the video routes so that, when we are offline, we display a placeholder SVG image?
  • How do we create a timeout for a request, similar to what we can do with Promise.race? (See the sketch after this list.)
  • Can we mix toolbox and non-toolbox routes in the same service worker?
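
For the timeout question, here is a minimal sketch using Promise.race inside the service worker; the three-second limit and the fallback to the cache are assumptions, not settled design decisions.

// Race the network against a timer; if the timer wins, try the cache.
// The 3000ms value and the caches.match fallback are placeholder choices.
function timeoutFetch(request, timeout = 3000) {
  return Promise.race([
    fetch(request),
    new Promise(function (resolve, reject) {
      setTimeout(function () {
        reject(new Error('network timeout'));
      }, timeout);
    })
  ]).catch(function () {
    return caches.match(request);
  });
}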

Reading for everyone

Third in a series. The other two parts are What kind of web do we want? and Who are the next billion users and how do we accommodate them?

How do we build content that will be read wherever and whenever, regardless of the device used to access it?

Reading for everyone

I’ve also found myself in discussions on Medium about Portable Web Publications, and when I asked what happens to people using existing devices (I used Kobo and iBooks as examples) I was told that PWPs are web applications, not ebooks. So what do you do on devices that don’t have a native browser, or that use a proxy like UC Browser or Opera Mini to save on bandwidth?

We’re all too centered on western bandwidth requirements and devices. We forget that people in other countries have different requirements, additional constraints, and preferred methods of viewing content.

I love PWAs and I love the concept of reading on the web. But I’m also a realist and understand that, unless we can target as many devices as we can within reason, we’re only adding more fragments to a really fragmented market.

Defining core and accessories

We need a very strict definition of what our core content is: the text of our publication is our core content and everything else is bells and whistles. Our core content has to display on as many devices as possible, with the understanding that our normal may not be the normal everywhere.

Who is our target audience? Are we building for a US-only market or are we working towards a more universal distribution system?

How much testing do we need to do? How many devices? Do we pay attention to feature phones and proxy browsers like Opera Mini and UC Browser? How do we handle existing ebook systems like iBooks, Kobo or different versions of Kindle? Can we afford to ignore them or leave them behind? Or do we mind creating multi-platform reading experiences?

If we consider the text the core of our reading experience, then all the additions discussed in Progressive and Subcompact Books: Technical notes become secondary to our content.

It’s taken me a while to figure out how to progressively enhance an application; I see it as adding things to our base content in roughly this order:

  1. Add the home screen tags to the index.html page
  2. Add the link to the web app manifest
  3. Add Service Worker
  4. Add annotator and footnotes functionality
  5. (If needed) add any additional scripting or network functionality
  6. (if wanted) create CSS to convert the HTML to PDF using PrinceXML or Antenna House

Each of these steps on its own keeps the reading experience open to everyone. We add capabilities on top of the basic HTML we create for our content. We provide the tools to add the reading app to our mobile device’s home screen. We provide a service worker, bringing with it offline caching, push notifications and background sync. We add annotations and advanced footnote capabilities, and optionally we add more online network functionality and CSS for paged printed media.
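
As a concrete sketch of step 3, registering the worker from the page can be as small as this; the service-worker.js filename is an assumption and should match whatever your build produces.

// Register the service worker if the browser supports it; older browsers
// skip this block and keep the basic experience.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js')
    .then(function (registration) {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch(function (error) {
      console.log('Service worker registration failed:', error);
    });
}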

Reading for everyone (redux)

There is a lot to think about and plan if we want to build really universal applications. We are not covering all of the requirements. There’s internationalization, taking a deeper dive into accessibility to analyze if we need to implement accessibility constraints beyond what the web platform already provides.

What kind of web do we want?

The web, technology wise, is in a great place.

We’ve grown closer to parity with native apps; that’s the gist of progressive web applications: we can get pretty close to the same experience from web apps as we can from native ones.

Javascript has improved both in terms of speed and interoperability. New HTML5 APIs have contributed to Javascript’s resurgence and popularity. New features in ES6 (ratified in 2015) like classes, standardized modules and other APIs also make it fun and useful to code in vanilla Javascript.

CSS has moved forward by leaps and bounds. We can do a lot of things that lead directly to our current Responsive Web Design paradigm. The scope of CSS only becomes clear when you look at the complete list of CSS specifications.

Yet we forget that not everyone has access to the latest and greatest devices and technologies. We forget that Javascript may not be available to people everywhere, or that the computers they are using may not be up to speed to support the latest and greatest features of our web applications.

Chris Heilmann tackles some of these questions in his JSConf.Asia 2015 presentation. I will follow up on some of Chris’s points and address some additional items that I think are also essential.

To Javascript or not to Javascript?

Matt Griffin’s The Future of The Web sums up the current debate really well.

Progressive web applications have again reignited the debate between the progressive enhancement and the extensible web camps.

On the Extensible Web side…

… we can see the people who think JavaScript is the way forward for the web. And there’s some historical precedent for that. When Brendan Eich created JavaScript, he was aware that he was putting it all together in a hurry, and that he would get things wrong. He wanted JavaScript to be the escape hatch by which others could improve his work (and fix what he got wrong). Taken one step further, JavaScript gives us the ability to extend the web beyond where it currently is. And that, really, is what the Extensible Web Manifesto folks are looking to do.

The web needs to compete with native apps, they assert. And until we get what we need natively in the browser, we can fake it with JavaScript. Much of this approach is encapsulated in the idea of progressive web apps (offline access, tab access, file system access, a spot on the home screen)—giving the web, as Alex Russell puts it, a fair fight.

On the other hand…

… in the progressive enhancement camp, we get folks that are worried these approaches will leave some users in the dust. This is epitomized by the “what about users with no JavaScript” argument. This polarizing question—though not the entire issue by far—gets at the heart of the disagreement.

I think it’s good to remind ourselves what happens when we turn off Javascript in our browsers because it addresses a deeper issue on the modern web. What happens when you’re not “good enough” for the content you want to access? What happens when your device doesn’t support an API and it wasn’t polyfilled for your application? Living in a world of evergreen browsers doesn’t mean these issues are not important or that we shouldn’t keep them front and center.

Eevee accidentally left Javascript off after a debugging session in her browser and then experienced the painful side of the modern web… it doesn’t work as intended, if it works at all.

If you want to get a taste of what this feels like and how it (doesn’t) work, do the following: turn off Javascript in your browser (each browser has its own way of doing this).

Once it is disabled, use the web like you normally would and notice how different the experience is. And it doesn’t have to be Javascript deliberately turned off; any of these situations can cause your Javascript not to load:

  • Slow/old computer
  • Slow connection
  • Slow computer and slow connection
  • Old browser on a computer they don’t control
  • Someone trying to mirror your site for offline access
  • Search engine crawler or the Internet Archive
  • Text only browser (Lynx, Links, Emacs W3)
  • Your CDN goes down
  • Corrupted JavaScript

It all comes down to what base experience we’re enhancing. Will I be able to accomplish my task if I don’t have JavaScript enabled? Will I be able to submit that form if it doesn’t get all the fancy upgrades from React or Angular? Will the elements that would have been upgraded by Polymer still work when no JavaScript means no Polymer? Will the site work on my slow connection or via a text browser?

I’m guilty of not bothering to check. The worst example:

I have a project list built with Polymer that fetches data from MongoDB hosted in MLab. Do you see the problem? Even if we always had fast and reliable network (we don’t) if JavaScript is disabled for any reason the entire experience goes down the drain.

How many times do we check our apps to make sure they still work in poor connectivity, when JavaScript is disabled, or with tools other than web browsers (screen readers and screen magnification devices come to mind)? Did we check to see if we structured the content of the app in a way that the screen reader won’t go crazy trying to read all the information on the page, driving us nuts trying to understand what it’s reading to us?

I seldom, if ever, do.

And it’s as basic as this: it doesn’t matter how cool or powerful an experience we provide to our users if they can’t access it in some fashion after all the bells and whistles are taken away.

Marcy Sutton reminds us that making content accessible is not hard, and shouldn’t be, but we need to be smart about it.

When working in higher education I remember several colleagues working with the disability support group on campus to test products and technologies. What would be the equivalent of doing that for a business? How expensive would it be?

What is core, what is an enhancement?

Yes, we are talking about apps and interaction heavy sites but, at some point, we must provide a core we can enhance. Scott Jenson has an interesting G+ post on the subject titled My Issues with Progressive Enhancement. He asks:

What I’m chaffing at is the belief that when a page is offering specific functionality, Let’s say a camera app or a chat app, what does it mean to progressively enhance it?

Jeremy Keith gives what is to me the best answer I’ve heard so far:

If that were what progressive enhancement meant, I’d be with him all the way. But progressive enhancement is not about offering all functionality; progressive enhancement is about making sure that your core functionality is available to everyone. Everything after that is, well, an enhancement (the clue is in the name).

The trick to doing this well is figuring out what is core functionality, and what is an enhancement. There are no hard and fast rules.

Sometimes it’s really obvious. Web fonts? They’re an enhancement. Rounded corners? An enhancement. Gradients? An enhancement. Actually, come to think of it, all of your CSS is an enhancement. Your content, on the other hand, is not. That should be available to everyone. And in the case of task-based web thangs, that means the fundamental tasks should be available to everyone …but you can still layer more tasks on top.

Even in reading applications we need to ask ourselves these questions. They drive development and they should also drive UX/UI design.

What’s the core of our user experience? Getting people content that they want to engage with in some form, regardless of the level of technology used to access it.

The shiny new thing syndrome

How often do we hear about the newest, greatest and fastest library? We’ve all seen the articles announcing them.

So what’s wrong with the existing tools and technologies? It seems like the community is set on doing things its own way rather than working together to implement a best-of-breed solution.

We have what I call the “new and shiny” syndrome. We latch on to the latest and greatest technology rather than make sure the existing technology is working fine before we measure whether the application needs to be updated at all.

Furthermore, as Paul mentions in the video above, there is a serious cost involved in moving from one framework to another and, sometimes, between versions of the same framework. What is the learning curve for learning React over Angular, or Angular instead of Polymer?

I never stopped to consider the cost of having to learn new frameworks, as I’ve managed to keep myself on a fairly strict diet of frameworks and technologies in my development stack. But I do realize that sometimes it’s not up to the individual developer or team. Sometimes the tools you use in a project are dictated by what the project makes available to you.

I found myself in this conundrum when I first started working with Javascript task runners and build systems. I started using Grunt for my personal projects and it was OK; it may not have been the best but it was the tool I grew comfortable with. When I started working with Polymer, particularly the Polymer Starter Kit, I found out that the project used Gulp and that there were no equivalent Grunt tasks, so I had to learn Gulp, and then I was too lazy to switch back to Grunt. I can still understand what my Grunt scripts do but I no longer remember the rationale behind them; if I have to make changes I’m more likely to create a brand new Gulp script than to fix the existing Grunt one.

For task runners and build systems we have so many options that it’s dangerous to even suggest one; a quick search turns up more of them than I can keep track of.

All these systems perform the same task, getting your files ready for production, in slightly different ways.

I can only imagine what happens with frameworks, and how many times you would have to switch if all you do is chase the latest and greatest one.

Progressive and Subcompact Books: Technical notes

This is meant as a living document. Feedback is appreciated and will be incorporated when appropriate. The idea is to use this and its sister philosophical document as the basis for a proof-of-concept application.

Why a progressive web app?

I’ll take the original list of attributes Frances and Alex described for a progressive web application and explain why I consider them important for web publications.

  • Responsive: Let’s face it, reflowable ebooks could look much better. I’m not talking about typography but about essential layout and the inability to use modern web technologies because of the nature of the applications these experiences live inside of
  • Connectivity independent: It is a really powerful idea to be able to access our content offline. After we access the publication for the first time we no longer need to be online to read and, when we are online, the reading experience is still faster because we are reading cached content rather than having to get it from the network every time we want to access it.
  • App-like-interactions: We can create shells where diverse pieces of content can live. More than one book or more than one issue of a magazine can exist inside our shell, all of it built with the web technologies that we, as developers, are already familiar with
  • Fresh: The process of updating our content no longer requires active participation. By changing our service worker we can trigger an automatic update process
  • Safe: We’re still talking about technologies so we need encryption to keep the content and the user interactions safe from eavesdropping
  • Discoverable: The underlying technologies of PWAs allow search engines to find them
  • Re-engageable: Once a user has left our content we normally have no way to bring them back. Progressive web applications can leverage engagement features such as Push Notifications to re-engage the user when there is new content or existing content has been updated
  • Installable: Each “book” is installable to the home screen through browser-provided prompts. Based on how much and often we interact with our books we can make them first class citizens of our online experience
  • Linkable: Progressive Web Apps are still web content. We can link to them, we can share them and we can leverage the full force of the web platform

I’m leaving DRM out of the conversation deliberately. I don’t believe in restricting access to content people pay for and the licensing model for most ebook vendors leaves a lot to be desired.

What browsers to target?

The easiest answer is Evergreen Browsers. But we need to think about what that means beyond putting the list out. In my first, knee-jerk reaction, the list is this:

  • Chrome
  • Opera
  • Firefox
  • Edge

As Scott Hanselman points out, we have a responsibility as developers to keep our sites fresh, just as browser makers have a responsibility to update their browsers in ways that won’t break “all of the web”.

In a world where we all write our websites with feature detection and (generally) gracefully degrade when features aren’t around, things just work. But at the same time, it does make the Web itself a moving target.

Can we afford to target the lowest common denominator? According to Brian Kardell’s support gaps table, 134 of the features tracked in caniuse.com are supported by at least 3 evergreen browsers without having support in Internet Explorer 9 or earlier.

This table, which uses data from webbrowsercompatability.com, shows that 460 of the properties and features it tests for are available in most evergreen browsers but, again, pretty much none of them before IE9.

From a developer’s point of view it’s an easy choice but from an end user’s perspective it’s not so simple. In The problem with evergreen browsers we are reminded again about some of the drawbacks of browser compatibility. The biggest one is still the fact that some corporate IT groups block manual and automatic browser updates, so their users cannot take advantage of the new evergreen features until IT decides that the risk is warranted and that their legacy applications will work with the latest version of all browsers.

So how do we deal with this support nightmare?

Most of the web today works in a progressive enhancement paradigm. We start with the basic experience and then enhance it for browsers that can handle it. The opposite is graceful degradation, where we provide the full experience as a starting point and degrade gracefully for browsers that don’t support the full package or where size is an issue.

I’m choosing to work through graceful degradation. Starting with a full-size experience we’ll explore how to make it work in smaller form factors without losing sight that the evergreen browser is the primary experience for this particular use case.

It is also important to realize that older browsers will still be able to see the content, just not in as rich a way as the target browsers can.

Building an HTML reading platform

There are a couple of things I still struggle with when considering how to build this reading application:

Single page app or not. One of the biggest questions I always have when developing an application is whether to make it a single page app or not. In the case of a book, or portions where we know the length of the content, we’d be OK using a Single Page Application; but if we’re trying to work with serial content or, in the case of a book, trying to provide a serial-like experience, we need to load each document in addition to the shell. How do we do that without duplicating navigation code in all pages and/or sections?

We could build the app with content injection and only partial files but I’ve never been fully sold on that idea. What happens when you have Javascript disabled, or when you’re on lie-fi (your device thinks it has a connection but it’s so poor that it doesn’t work at all) and the device keeps acting as if the connection were working?

I will start with a site built on vanilla HTML, CSS and Javascript with additional libraries that will be described where appropriate. This will allow me to concentrate on the application side rather than worry about the content creation. Others may have different opinions but this is a good place to start.

Offline reading

From my perspective the biggest drawback of any web-based application has been that the content would only be available online: the only solution for providing an offline experience left a lot to be desired, as chronicled by Jake Archibald in Application Cache is a Douchebag, and that was written after a successful App Cache deployment.

Then Service Workers came into the picture.

It’s taken me a while to fully warm up to the idea of Service Workers. Yes, they do overcome the drawbacks of App Cache by making everything explicit… if you want the Service Worker to do something you have to explicitly code it.

The downside to being forced into explicit behavior is that Service Workers require coding with new technologies… we can’t rely on implicit behavior anymore, and the way these new APIs are written requires a different mindset than current ES5 code. If you’re coming from an ES5 environment the learning curve is a little steep, as you have to learn Promises and the service worker API. It’s worth it, I assure you.

Ok, so now we have a way to get the content to display whether the user is online or not. What’s next?

Next we build the shell.

Building the shell for the reader.

The first thing I want to do is build the shell for the application. This also brings up the first set of questions:

  • How much of your content do you want to cache on install of the Service Worker so it’ll be ready to go when the user accesses your app the second time?
  • What parts of your reader’s structure will you cache? Why?

First we’ll build the code for the Service Worker. This is a simple Service Worker that will do the following:

  • Load and cache the shell for the application
  • Cache any requests for content of our application before displaying it to the user
'use strict';
// Chrome's currently missing some useful cache methods, this polyfill adds them.
importScripts('serviceworker-cache-polyfill.js');

// Define constants for cache names and versions
const SHELL_CACHE = 'shell-cache-v1';
const CONTENT_CACHE = 'content-cache-v1';

// Content to  cache when the ServiceWorker is installed
const SHELL_CONTENT = [
  '/path/to/javascript.js',
  '/path/to/stylesheet.css',
  '/path/to/someimage.png',
  '/path/to/someotherimage.png',
  '/offline.html'
];

self.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open(SHELL_CACHE).then(function(cache) {
      return cache.addAll(SHELL_CONTENT);
    })
  );
});

self.addEventListener('activate', function(event) {
  event.waitUntil(
    caches.keys().then(function(cacheNames) {
      return Promise.all(
        cacheNames.filter(function(cacheName) {
          return cacheName.startsWith('shell-cache') &&
            cacheName !== SHELL_CACHE;
        }).map(function(cacheName) {
          return caches.delete(cacheName);
        })
      );
    })
  );
});

self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(response) {
      if (response) { return response; }
      return fetch(event.request).then(function(response) {
        return caches.open(CONTENT_CACHE).then(function(cache) {
          cache.put(event.request.url, response.clone());
          return response;
        });
      });
    })
  );
});

With this Service Worker we provide a consistent response regardless of the network (or lack thereof). Think about what happens when you install a native application: the first load is slow but performance remains consistent on subsequent visits; likewise, we no longer need to rely exclusively on the network for our applications.

We will further enhance the service worker as we move along. We will also explore other ways to create Service Workers programmatically as we move through the design and development process.

Since we’ll break down the content of our cache between the shell and the rest of our content it makes sense to cache as little as possible on our shell. This means we will cache the following items:

  • HTML index page (including inline CSS to render the critical path)
  • Javascript (if any) necessary to load the page (other than the Service Worker itself)
  • Any images needed for branding the site

In the Service Worker demo above, all the assets needed for the shell should be included in the SHELL_CONTENT constant. The script will pick them up and add the files to the shell cache.

On future visits the worker will check the cache first and use the content from there if available; only when necessary will it go to the network to get the resource we want and store it in the cache to speed things up for the next load.

Now that we have the shell we can start playing with the content itself, how we’ll structure it and some additional tricks and extensions.

Adding to homescreen

While Chrome has recently changed its heuristics for adding web applications to the homescreen, this only works on mobile and only on certain devices.

The following code goes in the head of an HTML document and it provides basic support across platforms:

<!-- Place favicon.ico in the app/ directory -->
<link rel="icon" type="image/png" href="app/icon.png">

<!-- Chrome for Android theme color -->
<meta name="theme-color" content="#2E3AA1">

<!-- Web Application Manifest -->
<link rel="manifest" href="manifest.json">

<!-- Tile color for Win8 -->
                    <meta name="msapplication-TileColor" content="#3372DF">

<!-- Add to homescreen for Chrome on Android -->
<meta name="mobile-web-app-capable" content="yes">
<meta name="application-name" content="YOUR NAME HERE">
<link rel="icon" sizes="192x192" href="images/touch/chrome-touch-icon-192x192.png">

<!-- Add to homescreen for Safari on iOS -->
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black">
<meta name="apple-mobile-web-app-title" content="YOUR NAME HERE">
<link rel="apple-touch-icon" href="images/touch/apple-touch-icon.png">

<!-- Tile icon for Win8 (144x144) -->
<meta name="msapplication-TileImage" content="images/touch/ms-touch-icon-144x144-precomposed.png">

To activate newer features in Chrome we have to create a manifest.json file and explore what it does. We’ll shorten the file by removing some of the icon entries.

{
  "name": "Book Reader",
  "short_name": "Book Reader",
  "icons": [{
    "src": "images/touch/icon-72x72.png",
    "sizes": "72x72",
    "type": "image/png"
  }, {
    "src": "images/touch/icon-96x96.png",
    "sizes": "96x96",
    "type": "image/png"
  }],
  "background_color": "#3E4EB8",
  "display": "standalone",
  "theme_color": "#2E3AA1"
}

We also include icons for multiple resolutions. If you’re not interested in supporting all of them (and I personally wouldn’t) you can skip the resolutions you are not working with.

Chrome on supported platforms will use this information to create the application icon. The metadata we added to the head of the document will take care of iOS, Windows 8 and older versions of Chrome and Opera for Android.

Build system

I’ve given up on the idea of just writing for the web rather than building applications for the web, partly because it’s an exercise in futility and partly because I can now see the advantages of such a system.

I’ve been working on a Gulp-based system for a while now and documented the initial steps on my blog. I’ve created a repository and documentation as a starting point for my work on build systems. Because it is already thoroughly documented I will only refer to one task, which processes the CSS after the SCSS files are compiled.

Refer to the gulp-starter Github repository for additional information, and to the gulpfile and package.json files to incorporate the build process into your own projects.

In addition to these tools there are a couple libraries from Google that I want to use in the build process as they make life easier when building Service Worker scripts: sw-precache and sw-toolbox / shed.

You use sw-precache with your build system, Gulp in my case, to generate a list of the files to precache. This is much better than doing it manually… you only have one place to update and the build script will take care of the tedious process. One way to precache only some files for your project looks like this:

// This would most likely be defined elsewhere
const rootDir = 'myApp';

const filesToCache =  [
  rootDir + '/bower_components/**/*.{html,js,css}',
  rootDir + '/elements/**',
  rootDir + '/fonts/**',
  rootDir + '/images/**',
  rootDir + '/scripts/**',
  rootDir + '/styles/**/*.css',
  rootDir + '/manifest.json',
  rootDir + '/humans.txt',
  rootDir + '/favicon.ico',
  rootDir + '/data-worker-scripts.js'
];

// write the generated service worker using the sw-precache library
swPrecache.write(rootDir + '/service-worker.js', {
  staticFileGlobs: filesToCache,
  stripPrefix: rootDir
}, callback);

sw-toolbox does the same for runtime caching in the Service Worker. It provides five handlers that prepackage some of the most often used caching strategies.

toolbox.networkFirst
toolbox.cacheFirst
toolbox.fastest
toolbox.cacheOnly
toolbox.networkOnly

If you want more information about these strategies, check the sw-toolbox API and Jake Archibald’s Offline Cookbook.
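
As a quick illustration of how these handlers plug into routes, here is a sketch using the fastest strategy; the /fonts/ path and the cache name are placeholders, not part of the project.

// Race the cache and the network for font files and respond with whichever
// returns first. Path and cache name are illustrative only.
toolbox.router.get('/fonts/(.*)', toolbox.fastest, {
  cache: {
    name: 'font-cache',
    maxEntries: 10
  }
});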

HTML and grid templates

One of the advantages of working with evergreen browsers is that we can play with the latest CSS technologies and that most of the features that required plugins in the past (audio, video, SVG) are now baked into the platform so we need far fewer plugins to accomplish the same visual display.

CSS also offers us many more native layout and display options than what we had even a year ago. The two I find the most intriguing are flexbox and grid layout.

Flexbox allows you to create repeating content layouts like photo galleries or navigation menus. I’ve been playing with the spec for a while and have created demos like this image gallery to prove the concept. Galleries like this can be included directly on your HTML and don’t really need preprocessors, just a few additional classes on your markup and CSS definitions on your style sheet.

Grids are newer and more intriguing. You can create a grid similar to Skeleton, Bootstrap or Foundation that can be further refined with media queries. The advantage is that we don’t need an additional library and its associated overhead just for the grid.

These are two SASS mixins for a prototype grid system I’m currently working on.

@mixin grid-wrapper ($columns: 12, $gutter: 8){
  display: grid;
  margin: 0 auto;
  width: 100%;
  max-width: 960px;
  grid-template-columns: repeat($columns, 1fr);
  // $columns columns of equal width
  grid-template-rows: auto;
  // This should make new rows while respecting our column template
  grid-row-gap: ($gutter * 1px);
  grid-column-gap: ($gutter * 1px);
}

The first mixin will create the grid itself. In its default configuration it will create a 960px wide grid with 12 columns and 8px gutter.

.grid-container {
  @include grid-wrapper()
}

The values for the columns and gutters are configurable; if we want a 16-column grid with 16-pixel gutters we can call the mixin like this:

.grid-container {
  @include grid-wrapper(16, 16);
}

And SASS/CSS will create 16 equal columns with 16-pixel gutters between columns and between rows.

The second mixin will place content inside the grid. 


@mixin placement ($column-start, $column-end, $row-start, $row-end) {
  grid-row: $row-start / $row-end;
  grid-column: $column-start / $column-end;
}

We do this by specifying row and column start and end for each element.

.figure1 {
  @include placement(4, 5, 2, 4)
}

Using the 16-column grid we created above, we’ll place the figure with class figure1 at the corresponding coordinates:

  • Starting Column: 4
  • Ending Column: 5
  • Starting Row: 2
  • Ending Row: 4

Because the CSS Grid specification is not a Recommendation yet, it may change from what’s shown here. I’ll continue to test this and update the docs and mixins with the appropriate code.

See Rachel Andrew’s Grid by example for ideas and examples of what you can currently do with CSS Grids.

As far as HTML is concerned there are several things we need to include in our index.html file before it’ll pass the test of a progressive web application. There are also fallbacks for iOS and older Android devices. Given all these requirements, the HTML for our index file may look like this:

<!doctype html>

<html lang="en">

  <head>
    <meta charset="utf-8"/>
    <meta name="description" content=""/>
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <title>Sample application</title>

    <!-- Place favicon.ico in the `app/` directory -->

    <!-- Chrome for Android theme color -->
    <meta name="theme-color" content="#2E3AA1"/>

    <!-- Web Application Manifest -->
    <link rel="manifest" href="manifest.json"/>

    <!-- Tile color for Win8 -->
                        <meta name="msapplication-TileColor" content="#3372DF"/>

    <!-- Add to homescreen for Chrome on Android -->
    <meta name="mobile-web-app-capable" content="yes"/>
    <meta name="application-name" content="YOUR NAME HERE"/>
    <link rel="icon" sizes="192x192" href="images/touch/chrome-touch-icon-192x192.png"/>

    <!-- Add to homescreen for Safari on iOS -->
    <meta name="apple-mobile-web-app-capable" content="yes"/>
    <meta name="apple-mobile-web-app-status-bar-style" content="black"/>
    <meta name="apple-mobile-web-app-title" content="YOUR NAME HERE"/>
    <link rel="apple-touch-icon" href="images/touch/apple-touch-icon.png"/>

    <!-- Tile icon for Win8 (144x144) -->
    <meta name="msapplication-TileImage" content="images/touch/ms-touch-icon-144x144-precomposed.png"/>

    <!-- build:css styles/main.css -->
    <link rel="stylesheet" href="css/main.css"/>
    <!-- endbuild-->
  </head>

  <body>

    <!-- content goes here -->

    <!-- build:remove -->
    <span id="browser-sync-binding"></span> 
    <!-- endbuild -->

    <!-- Service workers -->
    <script src="sw-basic.js"></script>
    <!-- build:js es6/app.js -->
    <script src="js/app.js"></script>
    <!-- endbuild-->
  </body>

</html> 

Javascript plugins and libraries

Transitioning from ES5 (the current version of the Javascript language) to ES6 (approved as a standard in June 2015) makes for an interesting choice.

We now have most, if not all, of the constructs once only available in libraries like jQuery, MooTools or Dojo as part of the default language specification. Should we still use a library like jQuery or Dojo?

As much as I would love to work in plain ES6, frameworks still smooth out a lot of browser bugs and inconsistencies, as outlined by John-David Dalton and Paul Irish in their answer to You May Not Need jQuery.

As suggested by Paul Irish, I’ve run the following command against the jQuery source to see how many of these bugs (as flagged by Support: comments) are handled in the jQuery source code:

curl http://code.jquery.com/jquery-git.js | grep -n Support: | wc -l

It returned 103 instances where jQuery is working to support some older browser.

curl https://code.jquery.com/jquery-3.0.0-beta1.js | grep -n Support: | wc -l

The same command against jQuery 3.0 Beta 1 returns 101 instances, which is interesting considering that jQuery 3.0 dropped support for IE8.

I’m inclined to include jQuery 3.0 final, but a final decision will depend on what plugins I choose to use, how much they would bloat the source code after processing, and how many workarounds I would have to implement to support all the browser versions I want to target.

Depending on what other libraries we choose to use, we may have to run jQuery in no-conflict mode, as documented on this page, to make sure all libraries work as intended.
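
A minimal sketch of what no-conflict mode looks like; the $j alias is arbitrary.

// Release the global $ so other libraries can claim it, and keep a local
// alias for jQuery instead.
var $j = jQuery.noConflict();
$j(document).ready(function() {
  // use $j(...) instead of $(...) from here on
});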

Some of the plugins I’m thinking about are listed below. They may require the page to run the jQuery Migrate plugin to check for compatibility issues between the version of jQuery the plugin was created with and the current version (currently the 3.0 release).

Furthermore, jQuery 3.0’s modular build process allows you to remove modules from the build, enabling developers to use third-party libraries instead and reducing the size of jQuery itself.

Modernizr

Modernizr is a feature detection library. It works by doing two things: it adds classes to the html element for the presence or absence of features, and it creates a Modernizr Javascript object that can be used to test for the features we configured it to detect.

To work with CSS we use the classes Modernizr inserts on the html element. The example below shows all the features from a custom Modernizr build in Chrome 50.

htmlimports cookies geolocation json postmessage serviceworker svg templatestrings typedarrays websockets webaudio supports no-es6array es6collections generators es6math es6number es6object promises no-contains documentfragment audio canvas canvastext contenteditable video webanimations webgl bgpositionshorthand csscalc cssgradients multiplebgs opacity csspointerevents cssremunit rgba csschunit no-es6string mediaqueries unicode fontface generatedcontent lastchild nthchild cssvhunit cssvmaxunit cssvminunit cssvwunit fullscreen indexeddb indexeddb-deletedatabase requestanimationframe raf backgroundblendmode cssanimations bgpositionxy bgrepeatround bgrepeatspace backgroundsize bgsizecover borderradius boxshadow boxsizing csscolumns csscolumns-width csscolumns-span csscolumns-fill csscolumns-gap csscolumns-rule csscolumns-rulecolor csscolumns-rulestyle csscolumns-rulewidth csscolumns-breakbefore csscolumns-breakafter csscolumns-breakinside cssfilters flexbox cssmask shapes csstransforms csstransforms3d csstransitions

The same build reflected in Safari 9.1.1 in Yosemite.

no-htmlimports cookies geolocation json postmessage no-serviceworker svg templatestrings typedarrays websockets webaudio supports es6array es6collections no-generators es6math es6number es6object promises no-contains documentfragment audio canvas canvastext contenteditable video no-webanimations webgl bgpositionshorthand csscalc cssgradients multiplebgs opacity csspointerevents cssremunit rgba csschunit no-es6string mediaqueries unicode fontface generatedcontent lastchild nthchild cssvhunit cssvmaxunit cssvminunit cssvwunit fullscreen indexeddb indexeddb-deletedatabase requestanimationframe raf backgroundblendmode cssanimations bgpositionxy bgrepeatround bgrepeatspace backgroundsize bgsizecover borderradius boxshadow boxsizing csscolumns csscolumns-width csscolumns-span csscolumns-fill csscolumns-gap csscolumns-rule csscolumns-rulecolor csscolumns-rulestyle csscolumns-rulewidth csscolumns-breakbefore csscolumns-breakafter csscolumns-breakinside cssfilters flexbox cssmask shapes csstransforms csstransforms3d csstransitions

The examples below use the test for HTML audio.

When working with CSS we create two selectors based on the classes Modernizr added. If the browser does not support audio the class added will be .no-audio and we hide the #music element. If audio is supported the class is just .audio and we style the elements accordingly.

/* In your CSS: */
.no-audio #music {
  display: none; /* Don't show Audio options */
}
.audio #music button {
  /* Style the Play and Pause buttons nicely */
}

When working with Javascript we test on the Modernizr object for the element we want to test, in this case audio.

if (!Modernizr.audio) {
  /* properties for browsers that do not support audio */
} else {
  /* properties for browsers that support audio */
}

CSS @supports

An alternative to libraries like Modernizr is the @supports rule. It takes the full property and value that you’re testing for and the rules to apply if the property is supported.

@supports (display: flex) {
  div { display: flex; }
}

You can also use not to negate the test. The example below matches browsers that do not support the native flex property.

@supports not (display: flex) {
  div { float: left; } /* alternative styles */
}

Many browsers support prefixed versions of attributes and properties. We can test for them simultaneously using or, as in the example below, where @supports will match if the browser supports any of the flex values.

@supports (display: -webkit-flex) or
          (display: -moz-flex) or
          (display: flex) {

  /* use styles here */
}

We can also chain our properties together and match only if both are supported (why you would want to use appearance: caret is beyond me; it’s just an example).

@supports (display: flex) and (-webkit-appearance: caret) {
  /* something crazy here */
}

JavaScript CSS.supports

The JavaScript counterpart to CSS @supports is window.CSS.supports. The CSS.supports spec provides two ways to use it. The first takes two arguments: one for the property and another for the value:

var supportsFlex = CSS.supports("display", "flex");

The second simply takes the entire condition as a string to be parsed:

var supportsFlexAndAppearance = CSS.supports("(display: flex) and (-webkit-appearance: caret)");

Before using the JavaScript version of supports, it’s important to detect the feature itself. Older versions of Opera used a different method name, which complicates things a bit; we need to validate whether this check is still necessary.

var supportsCSS = !!((window.CSS && window.CSS.supports) || window.supportsCSS || false);
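
Putting the detection and the test together might look like this sketch; the has-flex class name is just an example.

// Detect the supports API first, then test for flexbox and flag the
// document so CSS can key off the result. The class name is illustrative.
var supportsCSS = !!((window.CSS && window.CSS.supports) || window.supportsCSS || false);
if (supportsCSS && window.CSS && CSS.supports('display', 'flex')) {
  document.documentElement.classList.add('has-flex');
}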

Annotation plugin

I’ve always loved the way that Amazon’s Kindle allows you to create a community corpus of annotations for the books you own. I’ve always dreamed of a similar capability for sharing annotations and marginalia with others but was never able to figure out how. When I started researching options I found two that I think merit further research.

annotator.js is a jQuery (1.6 or higher) based system that allows highlights and annotations directly on top of web content. I’m guessing that this will also require the jQuery compatibility plugin… which may talk me out of using it altogether.

To use annotator we load the required script and CSS. This may change if we concatenate all the scripts together.

<script src="//assets.annotateit.org/annotator/v1.2.5/annotator-full.min.js"></script>
<link rel="stylesheet" href="http://assets.annotateit.org/annotator/v1.2.5/annotator.min.css">

We then configure the annotator plugin with code like the example below, where we perform multiple configuration tasks for the plugin.

$( document ).ready(function() {
  // Customise the default plugin options with the third argument.
  $('#content').annotator()
    .annotator('setupPlugins', {}, {
      // Disable the tags plugin
      Tags: false,
      // Filter plugin options
      Filter: {
        addAnnotationFilter: false, // Turn off default annotation filter
        filters: [{label: 'Quote', property: 'quote'}] // Add a quote filter
      }
    });
});

The annotator system also requires showdown.js in order to render Markdown in your annotations; otherwise they are just plain text.

My other concern with annotator is the backend requirements. You have to configure the backend services before you can share your annotations and I have yet to see how well they work offline.

Emphasis is a library created by the New York Times that provides a different type of annotation system. It appends a string to the URL that, when pasted into a web browser, will display the annotations on that page’s content. These URLs can be shared between readers to share the annotations.

There is currently no way for a user to save their annotations short of copying the URL or bookmarking it. I’m exploring the possibility of using local storage to save them locally, or creating a service that stores them in a database so people can see everyone’s annotations. This would make Emphasis very similar to annotator.js.
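
A rough sketch of the local storage idea, assuming we simply persist the Emphasis URL fragment keyed by page path; the key prefix and the fragment handling are assumptions.

// Save the Emphasis annotation fragment for the current page so it can be
// restored on the next visit. The 'emphasis:' key prefix is arbitrary.
function saveAnnotations() {
  if (window.location.hash) {
    localStorage.setItem('emphasis:' + window.location.pathname, window.location.hash);
  }
}

function restoreAnnotations() {
  var saved = localStorage.getItem('emphasis:' + window.location.pathname);
  if (saved && !window.location.hash) {
    window.location.hash = saved;
  }
}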

Footnotes

I’ve researched libraries for footnotes before. Bigfoot.js is by far my favorite one mostly because it gives you the option of floating bubbles right next to the footnote mark or the traditional link to the bottom of the page. I also liked the idea of switching between the two types of footnotes.

I’m also exploring whether Bigfoot can be turned into an endnote script; maybe even pointing to a different page for the footnotes or an annotated bibliography.
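
Getting Bigfoot running is a one-liner once jQuery and the script are loaded; this sketch uses the defaults and leaves out the configuration options that switch between footnote styles.

// Initialize Bigfoot with its defaults once the DOM is ready; the returned
// object can be used to adjust the footnote popovers later.
$(document).ready(function() {
  var footnotes = $.bigfoot();
});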

Animations

How can we translate work like Explorable Explanations and Parable of the Polygons into a book-like experience? Is this the right question to ask? It’s part of my quest to better understand animations and their place in the reading experience.

As usual, there’s more than one way to do it. Below I look at some of the options available.

CSS Animations and Transitions

The most traditional way to animate on the web is to use CSS animations and transitions. They work reliably across browsers and enable a lot of possibilities. Because they are geared towards CSS they suffer some of CSS’s drawbacks: they are bound to prefix hell, they may not be supported in all browsers, and they may not work with SVG elements, only CSS.

Web Animation API

The Web Animation API is intended to provide the features necessary for expressing CSS Transitions, CSS Animations, and SVG 1.1. As such, the use cases of the Web Animations model are the union of the use cases for those three specifications.

It’s a modern API and it works across animatable content without having to worry about what type of element is being animated: SVG and CSS work equally well with the WAAPI.

GSAP

A commercial alternative for animations is GSAP (GreenSock Animation Platform), a professional animation library that, for now, is more powerful than native solutions both existing and proposed. Until that is no longer the case we’ll still have to consider it for high-end animation work.

Tutorial from the GSAP site

Snap SVG

Snap, like its predecessor Raphael, works with SVG graphics, and some of the things it can do are simply amazing: animations and infographics in a format that is resolution independent.

It also provides a good animation library for the SVG content. The Snap demos give you an idea of what you can do with the technology.

Typography, web fonts and lettering

One of the things I really want to explore with this project is how hard we can push the web for publishing. This is one of the main reasons why I chose graceful degradation instead of progressive enhancement.

I want to push the envelope for the browsers that support these features and still provide experiences that older browsers will be able to handle. Some examples of what I mean are in Lynda.com’s Advanced CSS Typographic Techniques and in Jen Simmons’ Layout Labs.

Disclaimer: I’m a Lynda.com subscriber but am not getting a commission from links to any Lynda.com course.

Web Fonts

People tend to forget that web fonts are not really new. Internet Explorer 4 implemented @font-face in 1998 but it did not gain traction for over 10 years… the early implementations didn’t incorporate piracy protections so they were not safe for foundries to work with.

New iterations of @font-face, along with secure font services like Typekit, Google Fonts, Font Bureau, FontShop, Hoefler & Co. and Webtype, make using web fonts much less painful.

Hosting locally versus hosted services

Working with fonts presents two options: Serving them from the network or serving them locally.

Serving them remotely frees us from having to host the different font formats but makes caching for offline use slightly harder.

Hosting them locally requires having different versions of the font to accommodate different browsers, as described in CSS-Tricks’ article on Using @font-face.

This is important because not all foundries make all font formats available. That’s where tools like Font Squirrel’s Webfont Generator come in handy. Assuming that you have a license and the font vendor hasn’t blocked the service, you can upload the font and create the versions necessary to work with @font-face across browsers.

Delivering web fonts in Professional Web Typography provides a complete overview of how to use fonts on the web.

Loading the fonts

When working with web fonts we have to deal with latency, FOIT, FOUT and FOFT depending on the browser.

Tools like Font Face Observer make it less painful to deal with network latency and download speeds.
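
A minimal sketch of using Font Face Observer to avoid invisible text; the font family name and the fonts-loaded class are placeholders.

// Wait for the font to load (Font Face Observer times out on its own), then
// add a class so the CSS can switch from the fallback to the web font.
var font = new FontFaceObserver('Example Serif');

font.load().then(function() {
  document.documentElement.classList.add('fonts-loaded');
}, function() {
  console.log('Example Serif failed to load; staying with the fallback font.');
});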

Ilya Grigorik looks at the performance side of using web fonts.

Generated content for Paged Media

There are a lot of things that you can do in printed paged media that you cannot do online, or can you? CSS Generated Content (with some of the useful tools moved to CSS Lists and Counters) provides some of that functionality on the web.

Rather than trying to track how many figures we have in a document, we can use CSS like the example below to number them automatically and reset the count for each new section of content:

section {
  counter-increment: section_count;
  counter-reset: figure_count;
}

figure {
  counter-increment: figure_count;
}

figure figcaption {
  color: #999;
  margin-top: -.25em;
}

figure figcaption::before {
  content: "Figure " counter(section_count) "-" counter(figure_count) ": ";
}

This will produce output like this:

Figure 5-3:  Bay Bridge during a gorgeous evening

There is a polyfill that covers many of the functions in the specs.

Discoverability and metadata

The easiest way to add metadata is to embed it directly into the document. We can provide a richer user experience by leveraging techniques such as Google Rich Snippets or Bing’s Structured Data.

Search engine discoverability may not be enough for publishers in which case we can create additional metadata files like the one from Dave Cramer’s Epub Zero proposal or a metadata structure geared towards generic web content, not just books.

Progressive and Subcompact Books: Philosophy

This is meant as a living document. Feedback is appreciated and will be incorporated when appropriate. The idea is to use this and its sister technical document as the basis for a proof-of-concept application.

One of the first web reading experiences I saw was Craig Mod’s Art Space Tokyo. I use the term experience deliberately, as it was one of the first efforts to create a consistent experience (or as close to one as possible) between the physical book and the different online versions available to the user. They all share the same illustrations and the same basic structure and layout. They also play to the strengths of each platform.

I’ve worked on creating digital content for the web as well as ebooks in epub and Kindle formats, and it’s a mess. Most, if not all, readers have their own incompatible idiosyncrasies that make it hard to create CSS that works reliably across devices. That, plus the sheer number of form factors for ebook readers (each introducing its own screen size, pixel density, CSS parser, supported standard version and other considerations), makes it even harder to work reliably with ebooks.

I’ve been around the web long enough to realize that we should be able to publish our content online on the open web as easily as we do for our ereading devices. I’ve explored different ways to accomplish this in my research and projects. I’ve looked at building easy frameworks to add your HTML to, I’ve built custom frameworks for publishing XML content as HTML and PDF, have worked with DocBook and DITA, and have explored what Web Components can offer as a publishing platform, among other ideas.

But it wasn’t until I saw Progressive Web Applications that web-based reading experiences on par with native applications became possible. These progressive ebooks are not only readable across form factors but are also available offline and can, if so configured, send push notifications about new content and synchronize the content in the background while the reader is busy doing something else… the service worker attached to the book handles most of these tasks without the user’s direct input and without the user necessarily being on the page or having the browser open.

The term Progressive Web Application was first coined by Alex Russell in Progressive Web Apps: Escaping Tabs Without Losing Our Soul. In the article, Russell describes the requirements for a web application to become a close match to a native app using existing and new web technologies. He sees the following as core characteristics of progressive web applications:

  • Use Responsive Web Design to provide a consistent user experience
  • Progressively-enhanced with Service Workers to let them work offline
  • Adopt an application shell model to create an app-like user experience
  • Use background synchronization to keep the content up to date
  • Served via HTTPS to prevent snooping
  • Identified as “applications” allowing search engines to find them
  • Can use the notification system built into the operating system to push notifications about the application
  • Installable to the home screen through browser-provided prompts. Users can “save” apps to their home screens without intermediaries

I became interested in the idea of combining Progressive Web Apps with Subcompact Publishing when I started looking at offline as a starting point to experiment with web content and at how we can make web content as engaging as applications or devices like the Kindle, Kobo and hundreds of other readers. App Cache, the solution at the time, proved hard to implement and unreliable in its execution, but it was all we had.

What would a web based reading application look like if, in addition to the requirements for a progressive web application, we could also make it small and nimble enough to meet the criteria for a subcompact publication (modified from Craig’s original):

  • Small file sizes
  • Fluid publishing schedule
  • Scroll (don’t paginate)
  • Clear navigation
  • HTML based
  • Touching the open web

The progressive part of our reader gives us a content shell to hold our content and, at the same time, provides a stable platform that makes flaky connections less annoying. This is just a starting point.

Subcompact publishing makes the content easier to create and digest. Progressive Web Apps make it easier to push these smaller chunks of content to the user without having to process all the content at once, and to make it available both for offline use and for faster online experiences.

Because web technologies (HTML, CSS and JavaScript) and APIs (Service Workers, Push Notifications and Background Sync) are at the core of these new experiences, we gain all of the advantages of the open web platform and inherit its issues. We can rely on the web technologies we are familiar with and expand the reach of what we can do with and in browsers.

In Tablets are waiting for their Movable Type the author wishes for a tablet equivalent to Movable Type, with its simplicity and ease of use out of the box (although, from what I remember, customizing MT was a nightmare). I posit that the web needs the tablet equivalent of WordPress… easier to publish and easier to customize. Working with web content should be only as complicated as you want it to be, and we can leverage the strengths of the platform while minimizing its weaknesses. How do we take this message out without turning it into another jQuery?

Ebooks have to live with a very long tail that makes it harder to experiment. We have to contend with first-generation Kindle hardware (the ebook equivalent of IE6) whose owners are still potential customers for publishers’ content. Even within the same format we have enough discrepancies to make it really hard to work on most (never mind all) devices that claim to support the spec.

The main advantage of the open web is its searchability and “the power of the URL”. We no longer need an app store for our content (although having one can’t hurt); we can just point users to Google (or their favorite search engine), and they can search within the publication without depending on a native app’s search. If the search engine route is not enough, libraries like lunr.js provide client-side search capabilities.
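
A rough sketch of what that might look like with lunr.js appears below. It assumes lunr.js is already loaded on the page and that we have a chapters array where each entry has href, title and body fields; none of those names come from this project.

// Build the index up front (or load a pre-built one) from our content.
const idx = lunr(function () {
  this.ref('href');
  this.field('title');
  this.field('body');

  chapters.forEach(function (chapter) {
    this.add(chapter);
  }, this);
});

// Each result's ref points back to the chapter's URL.
const results = idx.search('service worker');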

And if this is not enough, we can add further metadata inline using resources from schema.org to document the book, along with our own custom vocabularies based on existing ones. Specialized search engines can then look for this metadata and display it in special ways; Google’s search engine already does this with enriched pages.

Because we are no longer pushing heaps of content to our users every time we update the application, we can update more frequently and, if the user agrees, transparently, in a way that doesn’t preclude their using the content they already have… better to have old content than no content at all.

We may decide that the subcompact model doesn’t work for the type of content we are trying to package. That’s ok too; PWAs can also handle traditional publishing paradigms where we push the entire content to the user all at once or in stages… for example, we can download the first 10 chapters of our book when the Service Worker first installs and then load the rest of the book in chunks of 5 or 10 chapters as we update the service worker. The process will be as transparent to the user as we choose to make it.
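
A minimal sketch of that first install step is shown below. The cache name and chapter file names are assumptions for illustration, not part of this project’s build.

// Hypothetical install handler for the book's service worker.
const CHAPTER_CACHE = 'chapters-v1';
const FIRST_CHAPTERS = [];
for (let i = 1; i <= 10; i++) {
  FIRST_CHAPTERS.push('/chapters/chapter' + ('0' + i).slice(-2) + '.html');
}

self.addEventListener('install', (event) => {
  // Don't finish installing until the first ten chapters are cached.
  event.waitUntil(
    caches.open(CHAPTER_CACHE)
      .then((cache) => cache.addAll(FIRST_CHAPTERS))
  );
});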

Where does the content come from? One of the things I find most intriguing is the range of ways in which we create content and turn it into a book, and whether that book-making process is necessary at all. There are three examples that I find particularly illuminating:

Signal versus Noise is 37signals’ (now Basecamp) blog. It has, directly or indirectly, spawned multiple books (Getting Real, Remote and Rework). I was particularly taken with the first book and how it translated blog content into book format.

I’ve mentioned Art Space Tokyo and its multiplatform approach as one of the earliest influences on this project. It started me thinking about the differences between physical and digital and how they live together in a continuum rather than working against each other in an either/or dichotomy. This led me to consider the different formats available for online content, what their advantages are, and how the open web can leverage those advantages when creating digital content.

I reached some conclusions about the Digital Content We Make:

  • The Digital Content We Make embraces its medium, working in concert with the content to make a clear and compelling narrative
  • The Digital Content We Make does not need to obey the same constraints printed books do
  • The Digital Content We Make is confident in the usage of digital tools and techniques. We’re not afraid to use polyfills and plugins to accomplish our goals

I’m ambivalent when it comes to paged content. On the one hand, it’s hard to move past the metaphors that correspond to what we do in the physical world; on the other hand, how much do we really exercise those metaphors when reading online, if we read online at all? How many of these scanning-versus-reading rules apply to our long-form content?

There are other things that attracted me to web publishing… what I call engaged readers. Explorable Explanations and Parable of the Polygons are highly interactive experiences that require JavaScript libraries and scripts that ebook readers are unlikely to support because of their security models. But we can do this and more in a web publication. It greatly expands what we can do with our books and makes it easier to convert paper books into fully interactive digital ones. To me this is one more reason to move to the open web as the driver for our digital books.

This will sound counterintuitive in light of the subject of these thoughts: KISS seems to be a forgotten art. We keep reaching for the latest and greatest and forget that the biggest thing analog has over digital is its simplicity for the end user.

I think that’s the other reason for this project: to provide a set of technologies for digital web publishing while keeping the possibilities open. Both progressive web applications and subcompact publishing promote simplicity for the end user. Choosing the technologies we use to deliver these experiences should be simple as well.