Progressive Subcompact Publications: Structuring and Styling Content

We want our content to look awesome regardless of the device. How do we accomplish this?

A good starting point is Ethan Marcotte’s Responsive Web Design (or whatever Responsive Web Design book is your favorite). We want the experience to scale to whatever device or platform we’re targeting and, whatever design we choose, the first thing we need to make sure of is that it’ll work well on phones, tablets and desktops (in other words, everywhere there is a browser).

Once we have a layout we can start thinking about what is, to me, the most important part of any long form project: typography. I’ve written extensively on what typography is and how it works on the web; now we need to take the next steps.

This is where the first set of choices happen: What layout do we choose? How do we create something engaging without becoming repetitive? How do we craft a reading experience that matches the content?

I first saw Jen’s presentation at SFHTML5. I see it as a challenge and an opportunity to think differently about the way we create and lay out our content on the web. For longer form content this also speaks to letting the content dictate the layout and not the other way around. What is it that makes magazine layouts so interesting?

I collect electronic versions of GQ, Wired, Vanity Fair, Fast Company and Harvard Business Review, and the biggest question when I read them is how we can recreate this reading experience on the open web. The ads in magazines are what intrigue me the most… and where a lot of my most radical ideas come from.

After watching this presentation from Beyond Tellerrand I couldn’t help reading the new edition of Hardboiled Web Design. Clarke advocates that creativity should be at the center of our online design work… It speaks to the need for art-directed web design and bespoke designs rather than using the same design over and over.

If we drop the book metaphor from our online reading experiences, there is no limit to what we can do with our online publications. We need to go back to our content and see how we can enrich it and what technologies we can use to do so… We now have a lot of layout tools that, a few years ago, were only possible in InDesign and other desktop publishing tools, or took a lot of extra workarounds to accomplish with CSS/HTML/JavaScript.

Now we need to get out of our collective comfort zone and challenge both ourselves and our future readers with layouts that go beyond what we see on the web today.

One last example of what we can do with our new CSS tools, and how much we can be true to our creative selves without having to lie to our web developer selves: Justin McDowell uses new CSS technologies to recreate works from the Bauhaus school.

Progressive Subcompact Publications: Annotations

I still remember the first time I made an annotation from a Kindle book available on their public site (kindle.amazon.com). I saw the possibilities as limitless, until I realized that they were limitless only as long as you bought the book from Amazon and read it on a Kindle device or application.

Every time I’ve turned around and searched for some way to annotate the web I’ve come back to the same two solutions, but I’ve never had a project they work well with. I think PSPs are the perfect place to put them in practice. There are two libraries I think are particularly appropriate: Emphasis and annotator.js, which provide different ways to make and share annotations from your PSPs.

Emphasis provides dynamic paragraph-specific anchor links and the ability to highlight text in a document. It makes the information about the highlighted elements of the page available in the URL hash so it can be emailed, bookmarked, or shared.
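The idea is easiest to see with a sketch. The code below illustrates hash-based deep linking in its simplest form; it is not Emphasis’s actual API or hash format, just a hypothetical reduction of the concept:

// Hypothetical sketch of hash-based paragraph linking.
// Emphasis's real hash format and API are more sophisticated.
function highlightFromHash() {
  var match = /^#p(\d+)$/.exec(window.location.hash);
  if (!match) return;

  // Find the paragraph the hash points to and highlight it
  var paragraphs = document.querySelectorAll('article p');
  var target = paragraphs[parseInt(match[1], 10) - 1];
  if (target) {
    target.classList.add('highlighted');
    target.scrollIntoView();
  }
}

window.addEventListener('hashchange', highlightFromHash);
window.addEventListener('DOMContentLoaded', highlightFromHash);

Because the highlight state lives entirely in the URL, sharing an annotation is as simple as sharing a link.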

annotator provides a more traditional annotation interface that is closer in spirit to the Kindle annotation UI that attracted me to the concept when I first saw it.

Another tool that sounds interesting is MIT’s Annotation Studio, but it seems to be geared towards MIT Hyperstudio’s larger project and not necessarily ready as a standalone solution; that said, your mileage may vary.

The thing to consider is how these annotation tools store the annotations. Do they use server-side databases? If so, how do we cache new annotations when the reader is offline? Offline Google Analytics provides a possible model: store the annotations in IndexedDB and then play them back to the server when the reader goes back online.
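A minimal sketch of that outbox pattern, written with the Dexie wrapper discussed in the IndexedDB section below, might look like this. The /annotations endpoint and the table layout are hypothetical:

// Hypothetical annotation outbox. The /annotations endpoint
// and the outbox schema are illustrative only.
var db = new Dexie('annotations');
db.version(1).stores({ outbox: '++id' });

function saveAnnotation(annotation) {
  // Queue locally first so the annotation survives offline periods
  return db.outbox.add(annotation).then(flushOutbox);
}

function flushOutbox() {
  if (!navigator.onLine) return Promise.resolve();

  return db.outbox.toArray().then(function (pending) {
    return Promise.all(pending.map(function (item) {
      return fetch('/annotations', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(item)
      }).then(function () {
        // Only remove the queued annotation once the server has it
        return db.outbox.delete(item.id);
      });
    }));
  });
}

// Replay the outbox when connectivity returns
window.addEventListener('online', flushOutbox);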

Progressive Subcompact Publications: Beyond Offline

The service worker script we discussed in the prior section is the core of a PSP, but there’s a lot more we can do to make our reading experiences behave more like native applications. Some of these features are:

  • Push notifications
  • Background sync

While not directly related to service workers, this feature may help you get better re-engagement from your users:

  • Installation on mobile home screens

Also not directly related to progressive web applications, we can preserve data, not just content, in our web applications using:

  • IndexedDB

We’ll discuss them in the sections below.

Push notifications

Using Push notifications we can communicate events and new information to the user through the Operating System’s push notification system and UI.

Detailed instructions for setting up Push Notifications using Chrome and Firebase Cloud Messaging (the successor to Google Cloud Messaging) can be found in Push Notifications on the Open Web.

Push Notifications: Timely, Relevant, and Precise provides context for how and when to use push notifications.
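As a minimal sketch, subscribing a reader and reacting to a push might look like the code below. The applicationServerKey is a placeholder for your push service’s public key, and the notification text is arbitrary:

// In the page: subscribe once the service worker is ready.
// applicationServerKey is a placeholder for your push service's
// public key (a Uint8Array).
var applicationServerKey = new Uint8Array([/* your public key bytes */]);

navigator.serviceWorker.ready.then(function(registration) {
  return registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: applicationServerKey
  });
}).then(function(subscription) {
  // Send the subscription to your server so it can address
  // push messages to this browser
  console.log('Push endpoint:', subscription.endpoint);
});

// In the service worker: show a notification when a push arrives
self.addEventListener('push', function(event) {
  event.waitUntil(
    self.registration.showNotification('New chapter available')
  );
});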

Background Sync

If you write an email, instant message, or simply favourite a tweet, the application needs to communicate that data to the server. If that fails, either due to user connectivity, service availability or anything in-between, the app can store that action in some kind of ‘outbox’ for retry later.

Unfortunately, on the web, that outbox can only be processed while the site is displayed in a browsing context. This is particularly problematic on mobile, where browsing contexts are frequently shut down to free memory.

This API provides a web equivalent to native application platforms’ job-scheduling APIs, which let developers collaborate with the system to ensure low power usage and background-driven processing. In the future we’ll also be able to do periodic synchronizations.

A more detailed explanation can be found in the explainer document for background sync.
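A minimal sketch of a one-off sync looks like this. The tag name and the sendQueuedAnnotations() helper are hypothetical:

// In the page: ask the browser to run a sync when it can
navigator.serviceWorker.ready.then(function(registration) {
  return registration.sync.register('annotations-outbox');
});

// In the service worker: the browser fires this event when it
// judges connectivity to be back, even if the page has been closed
self.addEventListener('sync', function(event) {
  if (event.tag === 'annotations-outbox') {
    event.waitUntil(sendQueuedAnnotations());
  }
});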

Install in mobile home screens

Using the W3C App Manifest specification and the existing meta tags for adding an app to the home screen on mobile devices, we enable our users to add our web content to the home screen of their devices and foster a higher level of interaction and re-engagement with the content.

It’s next to impossible to remember all the items you can include in your manifest. Rather than go through tutorials for the reduced set required by Android’s add-to-homescreen feature (documented in Google’s Web Fundamentals), we can use tools like Manifestation (available as a Node package or web based) to generate a complete manifest for our application. The Node version can also be used as part of a Gulp/Grunt build system.

HTML5 Doctor has a good, up-to-date reference on App Manifest. Another source of information is the Mozilla Developer Network article on Web App Manifest.
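For reference, a minimal manifest might look something like this (the names, colors and paths are illustrative):

{
  "name": "The Adventures of an Imaginary Book",
  "short_name": "Imaginary Book",
  "start_url": "/index.html",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    {
      "src": "/images/icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}

You then point to it from your pages with a link element: <link rel="manifest" href="/manifest.json">.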

Expect a deeper dive on Web Application Manifest some time in December.

IndexedDB

We’ve had client-side storage solutions for a while now: Session Storage, WebSQL and IndexedDB. Until recently they had no uniform support among browsers, and one (WebSQL) is no longer being developed because all the implementations relied on SQLite as the backend; this was considered to violate the “two interoperable implementations” requirement for W3C specs.

I’ve chosen IndexedDB as the engine to store data for my offline applications because, as complicated as the API is to work with, there are wrapper libraries that make the work easier and will work across browsers, even Safari (which has a deserved reputation for shitty IndexedDB implementations).

Knowing how much of a pain it can be to write bare IndexedDB code, I’ve picked Dexie as my wrapper library. It is easy to use and, for browsers that have issues with IndexedDB like Safari, provides a transparent fallback to WebSQL. It also uses promises rather than callbacks and, once you start working with promises, you will never go back to callbacks 🙂

The example below shows how to create a database and a store for a theoretical friends data store.

var db = new Dexie("friends");

// Define a schema
db.version(1).stores({
  friends: 'name, age'
});

We then open the database

// Open the database
db.open().catch(function(error) {
  alert('Uh oh: ' + error);
});

We can then insert records into the datastore one at a time or using a transaction.

Transactions group one or more actions into an atomic unit. If any of the actions composing a transaction fails then the entire transaction fails and the data store is rolled back to the state before the transaction began.

// Insert data into the database
db.friends.add({
  name: 'Camilla', age: 25
});

// Insert data into database using transactions
function populateSomeData() {
  return db.transaction("rw", db.friends, function () {
    db.friends.clear();
    db.friends.add({ name: "David", age: 48 });
    db.friends.add({ name: "Ylva", age: 21 });
    db.friends.add({ name: "Jon", age: 76 });
    db.friends.add({ name: "Måns", age: 56 });
    db.friends.add({ name: "Daniel", age: 55 });
    db.friends.add({ name: "Nils", age: 42 });
    db.friends.add({ name: "Zlatan", age: 21 });

    // Log data from DB:
    db.friends.orderBy('name').each(function (friend) {
        console.log(JSON.stringify(friend));
    });
  })
  .catch(function (e) {
    console.error(e);
  });
}

We can then retrieve data from the store using queries similar to SQL syntax. The example below retrieves all the friends whose age is above 35 and displays their names.

// Query friends data store
db.friends
  .where('age')
  .above(35)
  .each(function (friend) {
    console.log(friend.name);
  });

There may be occasions when we need to delete the database, maybe because we don’t need it again or maybe because we screwed up and want to start over.

db.delete().then(function() {
    console.log("Database successfully deleted");
}).catch(function (err) {
    console.error("Could not delete database");
}).finally(function() {
    // Do what should be done next...
});

This is a very broad and quick overview of Dexie. If you want more information check the Dexie.js Tutorial to get started.

There are other wrapper libraries for IndexedDB but Dexie is the most flexible and forgiving one for me.

Progressive Subcompact Publications: How they work

How does this all work?

At the core of our progressive subcompact publications is a service worker. This worker is a type of shared worker that also acts as a network proxy for your requests: you can cache responses, provide new responses based on the request you make, and build the basic mechanisms for push notifications and background content synchronization.

We’ll break the service worker down into two sections: the script itself and the registration code you add to your entry point (usually index.html).

Service worker: The script

Below is a fairly common pattern for building a service worker that performs the following tasks:

  • Caches the content of our application shell
  • Automatically cleans up old cached content when the service worker is updated
  • Fetches app resources using a ‘cache first’ strategy. If the content requested is in the cache then serve it from there. If it’s not in the cache then make a network request for the resource, serve it to the user and put it in the cache for later requests
var CACHE_NAME = 'my-site-cache-v1';
var cacheWhitelist = [CACHE_NAME];
var urlsToCache = [
    '/',
    '/styles/main.css',
    '/script/main.js',
    '/images/banner.png'
];

self.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(function(cache) {
        console.log('Opened cache');
        return cache.addAll(urlsToCache);
      })
  );
});

self.addEventListener('activate', function(event) {
  event.waitUntil(
    caches.keys().then(function(cacheNames) {
      return Promise.all(
        cacheNames.map(function(cacheName) {
          // Delete any cache not on the whitelist
          if (cacheWhitelist.indexOf(cacheName) === -1) {
            return caches.delete(cacheName);
          }
        })
      );
    })
  );
});

self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request)
      .then(function(response) {
        // Serve the resource from the cache if we have it
        if (response) {
          return response;
        }

        return fetch(event.request)
          .then(function(response) {
            // Check if we received a valid response
            if (!response || response.status !== 200) {
              throw Error('unable to retrieve file');
            }

            // Cache a copy of the response and serve the original
            var responseToCache = response.clone();
            caches.open(CACHE_NAME)
              .then(function(cache) {
                cache.put(event.request, responseToCache);
              });
            return response;
          })
          .catch(function(error) {
            console.log('[Service Worker] unable to complete the request: ', error);
          });
      })
  );
});

There are things that we are not covering on purpose for the sake of keeping the code short. Some of these things include:

  • For this example we’ve assumed a minimal set of elements to cache for the application shell. We can be more detailed and add fonts and other static resources. We may also assign the array of items to cache on install to a variable to make it easier to work with
  • Providing a solution for when the content is not in the cache and the network is not available. We can cache default feedback for text-based content or programmatically generate an SVG image as a fallback
  • Putting content in different caches so deleting one group of resources doesn’t delete all of them
  • We make no effort to add hashes to the resources we cache so we can do proper HTTP cache busting when needed

But at about 60 lines of JavaScript that will work in three of the five major browsers (and soon in all of them), I think it does a pretty good job.

Service worker: The registration

Assuming that we saved the service worker as sw.js we can write the code below inside a script tag on our entry page (index.html).

if ('serviceWorker' in navigator) {
  console.log('Service Worker is supported');
  navigator.serviceWorker.register('sw.js').then(function(reg) {
    console.log('Yay!', reg);
  }).catch(function(err) {
    console.log('boo!', err);
  });
}

This script checks for service worker support by testing whether serviceWorker exists in the navigator object. If it does, service workers are supported; we log a message to the console and then register the service worker.

If registration fails, the catch handler triggers and we log the error to the console. (If serviceWorker doesn’t exist in the navigator object, service workers are not supported and the script simply does nothing.)

That’s it. The combination of those two scripts gives us consistent performance across devices and the possibility of working offline after accessing the content once while online.

Service Worker: Next step

Doing it by hand is fun and teaches you a lot about the inner workings of service workers, but manually maintaining the list of files you want to cache and the routes you use to cache your content gets tedious quickly.

sw-precache is a Google tool developed to automate the creation of service workers with application shell caching on installation. The tool can be used from the command line or as part of a build system (Grunt, Gulp and others).

It will also take care of importing additional scripts to use sw-toolbox (described in the next section).

A gulpfile.js using sw-precache looks like this:

// Assigning modules to local constants
var gulp = require('gulp');
// Required for sw-precache
var path = require('path');
var swPrecache = require('sw-precache');
// Array of paths. Currently only uses the src to represent the path to source
var paths = {
    src: './'
};

gulp.task('service-worker', function(callback) {
  swPrecache.write(path.join(paths.src, 'service-worker.js'), {
    staticFileGlobs: [
      paths.src + 'index.html',
      paths.src + 'js/main.js',
      paths.src + 'css/main.css',
      paths.src + 'images/**/*'
    ],
    importScripts: [
      'node_modules/sw-toolbox/sw-toolbox.js',
      paths.src + 'js/toolbox-scripts.js'
    ],
    stripPrefix: paths.src
  }, callback);
});

sw-toolbox automates dynamic caching for your service worker. It creates customizable routes for your caching and supports Express-like or regular-expression-based patterns to match requests and resources.

In the gulpfile.js above, the importScripts section imports two files:

  • sw-toolbox.js is the library that will run the custom routes
  • toolbox-scripts.js contains our custom toolbox routing

The script itself is wrapped in an immediately-invoked function expression (IIFE) to keep our code from polluting the global namespace. Inside the IIFE we work with different routes.

All these routes use the get HTTP verb to represent the action the router will take.

The toolbox then takes a pattern to match the route against and a cache strategy.

There is an optional cache object that contains additional parameters for the cache like (cache) name, maximum number of entries (maxEntries) and maximum duration of the cache in seconds.

The toolbox-scripts.js looks like this:

(function(global) {
  'use strict';

  // The route for any requests from the googleapis origin
  global.toolbox.router.get('/(.*)', global.toolbox.cacheFirst, {
    cache: {
      name: 'googleapis',
      maxEntries: 20
    },
    origin: /\.googleapis\.com$/
  });

  // We want no more than 50 images in the cache.
  // We use a cache first strategy
  global.toolbox.router.get(/\.(?:png|gif|jpg)$/, global.toolbox.cacheFirst, {
    cache: {
      name: 'images-cache',
      maxEntries: 50
    }
  });

  // pull html content using network first
  global.addEventListener('fetch', function(event) {
    if (event.request.headers.get('accept').includes('text/html')) {
      event.respondWith(global.toolbox.networkFirst(event.request));
    }

    // you can add additional synchronous checks based on event.request.
  });

  // pull video using network only. We don't want such large files in the cache
  global.toolbox.router.get('(.+)', global.toolbox.networkOnly, {
    origin: /\.(?:youtube|vimeo)\.com$/
  });

  // the default route is global and uses cacheFirst
  global.toolbox.router.get('/*', global.toolbox.cacheFirst);
})(self);

Registering the automatically generated service worker is no different from registering the manually created script. Assuming that we saved the service worker as service-worker.js, the registration code in our entry page (index.html) looks like this:

if ('serviceWorker' in navigator) {
  console.log('Service Worker is supported');
  navigator.serviceWorker.register('service-worker.js').then(function(reg) {
    console.log('Yay!', reg);
  }).catch(function(err) {
    console.log('boo!', err);
  });
}

Progressive Subcompact Publications: Introduction

For the past few months I’ve been working at Google building a set of instructor-led courses on how to build progressive web applications. This has made me think of how to push some of these concepts into what I call “Progressive Subcompact Publications”. These concepts are different from ePub Next and any number of formats vying for adoption, each of which has issues that are hard to overcome:

  • They seek to replace the installed EPUB (and Kindle) user base. Since most users of iBooks and Kindle are locked into their devices and readers, this is not a good idea
  • There will never be uniform buy-in to new specs or ways to publish content and, unless you can get a majority of publishers to implement your specification, schema or idea, you will be competing with a behemoth that is very slow to evolve (not questioning the reasons, just making a statement)
  • Some people are trying to establish their format as a de facto standard (use this instead of what you already have) and that’s dangerous
  • It’s dangerous if you fail to get full buy-in because it segments the market even further
  • It’s dangerous if you succeed because the de facto standard becomes a de jure standard and you have to support it and work around all the warts that were OK when you were developing it (check the JavaScript specifications for the amount of baggage carried over to keep old code from breaking)

Instead I’m looking at progressive web applications as a starting point for an exploration of how far we can push the web as a publishing medium.

What are progressive web applications

Alex Russell coined the term “Progressive Web Applications” in Progressive Web Apps: Escaping Tabs Without Losing Our Soul. It is an umbrella term for a series of technologies and best practices to make our users’ experience feel more like native applications without losing what makes the web awesome. The characteristics of these apps (as defined in the post) are:

  • Responsive: to fit any form factor
  • Connectivity independent: Progressively-enhanced with Service Workers to let them work offline
  • App-like interactions: Adopt a Shell + Content application model to create appy navigations & interactions
  • Fresh: Transparently always up-to-date thanks to the Service Worker update process
  • Safe: Served via TLS (a Service Worker requirement) to prevent snooping
  • Discoverable: Are identifiable as “applications” thanks to W3C Manifests and Service Worker registration scope allowing search engines to find them
  • Re-engageable: Can access the re-engagement UIs of the OS; e.g. Push Notifications
  • Installable in mobile: to the home screen through browser-provided prompts, allowing users to “keep” apps they find most useful without the hassle of an app store
  • Linkable: meaning they’re zero-friction, zero-install, and easy to share. The social power of URLs matters.

Note that none of these ideas involve implementing new technologies. They are all in the specification pipeline at W3C or WHATWG and have multiple browser implementations already in the market.

These technologies also don’t stop you from using the new, shiny and awesome stuff coming down the pipeline in CSS, JavaScript and related APIs and technologies. Nothing stops you from using WebGL 2.0, CSS Grids and other awesomeness coming soon to browsers.

We will also briefly explore what it would take to make PSPs into full desktop and mobile applications using Electron and Apache Cordova / Adobe PhoneGap. Again, this is not meant to be a perfect solution but an exploration of possibilities.

What is subcompact publishing

It seems that perfection is attained, not when there is nothing more to add, but when there is nothing more to take away.
Antoine de Saint Exupéry

The term Subcompact Publishing was coined by Craig Mod to describe a new and different publishing methodology rooted in the digital world rather than an extension of traditional publishing methods and systems.

According to Mod:

  • Subcompact Publishing tools are first and foremost straightforward and require few to no instructions. Compare this to the instructions on how to navigate the current crop of digital magazines
  • The editorial and design decisions around them react to digital as a distribution and consumption space. We no longer buy print magazines but read them online. How can we leverage the online publishing and reading experiences?
  • They are the result of dumping our publishing related technology on a table and asking ourselves — what are the core tools we can build with all this stuff? Don’t think of online as just an extension of print but explore what things you can do only online and how that enhances the reader’s experience

Furthermore, Craig describes subcompact publications as having the following characteristics:

  • Small issue sizes (3-7 articles / issue)
  • Small file sizes
  • Digital-aware subscription prices
  • Fluid publishing schedule
  • Scroll (don’t paginate)
  • Clear navigation
  • HTML(ish) based
  • Touching the open web

Reading the essay makes it clear that it’s geared towards magazines but, with a few modifications, it applies equally well to books and other long form content. For this project, geared towards books and other collection-type publications, I’ve changed some of the definitions of Subcompact Publishing as listed below:

  • Small issue sizes (3-7 articles / issue) / Small file sizes Because we are using technologies that allow us to load content on demand and to cache it in the user’s browser, the need to keep the content small, in both issue size and file size, becomes less relevant. We can load the shell of our book independently of the content and load the content in smaller bites; for example, we can load the first ten chapters of a book right away and then load the rest on demand (see the sketch after this list). This does not mean we should forget about best practices in compressing and delivering the content, but with service workers and caching available we can worry more about the content itself than about how it’s delivered. If we add HTTP/2 and server push to the mix, the speed gain becomes significant if implemented correctly
  • Fluid publishing schedule Because we can update the content of our web publications whenever necessary, we can push new or updated content at any point without having to release the entire package again or go through a vendor’s store approval process
  • Scroll (don’t paginate) Unless we have a compelling reason to paginate
  • Clear navigation We have trained our users to accept certain metaphors for navigating our web applications. There is no compelling reason to change that now and, if there is, it had better be a very good reason
  • HTML based Here is the main point of divergence from Craig’s conception of subcompact publications. PSPs are meant for the web and, if the developer chooses, for HTML-based publishing formats. iBooks and, especially, Kindle are closed ecosystems where it’s very difficult to get into the ecosystem beyond using the tools they provide to adapt your format to their specifications… It is already hard enough to work with different browsers and their uneven CSS support… none of the existing tools handle epub readers and their own prefixing requirements
  • Using the open web One of the biggest draws of the web is that it requires no installation process or approval for content delivery to the end users. Leveraging this makes the idea of Progressive Subcompact Publications easier to work with, even if DRM and other rights management issues are not tackled from the start
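To illustrate the load-on-demand idea from the first item above, here is a rough sketch. The chapter paths, the main element, and the next-chapter button are assumptions; the service worker’s fetch handler would cache each chapter as it comes in:

// Hypothetical on-demand chapter loading. Paths and selectors
// are illustrative only.
var currentChapter = 1;

function loadChapter(number) {
  return fetch('/chapters/chapter-' + number + '.html')
    .then(function(response) {
      if (!response.ok) {
        throw new Error('Could not load chapter ' + number);
      }
      return response.text();
    })
    .then(function(html) {
      // Swap the chapter into the content area of our app shell
      document.querySelector('main').innerHTML = html;
      currentChapter = number;
    });
}

// Load the next chapter when the reader asks for it
document.querySelector('.next-chapter')
  .addEventListener('click', function() {
    loadChapter(currentChapter + 1);
  });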

This is what a progressive web application looks like. It may also be what our web reading experiences look like in the not too distant future.