
W3C + IDPF = a different experience

I’ve been reading the different positions in the debate that started when the International Digital Publishing Forum (IDPF) and the W3C began talking about merging. I’ve also purposefully stayed quiet as I digest the conversations and try to understand the rationale for such a merger. This is my understanding of, and position on, the whole situation.

IDPF is moving towards an HTML experience for their users. Looking at the EPUB 3.1 changes from EPUB 3.0.1 shows a move away from what has been traditionally part of the EPUB specification.

From the W3C side, we can gauge their interest in publishing by how much traction (if any) the new Digital Publication spec gains among both publishers and web developers.

One problem I see right off the bat is that the primary audience of the W3C, browser makers, have competing interests… Let’s not forget that it was mobile performance concerns that caused regions to be removed from the Blink rendering engine, and it’s philosophical purity debates that have ground the specification to a standstill in the CSS working group.

With so many different audiences to please, I find it hard to believe that publishers will be able to influence the W3C’s working groups strongly enough for the necessary specifications, and the changes to existing specifications, to reach recommendation status. How many more specifications do we need? How much longer do we have to wait?

Baldur Bjarnason (@fakebaldur on Twitter) wrote an interesting piece where he outlines the problems he sees in both the IDPF and W3C ways of creating specifications. He is highly critical of the processes and the closed nature of the specifications: even though the specifications are publicly accessible, the actual decision making is done by member organizations through their representatives, and the cost of membership is prohibitively expensive for individuals to cover (see the W3C membership costs for a US organization starting in October 2016 and the IDPF Membership Dues).

He then argues for an open development model using the WICG incubation methodology, which is modeled after the intent-to-implement and intent-to-ship templates I’ve seen used on Blink-dev. Unlike Blink-dev, I fail to see how an open development process for some of the existing APIs (take CSS Regions, for example) would not get deadlocked in the open rather than in the working group where it was born, and where it’s still sitting, waiting for someone to change their mind about it. Would putting the Portable Web Publications specification under the Web Incubator Community Group actually improve the specification? I have my doubts that such a move would accomplish much unless we can be guaranteed that it will go onto a recommendation track.

Baldur further compares PWP to XHTML2, the W3C’s failed attempt at recasting HTML as an XML vocabulary. I find this assertion particularly troublesome because it fails to take a few things into account. There are no competing proposals for something other than Portable Web Publications on the open web; there was at least one proposal from Opera and Mozilla to further develop web applications… it was rejected, and the rejection resulted in HTML5, which the W3C later adopted as the starting point for their own work in May 2007. Tzviya Siegman and Dave Cramer have made good efforts at engaging production folks (those in the #eprdctn group on Twitter) with these standards and specifications. I haven’t seen much engagement from the community.

One of my strongest reasons to support a merger is consolidation. The market for ereaders is incredibly fragmented, ranging from e-ink Kindle readers to the Kindle Fire in all its incarnations to iBooks, and none of them support the full specification in a consistent manner. Jiminy Panoz has documented some of these discrepancies and some of the issues he has found when researching the differences in rendering across readers, and it just makes me sad to see the number and type of discrepancies.

Perhaps what we’re missing is how to best construct books and reading content for the web. Perhaps the future of books is in a merged organization where the people who have already worked on performance, layouts and typography can show the publishing world what they’ve done, the publishing world can tell the web community what they’ve done before, and we can come to an agreement on the best way to move forward. If you’ve done any web-related work in the past 10 years, you know how bad the fragmentation was and the design compromises we had to make as a result. Yet, somehow, browser makers and other interested parties managed to settle their differences and work together. That’s why the web is where it is today (and as bad as you think it is, it’s light years ahead of where it used to be.)

The web can compete with native experiences better than it ever has. Progressive web applications leverage the technologies proposed for portable web publications without the need to launch yet another specification into an uncertain future. What’s most important to me is that we can polyfill those features still in limbo (regions is the one that comes to mind.)

So we may have a solution in PWPs. Is it a perfect solution? No, it isn’t. But it’s something we can start testing right now and improve through iteration until it gets closer to what we really want.



ES6, Babel and You: Modules, the what and the how

Thanks to Ada Rose Edwards for pointing me to rollup.js and providing examples of how to configure it. 

I’ve always struggled to understand the differences between modules and classes, and I’m still not 100% sure I understand them, but I think I do well enough to write about it.

A class has to be instantiated using the new operator and is an all-or-nothing proposition: either you use the entire class or none of it; you cannot pull in some methods of the class and not others.
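
For contrast, here’s a minimal sketch of that all-or-nothing behavior, using a hypothetical ImageProcessor class:

// A class is consumed as a unit: instantiating it with `new`
// gives you every method it defines, needed or not.
class ImageProcessor {
  constructor(url) {
    this.url = url;
  }

  scaleToFit(width, height) { return this; }
  watermark(text) { return this; }
  grayscale() { return this; }
}

// There is no way to pull in just `grayscale` without the rest.
const processor = new ImageProcessor('js/photo.jpg');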

With modules you have to explicitly export the elements of your module that you want to make available, and explicitly state what you want to import from a given package.

A module representing the functions we created to work with promises looks like this:

// image-module.js
export function loadImage(url) {
  return new Promise((resolve, reject) => {
    var image = new Image();
    image.src = url;

    image.onload = () => {
      resolve(image);
    };

    image.onerror = () => {
      reject(new Error('Could not load image at ' + url));
    };
  });
}

// The three functions below are placeholders: they log what they
// would do and pass the image through unchanged.
export function scaleToFit(width, height, image) {
  console.log('Scaling image to ' + width + ' x ' + height);
  return image;
}

export function watermark(text, image) {
  console.log('Watermarking image with ' + text);
  return image;
}

export function grayscale(image) {
  console.log('Converting image to grayscale');
  return image;
}

With ES6 modules we can import an entire module or only specific elements of it: anything the module declares with the export keyword.
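
For example (a sketch against the image-module.js file above), importing the whole module as a namespace object looks like this:

// Import everything the module exports, under one namespace object.
import * as imageTools from './image-module.js';

imageTools.loadImage('js/photo.jpg')
  .then((image) => imageTools.grayscale(image));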

We can also export variables and constants from our modules. In the following example we’ll create a default export in a module called foo.js:

// foo.js
export default 42;

Which we can then import:

import foo from './foo.js';
export default function () {
  console.log(foo); //logs 42 to the console
}
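
A module can also mix a default export with named ones; here is a sketch using a hypothetical settings.js:

// settings.js (hypothetical)
export default 42;
export const name = 'answer';

// consumer.js (hypothetical)
import answer, { name } from './settings.js';
console.log(name + ' = ' + answer); // logs "answer = 42"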

The complement to export is import, which brings the specified elements of a module into the current file. To import and use the exported functions we can use something like the code below:

// main.js: imports the image functions from image-module.js
import { 
  loadImage, 
  scaleToFit, 
  watermark, 
  grayscale } from './image-module.js';

// Image processing pipeline
function processImage(image) {
  loadImage(image)
    .then((image) => {
      document.body.appendChild(image);
      return scaleToFit(300, 450, image);
    })
    .then((image) => {
      return watermark('The Real Estate Company', image);
    })
    .then((image) => {
      return grayscale(image);
    })
    .catch((error) => {
      console.log('we had a problem in running processImage ' + error);
    });
}

processImage('js/banf.jpg');

This selective import allows developers to create internal APIs for their modules. We only export user-facing elements and keep the internals of our API private simply by not exporting them.
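
As a sketch of that pattern (formatLogMessage is a hypothetical helper, not part of the module shown earlier):

// The helper below is module-private: without the export keyword
// it can never be imported by consumers of this module.
function formatLogMessage(action, detail) {
  return action + ': ' + detail;
}

// Only the user-facing function is part of the module's public API.
export function watermark(text, image) {
  console.log(formatLogMessage('Watermarking image', text));
  return image;
}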

Furthermore, you can import from many different modules, as long as they are available to you. Using an example rollup.config.js we’ll see how to leverage imports from multiple locations:

'use strict';

import commonjs from 'rollup-plugin-commonjs';
import nodeResolve from 'rollup-plugin-node-resolve';
import json from 'rollup-plugin-json';

export default {
    entry: './es6/main.js',
    plugins: [
        nodeResolve({
            jsnext: true
        }),
        commonjs({
            include: 'node_modules/**'
        }),
        json()
    ],
    dest: './bundle.js'
};

Modules give us a lot of flexibility. We can create a module for each type of functionality we are implementing (image manipulation, typography, etc.) or one module per type of content (a module for the main page and a module for the catalog, say) and, because we can selectively import elements from an ES6 module, we can keep our code DRY and avoid reinventing the wheel.
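
As a sketch of the per-functionality layout (the file names here are hypothetical), a small index module can re-export each feature module so consumers import everything from one place:

// modules/index.js (hypothetical layout)
export { loadImage, scaleToFit, watermark, grayscale } from './image-manipulation.js';
export { setBodyType, setHeadingType } from './typography.js';

// Consumers still import only what they need:
// import { grayscale, setBodyType } from './modules/index.js';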

The problem

The problem with ES6 modules today is that no browser supports them natively. I spent a lot of time trying to make them work in browsers without transpiling, and I wasn’t able to figure out how to support modules natively across browsers and environments.

Ada Edwards clued me into Rollup.js, a bundler for JavaScript modules. Combined with Babel, it gives us the ability to write ES6 modules using other features of the specification, transpile them to ES5, and bundle them together in a way that works with current browsers.

The best thing about Rollup is that it only bundles the module imports that are actually needed for our project to work (a technique known as tree shaking), reducing both the size of the bundle and the number of bytes we have to push through the wire.
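
For example (a sketch against the image-module.js above): if the entry point only imports grayscale, Rollup leaves loadImage, scaleToFit and watermark out of the bundle entirely.

// main.js: only grayscale is imported, so the other three
// exports of image-module.js never reach bundle.js.
import { grayscale } from './image-module.js';

const image = new Image();
grayscale(image);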

As with any Node application, we need to install Node (which bundles NPM) and initialize the project:

npm init

And follow the prompts to create the package.json file.

First we install Rollup as a global Node package:

npm install -g rollup

This will install the rollup binary in your path so you can just run rollup.

Next we install the necessary plugins:

npm install -D rollup babel-preset-es2015-rollup rollup-plugin-babel \
rollup-plugin-commonjs rollup-plugin-json rollup-plugin-node-resolve

rollup-plugin-babel and babel-preset-es2015-rollup together handle the Babel transpilation. We use a custom ES2015 preset so we can be sure that Babel will not convert the modules to CommonJS before Rollup has a chance to work with them.

rollup-plugin-commonjs and rollup-plugin-node-resolve do something similar for third-party packages: node-resolve locates the modules installed under node_modules and commonjs converts them from CommonJS into ES6 modules that Rollup can bundle.

rollup-plugin-json lets us import data from JSON files, such as the project’s package.json.

The last stage is to build the rollup.config.js to make sure we run the tool the same way every time. Since we’re working with ES6 we can use import statements instead of require.

Part of the configuration is to configure the plugins:
* nodeResolve resolves third-party modules from node_modules (jsnext: true tells it to prefer the ES6 builds that packages expose)
* commonjs converts the CommonJS packages from node_modules into modules Rollup can bundle
* json lets you use data from JSON files such as the project’s package.json file

'use strict';

import commonjs from 'rollup-plugin-commonjs';
import nodeResolve from 'rollup-plugin-node-resolve';
import json from 'rollup-plugin-json';

export default {
    // where Rollup starts building the dependency graph
    entry: './es6/main.js',
    plugins: [
        // resolve third-party modules from node_modules,
        // preferring the ES6 builds packages expose via jsnext:main
        nodeResolve({
            jsnext: true
        }),
        // convert CommonJS packages from node_modules into ES6 modules
        commonjs({
            include: 'node_modules/**'
        }),
        // allow importing data from JSON files
        json()
    ],
    // where the bundled result is written
    dest: './bundle.js'
};
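
With the configuration in place, running the bundle is a single command. As a sketch (check the Rollup documentation for current flags), -c with no argument picks up rollup.config.js from the current directory:

# use the default rollup.config.js in the current directory
rollup -c

# or point at a specific configuration file
rollup -c some-other-config.js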

The one thing I’m not too fond of in the Rollup configuration is that it hardcodes both the entry point for the conversion and the destination.

These are minor nitpicks that can be fixed by working Rollup into your build process with different tasks for different Rollup configurations, something I deliberately chose not to do.

Using the image-module.js module and the rollup.config.js we just defined produces the following valid ES5 result:

function loadImage(url) {
  return new Promise(function (resolve, reject) {
    var image = new Image();
    image.src = url;

    image.onload = function () {
      resolve(image);
    };

    image.onerror = function () {
      reject(new Error('Could not load image at ' + url));
    };
  });
}

function scaleToFit(width, height, image) {
  console.log('Scaling image to ' + width + ' x ' + height);
  return image;
}

function watermark(text, image) {
  console.log('Watermarking image with ' + text);
  return image;
}

function grayscale(image) {
  console.log('Converting image to grayscale');
  return image;
}

// Image processing pipeline
function processImage(image) {
  loadImage(image)
    .then(function (image) {
      document.body.appendChild(image);
      return scaleToFit(300, 450, image);
    })
    .then(function (image) {
      return watermark('The Real Estate Company', image);
    })
    .then(function (image) {
      return grayscale(image);
    })
    .catch(function (error) {
      console.log('we had a problem in running processImage ' + error);
    });
}

processImage('js/banf.jpg');


Better Markdown from Node

Unless I’m writing an email in Gmail, I do all my writing using Markdown, a quick text-based writing system. I love Markdown but have always been upset at the limitations of the original Markdown, the fact that there is no actual specification for it, and that Markdown’s creator has been a douche when it comes to supporting the community’s effort to rectify some of the shortcomings of the original Markdown tool (the effort that led to CommonMark).

I currently use Ulysses for macOS/iOS as my primary writing tool and then copy and paste into a Markdown-enabled instance of WordPress using Jetpack. The one thing I’ve been only partially successful at is converting the Markdown directly into HTML and using it as is.

The best I’ve been able to do is convert the Markdown file to (badly formatted) HTML and insert it into a template… for some reason the converter keeps thinking that bulleted lists should be formatted with paragraphs (as in <li><p>), and that creates issues with drop caps in list items. I have to manually clean the documents after conversion, and I’m pretty sure I can do better than that.

The idea

If I’m publishing the content to WordPress I don’t need to do anything further. The Markdown code is good enough to publish with only minor alterations (WordPress has some issues with the way it renders Markdown and you need to preview your content before you publish).

There are times when I want to publish content to the web without having to worry about manually generating the HTML. In order to get the page ready we’ll do the following:

  1. Generate the HTML from Markdown
  2. Insert the generated HTML into an existing HTML template
  3. Visually inspect the page and the underlying HTML
  4. Clean up the resulting page based on inspection

The tools

I’ve plugged the Markdown process into my CSS Starter Kit, so the template is linked to the CSS generated for the starter kit and to the Prism syntax highlighter.

The Markdown section uses the following tools:

  • Remarkable Markdown Gulp plugin
    • gulp-remarkable
  • Wrapper to insert HTML into a template
    • gulp-wrap

Gulp build file

I’ve added two tasks to work with Markdown. The first task will generate HTML from all the files under src/md-content that end with an .md extension and place the results in the src/html-content directory.

Note that this task will only convert to HTML the content of the Markdown file. It will not add head or body elements. That will come in the next task.

// Assumes these requires at the top of the gulpfile:
// const gulp = require('gulp');
// const markdown = require('gulp-remarkable');
gulp.task('markdown', () => {
  return gulp.src('src/md-content/*.md')
    .pipe(markdown({
      preset: 'commonmark',
      typographer: true,
      remarkableOptions: {
        typographer: true,
        linkify: true,
        breaks: false
      }
    }))
    .pipe(gulp.dest('src/html-content/'));
});

The second task will build the full HTML files. It will take each of the HTML fragments created by the markdown task, insert them into a template, and save them to the root of our src directory, using the name of the fragment as the name of the HTML file.

// $ is assumed to be the gulp-load-plugins handle, so $.wrap is gulp-wrap
gulp.task('build-template', ['markdown'], function() {
  gulp.src('./src/html-content/*.html')
    .pipe($.wrap({src: './src/templates/template.html'}))
    .pipe(gulp.dest('./src/'))
});

We make this task dependent on the markdown task to make sure we have the freshest HTML fragments before inserting them into the template.
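
For reference, here is a minimal sketch of what src/templates/template.html could look like; gulp-wrap injects each fragment where the <%= contents %> placeholder appears (the stylesheet and Prism references are assumptions based on the starter kit mentioned above):

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Generated content</title>
  <!-- assumed: CSS from the starter kit and Prism for highlighting -->
  <link rel="stylesheet" href="css/main.css">
  <link rel="stylesheet" href="css/prism.css">
</head>
<body>
  <%= contents %>
  <script src="js/prism.js"></script>
</body>
</html>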

What’s next

Working with WordPress and its Markdown parser, I’ve gotten used to typing HTML manually. Videos in the page are added as HTML with the following snippet:

<div class="video">
<iframe width="560" height="315" 
src="https://www.youtube.com/embed/K1SFnrf4jZo" 
frameborder="0" allowfullscreen></iframe>
</div>

It would be nice if we could just tailor the Markdown we write to produce the same HTML without having to write it by hand.

Remarkable, the Markdown library I’m using for this project, has a rich plugin ecosystem. I’m researching how to run plugins inside the Gulp task. Once that’s done I will be able to incorporate the results of the plugins into the HTML content. This may mean I’ll have to use the HTML in the blog (instead of the Markdown file directly), but I still save myself from having to code the HTML manually 🙂


Performance auditing using Chrome Dev Tools, WebPageTest and PageSpeed Insights

Although Chrome Dev Tools gives us an approximation of what a connection would be like on a given device, it’s not 100% accurate; it can’t be, since our MacBooks and Windows devices don’t have to deal with the additional constraints a mobile device has.

Why should we test in an actual device?

All cores in a laptop or desktop system are symmetric, as opposed to mobile devices, where some of the cores are powerful and some (the ones that happen to do all of the work) are less powerful, use less power and generate less heat.

All laptops and desktops have large heat sinks over the CPU and fans (or liquid cooling in some gaming rigs) to dissipate the heat. Mobile devices have no such thing; that’s why we’re always concerned about burning ourselves with mobile devices.

The best explanation of how this affects performance, and why we should care, is Alex Russell’s presentation at the Polymer Summit.

So we have unreliable hardware (we can’t tell if the right CPU cores will be ready when the user first accesses an application) on a flat-out hostile network with high RTT, where requests can, potentially, take a long time to complete.

The graphic below shows the HTTP Archive trend for the average number of requests for JavaScript resources and the average size of all responses for a single website between November 2015 and November 2016. HTTP Archive only deals with resources as they are transferred over the wire.

JS Transfer Size and Average # of Requests from HTTP Archive

The default Dev Tools emulation in desktop Chrome is not a faithful representation of how a site will load on a mobile device. Even if we create custom profiles and adjust the latency for each of our connection presets, we cannot account for the unpredictability of mobile networks.

In the rest of the post we’ll look at 3 tools that will allow us to get a better read on how our application is performing:

  • Chrome Remote Debugging
  • WebPageTest to run your app closer to where your users are
  • PageSpeed Insights to get feedback and metrics on your app’s performance

Using Chrome Remote Debugging

If you use Android, you can plug in your device and run your app on it, tracing performance on actual mobile hardware from your desktop’s Dev Tools. This eliminates some of the issues Alex discussed in the video above. Debugging remotely through Dev Tools uses the device’s cell or WiFi connectivity and fires the device’s cores just as it would when working independently of the computer it’s tethered to.

Because we can’t rely on the results of simulating connection speed in desktop Chrome’s Dev Tools, this produces a much more accurate Dev Tools trace of your application. The trace can be opened in other instances of Chrome for review and further analysis.

Chrome Dev Tools Remote Debugger describes how to plug in your Android device and use your desktop Chrome Dev Tools to create traces and debug your application.

In the video below, Boris Smus shows how to remote debug an application on your mobile device using desktop Chrome.

Using WebPageTest

WebPageTest provides tools to test your application in the same geographical location you expect your users to be in. Depending on the region, you will have different browsers and devices to test with.

If the test uses Chrome (any version), you can generate a trace of the test run. These traces can be imported into other instances of Chrome or Chromium-based browsers and used for collaborative debugging and performance analysis.

Using PageSpeed Insights

PageSpeed Insights works slightly differently than WebPageTest. It runs tests against a given website, but instead of providing metrics and statistics it provides suggestions and solutions for improving page load speed. A score of 85 or higher (out of 100) indicates that the page is performing well and, of course, the higher the score the better. It measures two aspects of page load:

  • Time to above-the-fold load: how long it takes to load the content the user sees when she first loads the site/app
  • Time to full page load: how long it takes for the browser to fully render the page

Network performance is tricky and can vary considerably between browsers, devices and networks. Because of this, PageSpeed Insights only considers the network-independent aspects of page performance: the server configuration, the HTML structure of a page, and its use of external resources such as images, JavaScript, and CSS. Your actual results will vary according to your network.

PWA auditing with Lighthouse

Progressive Web Applications describe a set of technologies that make web applications look and behave like native applications without sacrificing the ease of authoring and the tools we use to create our apps.

There are many things that go into making a PWA, and it’s extremely hard to keep everything in mind as you’re developing and testing your content. That’s where Lighthouse comes in.

Lighthouse is a tool developed by Google’s Chrome team to detect and measure features of PWAs and provide best-practices and performance advice for your application.

Lighthouse is available as a Chrome extension and as a Node CLI tool.

Install Lighthouse

You can find instructions for installing Lighthouse, both the Chrome extension and the CLI, in the GitHub repository README file.
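
As a quick sketch (check the README for the current commands and flags), installing and running the CLI looks something like this:

# install the CLI globally
npm install -g lighthouse

# audit a site and get the report
lighthouse https://example.com/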