Web Content Optimization: HTML

Tasks that minify HTML are available for both Grunt and Gulp, and both are based on the HTML Minifier tool. HTML Minifier supports the following options:

  • removeComments: Strip HTML comments (default: false)
  • removeCommentsFromCDATA: Strip HTML comments from scripts and styles (default: false)
  • removeCDATASectionsFromCDATA: Remove CDATA sections from script and style elements (default: false)
  • collapseWhitespace: Collapse white space that contributes to text nodes in a document tree (default: false)
  • conservativeCollapse: Always collapse to 1 space (never remove it entirely); must be used in conjunction with collapseWhitespace=true (default: false)
  • preserveLineBreaks: Always collapse to 1 line break (never remove it entirely) when whitespace between tags includes a line break; must be used in conjunction with collapseWhitespace=true (default: false)
  • collapseBooleanAttributes: Omit attribute values from boolean attributes (default: false)
  • removeAttributeQuotes: Remove quotes around attributes when possible (default: false)
  • removeRedundantAttributes: Remove attributes when their value matches the default (default: false)
  • preventAttributesEscaping: Prevent the escaping of attribute values (default: false)
  • useShortDoctype: Replace the doctype with the short (HTML5) doctype (default: false)
  • removeEmptyAttributes: Remove all attributes with whitespace-only values (default: false)
  • removeScriptTypeAttributes: Remove type="text/javascript" from script tags; other type attribute values are left intact (default: false)
  • removeStyleLinkTypeAttributes: Remove type="text/css" from style and link tags; other type attribute values are left intact (default: false)
  • removeOptionalTags: Remove optional tags (default: false)
  • removeIgnored: Remove all tags starting and ending with <%, %>, <?, ?> (default: false)
  • removeEmptyElements: Remove all elements with empty contents (default: false)
  • lint: Toggle linting (default: false)
  • keepClosingSlash: Keep the trailing slash on singleton elements (default: false)
  • caseSensitive: Treat attributes in a case-sensitive manner (useful for custom HTML tags) (default: false)
  • minifyJS: Minify JavaScript in script elements and on* attributes; uses UglifyJS (default: false; can be true, false, or an options object)
  • minifyCSS: Minify CSS in style elements and style attributes; uses clean-css (default: false; can be true, false, or an options object)
  • minifyURLs: Minify URLs in various attributes; uses relateurl (default: false; can be an options object)
  • ignoreCustomComments: Array of regexes; comments that match them are ignored (default: [])
  • processScripts: Array of strings corresponding to types of script elements to process through the minifier (e.g. text/ng-template, text/x-handlebars-template, etc.) (default: [])
  • maxLineLength: Specify a maximum line length; compressed output will be split by newlines at valid HTML split points
  • customAttrAssign: Array of regexes that allow support for custom attribute assign expressions (e.g. '<div flex?="{{mode != cover}}"></div>') (default: [])
  • customAttrSurround: Array of regexes that allow support for custom attribute surround expressions (e.g. <input {{#if value}}checked="checked"{{/if}}/>) (default: [])
  • customAttrCollapse: Regex that specifies a custom attribute from which to strip newlines (e.g. /ng\-class/)
  • quoteCharacter: Type of quote to use for attribute values (' or ")
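As a rough illustration of what two of these options do, here is a toy sketch. This is not the real html-minifier, which handles many edge cases these naive regexes would break on (conditional comments, <pre> content, inline scripts):

```javascript
// Toy versions of removeComments and collapseWhitespace.
function roughMinify(html) {
  return html
    .replace(/<!--[\s\S]*?-->/g, '') // removeComments: strip HTML comments
    .replace(/>\s+</g, '><')         // collapseWhitespace: drop space between tags
    .trim();
}

console.log(roughMinify('<ul>\n  <li>one</li>\n  <!-- note -->\n  <li>two</li>\n</ul>'));
// → <ul><li>one</li><li>two</li></ul>
```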

Installation Instructions

From NPM for use as a command line app:

npm install html-minifier -g

From NPM for programmatic use:

npm install html-minifier

From Git:

git clone git://github.com/kangax/html-minifier.git
cd html-minifier
npm link .


For command line usage, please see html-minifier --help


Programmatic use looks like this:

var minify = require('html-minifier').minify;
var result = minify('<p title="blah" id="moo">foo</p>', {
  removeAttributeQuotes: true
});
result; // '<p title=blah id=moo>foo</p>'

Grunt Task

  htmlmin: {                                     // Task
    dist: {                                      // Target
      options: {                                 // Target options
        removeComments: true,
        collapseWhitespace: true
      },
      files: {                                   // Dictionary of files
        'dist/index.html': 'src/index.html',     // 'destination': 'source'
        'dist/contact.html': 'src/contact.html'
      }
    },
    dev: {                                       // Another target
      files: {
        'dist/index.html': 'src/index.html',
        'dist/contact.html': 'src/contact.html'
      }
    }
  }

Gulp Plugin

var gulp = require('gulp');
var htmlmin = require('gulp-htmlmin');

gulp.task('minify', function() {
  return gulp.src('src/*.html')
    .pipe(htmlmin({collapseWhitespace: true}))
    .pipe(gulp.dest('dist'));
});

HTML versus web components

Over the past few weeks I’ve been looking at Polymer and web components again, and it led me to wonder if web components are a better solution to the transport and performance issues we are discussing in this series. I’m not sure what the answer is yet, but it would make an interesting research project.

Web Content Optimization: JavaScript

Optimizing JavaScript is nothing more than eliminating as much white space as we possibly can and, if we want to, mangling the names of variables down to as few letters as possible. The idea is that fewer characters make a smaller file, which consumes less bandwidth, transfers faster, and lets your scripts become available and run sooner.

Be particularly careful when mangling variables; the mangling may have unexpected results.
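A hypothetical before-and-after (the function and its names are invented for illustration): mangling rewrites local names only, so results are identical as long as nothing outside the function depends on those names, for example through eval or string lookups.

```javascript
// Hand-written source:
function totalPrice(unitPrice, quantity, taxRate) {
  var subtotal = unitPrice * quantity;
  return subtotal + subtotal * taxRate;
}

// Roughly what a minifier emits after mangling (renamed here so the
// two versions can run side by side):
function totalPrice_min(n, t, r) { var u = n * t; return u + u * r; }

console.log(totalPrice(10, 2, 0.1) === totalPrice_min(10, 2, 0.1)); // → true
```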


Uglify was the first command line and build process minifier I found for JavaScript. It will both minify and mangle scripts. It also provides additional functionality but, for the purposes of this article, we’ll concentrate only on compressing and mangling JavaScript files.

Command Line

Uglify’s first interface is the command line tool, installed through NPM:

npm install -g uglify-js

That runs using

uglifyjs [input files] [options]

You can reverse the order (options first, then input files) but you need to put two dashes (--) between the options and the input files so Uglify will not consider the files part of the options. The command looks like this:

uglifyjs --compress --mangle -- input.js

Grunt and other build tools

Uglify has plugins for Grunt, Gulp and other build systems and task runners. The Grunt task below will perform the following steps:

  • Combine video.js and highlight.pack.js into script.min.js
  • Avoid any mangling of text by setting mangle to false under options
  • Create a sourcemap (script.min.map)
uglify: {
  dist: {
    options: {
      mangle: false,
      sourceMap: true,
      sourceMapName: 'js/script.min.map'
    },
    files: {
      'js/script.min.js': ['js/video.js', 'lib/highlight.pack.js']
    }
  }
}

Closure Compiler

Google’s Closure Compiler is another tool, written in Java, that allows you to compress your scripts. It is part of the Closure collection of tools that facilitate web development.

Closure Compiler requires a version of Java (either the JDK or JRE) to be installed on your system. If it is not already installed, you can download Java from the Oracle Technology Network or the OpenJDK project.

Compressing JavaScript provides a good introduction to using the command line tool to compress your files. According to the documentation, the different compression levels are:

  • Whitespace Only mode simply removes unnecessary whitespace and comments. Selecting “Whitespace Only” mode and pressing compile presents you with a single file of JavaScript with 164K of source code, 28% smaller than the original 227K of source code.

  • Simple mode is a bit more sophisticated. It optimizes JavaScript function bodies in several ways, including renaming local variables, removing unneeded variables and code, and replacing constant expressions with their final value (such as converting “1+3” to “4”). It, however, won’t remove any functions or variables that might be referenced outside your JavaScript. It shrinks the code by 42% from 227K to 132K

  • Advanced mode makes even more sophisticated changes to your code. Try selecting “Advanced” optimizations, compile the code, and look at the results. This code looks much less like your original code; it renames all functions to short names, deletes functions it does not believe are used, replaces some function calls with the function body, and does several other optimizations that shrink the code even further. Typically, you can’t use Advanced mode on existing JavaScript code without providing some additional information about functions in the code that need to be visible elsewhere and code elsewhere that might be called from within your JavaScript. However, it’s worth noting that Advanced mode cut the code size from 227K to 86K, 62% smaller than the original code. If you’d like this file to load in 1/3 the time of the original, you might find it worthwhile to give Advanced mode all the information it needs to do this correctly.
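If you have downloaded the compiler, a typical command line run looks something like this (the jar location is an assumption; adjust it to wherever you unpacked the download):

```shell
# Compress one file with Simple optimizations; flags per the Closure docs.
java -jar compiler.jar \
  --compilation_level SIMPLE_OPTIMIZATIONS \
  --js input.js \
  --js_output_file input.min.js
```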

An equivalent Grunt task, using the grunt-closure-compiler plugin, looks like this:

  'closure-compiler': {
    frontend: {
      closurePath: '/src/to/closure-compiler',
      js: 'static/src/frontend.js',
      jsOutputFile: 'static/js/frontend.min.js',
      maxBuffer: 500,
      options: {
        compilation_level: 'SIMPLE_OPTIMIZATIONS',
        language_in: 'ECMASCRIPT5_STRICT'
      }
    }
  }


Web Content Optimization: CSS Critical Path

Optimizing the critical rendering path is critical for improving performance of our pages: our goal is to prioritize and display the content that relates to the primary action the user wants to take on a page.

Ilya Grigorik (Critical Rendering Path)

The sad truth is that we’ve become obsessed with speed and with how fast a page loads; for a fast page load, we need to know what to load when. “Above the fold” is a concept inherited from printed media. In the context of web design/development:

Above the fold is also used in website design (along with “above the scroll”) to refer to the portion of the webpage that is visible without scrolling.[1] As screen sizes vary drastically[2] there is no set definition for the number of pixels that define the fold. This is because different screen resolutions will show different portions of the website without scrolling. Further complicating matters, many websites adjust their layout based on the size of the browser window, such that the fold is not a static feature of the page.

A 2006 study by Jakob Nielsen found that 77% of visitors to a website do not scroll,[3] and therefore only see the portion of the website that is above the fold. There has been considerable controversy about this finding, with other broad studies finding that 76% of visitors scrolled vertically to some extent[4] and 22% of visitors scroll to the bottom of the webpage.[5] Most web design advice available today encourages designers to place important information at the top of the website, but to prioritize usability and design.[6][7][8]

From Wikipedia

So not only do we have to worry about creating a fast experience (or one that appears fast, at least), but we also need to worry about how to accomplish this on devices as diverse as an iPhone 3, a Samsung Note 5, or a 27-inch Retina iMac desktop computer.

We’ll worry about the what first, then we’ll look at the how and, finally, we’ll explore some ways to automate the process as part of a build toolchain.

What is the Critical Path for Web Development

To put it simply, the Critical Path, in the context of web development, is made up of all the assets we need to load the above-the-fold section of the document the user is viewing. We then inline those assets in the HTML document.

By doing this we speed up the page load (or at least the perceived page load speed) because the browser no longer has to go out to the network to fetch the resources.
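The mechanics reduce to a string substitution in the HTML. A minimal sketch (the function and file names are ours; real tools match markup far more robustly):

```javascript
// Replace a stylesheet <link> with an inline <style> block so the
// above-the-fold CSS arrives with the HTML itself.
function inlineCss(html, href, css) {
  var linkTag = '<link rel="stylesheet" href="' + href + '">';
  return html.replace(linkTag, '<style>' + css + '</style>');
}

var page = '<head><link rel="stylesheet" href="critical.css"></head>';
console.log(inlineCss(page, 'critical.css', 'h1{font-size:2em}'));
// → <head><style>h1{font-size:2em}</style></head>
```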

How do we build the Critical Path CSS

Command Line

Penthouse provides a command line tool that works in tandem with PhantomJS to generate the critical path CSS. However, when I tested it, the resulting critical path CSS file was truncated and I couldn’t figure out why.

There is also discussion about removing the standalone command line tool altogether, so I won’t go into further detail. If you’re interested, you can grab it from GitHub.

Build System

I use Grunt as my build system, so it makes sense to go with it to build my Critical Path CSS. I use the Critical CSS plugin for Grunt, which takes care of creating the Critical Path CSS and then inlining it into the page for me.

Once the plugin is installed you can add a task to your Gruntfile.js like this:

module.exports = function (grunt) {
  grunt.initConfig({
    critical: {
      typography: {
        options: {
          base: './',
          css: [
            // stylesheet paths go here
          ],
          width: 1200,
          height: 800
        },
        src: 'typography.html',
        dest: 'dist/typography.html'
      }
    }
  });
};

This will take the CSS necessary to render the document at the dimensions indicated (1200 x 800 in this case) and insert it into the document, along with scripts to load the rest of the content asynchronously.

Inlining the CSS for our ‘above the fold’ content will speed up the display of the page even as the rest of the content downloads. From a user’s perspective the page will appear to have already loaded, and scrolling down will simply reveal the rest of it.

Web Content Optimization: CSS Autoprefixer

CSS vendor prefixes are both a blessing and a curse.

They are a blessing because, as originally designed, they allow browser vendors to implement new CSS features that were not part of any final specification in a way that could be easily changed when the specification changes or is withdrawn; and, once the specification is finalized, vendors can drop the prefix and developers can use the new properties as they would any other CSS property.

They are a curse because, as good as the theory was, it never really worked that way. The web is littered with prefixed selectors long after the specification in question was finalized. In order to maintain backwards compatibility, developers have to write multiple prefixed versions of a property even after the final version has been released.

For example, depending on how far back you need to support browsers, the code for rounded corners looks like this:

.round {
  -webkit-border-radius: 12px;
  -moz-border-radius: 12px;
  border-radius: 12px;

  /* Prevent background color leak outs
     http://tumble.sneak.co.nz/post/928998513/fixing-the-background-bleed */
  -webkit-background-clip: padding-box;
  -moz-background-clip: padding-box;
  background-clip: padding-box;
}

We can all agree that doing these kinds of repetitive tasks is a pain. Fortunately there are several ways to eliminate the duplication of work. We can create SASS mixins and placeholder selectors where we hardcode the prefixed versions. That is good for the short term but doesn’t address the bloat problem in our CSS: eventually we will no longer need the prefixed versions, but the CSS will still be littered with prefixes that no one but older browsers really needs or wants.
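For the short-term mixin route, here is a sketch of what such a mixin might look like (the mixin name and selector are our own, not from any framework):

```scss
// One possible mixin, hardcoding the prefixes we still need to ship.
@mixin border-radius($radius) {
  -webkit-border-radius: $radius;
     -moz-border-radius: $radius;
          border-radius: $radius;
}

.round {
  @include border-radius(12px);
}
```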

A second alternative is to use a tool like Autoprefixer. It is another Node-based tool that installs with the NPM command, like so:

npm install -g autoprefixer

and provides a command line tool by default. To get an idea of the options available, you can use the autoprefixer --help command, which produces a result like the one below.

autoprefixer --help
Usage: autoprefixer [OPTION...] FILES

Parse CSS files and add prefixed properties and values.

  -b, --browsers BROWSERS  add prefixes for selected browsers
  -o, --output FILE        set output file
  -d, --dir DIR            set output dir
  -m, --map                generate source map
      --no-map             skip source map even if previous map exists
  -I, --inline-map         inline map by data:uri to annotation comment
      --annotation PATH    change map location relative from CSS file
      --no-map-annotation  skip source map annotation comment in CSS
      --sources-content    Include origin CSS into map
      --no-cascade         do not create nice visual cascade of prefixes
      --safe               try to fix CSS syntax errors
  -i, --info               show selected browsers and properties
  -h, --help               show help text
  -v, --version            print program version

Given a list of browsers to support and a list of CSS files to inspect, it queries Caniuse.com data to determine what vendor prefixes, if any, need to be added to your code, and inserts those prefixes where appropriate.

Take for example the following CSS code:

a {
  width: calc(50% - 2em);
  transition: transform 1s;
}

and running Autoprefixer to add prefixes for the last two versions of major browsers with this command:

autoprefixer -b "last 2 versions"

produces:
a {
  width: calc(50% - 2em);
  -webkit-transition: -webkit-transform 1s;
          transition: transform 1s;
}

While SASS mixins may be easier to work with if you’re just writing CSS, Autoprefixer makes a nice addition to a development toolchain. You don’t have to remember which supported browsers need a prefix for which property, particularly when different versions of a browser may have different prefixes for a property, or none at all.
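If Grunt is your build system, the grunt-autoprefixer plugin wraps the same engine. A minimal target might look like this (paths and target names are our own; the browsers option mirrors the -b flag):

```javascript
autoprefixer: {
  dist: {
    options: {
      browsers: ['last 2 versions']
    },
    files: {
      'dist/css/styles.css': 'src/css/styles.css'
    }
  }
}
```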

Web Content Optimization: CSS Trimming with UnCSS

Unless you already work with a customizer, or know SASS well enough to know which imports to comment out to remove a feature from your CSS framework, you are bound to have unused features bloating your CSS and making it bigger than it needs to be.

Or it may be that your CSS has grown too large after accommodating feature after feature; without removing them, your file will bloat to an unmanageable size.

Tools like UnCSS allow you to remove unused CSS selectors by following these steps:

  1. The HTML files are loaded by PhantomJS and JavaScript is executed.
  2. Used stylesheets are extracted from the resulting HTML.
  3. The stylesheets are concatenated and the rules are parsed by css-parse.
  4. document.querySelector filters out selectors that are not found in the HTML files.
  5. The remaining rules are converted back to CSS.
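The filtering in step 4 can be sketched like this. The real tool runs document.querySelector against the live page in PhantomJS; this toy version (names our own) fakes the check with string matching for class, id, and element selectors:

```javascript
// Toy sketch of step 4: keep only rules whose selector appears in the HTML.
function usedSelectors(selectors, html) {
  return selectors.filter(function (sel) {
    if (sel[0] === '.') return html.indexOf('class="' + sel.slice(1) + '"') !== -1;
    if (sel[0] === '#') return html.indexOf('id="' + sel.slice(1) + '"') !== -1;
    return html.indexOf('<' + sel) !== -1; // bare element selector
  });
}

var html = '<div class="hero"><p id="intro">Hi</p></div>';
console.log(usedSelectors(['.hero', '.unused', '#intro', 'aside'], html));
// → [ '.hero', '#intro' ]
```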

Installing UnCSS

UnCSS requires Node.js already in your system. To install UnCSS globally for all your applications, run the following command:

npm install -g uncss

Command Line Version

Installing UnCSS globally gives you the uncss command to work with UnCSS from the command line. Using this tool you can indicate the file or URL that contains the HTML and CSS.

Usage: uncss [options] <file or URL, ...>
        uncss http://getbootstrap.com/examples/jumbotron/ > stylesheet.css
        uncss index.html > stylesheet.css


  -h, --help                            output usage information
  -V, --version                         output the version number
  -i, --ignore <selector, ...>          Do not remove given selectors
  -m, --media <media_query, ...>        Process additional media queries
  -C, --csspath <path>                  Relative path where the CSS files are located
  -s, --stylesheets <file, ...>         Specify additional stylesheets to process
  -S, --ignoreSheets <selector, ...>    Do not include specified stylesheets
  -r, --raw <string>                    Pass in a raw string of CSS
  -t, --timeout <milliseconds>          Wait for JS evaluation
  -H, --htmlroot <folder>               Absolute paths' root location
  -u, --uncssrc <file>                  Load these options from <file>

Using the command will produce a new CSS file (stylesheet.css) containing only the CSS rules used on the page. You can also use a wildcard to pick up all the HTML files in a directory (uncss dist/*.html > stylesheet.).
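Rather than passing flags every time, the -u option can load options from a .uncssrc file, which is plain JSON keyed by option name. A small example, with values of our own choosing:

```json
{
  "ignore": [".added-at-runtime", "#keep-me"],
  "timeout": 2000,
  "stylesheets": ["css/extra.css"]
}
```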

For more information refer to UnCSS’s README file.

Automating the process

There are UnCSS plugins for several task runners and build systems that allow developers to incorporate UnCSS into their development workflows.

You can incorporate the plugin into your workflow wherever it makes sense to you. Working with Grunt, you can do the following to create a new CSS file containing only the selectors used in your application’s HTML files:

uncss: {
  dist: {
    files: {
      'dist/css/tidy.css': ['app/*.html']
    }
  }
}

An equivalent Gulp task looks like this:

var gulp = require('gulp');
var uncss = require('gulp-uncss');

gulp.task('default', function () {
    return gulp.src('site.css')
        .pipe(uncss({
            html: ['app/*.html']
        }))
        .pipe(gulp.dest('./out'));
});

And in Broccoli:

var uncss = require('broccoli-uncss');
tree = uncss(tree, {html: ['index.html']});

For more detailed information, refer to the respective plugin’s documentation.