Books as (web) apps

Books as applications

What would it take to turn our books into applications for the web? In this article we’ll explore what an application is, what a web application is, and why it now makes sense to turn our books into web applications.

What is an application? What is a web application?

Application software (an application) is a set of one or more programs designed to permit the user to perform a group of coordinated functions, tasks, or activities. Application software cannot run on itself but is dependent on system software to execute. Examples of an application include a word processor, a spreadsheet design and management system, an aeronautical flight simulator, a console game, a drawing, painting, and illustrating system, or a library management system.

From Wikipedia

A web application or web app is any software that runs in a web browser. It is created in a browser-supported programming language (such as the combination of JavaScript, HTML and CSS) and relies on a web browser to render the application.

Web applications are popular due to the ubiquity of web browsers, and the convenience of using a web browser as a client, sometimes called a thin client. The ability to update and maintain web applications without distributing and installing software on potentially thousands of client computers is a key reason for their popularity, as is the inherent support for cross-platform compatibility. Common web applications include webmail, online retail sales, online auctions, wikis and many other functions.

From Wikipedia

Open Web versus OS-specific Apps

Only recently have browser vendors (Chrome and Firefox) and operating system vendors (Microsoft) allowed web content to live in the OS as applications, subject to packaging and usage restrictions.

I’ve chosen to remain platform neutral as much as possible.

The idea is that we’ll use ‘The Web’ as our application environment, where both mobile and desktop users can use the same content. The moment we introduce OS-specific features we invite what I call the ‘why not my OS’ syndrome… it’s just not worth it when the web keeps getting more powerful as an application environment.

Mobile devices also have web browsers, so mobile users can use the web version, and we’ll take advantage of that for some mobile browsers (see iOS icons and splash screens, below).

The DRM/EME monster rears its ugly head

One issue that needs to be addressed is Digital Rights Management or DRM.

People look at DRM as a way to preserve intellectual property and to keep people from giving away or lending books that they did not buy.

But DRM is not a perfect (or even a good) solution. Here are some reasons:

  • DRM schemes can be broken and the content can still be given away. It is not a trivial exercise, true, but it’s not very difficult either.
  • If your ebook provider goes under you will lose access to your books unless the seller has made arrangements to continue supporting the DRM scheme.
  • You’re locked to your vendor. Amazon, Apple and Barnes & Noble all use different DRM schemes and they are not compatible. You may still be locked to your vendor without DRM, but switching readers would be easier if everyone used a common format without restrictions.

Ian Hickson writes about DRM. In one of the last paragraphs of the post, Hickson observes that:

Arguing that DRM doesn’t work is, it turns out, missing the point. DRM is working really well in the video and book space. Sure, the DRM systems have all been broken, but that doesn’t matter to the DRM proponents. Licensed DVD players still enforce the restrictions. Mass market providers can’t create unlicensed DVD players, so they remain a black or gray market curiosity. DRM failed in the music space not because DRM is doomed, but because the content providers sold their digital content without DRM, and thus enabled all kinds of players they didn’t expect (such as “MP3” players). Had CDs been encrypted, iPods would not have been able to read their content, because the content providers would have been able to use their DRM contracts as leverage to prevent it.

This is not just an academic exercise on freedom of expression. Publishers like O’Reilly (described here and here) and Tor have removed DRM from their books and, while the books have been widely pirated, sales have not decreased and have even seen a modest improvement. I’ll try to get updated figures from O’Reilly and will update the article when/if I receive an answer.

In 2009, David Pogue, a technology columnist for the New York Times, wrote a piece about ebooks and copy protection. In the article, one of his readers sums up what is, to me, the real issue:

“When the iPod introduced music lovers to the idea of copy protection, a years-long war ensued between consumers and the RIAA (and others). The primary issue was that if I purchased a song for my music player, it would only play on that player; I didn’t really own it, per se. Years later, we finally have digital music without copy protection.”

This was reinforced when Amazon deleted books from Kindle readers in 2009 and later in 2012 allegedly wiped a user’s entire Kindle library.

While I understand publishers’ positions, DRM is not the answer any more than it was in the music industry when Apple introduced the iPod. The more consistently we can prove that removing DRM will not damage sales and will improve the user experience, the more we can get publishers to stop thinking of absolute sales as the only goal that matters.

Although not directly related to DRM, in the sense that they don’t provide the full set of access restrictions DRM does, Encrypted Media Extensions (EME) provide a similar platform for restricting access to video content on the web.

I mention EME briefly because it will become the next battleground in the fight between content distributors and content consumers. The best explanation of what EME is and how it affects the web and its ideals is a page titled What is EME by Henri Sivonen.

What worries me the most is that, in the case of Firefox, as Cory Doctorow points out:

The inclusion of Adobe’s DRM in Firefox means that Mozilla will be putting millions of its users in a position where they are running code whose bugs are illegal to report. So it’s very important that this code be as isolated as possible.

All browsers have some level of support for EME. Firefox has announced its intent to implement EME; Google already supports part of the spec and Apple supports all of it, enough to fully encrypt video content. At least three of these browsers (Firefox, Chrome and Opera, or ‘Chrome Jr.’) have open bug reporting systems; how will they be affected by the new restrictions on the proprietary software that makes EME work?

Netflix seems to have made EME work, at least for now, using the MSE, EME and Web Cryptography APIs to fully control when, how and by whom their content can be accessed. An earlier post in their technology blog indicates that the technologies are also supported in ChromeOS (with the exception of the Crypto API) and that once Web Crypto is implemented in Chrome they will move to encrypting all content viewable in Chrome on all platforms.

It’ll be interesting to see how this evolves and what other areas it moves to beyond video. For an interesting discussion about DRM, listen to The Web Ahead, Episode 73 with Doug Schepers and Jeremy Keith.

Online versus offline

One specific type of enhancement we’ll discuss before jumping into the bells and whistles is whether we should add offline caching.

The APIs are in progress. Chrome supports most of the APIs we need to create offline-capable applications as part of ServiceWorker (cache via polyfill, plus push messaging and notifications), whether the page using them is open or not. While the Chrome implementation only supports Google Cloud Messaging, the idea is that we’ll soon have an open solution to the push and notification challenges we currently face. So, unless you’re already working with GCM and the other required technologies, it may be better to wait and see what the open web has to offer when the specifications are finalized.

The idea behind push and background notifications is that we can build a fully responsive user experience that works both on and offline and notifies the user of specified events, such as updated content, without the application even needing to be running.

I’ve written about ServiceWorkers when discussing Athena as an offline reading experience so I won’t repeat all the rationale and code here.
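As a minimal reminder of what’s involved, registration is a single guarded call. In this sketch the navigator object is passed in as a parameter (instead of using the browser global) so the guard logic can be exercised anywhere, and '/sw.js' is a hypothetical script path:

```javascript
// Minimal sketch: register a caching service worker where the API exists.
function registerSW(nav, scriptURL) {
  if (nav.serviceWorker) {
    // resolves with a ServiceWorkerRegistration in supporting browsers
    return nav.serviceWorker.register(scriptURL);
  }
  // graceful no-op where the API is missing
  return Promise.resolve(null);
}
```

In a page you would call `registerSW(navigator, '/sw.js')` and the rest of the experience keeps working in browsers without the API.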

In an HTML5 Rocks article Jeff Posnick describes how we can automate the creation of a caching service worker using Gulp.

gulp.task('generate-service-worker', function(callback) {
  var fs = require('fs');
  var path = require('path');
  var swPrecache = require('sw-precache');
  var rootDir = 'app';

  swPrecache({
    staticFileGlobs: [rootDir + '/**/*.{js,html,css,png,jpg,gif}'],
    stripPrefix: rootDir
  }, function(error, swFileContents) {
    if (error) {
      return callback(error);
    }
    fs.writeFile(path.join(rootDir, 'service-worker.js'), swFileContents, callback);
  });
});

Enhancing the user experience

Offline content caching is just part of the exercise. The other part is what additional tools, libraries and scripts we add to our web documents, and how they will work in an offline environment.

For each of the technologies discussed below I’ll provide a brief summary and explain why I chose it.

One thing to realize from the beginning is that, while I chose to show all the relevant technologies, some of them are part of the build process for our site/app; they are still important and need to be mentioned.

Modernizr and Modernizr.load

Modernizr makes it easier to use new HTML5 tags and APIs and CSS3 technologies in a way that will also provide graceful degradation for our content.

We can leverage Modernizr from both CSS and Javascript.

For CSS, Modernizr adds a class for each feature tested, depending on whether the browser supports it (regions would be the class if regions are supported) or not (no-regions in that case).

.video .highlight {}

.no-video .highlight {}

For Javascript, Modernizr creates an object and attaches the result of each test to it. For example, if we want to test whether a browser supports video we can do something like this:

if (Modernizr.video) {
  console.log('we can play video');
  // load the video or do further testing
} else {
  console.log('we cannot play video, need polyfill');
  // load an alternative, perhaps the solution from
  // Video for Everybody
}

Modernizr.load uses Modernizr’s feature detection library to conditionally load content based on feature availability. Continuing with our video example: if the browser supports HTML5 video and can play H.264 (MP4) video we load video.js; otherwise (it cannot play H.264, or cannot play HTML5 video altogether) we load a polyfill library or, most likely, the solution presented in Video for Everybody by Kroc Camen.

Modernizr.load({
  test: Modernizr.video && Modernizr.video.h264,
  yep : 'video.js',
  nope: 'video-polyfill.js'
});

There are ways to test for features without using Modernizr, but they require writing lots of tests yourself and they are not guaranteed to work in every browser.
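As an illustration of what hand-rolled detection looks like, here is a video check in the style Modernizr uses internally. The document object is injected so the check itself is plain, testable Javascript; in a page you would call supportsVideo(document):

```javascript
// Hand-rolled feature detection without Modernizr.
function supportsVideo(doc) {
  var el = doc.createElement('video');
  // a real <video> element exposes canPlayType(); browsers without video
  // support return a generic element that lacks the method
  return !!(el && el.canPlayType);
}
```

Multiply this by every feature you care about and the value of a maintained detection library becomes clear.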

Normalize.css

Normalize is a CSS library that provides an even, more consistent playing field across browsers. It covers the role of a CSS reset library (like Eric Meyer’s CSS Reset) while adding functionality of its own.

It [Normalize] fixes common desktop and mobile browser bugs that are out of scope for resets. This includes display settings for HTML5 elements, correcting font-size for preformatted text, SVG overflow in IE9, and many form-related bugs across browsers and operating systems.

For a more detailed explanation see Nicolas Gallagher’s post about Normalize.css.

Singularity / Susy Grid System

Singularity is a grid system designed with responsive grids in mind. From the documentation:

Singularity is a next generation grid framework built from the ground up to be responsive. What makes Singularity different? Well, a lot of things. Singularity is based on internal ratios instead of context based which allows for better gutter consistency across breakpoints. Ratio based math also allows for non-uniform grids in any unit you want to use.

If you’re used to working with SASS and Compass then Singularity is a breeze to work with; however, that’s where the problem lies: you must work with the Ruby version of SASS and you must work with Compass. Integrating Ruby, SASS installation and compilation, and Compass is not a small undertaking, so I try to avoid it where possible.

Susy is a more flexible framework; it provides the same functionality without requiring Compass. It also plugs in to existing Grunt SASS tasks, something like this (ignore the syntax for the moment; we’ll revisit Grunt and Gulp later in the article):

// Gruntfile.js
sass: {
  dist: {
    options: {
      style: 'expanded',
      require: 'susy'
    },
    files: {
        'css/style.css': 'scss/style.scss'
    }
  }
}

Beyond the obvious use of grids to provide a consistent interface for laying out content, they provide answers to more complicated questions: How do we create layouts with columns of arbitrary width (using percentages or rem units)? How do we span multiple columns? How do we change the gutter for our layout (or parts of it)?
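As a sketch of how a grid framework answers those questions, here is hypothetical Susy 2 usage; the column counts, gutter ratio and selectors are my own examples, not from any project in this article:

```scss
// Hypothetical Susy 2 settings: a 12-column grid with gutters
// one quarter the width of a column.
$susy: (
  columns: 12,
  gutters: 1/4
);

.main {
  @include span(8 of 12);      // span 8 of the 12 columns
}

.sidebar {
  @include span(last 4 of 12); // the remaining 4, flush right
}
```

Because the math lives in the mixins, changing the column count or gutter ratio in one place reflows the whole layout.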

Media Queries Support

I am lazy. Even though I have defined my own set of media queries, taking into account different screen sizes and resolutions, it’s not something I want to maintain long-term. Breakpoint to the rescue!

Breakpoint abstracts much of the work of creating media queries. It includes support for browsers where queries are not supported, the ability to pass a context to your own mixins, and advanced media queries, including compound queries, density-based queries and media types.
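A sketch of what Breakpoint usage looks like, assuming its single-value (min-width) and feature-pair syntax; the breakpoint values and selectors are illustrative:

```scss
// Hypothetical breakpoints
$medium: 640px;
$wide: 1200px;

.sidebar {
  // single value: treated as a min-width query
  @include breakpoint($medium) {
    width: 30%;
  }
  // compound query: min-width plus an (orientation: landscape) pair
  @include breakpoint($wide (orientation landscape)) {
    width: 25%;
  }
}
```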

In an ideal world all browsers would have the same support for the same features and we wouldn’t have to deal with inconsistencies. Media Queries are one step in solving these issues.

Prefix Free and UNCSS

If I don’t have to write it, usually I won’t. On the other hand, there are times when the accumulation of cruft (old classes that are no longer used) or the sheer size of a framework (think Twitter Bootstrap, Zurb Foundation or Adobe Topcoat) versus the handful of framework classes we actually use makes for bloated CSS and unnecessary data being sent over the wire.

Addy Osmani has written about this problem along with a potential solution. UNCSS will load the HTML and CSS using Phantom.js and then create a new CSS file using only those selectors that match content in the HTML files. The savings can reach the 100KB mark (!).

PrefixFree is a Javascript library that automates vendor prefixes for CSS properties. The classic example is border radius: there are five nearly identical ways to write the same property (WebKit, Chrome/Opera, Mozilla, IE and the standard). PrefixFree takes care of the differences so we don’t have to.
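To make the saving concrete, here is the border-radius case; without a prefixer you maintain each vendor form yourself (only a few prefixes shown):

```css
/* By hand: one declaration per vendor that ever needed a prefix */
.rounded {
  -webkit-border-radius: 5px;
  -moz-border-radius: 5px;
  border-radius: 5px;
}

/* With PrefixFree (or a prefixing build step) the standard form is enough */
.rounded {
  border-radius: 5px;
}
```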

When we discuss tooling we’ll talk about grunt-autoprefixer, which removes the need for the Javascript library while achieving the same effect, prefixing content as needed for the browsers we specify.

As I mentioned, I’m lazy and don’t want to write any more code than absolutely necessary. This is one way to reduce the code count.

Typography

Compass Typography suffers from the same issue as all Compass-related plugins: they depend on Compass, which in turn depends on having Ruby installed on your system. Don’t get me wrong, I think Compass is awesome, but it’s not always necessary and it adds bloat to the resulting CSS.

Typeplate takes a minimalist approach to framework development. It doesn’t do much, but what it does it does well and simply. It provides a minimal set of HTML models and corresponding SCSS/CSS templates. You can use it as-is or you can enhance it.

The idea behind SASS/CSS typography solutions is that they make it easier to create your content. You don’t always have to stick with what the framework has to offer… I’ve always considered CSS frameworks a starting point for my own work and not an end in and of themselves.

Neither of these alternatives answers how to load the fonts on the page. Depending on the fonts, we may be able to leverage services like fonts.com, Adobe Typekit, Font Squirrel, Google Fonts and Fontdeck, among others, to handle the delivery of our fonts.

Some of the best fonts are not available through font services. If you’re sure the font matches your needs and the license allows it (in most cases web and ebook use require different licenses) you can host the font on your own server and use it from there. There are ups and downs, from having to provide font obfuscation and other security measures to having a wider selection of fonts to license and use.

Note the different versions of the same font you have to provide to be compatible with most (all?) existing browsers. You can probably do away with supporting IE6–IE8, but that still leaves you with five different formats for each font you want to support (you can convert your WOFF fonts to WOFF2 using this online converter).

We can embed fonts in our web sites (assuming that we have the license to do so) with a rule like this in our main CSS style sheet:

@font-face {
  font-family: 'Open Sans';
  src: url('opensans.eot'); /* IE9 Compat Modes */
  src: url('opensans.eot?#iefix') format('embedded-opentype'), /* IE6-IE8 */
       url('opensans.woff2') format('woff2'), /* Super Modern Browsers */
       url('opensans.woff') format('woff'), /* Pretty Modern Browsers */
       url('opensans.ttf')  format('truetype'), /* Safari, Android, iOS */
       url('opensans.svg#svgFontName') format('svg'); /* Legacy iOS */
}

And then use the font as you normally would in a font-family declaration.

body {
  background: #efefef;
  font-family: 'Open Sans', sans-serif;
  font-size: 16px;
  line-height: 24px;
  padding: 50px;
}

CSS Tricks has an interesting article explaining how to use web fonts.

WebFont Loader

The Web Font Loader is a joint project between Typekit and Google that provides tighter control over the font loading process. The main advantage of a system like this is that you can work with fonts from different vendors and reduce the likelihood of the dreaded ‘flash of unstyled text’.
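A hypothetical Web Font Loader configuration; the font family, timeout value and the 'fonts-loaded' class are my own examples, not values from the article:

```javascript
// Configuration object read by webfont.js, which is normally loaded
// asynchronously alongside it.
var WebFontConfig = {
  google: { families: ['Open Sans:400,700'] },
  timeout: 3000, // give up after 3 seconds instead of waiting indefinitely
  active: function () {
    // all requested fonts are ready; let the CSS react to the class
    document.documentElement.className += ' fonts-loaded';
  }
};
```

The events (active, inactive, per-font callbacks) are what give you control: your CSS can show a fallback stack until the fonts-loaded class appears.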

Modular scales

modularscale.com and its companion SASS module provide an easy way to incorporate a modular scale into your projects.
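The underlying idea is simple: a modular scale is repeated multiplication of a base size by a ratio, which the SASS module performs at build time. The base (16) and ratio (1.5) below are illustrative values:

```javascript
// Compute one step of a modular scale.
function modularScale(base, ratio, step) {
  // round to three decimals, which is plenty for CSS values
  return Math.round(base * Math.pow(ratio, step) * 1000) / 1000;
}
```

With a 16px base and a 1.5 ratio, step 1 gives 24px and step 2 gives 36px, sizes you might assign to headings.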

Drive before you buy

Services like Typecast.com allow you to test fonts in a fully responsive environment with your own text before you commit to using a font in your project. Until recently I thought this wasn’t necessary and that we only had to test on devices. This is no longer the case: we should test as much as possible, on as many devices as possible.

References

I love The Web Ahead. I think Jen Simmons does an awesome job of selecting her guests and having meaningful, insightful conversations about the web; not just the technology but what makes the web work.

This was especially true of the typography episodes, where she talked to people who have changed the way I look at fonts and how we use them on the web.

Picture elements and responsive images

Wouldn’t it be nice if we could use something like Media Queries for images? We can… it’s the picture element.

The picture element attempts to answer four questions:

  • Can I serve different image sizes based on some attribute of the browser accessing the page?
  • Can I provide different images based on DPI?
  • Can I provide different image formats based on device capability? (For example, not all browsers support WebP)
  • Can I provide different images based on my art direction requirements? (full size images may be overkill for smaller devices)

The example below, taken from an article on responsive images on the Opera Developer site, takes the extreme view of supporting all four use cases for responsive images.

For browser windows with a width of 1280 CSS pixels and wider, a full-shot photo with a width of 50% of the viewport width is used; for browser windows with a width of 640-1279 CSS pixels, a photo with a width of 60% of the viewport width is used; for less wide browser windows, a photo with a width that is equal to the full viewport width is used. In each case, the browser picks the optimal image from a selection of images with widths of 200px, 400px, 800px, 1200px, 1600px and 2000px, keeping in mind image width and screen DPI. These photos are served as WebP to browsers that support it; other browsers get JPG.

<picture>
  <source
    media="(min-width: 1280px)"
    sizes="50vw"
    srcset="opera-fullshot-200.webp 200w,
            opera-fullshot-400.webp 400w,
            opera-fullshot-800.webp 800w,
            opera-fullshot-1200.webp 1200w,
            opera-fullshot-1600.webp 1600w,
            opera-fullshot-2000.webp 2000w"
    type="image/webp">
  <source
    sizes="(min-width: 640px) 60vw, 100vw"
    srcset="opera-closeup-200.webp 200w,
            opera-closeup-400.webp 400w,
            opera-closeup-800.webp 800w,
            opera-closeup-1200.webp 1200w,
            opera-closeup-1600.webp 1600w,
            opera-closeup-2000.webp 2000w"
    type="image/webp">
  <source
    media="(min-width: 1280px)"
    sizes="50vw"
    srcset="opera-fullshot-200.jpg 200w,
            opera-fullshot-400.jpg 400w,
            opera-fullshot-800.jpg 800w,
            opera-fullshot-1200.jpg 1200w,
            opera-fullshot-1600.jpg 1600w,
            opera-fullshot-2000.jpg 2000w">
  <img
    src="opera-closeup-400.jpg" alt="The Oslo Opera House"
    sizes="(min-width: 640px) 60vw, 100vw"
    srcset="opera-closeup-200.jpg 200w,
            opera-closeup-400.jpg 400w,
            opera-closeup-800.jpg 800w,
            opera-closeup-1200.jpg 1200w,
            opera-closeup-1600.jpg 1600w,
            opera-closeup-2000.jpg 2000w">
</picture>

Sure, this takes a lot more work to set up, both in terms of preparing images at different sizes and resolutions and in terms of preparing your HTML to accommodate all your needs, but we can finally stop depending on servers to convert our images or provide a one-size-fits-all solution.
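When art direction and alternate formats aren’t needed, the resolution-switching part of the problem can be handled by the srcset and sizes attributes on a plain img element; the file names here are illustrative:

```html
<img
  src="photo-400.jpg" alt="Fallback for browsers without srcset support"
  sizes="(min-width: 640px) 60vw, 100vw"
  srcset="photo-400.jpg 400w,
          photo-800.jpg 800w,
          photo-1600.jpg 1600w">
```

The browser still picks the best candidate for the viewport and screen DPI, but there is only one image, one set of files and a built-in fallback.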

ServiceWorker caching

I ran my first experiment with ServiceWorkers as part of my Athena Framework experiment, where the idea was (and still is) to cache content for offline viewing so that the network becomes another layer of enhancement: we only need to be online the first time we access our content, and the cache will display the content whether we are online or not.

If you use Chrome as your primary development platform you can take advantage of push messaging (Google Cloud Messaging) and background notifications to enhance your users’ experience, but only tied to the Google ecosystem.

Despite the advantages I’ve decided to hold off on Push and Background Notification and concentrate on caching the content. Once the open APIs for Push and Background notification reach candidate recommendation status (meaning that there are two interoperable implementations available in the wild) I will revisit the issue.
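The caching itself boils down to a cache-first lookup in the worker’s fetch handler. In this sketch the cache and the fetch function are injected so the logic is plain Javascript; in a real worker it would run inside self.addEventListener('fetch', ...) via event.respondWith():

```javascript
// Cache-first strategy: serve from the cache when we can,
// fall back to the network otherwise.
function cacheFirst(cache, fetchFn, request) {
  return cache.match(request).then(function (cached) {
    return cached || fetchFn(request);
  });
}
```

This is what makes the network a layer of enhancement: once a page has been cached, the same lookup succeeds whether the reader is online or not.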

Emphasis deep linking

The Emphasis Library was initially developed by The New York Times as a highlighting and deep linking library. What caught my attention is that you’re not limited to single user linking; you can share the link and it will display the same highlights and annotations that you created.

Emphasis was initially developed as a jQuery plugin, which made it less attractive to someone who wanted to develop a dependency-free framework. By the time I revisited the plugin I had decided to use jQuery anyway, and the plugin had moved away from jQuery as a dependency.

Even though I chose to make jQuery available I was still pleasantly surprised to see the code be made jQuery-free.

Highlight.js

A lot of what I write is web-related with lots of code examples for HTML, CSS and Javascript. I know for certain that I don’t want to do the highlighting by hand.

There are many code highlighting libraries available. I chose highlight.js for HTML as well as for PDFs generated from XML.

I love the library’s breadth of supported languages and the work it saves makes it totally worth it.

Another library worth looking at is Prism.js

Regions and Shapes

If we’re going to push the envelope, then I want to push it in terms of the technologies that make the user experience more engaging and the content more interactive.

I’ve written about shapes both as a new technology and in conjunction with svg clip paths.

Shapes can provide better drop cap support (by wrapping closer to the shape of the letter) and can provide better floated text with different shapes.
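A sketch of the drop cap case using CSS Shapes; the selector and sizes are illustrative, and the prefixed property covers WebKit-based browsers of the time:

```css
/* Float the first letter and let the following text wrap around a circle
   instead of the letter's rectangular box */
p.opening::first-letter {
  float: left;
  font-size: 4em;
  padding: 0 0.1em;
  -webkit-shape-outside: circle(50%);
  shape-outside: circle(50%);
}
```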

Support is inconsistent and must be polyfilled in order to work across the board. See the polyfill’s readme file for more information on how to use it and which browsers are supported.

Regions provide a different way to layout the content that doesn’t involve tables and doesn’t require Javascript.

OS Specific Home Screen Icons and Splash Screens

While I remain committed to keeping this project on the web, there are enhancements we can make to provide a better experience on mobile without losing the desktop experience.

The downside is that these enhancements are platform specific. Every time we add this functionality we must do so for each browser.

iOS icons and splash screens

When iOS was first introduced, one of the features that caught my attention was the ability to save web sites to the home screen and use them as online-only web applications. I always thought it was something only full applications or apps from larger companies could do. It wasn’t until I read the Configuring Web Applications section of the Safari Web Content Guide that I realized that, while it was hard work, it was doable by anyone.

We add the following elements to the head of our content pages (I’m not 100% sure whether this is needed on every page or only on the index). The first set of elements fixes the viewport to a 1:1 scale and makes the app run full screen, with no app bar at the top of the application.

<meta name="viewport" content="user-scalable=no, initial-scale=1.0" />
<meta name="apple-mobile-web-app-capable" content="yes" />
<meta name="apple-mobile-web-app-status-bar-style" content="black" />

The second block adds the application icons that will appear on the home screen when we add the site/app. We have to add multiple icons to account for the different screen size/resolution/DPI combinations of different devices.

<!-- iOS ICONS AND START SCREENS -->
<link rel="apple-touch-icon" href="/images/misc/apple-touch-icon-57x57.png" />
<link rel="apple-touch-icon" sizes="72x72" href="/images/misc/apple-touch-icon-72x72.png" />
<link rel="apple-touch-icon" sizes="114x114" href="/images/misc/apple-touch-icon-114x114.png" />

The final step is to add the splash screens that will appear while the site is loading. As with the icons, we have to account for different size/resolution/DPI combinations on different devices.

<!-- iPhone/iPod Touch Portrait – 320 x 460 (standard resolution) -->
<!-- These are not all available resolutions, do your homework -->
<link rel="apple-touch-startup-image" href="/images/misc/splash-screen-320x460.png" media="screen and (max-device-width: 320px)" />

<!-- For iPad Landscape 1024x748 -->
<link rel="apple-touch-startup-image" sizes="1024x748" href="/images/misc/splash-screen-1024x748.png" media="screen and (min-device-width: 481px) and (max-device-width: 1024px) and (orientation:landscape)" />

<!-- For iPad Portrait 768x1004 (high-resolution) -->
<link rel="apple-touch-startup-image" sizes="1536x2008" href="/images/misc/splash-screen-1536x2008.png" media="screen and (min-device-width: 481px) and (max-device-width: 1024px) and (orientation:portrait) and (-webkit-min-device-pixel-ratio: 2)"/>

Windows 8 application tiles

Windows 8 (on tablets and phones) allows applications to be pinned as tiles on the start screen. We can take advantage of this feature by going to buildmypinnedsite.com, completing the form and downloading the resulting kit. We can then paste the code below into the head of our pages and upload the associated images to the server.

<!-- Windows 8+ tile stuff; assumes all content is uploaded to the server -->
<meta name="application-name" content="My Awesome Site"/>
<meta name="msapplication-TileColor" content="#e809e8"/>
<meta name="msapplication-square70x70logo" content="tiny.png"/>
<meta name="msapplication-square150x150logo" content="square.png"/>
<meta name="msapplication-wide310x150logo" content="wide.png"/>
<meta name="msapplication-square310x310logo" content="large.png"/>

Chrome on Android

Chrome takes a two-pronged approach to adding web apps to the home screen. First, the page links to a JSON manifest file (in the example below I called it manifest.json):

<link rel="manifest" href="manifest.json">

The manifest itself looks like this:

{
  "name": "Web Application Manifest Sample",
  "icons": [
    {
      "src": "launcher-icon-0-75x.png",
      "sizes": "36x36",
      "type": "image/png",
      "density": "0.75"
    },
    {
      "src": "launcher-icon-1x.png",
      "sizes": "48x48",
      "type": "image/png",
      "density": "1.0"
    },
    {
      "src": "launcher-icon-1-5x.png",
      "sizes": "72x72",
      "type": "image/png",
      "density": "1.5"
    },
    {
      "src": "launcher-icon-2x.png",
      "sizes": "96x96",
      "type": "image/png",
      "density": "2.0"
    },
    {
      "src": "launcher-icon-3x.png",
      "sizes": "144x144",
      "type": "image/png",
      "density": "3.0"
    },
    {
      "src": "launcher-icon-4x.png",
      "sizes": "192x192",
      "type": "image/png",
      "density": "4.0"
    }
  ],
  "start_url": "index.html",
  "display": "standalone",
  "orientation": "portrait"
}

More information is available on the Chrome Developers’ site.

jQuery

jQuery has always been a touchy subject for me. On the one hand, it is still a good library for smoothing out browser idiosyncrasies and providing a common interface for developers to work with. I usually choose jQuery over Dojo (even though I think Dojo is the better library) because of all the plugins available and the relative ease of implementing your own.

The problem I have is not really with jQuery but with the people who use the technology without understanding what it does, how it works and how to modify it when needed.

I tend not to use jQuery much but make it available in case there are plugins that the creator may want to use.

The two sides of jQuery

Below are two views of the jQuery debate. I will let you decide for yourself whether it is worth it.

@zackbloom and @adamfschwartz created You Might Not Need jQuery to show developers that jQuery is not the only way to achieve web effects and support older browsers. They use IE as an example, probably because older versions of IE are the most problematic of browsers.
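The trade-off in miniature: jQuery’s $(el).addClass('active') versus a hand-rolled equivalent that still has to carry a fallback for browsers without classList (IE9 and older):

```javascript
// Vanilla replacement for jQuery's addClass, with an old-IE fallback.
function addClass(el, name) {
  if (el.classList) {
    el.classList.add(name);
  } else {
    // pre-classList browsers: append to the raw className string
    el.className += ' ' + name;
  }
}
```

One helper is easy; replacing dozens of them, each with its own browser quirks, is where the debate below comes from.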

John-David Dalton and Paul Irish created a document outlining browser bugs jQuery fixes as an answer to You Might Not Need jQuery. They point out that:

While the sentiment of youmightnotneedjquery is great, developers should be aware that ditching libraries, like jQuery, can easily require large amounts of research on their end to avoid bugs (even in modern browsers). The snippets provided by youmightnotneedjquery are a starting point but hardly scratch the surface of being a solid robust replacement to jQuery.

The great thing about an established library, like jQuery, is it’s hammered on by lots of talented people, transparently improved, and refined by the community. jQuery core is very careful not to bloat their codebase and does not add features or fixes without serious consideration and peer review. If it’s in jQuery that means it’s addressing real compatibility issues.

stayInApp jQuery Plugin

One of the annoyances of working in mobile browsers is that clicking on links takes you out of the full screen experience and into Safari’s interface. There is a jQuery plugin designed to keep you in your web application when you click on links, thus preserving your application’s experience.

The plugin only works on iOS; it takes advantage of the ability to detect when iOS is in full screen mode.

I’m looking for ways to provide equivalent functionality for Android and Windows.
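The underlying idea is easy to sketch in plain Javascript. This is a toy illustration, not the plugin’s actual code; the only platform-specific piece is navigator.standalone, the iOS flag that reports whether the page is running as a full screen web app:

```javascript
// Toy sketch of the stayInApp idea, NOT the plugin's actual code.
// navigator.standalone is an iOS-only flag: true when the page runs
// as a full screen web app launched from the home screen.
function shouldStayInApp(nav, href) {
  // only intercept real links while in standalone (full screen) mode;
  // in-page anchors can be left alone
  return Boolean(nav && nav.standalone && href && href.charAt(0) !== '#');
}

// In the browser you would wire it up roughly like this:
// document.addEventListener('click', function (e) {
//   var a = e.target.closest('a');
//   if (a && shouldStayInApp(navigator, a.getAttribute('href'))) {
//     e.preventDefault();            // stop Safari from taking over
//     window.location.href = a.href; // navigate within the web app
//   }
// });
```

Because the decision is a pure function, it can be reused with whatever detection mechanism Android or Windows eventually provide.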

Tooling and Automation

For most web-related projects I’ve used Grunt as my task runner. Grunt was the first software I used to run commands, like I used to do with Ant and Make, in a pure Javascript environment. Sometimes it gets painfully slow, but nowhere near as slow as Ant used to be.

Grunt automates all the tasks that I need in a project: convert from SASS to CSS, concatenate and minify all third party Javascript libraries and plugins, run Autoprefixer and UnCSS on the generated CSS and save it with the same name, copy all the resources needed into a distribution folder, and even publish the distribution to Github’s gh-pages branch.

Gulp is similar to Grunt but it starts from a different premise. Where Grunt puts the emphasis on configuring the tasks to run, Gulp gives developers the flexibility to create their own tasks to work any way they want and only enforces how these tasks connect to each other. Put another way: “Grunt focuses on configuration, while Gulp focuses on code.” For example, this is what the two targets of a Grunt sass task look like:

sass: {
  dev: {
    options: {
      style: 'expanded'
    },
    files: [ {
      expand: true,
      cwd: 'scss',
      src: [ '*.scss'],
      dest: 'css',
      ext: '.css'
    }]
  },
  production: {
    options: {
      style: 'compact'
    },
    files: [ {
      expand: true,
      cwd: 'scss',
      src: [ '*.scss'],
      dest: 'css',
      ext: '.css'
    }]
  }
},

The best way I’ve found to do this with Gulp is to create two separate tasks, one for development and one for production. They would look something like this:

gulp.task('sass-dev', function() {
  return gulp.src('src/styles/main.scss')
    .pipe(sass({ style: 'expanded' }))
    .pipe(gulp.dest('dist/assets/css'))
});

gulp.task('sass', function() {
  return gulp.src('src/styles/main.scss')
    .pipe(sass({ style: 'compressed' }))
    .pipe(gulp.dest('dist/assets/css'))
});

Gulp lets you pipe the output of one command into the input of the next. The example below will do the following:

  • Convert the SASS files into CSS
  • Minify the CSS
  • Rename the CSS file and convert it to .min.css
  • Write the resulting file out

//declare the task
gulp.task('sass', function(done) {
  gulp.src('sass/main.scss')
    .pipe(sass({ style: 'compressed'}))
    .pipe(gulp.dest('./www/css/'))
    .pipe(minifyCss({
      keepSpecialComments: 0
    }))
    .pipe(rename({ extname: '.min.css' }))
    .pipe(gulp.dest('./www/css/'))
    .on('end', done);
});

As you can see, each Gulp task does one thing and then passes its output as the input to the next task.

Grunt / Gulp links and resources

Towards Subcompact Publishing

In Platforming books Craig Mod proposes a multi platform approach to book publishing anchored in the success of his Artspace Tokyo Kickstarter project.

He suggests that books should have: an open web presence with the complete book content; an iBooks version to enjoy the advanced capabilities of that format; and Kindle and PDF versions of the book that take advantage of what each of those formats offers.

Then in Subcompact Publishing he proposes a departure from tradition in the digital publishing business, advocating for a system that has:

  • Small issue sizes (3-7 articles / issue)
  • Small file sizes
  • Digital-aware subscription prices
  • Fluid publishing schedule
  • Scroll (don’t paginate)
  • Clear navigation
  • HTML(ish) based
  • Touching the open web

While some of these areas apply mostly to periodical publications (magazines and such), several of these elements are applicable to a wider range of publishing channels.

Examples

Note that not all examples listed are books or book-like web experiences. Some, like Unnumbered Sparks, are listed because there was some aspect I thought important to highlight.

Books and magazines
  • Offline Reader. I’m not much for paginated content but I think this is an example of how to make it work. It is also built with Polymer, so it’s become a good model of how this can be done
  • Artspace Tokyo by Craig Mod and Ashley Rawlings. It highlights a lot of the ethos of Subcompact Publishing and how to Platform books
  • Shape of Design by Frank Chimero. The navigation was a little hard to understand at first but once I understood the metaphor it became very easy to navigate
  • The Magazine provides a good example of what a SubCompact publication may look like. What I loved most was the speed of publication, how long it took to download and read and, particularly, the little link trick where clicking on a link pops up a small window with information about the link and gives you the option of actually going to the link

Other Online

  • Liz Danzico’s blog provides a clean and crisp interface. I particularly love the way the site (designed by Jason Santa Maria) uses white space
  • Climbing Everest presents a fully interactive experience that draws the reader into the content
  • Tokyo Otaku Mode started as a Facebook page and reached over 3 million subscribers before developing apps; it shows how to subvert the publishing and marketing worlds
  • Unnumbered Sparks is the largest web browser I’ve ever seen: a Chrome instance of about 10 million pixels that projects users’ interactions onto the hanging structure. Every time I see this project (both the video and the material available at the Autodesk Museum in San Francisco) it makes me think I’m not thinking big enough
  • Cabbibo’s website makes extensive use of WebGL and the 3D web as an expressive medium. It is in the convergence of 2D and 3D where we can find the truly expressive potential of the web
  • Forest Giant and Alice in Wonderland are technology demonstrations from Adobe that push the envelope of what you can do with web technologies. Unfortunately some of the CSS technologies involved have been caught in what I call “specification hell” with a very uncertain future (which makes me really sad)

SVG Clip Path and Shapes. An interesting alternative

We’ll borrow a play from the SVG playbook for this post. We’ll use clip paths to generate masks around an image, controlling what the user gets to see and what shape the image takes without changing the image itself.

We’ll look at the process and then build an e-book with examples to test whether this works with iBooks (and whatever other reader you want to test with) and how we can better leverage the feature in our reflowable projects.

CSS clip-path

The CSS clip-path property is the star of the show. Whether used from CSS, via SVG, or a mix of the two, it clips the image and hides the portions outside the clipping region (thereby changing the image’s visible shape) without changing the image file.

Rather than figure out the coordinates for each point in the shape or polygon I’ll be working with, I chose to use Clippy, a tool by Bennett Feely. It is certainly not the only one, but it is the easiest to use of those I’ve found. If you use Brackets you may want to look at the CSS Shapes Editor that’s available for the editor.

For this example I took a triangle and put it on its side, the same shape in the Demosthenes example but with a different image.

The code looks like this:

[codepen_embed height=”266″ theme_id=”2039″ slug_hash=”QwodXb” default_tab=”result” user=”caraya”]See the Pen Breaking The Box — Step 1 by Carlos Araya (@caraya) on CodePen.[/codepen_embed]

SVG Clip path

All well and good for browsers that support the CSS clip-path property, whether prefixed or not. But what happens in older browsers? Fortunately for us, support for SVG is wider than support for the CSS clip-path property.

So we take a two-pronged approach: we create an SVG clipPath element and then reference the SVG from our CSS.

This bit looks like this:

[codepen_embed height=”266″ theme_id=”2039″ slug_hash=”ogVZjP” default_tab=”result” user=”caraya”]See the Pen Breaking The Box — Step 2 by Carlos Araya (@caraya) on CodePen.[/codepen_embed]
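In outline, the two pieces look something like the markup below (the id and coordinates are illustrative, not the pen’s actual code). The inline SVG only carries the clip path definition, and the CSS references it by fragment identifier:

```html
<!-- an invisible SVG that only carries the clip path definition -->
<svg width="0" height="0">
  <defs>
    <!-- objectBoundingBox units make the coordinates relative (0 to 1) -->
    <clipPath id="triangle" clipPathUnits="objectBoundingBox">
      <polygon points="0 0, 0 1, 1 0.5" />
    </clipPath>
  </defs>
</svg>

<style>
  .clipped {
    /* reference the SVG clip path by its fragment identifier */
    clip-path: url(#triangle);
  }
</style>
```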

CSS shapes

I’ve discussed CSS Shapes in other blog posts so I won’t cover them again here. But it’s important to keep them in mind, as they are what will pull the components together below.

Putting it all together

We have all the components we need, so it’s time to put them together. We use shape-outside to tell the CSS engine to flow the content along the masked shape of the image.

The final code looks like this:

[codepen_embed height=”496″ theme_id=”2039″ slug_hash=”YPgZyd” default_tab=”result” user=”caraya”]See the Pen Breaking The Box — Step 3 by Carlos Araya (@caraya) on CodePen.[/codepen_embed]
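Stripped to its essentials, the combination looks something like the rules below (coordinates are illustrative; the embedded pen has the working version):

```css
img.clipped {
  float: left; /* shape-outside only applies to floated elements */
  /* clip the image to a triangle pointing right */
  -webkit-clip-path: polygon(0 0, 0 100%, 100% 50%);
  clip-path: polygon(0 0, 0 100%, 100% 50%);
  /* flow the surrounding text along the same triangle */
  -webkit-shape-outside: polygon(0 0, 0 100%, 100% 50%);
  shape-outside: polygon(0 0, 0 100%, 100% 50%);
}
```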

Moving it to ePub and the result thereof

I had initially targeted iBooks but, even within the iBooks platform, the results are inconsistent. I’m trying to figure out whether it’s a code issue or whether the different versions of iBooks really are that inconsistent with each other.

iBooks for Mac (1.1.1 (600) running on OS X 10.10.3) produces no visible result. The image is not displayed at all.

iBooks for iOS on an iPad Air 2 produces a distorted image rather than the sharp triangle we get on the open web.

I’m researching if this is an issue with the way I’m using clip-path, the limitations for using SVG clip path inside an XHTML document, or just that it’s not supported.

If you want to help me test, the epub I created (with the cover and title from Peter Pan) is available here

Links and credits

Idea from http://demosthenes.info/blog/1007/Combining-CSS-clip-path-and-Shapes-for-New-Layout-Possibilities

Image used in this post courtesy of Craig Deakin used under a Creative Commons attribution license

The image is available in Codepen

Trimming the CSS fat

After reading India Amos’ Degristling the sausage: BBEdit 11 Edition I thought I’d share my own tricks for making CSS files as small as possible. While I learned these tricks as a front end developer, they apply equally to creating content for e-books.

One thing that has always stopped me from fully embracing frameworks is that they use an insane amount of code that is really difficult to trim until I’m done with a project and, at that point, I usually don’t want to look at CSS for a few weeks.

In researching front end development I’ve discovered a few techniques that make CSS development more bearable and cut the size of our CSS files at the same time.

The main requirement for the chosen tools is that they have either a command line tool or a Grunt/Gulp build system plugin.

GUI tools like CodeKit (a Macintosh application) and Prepros (cross platform) must support all the tools discussed.

Both of these task runners, and the plugins that run within them, depend on Node.js and NPM. They both must be installed on your system before any of the tools discussed will work.

SASS

SASS and related libraries require Ruby and the SASS gem. Ruby comes installed on most (if not all) Linux and OS X systems.

To install SASS just run gem install sass

I’ve been a fan of SASS ever since I first read about it a few years ago. It allows you to build more complex structures than you can with pure CSS.

Part of the fat trimming is the use of variables and reducing the number of redundant selector rules that we write.
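As a small sketch of the idea (the variable names and colors below are made up for illustration):

```scss
// define values once; changing the accent color is now a one-line edit
$body-text: #554c4d;
$accent: #0072b2;

body {
  color: $body-text;
}

a {
  color: $accent;

  // nesting avoids repeating the parent selector over and over
  &:hover {
    color: lighten($accent, 15%);
  }
}
```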

I have written about SASS and some of its features

I followed it up with this post about advanced features to make CSS more manageable.

Grunt/Gulp Build System

I spent a long time saying I didn’t need a build system but, the more tools and techniques I discover, the harder it gets to remember all the command line tools you have to use to accomplish these tasks.

Grunt is the first task runner I saw and the one I still work with. It works in discrete tasks. It is very configuration heavy; the Gruntfile.js configuration file is full of instructions for how to run each task.

In the example below we define our tasks, along with the options and settings for each, and finally we define a custom task that includes all the steps we want to take.


/*global module */
/*global require */
(function () {
  "use strict";
  module.exports = function (grunt) {
    // require it at the top and pass in the grunt instance
    // it will measure how long things take for performance
    //testing
    require("time-grunt")(grunt);

    // load-grunt will read the package file and automatically
    // load all our packages configured there.
    // Yay for laziness
    require("load-grunt-tasks")(grunt);

    grunt.initConfig({
      // SASS RELATED TASKS
      // Converts all the files under scss/ ending with .scss
      // into the equivalent css file on the css/ directory
      sass: {
        dev: {
          options: {
            style: "expanded"
          },
          files: [{
            expand: true,
            cwd: "scss",
            src: ["*.scss"],
            dest: "css",
            ext: ".css"
          }]
        },
        production: {
          options: {
            style: "compact"
          },
          files: [{
            expand: true,
            cwd: "scss",
            src: ["*.scss"],
            dest: "css",
            ext: ".css"
          }]
        }
      },
     scsslint: {
        allFiles: [
          "scss/*.scss",
          "scss/modules/_mixins.scss",
          "scss/modules/_variables.scss",
          "scss/partials/*.scss"
        ],
        options: {
          config: ".scss-lint.yml",
          force: true,
          colorizeOutput: true
        }
      },

      autoprefixer: {
        options: {
          browsers: ["last 2"]
        },

        files: {
          expand: true,
          flatten: true,
          src: "scss/*.scss",
          dest: "css/"
        }
      },

      // CSS TASKS TO RUN AFTER CONVERSION
      // Cleans the CSS based on what's used in the specified files
      // See https://github.com/addyosmani/grunt-uncss for more
      // information
      uncss: {
        dist: {
          files: {
            "css/tidy.css": ["*.html", "!docs.html"]
          }
        }
      }
    }); // closes initConfig

    // CUSTOM TASKS
    // Usually a combination of one or more tasks defined above

    // Prep CSS starting with SASS, autoprefix et. al
    grunt.task.registerTask(
      "prep-css",
      [
        "scsslint",
        "sass:dev",
        "autoprefixer",
        "uncss"
      ]
    );
  }; // closes module.exports
}()); // closes the use strict function

Gulp is a stream oriented task runner where the emphasis is on connecting (piping) the output of one task to the input of the next. In the example below we create a task and then pipe through the different plugins until the last pipe sets the destination of the product.


// gulp itself plus the plugins the task uses; the sass, autoprefixer,
// combinemq and chalk requires are assumed (gulp-ruby-sass,
// gulp-autoprefixer, gulp-combine-mq and chalk respectively)
var gulp   = require("gulp"),
    sass   = require("gulp-ruby-sass"),
    autoprefixer = require("gulp-autoprefixer"),
    combinemq = require("gulp-combine-mq"),
    chalk  = require("chalk"),
    cssc   = require("gulp-css-condense"),
    csso   = require("gulp-csso"),
    more   = require("gulp-more-css"),
    shrink = require("gulp-cssshrink");

gulp.task("styles", function () {
    return sass("./styles", {
        loadPath: "./vendor/bootstrap-sass/assets/stylesheets"
    }).on("error", console.warn.bind(console, chalk.red("Sass Error\n")))
        .pipe(autoprefixer())
        .pipe(combinemq())
        .pipe(cssc())
        .pipe(csso())
        .pipe(more())
        .pipe(shrink())
        .pipe(gulp.dest("./build/css"));
});

Combine Media Queries

The first optimization is to consolidate our Media Queries using Combine MQ. The idea behind this is to reduce the number of Media Queries and their associated rules.

We do this reduction first to make sure that we won’t have to run Autoprefixer and UnCSS again after reducing the number of Media Queries in our final CSS file.
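Conceptually, the combination step is just a merge keyed on the media query text. Here’s a toy Javascript illustration of that idea (not the plugin’s actual code):

```javascript
// Toy illustration of what Combine MQ does: gather the rules of every
// block that shares the same media query into a single block, keeping
// the order in which queries first appear.
function combineMediaQueries(blocks) {
  var order = [];   // query strings in first-seen order
  var byQuery = {}; // query string -> merged block
  blocks.forEach(function (block) {
    if (!byQuery[block.query]) {
      byQuery[block.query] = { query: block.query, rules: [] };
      order.push(block.query);
    }
    byQuery[block.query].rules =
      byQuery[block.query].rules.concat(block.rules);
  });
  return order.map(function (q) { return byQuery[q]; });
}
```

Three blocks using two distinct queries collapse into two blocks, which is why the later Autoprefixer and UnCSS passes only need to run once.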

There are Grunt and Gulp plugins available

AutoPrefixer

Autoprefixer helps in dealing with ‘prefix hell’ for the most part.

In their race to be first to implement a CSS feature, vendors added it behind a vendor-specific prefix (-webkit for Safari, Chrome and Opera, -o for Opera before it adopted Webkit, -moz for Firefox and -ms for Microsoft) to hide it from browsers that had not adopted it or that implemented it differently.

That left developers having to figure out which elements had which vendor prefixes and to update them when/if the vendor finally decided to drop the prefix altogether.

Note that Autoprefixer does not handle ePub specific vendor prefixes. There are PostCSS tools that will do it for you when/if needed; I’ve chosen not to implement these PostCSS plugins.

Autoprefixer is a command line tool that takes care of vendor prefixes. It uses the data from caniuse.com to determine what prefixes to apply to which element.
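For example, given a rule like the one below, Autoprefixer emits whatever prefixed versions the targeted browsers still need. The output shown is roughly what it might produce for an older browser range; the exact result depends on the caniuse data and the browsers you target:

```css
/* what you write */
.example {
  display: flex;
  transition: transform 1s;
}

/* roughly what Autoprefixer produces for an older browser range */
.example {
  display: -webkit-flex;
  display: -ms-flexbox;
  display: flex;
  -webkit-transition: -webkit-transform 1s;
  transition: transform 1s;
}
```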

You can also specify how far back to go for prefixes. Examples of valid browser values:

  • last 2 versions: the last 2 versions for each major browser.
  • last 2 Chrome versions: the last 2 versions of Chrome browser.
  • > 5%: versions selected by global usage statistics.
  • > 5% in US: uses USA usage statistics. It also accepts a two-letter country code.
  • Firefox > 20: versions of Firefox newer than 20.
  • Firefox >= 20: versions of Firefox newer than or equal to 20.
  • Firefox < 20: versions of Firefox less than 20.
  • Firefox <= 20: versions of Firefox less than or equal to 20.
  • Firefox ESR: the latest Firefox ESR version.
  • iOS 7: the iOS browser version 7 directly.

You can also target browsers by name:

  • Android for Android WebView.
  • BlackBerry or bb for Blackberry browser.
  • Chrome for Google Chrome.
  • Firefox or ff for Mozilla Firefox.
  • Explorer or ie for Internet Explorer.
  • iOS or ios_saf for iOS Safari.
  • Opera for Opera.
  • Safari for desktop Safari.
  • OperaMobile or op_mob for Opera Mobile.
  • OperaMini or op_mini for Opera Mini.
  • ChromeAndroid or and_chr for Chrome for Android (mostly same as common Chrome).
  • FirefoxAndroid or and_ff for Firefox for Android.
  • ExplorerMobile or ie_mob for Internet Explorer Mobile.

Autoprefixer is available as a command line tool, a Grunt Plugin and a Gulp Plugin

UnCSS

User-interface libraries like Bootstrap, TopCoat and so on are fairly prolific, however many developers use less than 10% of the CSS they provide (when opting for the full build, which most do). As a result, they can end up with fairly bloated stylesheets which can significantly increase page load time and affect performance. grunt-uncss is an attempt to help with by generating a CSS file containing only the CSS used in your project, based on selector testing.

From Grunt UnCSS

UnCSS takes a set of HTML files and a CSS stylesheet and produces a new stylesheet with only those rules actually used in the HTML files. The idea is to reduce the size of the CSS being pushed to the client.
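The core idea can be sketched as a toy function. The real tool loads the pages and tests each selector against the rendered DOM, but conceptually it is a filter:

```javascript
// Toy sketch of UnCSS's core idea, not the actual implementation:
// keep a rule only if at least one of its selectors is used in the
// HTML files being analyzed.
function dropUnusedRules(rules, usedSelectors) {
  var used = {};
  usedSelectors.forEach(function (s) { used[s] = true; });
  return rules.filter(function (rule) {
    return rule.selectors.some(function (s) { return used[s]; });
  });
}
```

With a framework like Bootstrap, most rules fail this test, which is where the dramatic size reductions come from.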

Shrinking the size of our CSS file(s) may not seem like a big deal but it becomes important when you use large libraries like Bootstrap or Zurb Foundation or when your own CSS libraries become too large to handle (special cases can be killers.)

Addy Osmani, the creator and maintainer claims that he has reduced the CSS size on a multi page Bootstrap project from over 120KB to 11KB.

UnCSS Size Reduction

There are UnCSS plugins for Grunt and Gulp available

CSSO or other minimizers

Now that we have a prefixed CSS file with only the classes we need, we can look at further size reduction through optional compressions. I’ve chosen to be somewhat conservative, picking two of the many minimizers available through NPM and Grunt.

If you want to see a more detailed comparison check sysmagazine comparison of CSS and Javascript processors

CSS Optimizer

We will first run the CSS file through CSS Optimizer. What brought this plugin to my attention is that it does more than the traditional minimizations. According to the documentation it can perform:

  • Safe transformations:
    • Removal of whitespace
    • Removal of trailing ;
    • Removal of comments
    • Removal of invalid @charset and @import declarations
    • Minification of color properties
    • Minification of 0
    • Minification of multi-line strings
    • Minification of the font-weight property
  • Structural optimizations:
    • Merging blocks with identical selectors
    • Merging blocks with identical properties
    • Removal of overridden properties
    • Removal of overridden shorthand properties
    • Removal of repeating selectors
    • Partial merging of blocks
    • Partial splitting of blocks
    • Removal of empty ruleset and at-rule
    • Minification of margin and padding properties

As with the other tools, there are Grunt and Gulp plugins available.

CSS Shrink

While CSSO may have gotten the file as small as possible, I’d rather make sure. That’s where CSS Shrink comes in.

You may be wondering: why is Carlos being so obsessive about reducing the size of his CSS files?

Fair question. Here’s the answer:

Images are loaded asynchronously. We can load JavaScript asynchronously if we so choose. CSS is the only component of your web page that loads synchronously, and most browsers will block rendering the page until all your CSS downloads. That’s why it pays to make it the smallest we can and to combine the CSS into as few files as possible.

CSS Shrink provides that second level of compression, just to make sure we didn’t miss anything 🙂

As usual, plugins for Grunt and Gulp are available.

CodeKit/Prepros: One tool to rule them all

I know that some developers would rather not use command line tools. There are applications that provide almost equivalent functionality.

CodeKit (Mac only) and Prepros (Mac and Windows)

The screenshot below shows Codekit’s UI with a project open

Codekit Project UI

The second screenshot shows SCSS compilation options on the right side of the screen.

Codekit Compilation Options

I own a copy of Codekit more from curiosity than from any actual use, but I realize it may be better for developers who are not comfortable with command line interfaces.

Code Repository and Additional Goodies

I’ve created a Github Repository to go along with the ideas in this article. It’s a drop-in structure for a new project and it’s also an opinionated skeleton for new projects.

Issues, comments and Pull Requests are always welcome

XML Workflows: Tools and Automation

Because we use XML we can’t just dump our code in the browser or the PDF viewer and expect it to appear just like HTML content.

We need to prepare our content for conversion to PDF before we can view it. There are also front-end web development best practices to follow.

This chapter will discuss tools to accomplish both tasks from one build file.

What software we need

For this to work you need the following software installed:

  • Java (version 1.7 or later)
  • Node.js (0.10.35 or later)

Once you have java installed, you can install the following Java packages

  • Saxon (9.0.6.4 for Java)

A note about Saxon: OxygenXML comes with a version of Saxon Enterprise Edition. We’ll use a different version to make it easier to use outside the editor.

Node packages are handled through NPM, the Node Package Manager. On the Node side we need at least the grunt-cli package installed globally. To do so we use this command:

$ npm install -g grunt-cli

The -g flag will install the package globally, as opposed to installing it in the project directory.

Now that we have the required software installed we can move ahead and create our configuration files.

Optional: Ruby, SCSS-Lint and SASS

The only external dependencies you need to worry about are Ruby, SCSS-Lint and SASS. Ruby comes installed on most (if not all) Macintosh and Linux systems; an installer for Windows is also available.

SASS (syntactically awesome style sheets) is a superset of CSS that brings enhancements which make life easier for designers and the people who have to create the stylesheets. I’ve taken advantage of these features to simplify my stylesheets and to save myself from repetitive and tedious tasks.

SASS, the main tool, is written in Ruby and is available as a Ruby Gem.

To install SASS, open a terminal/command window and type:

$ gem install sass

If you get an error, you probably need to install the gem as an administrator. Try the following command

$ sudo gem install sass

and enter your password when prompted.

SCSS-Lint is a linter for the SCSS flavor of SASS. As with other linters it will detect errors and potential errors in your SCSS style sheets. Like SASS, SCSS-Lint is a Ruby Gem that can be installed with the following command:

$ sudo gem install scss-lint

The same caveat about errors and installing as an administrator apply.

Ruby, SCSS-Lint and SASS are only necessary if you plan to change the SCSS/SASS files. If you don’t, you can skip the Ruby install and work directly with the CSS files.

If you want to peek at the SASS source look at the files under the scss directory.

Installing Node packages

Grunt is a Node.js based task runner. It’s a declarative version of Make and similar tools in other languages. Since Grunt and its associated plugins are Node packages, we need to configure Node.

At the root of the project there’s a package.json file where all the packages necessary for the project have already been configured. All that is left is to run the install command.

npm install

This will install all the packages indicated in the configuration file along with their dependencies; go get a cup of coffee, as this may take a while on slower machines.

As it installs the software it’ll display a list of what it installed and when it’s done you’ll have all the packages.

The final step of the node installation is to run bower, a front end package manager. It is not configured by default but you can use it to manage packages such as jQuery, Highlight.JS, Polymer web components and others.

Grunt & Front End Development best practices

While developing the XML and XSL for this project, I decided that it was also a good chance to test front end development tools and best practices for styling and general front end development.

One of the best known tools for front end development is Grunt. It is a Javascript task runner and it can do pretty much whatever you need to do in your development environment. The fact that Grunt is written in Javascript saves developers from having to learn another language for task management.

Grunt has its own configuration file (Gruntfile.js); one is provided as a model for the project.

As currently written the Grunt file provides the following functionality in the assigned tasks. Please note that the tasks with an asterisk have subtasks to perform specific functions. We will discuss the subtasks as we look at each portion of the file and its purpose.

      autoprefixer  Prefix CSS files. *
             clean  Clean files and folders. *
            coffee  Compile CoffeeScript files into JavaScript *
              copy  Copy files. *
            jshint  Validate files with JSHint. *
              sass  Compile Sass to CSS *
            uglify  Minify files with UglifyJS. *
             watch  Run predefined tasks whenever watched files change.
          gh-pages  Publish to gh-pages. *
    gh-pages-clean  Clean cache dir
             mkdir  Make directories. *
          scsslint  Validate `.scss` files with `scss-lint`. *
             shell  Run shell commands *
              sftp  Copy files to a (remote) machine running an SSH daemon. *
           sshexec  Executes a shell command on a remote machine *
             uncss  Remove unused CSS *
              lint  Alias for "jshint" task.
          lint-all  Alias for "scsslint", "jshint" tasks.
          prep-css  Alias for "scsslint", "sass:dev", "autoprefixer" tasks.
           prep-js  Alias for "jshint", "uglify" tasks.
      generate-pdf  Alias for "shell:single", "shell:prince" tasks.
 generate-pdf-scss  Alias for "scsslint", "sass:dev", "shell:single",
                    "shell:prince" tasks.
      generate-all  Alias for "shell" task.

The first thing we do is declare two variables (module and require) as global for JSLint and JSHint. Otherwise the linters will flag them as errors, since they are not declared before they are used.

We then wrap the Gruntfile in a self executing function as a defensive coding strategy.

When concatenating Javascript files there may be some that use strict Javascript and some that don’t. If a file-level 'use strict' declaration ends up at the very top of the concatenated file, it makes all the scripts underneath it run under the strict declaration.

The function wrap prevents this by making the 'use strict' declaration local to the file where it was written. None of the other scripts will be affected and they will still execute from the concatenated file. It’s not essential for Grunt drivers (Gruntfile.js in our case) but it’s always a good habit to get into.

Setup

/*global module */
/*global require */
(function () {
  'use strict';
  module.exports = function (grunt) {
    // require it at the top and pass in the grunt instance
    // it will measure how long things take for performance
    //testing
    require('time-grunt')(grunt);

    // load-grunt will read the package file and automatically
    // load all our packages configured there.
    // Yay for laziness
    require('load-grunt-tasks')(grunt);

The first two elements that work with our content are time-grunt and load-grunt-tasks.

Time-grunt provides a breakdown of time and percentage of total execution time for each task performed in this particular Grunt run. The example below illustrates the result when running multiple tasks (bars reduced in length for formatting.)

Execution Time (2015-02-01 03:43:57 UTC)
loading tasks      983ms  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 12%
scsslint:allFiles   1.1s  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 13%
sass:dev           441ms  ▇▇▇▇▇▇▇▇▇ 5%
shell:html          1.5s  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 18%
shell:single        1.2s  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 14%
shell:prince        2.9s  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 36%
Total 8.1s

Load-grunt-tasks automates the loading of packages listed in the package.json configuration file. It’s especially good for forgetful people like me, whose main mistake when building Grunt-based tool chains is forgetting to load the plugins to use :-).

Javascript

    grunt.initConfig({

      // JAVASCRIPT TASKS
      // Hint the grunt file and all files under js/
      // and one directory below
      jshint: {
        files: ['Gruntfile.js', 'js/{,*/}*.js'],
        options: {
          reporter: require('jshint-stylish')
            // options here to override JSHint defaults
        }
      },

      // Takes all the files under js/ and selected files under lib
      // and concatenates them together. I've chosen not to mangle
      // the compressed file
      uglify: {
        dist: {
          options: {
            mangle: false,
            sourceMap: true,
            sourceMapName: 'css/script.min.map'
          },
          files: {
            'js/script.min.js': ['js/video.js', 'lib/highlight.pack.js']
          }
        }
      },

JSHint will lint the Gruntfile itself and all files under the js/ directory for errors and potential errors.

[20:58:14] [email protected] xml-workflow 13902$ grunt jshint
Running "jshint:files" (jshint) task

Gruntfile.js
  line 9    col 33  Missing semicolon.
  line 269  col 6   Missing semicolon.

  ⚠  2 warnings

Warning: Task "jshint:files" failed. Use --force to continue.

Aborted due to warnings.

Uglify allows us to concatenate our Javascript files and, if we choose to, further reduce the file size by mangling the code (see this page for an explanation of what mangle is and does). I’ve chosen not to mangle the code to make it easier to read. I may add it as an option for production deployments.

SASS and CSS

As mentioned elsewhere I chose to use the SCSS flavor of SASS because it allows me to do some awesome things with CSS that I wouldn’t be able to do with CSS alone.

The first task for SASS is to convert it to CSS. For this we have two separate targets. The development target (dev below) picks up all the files in the scss directory (the entire files section is equivalent to writing scss/*.scss) and converts them to files with the same name in the css directory.

      // SASS RELATED TASKS
      // Converts all the files under scss/ ending with .scss
      // into the equivalent css file on the css/ directory
      sass: {
        dev: {
          options: {
            style: 'expanded'
          },
          files: [{
            expand: true,
            cwd: 'scss',
            src: ['*.scss'],
            dest: 'css',
            ext: '.css'
          }]
        },
        production: {
          options: {
            style: 'compact'
          },
          files: [{
            expand: true,
            cwd: 'scss',
            src: ['*.scss'],
            dest: 'css',
            ext: '.css'
          }]
        }
      },

There are two similar versions of the task. The development version produces the format below, which is easier to read and easier to troubleshoot (css-lint, discussed below, tells you what line the error or warning happened on).

@import url(http://fonts.googleapis.com/css?family=Roboto:100italic,100,400italic,700italic,300,700,300italic,400);
@import url(http://fonts.googleapis.com/css?family=Montserrat:400,700);
@import url(http://fonts.googleapis.com/css?family=Roboto+Slab:400,700);
@import url(http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400);
html {
  font-size: 16px;
  overflow-y: scroll;
  -ms-text-size-adjust: 100%;
  -webkit-text-size-adjust: 100%;
}

body {
  background-color: #fff;
  color: #554c4d;
  color: #554c4d;
  font-family: Adelle, Rockwell, Georgia, 'Times New Roman', Times, serif;
  font-size: 1em;
  font-weight: 100;
  line-height: 1.1;
  padding-left: 10em;
  padding-right: 10em;
}

The production version compresses the output. It reduces the file size by eliminating the spaces, tabs and carriage returns inside the rules to produce code like the one below; otherwise both versions are equivalent.

@import url(http://fonts.googleapis.com/css?family=Roboto:100italic,100,400italic,700italic,300,700,300italic,400);
@import url(http://fonts.googleapis.com/css?family=Montserrat:400,700);
@import url(http://fonts.googleapis.com/css?family=Roboto+Slab:400,700);
@import url(http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400);
html { font-size: 16px; overflow-y: scroll; -ms-text-size-adjust: 100%; -webkit-text-size-adjust: 100%; }

body { background-color: #fff; color: #554c4d; color: #554c4d; font-family: Adelle, Rockwell, Georgia, 'Times New Roman', Times, serif; font-size: 1em; font-weight: 100; line-height: 1.1; padding-left: 10em; padding-right: 10em; }

I did consider adding cssmin but decided against it for two reasons:

SASS already concatenates all the files when it imports them from the modules and partials directories, so we’re only working with one file for each version of the project (HTML and PDF)

The only other file we’d have to add, normalize.css, is a third-party library that I’d rather leave alone than mess with.

The scsslint task is a wrapper for the scss-lint Ruby gem, which must be installed on your system. It warns you of errors and potential errors in your SCSS stylesheets.

We’ve chosen to force it to continue when it finds errors. We want the linting tasks to be used at the developer’s discretion; there may be times when vendor prefixes have to be used or when colors have to be defined multiple times to accommodate older browsers.

      // I've chosen not to fail on errors or warnings.
      scsslint: {
        allFiles: [
          'scss/*.scss',
          'scss/modules/_mixins.scss',
          'scss/modules/_variables.scss',
          'scss/partials/*.scss'
        ],
        options: {
          config: '.scss-lint.yml',
          force: true,
          colorizeOutput: true
        }
      },

Grunt’s autoprefixer task uses the Can I Use database to determine whether properties need a vendor prefix and adds the prefix if they do.

This becomes important for older browsers or when vendors drop their prefix for a given property. Rather than having to keep up to date on all vendor-prefixed properties, you can tell autoprefixer which browsers to test for (the last 2 versions in this case) and let it worry about what needs to be prefixed.

      autoprefixer: {
        options: {
          browsers: ['last 2 versions']
        },

        files: {
          expand: true,
          flatten: true,
          src: 'scss/*.scss',
          dest: 'css/'
        }
      },
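As an illustration of what the task does (hypothetical input and output; the actual prefixes added depend on the browser data at build time):

```css
/* Before autoprefixer */
.box {
  transition: transform 1s;
}

/* After autoprefixer, targeting the last 2 versions of each
   browser (illustrative output only) */
.box {
  -webkit-transition: -webkit-transform 1s;
  transition: transform 1s;
}
```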

The last CSS task is the most complicated one. Uncss removes whatever CSS rules are not used in our target HTML files.

      // CSS TASKS TO RUN AFTER CONVERSION
      // Cleans the CSS based on what's used in the specified files
      // See https://github.com/addyosmani/grunt-uncss for more
      // information
      uncss: {
        dist: {
          files: {
            'css/tidy.css': ['*.html', '!docs.html']
          }
        }
      },

This is not a big deal for our workflow right now, as most, if not all, the CSS is written for the tags and classes we’ve implemented, but it is possible for the SASS/CSS libraries to grow over time and become bloated.

This will also become an issue when you decide to include third-party libraries in projects implemented on top of our workflow. By running Uncss on all our HTML files except the file we’ll pass to our PDF generator (docs.html) we can be assured that we’ll get the smallest CSS possible.

We skip our PDF source HTML file because I’m not 100% certain that Uncss can work with Paged Media CSS extensions. Better safe than sorry.
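As a hypothetical illustration: if no target HTML file ever references a class, Uncss drops its rule from the tidied output:

```css
/* Input stylesheet */
p { font-size: 1em; }
.sidebar { float: right; } /* never used in any target HTML file */

/* css/tidy.css after Uncss runs */
p { font-size: 1em; }
```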

Optional tasks

I’ve also created a set of optional tasks that are commented in the Grunt file but have been uncommented here for readability.

The first optional task is a CoffeeScript compiler. CoffeeScript is a scripting language that provides a set of useful features and compiles directly to JavaScript.

I sometimes use CoffeeScript to create scripts and other interactive content, so it’s important to have the compilation option available.

      // OPTIONAL TASKS
      // Tasks below have been set up but are currently not used.
      // If you want them, uncomment the corresponding block below

      // COFFEESCRIPT
      // If you want to use coffeescript (http://coffeescript.org/)
      // instead of vanilla JS, uncoment the block below and change
      // the cwd value to the locations of your coffee files
      coffee: {
        target1: {
          expand: true,
          flatten: true,
          cwd: 'src/',
          src: ['*.coffee'],
          dest: 'build/',
          ext: '.js'
        }
      },

The following two tasks manage file transfers and uploads to different targets.

One of the things I love about working on GitHub is that your project automatically gets an SSL-enabled site for free. GitHub Pages works with any kind of static website; GitHub even offers an automatic site generator as part of your project site.

For the purposes of our workflow validation we’ll package our content in a build directory and push it to the gh-pages branch of our repository. We’ll look at building our app directory when we look at copying files.

      // GH-PAGES TASK
      // Push the specified content into the repository's gh-pages branch
      'gh-pages': {
        options: {
          message: 'Content committed from Grunt gh-pages',
          base: './build/app',
          dotfiles: true
        },
        // These files will get pushed to the
        // `gh-pages` branch (the default)
        // We have to specifically remove node_modules
        src: ['**/*']
      },

There are times when we are not working with GitHub or GitHub Pages. In those cases we need FTP or SFTP (the encrypted version of FTP) to push files to remote servers. We use an external JSON file to store our account information. Ideally we’d encrypt that information, but until then the external file is the first option.

      //SFTP TASK
      //Using grunt-ssh (https://www.npmjs.com/package/grunt-ssh)
      //to store files in a remote SFTP server. Alternative to gh-pages
      secret: grunt.file.readJSON('secret.json'),
      sftp: {
        test: {
          files: {
            "./": "*.json"
          },
          options: {
            path: '/tmp/',
            host: '<%= secret.host %>',
            username: '<%= secret.username %>',
            password: '<%= secret.password %>',
            showProgress: true
          }
        }
      },
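The secret.json file itself should never be committed to the repository. A hypothetical example of its shape, with placeholder values only:

```json
{
  "host": "example.com",
  "username": "deploy-user",
  "password": "change-me"
}
```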

File Management

We’ve moved a few file management tasks into Grunt to make our lives easier. The tasks are for:

  • Creating directories
  • Copying files
  • Deleting files and directories

We will use the mkdir and copy tasks to create a build directory and copy all CSS, JS and HTML files into it. We will then use the gh-pages task (described earlier) to push the content to the repository’s gh-pages branch.

      // FILE MANAGEMENT
      // Can't seem to make the copy task create the directory
      // if it doesn't exist, so we use a separate task to create
      // the build directory
      mkdir: {
        build: {
          options: {
            create: ['build']
          }
        }
      },

      // Copy the files from our repository into the build directory
      copy: {
        build: {
          files: [{
            expand: true,
            src: ['app/**/*'],
            dest: 'build/'
          }]
        }
      },

      // Clean the build directory
      clean: {
        production: ['build/']
      },
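The three tasks above could be chained into a single hypothetical alias (not part of the current Gruntfile) so the build directory is always rebuilt from a clean slate before publishing:

```javascript
// Hypothetical 'build-site' alias: clean, recreate and repopulate
// the build directory, then push it to the gh-pages branch.
grunt.registerTask('build-site', [
  'clean:production',
  'mkdir:build',
  'copy:build',
  'gh-pages'
]);
```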

Watch task

Rather than type a command over and over again we can set up watchers so that, any time a file of the indicated type changes, we perform specific tasks.

As currently configured we track JavaScript and SASS files.

For JavaScript, any time the Gruntfile or any file under the js/ directory changes we run the JSHint task to make sure we haven’t made any mistakes.

For our SASS/SCSS files, any files under the scss directory, we run the sass:dev task to translate the files to CSS.

      // WATCH TASK
      // Watch for changes on the js and scss files and perform
      // the specified task
      watch: {
        options: {
          nospawn: true
        },
        // Watch all javascript files and hint them
        js: {
          files: ['Gruntfile.js', 'js/{,*/}*.js'],
          tasks: ['jshint']
        },
        sass: {
          files: ['scss/*.scss'],
          tasks: ['sass:dev']
        }
      },

Compile and Execute

Rather than using Ant, I’ve settled on Grunt’s shell task to run the compilation steps that create HTML and PDF. This reduces the number of dependencies for our project and makes it easier to consolidate all the work.

We have three different commands:

  • html will create multiple html files using Saxon, a Java XSLT processor
  • single will create a single html file using Saxon
  • prince will create a PDF based on the single html file using PrinceXML

We make sure that we don’t continue if there is an error; we want to troubleshoot before we generate all the resulting files.

      // COMPILE AND EXECUTE TASKS
      shell: {
        options: {
          failOnError: true,
          stderr: false
        },
        html: {
          command: 'java -jar /usr/local/java/saxon.jar -xsl:xslt/book.xsl docs.xml -o:index.html'
        },
        single: {
          command: 'java -jar /usr/local/java/saxon.jar -xsl:xslt/pm-book.xsl docs.xml -o:docs.html'
        },
        prince: {
          command: 'prince --verbose --javascript docs.html -o docs.pdf'
        }
      }


    }); // closes initConfig

Custom Tasks

Each custom task combines one or more of the tasks defined above into a sequence. See the individual task definitions above for details.

    // CUSTOM TASKS
    // Usually a combination of one or more tasks defined above
    grunt.task.registerTask(
      'lint',
      [
        'jshint'
      ]
    );

    grunt.task.registerTask(
      'lint-all',
      [
        'scsslint',
        'jshint'
      ]
    );

    // Prep CSS starting with SASS, autoprefix et. al
    grunt.task.registerTask(
      'prep-css',
      [
        'scsslint',
        'sass:dev',
        'autoprefixer'
      ]
    );

    grunt.task.registerTask(
      'prep-js',
      [
        'jshint',
        'uglify'
      ]
    );

    grunt.task.registerTask(
      'generate-pdf',
      [
        'shell:single',
        'shell:prince'
      ]
    );

    grunt.task.registerTask(
      'generate-pdf-scss',
      [
        'scsslint',
        'sass:dev',
        'shell:single',
        'shell:prince'
      ]
    );

    grunt.task.registerTask(
      'generate-all',
      [
        'shell'
      ]
    );


  }; // closes module.exports
}()); // closes the use strict function
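With the aliases registered, a typical session from the command line might look like this (assuming grunt-cli is installed globally):

```shell
# Lint everything before committing
grunt lint-all

# Rebuild the CSS from the SASS sources
grunt prep-css

# Produce the PDF from scratch: lint the SCSS, compile it,
# generate docs.html and run PrinceXML on it
grunt generate-pdf-scss
```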

XML Workflows: CSS Styles for Paged Media

This is the CSS generated from the SCSS style sheets (see the scss/ directory for the source material). I’ve chosen to document the resulting stylesheet here and the SCSS source in another document, to make life simpler for people who don’t want to deal with SASS or who just want to see what the style sheets look like.

Typography derived from work done at this URL: http://bit.ly/16N6Y2Q

The following scale (also using minor third progression) may also help: http://bit.ly/1DdVbqK

Feel free to play with these and use them as starting point for your own work 🙂

The project currently uses these fonts:

  • Roboto Slab for headings
  • Roboto for body copy
  • Source Code Pro for code blocks and preformatted text

Font Imports

Even though SCSS Lint throws a fit when I put font imports in a stylesheet (because they block asynchronous operations), I’m doing it to keep the HTML files clean, and because we are not loading this CSS on a page; we’re just using it to generate the PDF file.

Eventually I’ll switch to locally hosted fonts using the bulletproof font syntax (discussed here and available for use at Font Squirrel).

At this point we are not dealing with font subsetting, but we may if we need to.

@import url(http://fonts.googleapis.com/css?family=Roboto:100italic,100,400italic,700italic,300,700,300italic,400);
@import url(http://fonts.googleapis.com/css?family=Roboto+Slab:400,700);
@import url(http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400);

Defaults

Now that we’ve loaded the fonts we can create our defaults for the document. The html element defines vertical overflow and text size adjustment for Safari and Windows browsers.

html {
overflow-y: scroll;
-ms-text-size-adjust: 100%;
-webkit-text-size-adjust: 100%;
}

The body selector handles most of the base formatting for the document.

The selector sets up the following aspects of the page:

  • background and font color
  • font family, size and weight
  • line height
  • left and right padding (overrides the base document’s padding)
  • orphans and widows

body {
background-color: #fff;
color: #554c4d;
font-family: 'Roboto', 'Helvetica Neue', Helvetica, sans-serif;
font-size: 1em;
font-weight: 100;
line-height: 1.1;
orphans: 4;
padding-left: 0;
padding-right: 0;
widows: 2;
}

Blockquotes, Pullquotes and Marginalia

It’s fairly easy to create sidebars in HTML, so I’ve played a lot with pull quotes, blockquotes and asides as a way to move content around with basic CSS. We can do further work by tuning the CSS.

aside {
border-bottom: 3px double #ddd;
border-top: 3px double #ddd;
color: #666;
line-height: 1.4em;
padding-bottom: .5em;
padding-top: .5em;
width: 100%;
}

aside .pull {
margin-bottom: .5em;
margin-left: -20%;
margin-top: .2em;
}

The margin-notes* and content* classes move the content to the corresponding side of the page without your having to create specific CSS to do so. The downside is that, as with many things in CSS, you are stuck with the provided values and will have to modify them to suit your needs.

.margin-notes,
.content-left {
font-size: .75em;
margin-left: -230px;
margin-right: 20px;
text-align: right;
width: 230px;
}

.margin-notes-right,
.content-right {
font-size: .75em;
margin-left: 760px;
margin-right: -20px;
position: absolute;
text-align: left;
width: 230px;
}

.content-right {
font-size: .75em;
margin-left: 760px;
margin-right: -20px;
position: absolute;
text-align: left;
width: 230px;
}

.content-right ul,
.content-left ul {
list-style: none;
}

The opening class creates a large, visually distinct block container for opening text. This is useful when you have a summary paragraph, or some other opening piece of text, to go at the top of your document.

.opening {
border-bottom: 3px double #ddd;
border-top: 3px double #ddd;
font-size: 2em;
margin-bottom: 10em;
padding-bottom: 2em;
padding-top: 2em;
text-align: center;
}

Blockquotes present the enclosed text in a larger italic font with a solid bar to the left of the content. Because the font is larger I’ve added generous margins to set the quote apart from the surrounding text.

blockquote {
border-left: 5px solid #ccc;
color: #222023;
font-size: 1.5em;
font-style: italic;
font-weight: 100;
margin-bottom: 2em;
margin-left: 4em;
margin-right: 4em;
margin-top: 2em;
}
blockquote p {
padding-left: .5em;
}

The pullquote classes were modeled after an ESPN article and look something like this:

example pullquote

The original was hardcoded to pixels. Where possible I’ve changed the values to em to provide a more responsive layout.

.pullquote {
border-bottom: 18px solid #000;
border-top: 18px solid #000;
font-size: 2.25em;
font-weight: 700;
letter-spacing: -.02em;
line-height: 2.125em;
margin-right: 2.5em;
padding: 1.25em 0;
position: relative;
width: 200px;
}
.pullquote p {
color: #00298a;
font-weight: 700;
text-transform: uppercase;
z-index: 1;
}
.pullquote p:last-child {
line-height: 1.25em;
padding-top: 2px;
}
.pullquote cite {
color: #333;
font-size: 1.125em;
font-weight: 400;
}

Paragraphs

The paragraph selector creates the default paragraph formatting, with a size of 1em (equivalent to 16 pixels) and a bottom margin of 1.3em (20.8 pixels).

p {
font-size: 1em;
margin-bottom: 1.3em;
}

To indent all paragraphs but the first, we use the adjacent sibling selector: we indent every paragraph that immediately follows another paragraph element (that is, the next child of the same parent).

The first paragraph has no preceding paragraph sibling, so it isn’t indented, but all other paragraphs are.

p + p {
text-indent: 2em;
}
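For example, given the markup below, only the second and third paragraphs receive the 2em indent:

```html
<section>
  <p>First paragraph: no preceding p sibling, so no indent.</p>
  <p>Second paragraph: indented by the p + p rule.</p>
  <p>Third paragraph: also indented.</p>
</section>
```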

Rather than use pseudo elements (:first-line and :first-letter) we use classes to give authors the option to use these elements.

.first-line {
font-size: 1.1em;
text-indent: 0;
text-transform: uppercase;
}

.first-letter {
float: left;
font-size: 7em;
line-height: .8em;
margin-bottom: -.1em;
padding-right: .1em;
}

Lists

The only thing we do for lists and list items is set the default marker type: square for unordered lists and Arabic decimals for ordered lists.

ul li {
list-style: square;
}

ol li {
list-style: decimal;
}

Figures and captions

The only interesting aspect of the CSS we use for figures is the counter. The figure figcaption::before selector creates automatic text that is inserted before each caption. This text is the string “Figure”, the value of our figure counter and the string “: “.

This makes it easier to insert figures without having to change the captions for all figures after the one we inserted. The figure counter is reset for every chapter; I’m researching ways to make figure numbering continue across chapters.

figure {
counter-increment: figure_count;
margin-bottom: 1em;
margin-top: 1em;
}
figure figcaption {
font-weight: 700;
padding-bottom: 1em;
padding-top: .2em;
}

figure figcaption::before {
content: "Figure " counter(figure_count) ": ";
}
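The counter-increment above assumes the counter is reset somewhere higher up in the cascade. A sketch of what that rule might look like (the actual selector used in the stylesheet may differ):

```css
/* Hypothetical reset: start figure numbering over in each chapter */
section[data-type='chapter'] {
  counter-reset: figure_count;
}
```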

Headings

Headings are configured in two parts. The first one sets attributes common to all headings: font-family, font-weight, hyphens, line-height, margins and text-transform.

The last attribute needs a little more discussion: using text-transform we make all headings uppercase without having to write them that way.

h1,
h2,
h3,
h4,
h5,
h6 {
font-family: 'Roboto Slab', sans-serif;
font-weight: 400;
hyphens: none;
line-height: 1.2;
margin: 1.414em 0 .5em;
text-transform: uppercase;
}

In the second part of our heading styles we work on rules that only apply to one heading at a time. Things such as size and element-specific tweaks (like removing the top margin on h1 elements) need to be handled individually.

h1 {
font-size: 3.157em;
margin-top: 0;
}

h2 {
font-size: 2.369em;
}

h3 {
font-size: 1.777em;
}

h4 {
font-size: 1.333em;
}

h4,
h5,
h6 {
text-align: inherit;
}

Different parts of the book

There are certain aspects of the book that need different formatting from our defaults.

We use the element[attribute=name] syntax to identify which section we want to work with and then tell it the element within the section that we want to change.

For example, in the bibliography (a section with the data-type='bibliography' attribute) we want all paragraphs to be left-aligned and to have no indent (basically we are undoing the indentation for paragraphs with sibling paragraphs within the bibliography section).

section[data-type='bibliography'] p {
text-align: left;
}
section[data-type='bibliography'] p + p {
text-indent: 0 !important;
}

The same logic applies to the other sections we’re customizing: we tell it what type of section we are working with and what element inside that section we want to change.

section[data-type='titlepage'] h1,
section[data-type='titlepage'] h2,
section[data-type='titlepage'] p {
text-align: center;
}

section[data-type='dedication'] h1,
section[data-type='dedication'] h2 {
text-align: center;
}
section[data-type='dedication'] p {
text-align: left;
}
section[data-type='dedication'] p + p {
text-indent: 0 !important;
}

Preformatted code blocks

A lot of what I write is technical and requires code examples. We take a two-pronged approach to the fenced code blocks.

We format some aspects of our content locally (wrapping, font family, size, line height and whether to allow page breaks inside the content) and hand off syntax highlighting to highlight.js, with a style to mark the content differently.

pre {
overflow-wrap: break-word;
white-space: pre-line !important;
word-wrap: break-word;
}
pre code {
font-family: 'Source Code Pro', monospace;
font-size: 1em;
line-height: 1.2em;
page-break-inside: avoid;
}

Miscellaneous classes

Rather than force people to justify text by hand, we provide a class that does it. I normally justify at the div or section level, but it’s not always necessary or desirable.

The code class will be used in a future iteration to highlight inline snippets (think of it as an inline version of the <pre><code> tag combination).

.justified {
text-align: justify;
}

.code {
background-color: #e6e6e7;
opacity: .75;
}

Columns

The last portion of the stylesheet deals with columns. I’ve set up two sets of rules, for two- and three-column layouts, with similar attributes. In the SCSS source these are created with a column mixin.

.columns2 {
column-count: 2;
column-gap: 3em;
column-fill: balance;
column-span: none;
line-height: 1.25em;
width: 100%;
}
.columns2 p:first-of-type {
margin-top: 0;
}
.columns2 p + p {
text-indent: 2em;
}
.columns2 p:last-of-type {
margin-bottom: 1.25em;
}

.columns3 {
column-count: 3;
column-gap: 10px;
column-fill: balance;
column-span: none;
width: 100%;
}
.columns3 p:first-of-type {
margin-top: 0;
}
.columns3 p:not(:first-of-type) {
text-indent: 2em;
}
.columns3 p:last-of-type {
margin-bottom: 1.25em;
}
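For reference, the two rule sets above could be produced by a mixin along these lines (a sketch, not the actual code in scss/modules/_mixins.scss):

```scss
// Hypothetical column mixin: shared multi-column attributes
@mixin column-attribs($cols, $gap) {
  column-count: $cols;
  column-gap: $gap;
  column-fill: balance;
  column-span: none;
  width: 100%;
}

.columns2 {
  @include column-attribs(2, 3em);
  line-height: 1.25em;
}

.columns3 {
  @include column-attribs(3, 10px);
}
```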