Case Study: Building Polymer Applications, Part 3


The project-list element is the meat of the application and uses some interesting techniques I’ve learned while creating Polymer elements.
This is an example of the JSON model used in the element. It has been redacted for length.

{
  "description": "The VR/AR web seems to be the latest buzz word in the web community.",
  "name": "Moving 2D content to a 3D world",
  "notes": "",
  "stage": "Idea",
  "type": "Code",
  "url": {
    "code": "",
    "other": "",
    "writeup": ""
  }
}

Knowing the structure of the JSON data will help make sense of the data bindings I used below:

<dom-module id="project-list">

In the elements.html file we include iron-flex-layout which gives us access to CSS mixins we can incorporate into our style sheets to create flex layouts.

The three @apply rules below are equivalent to manually creating a row wrap flex layout with space between items.

We do the same thing to lay out the links in card-actions.

    <style is="custom-style">
      body {
        margin: 0;
      }

      :host {
        --paper-card-header: {
          background-color: #3f51b5;
          color: white;
          text-align: center;
        };
      }

      .cards-container {
        width: 98%;
        /* Mixins equivalent to flex: row wrap */
        @apply(--layout-horizontal);
        @apply(--layout-wrap);
        /* Space between items */
        @apply(--layout-justified);
      }

      .card-content {
        color: #333;
        background-color: white;
        text-align: left;
      }

      .card-actions {
        /* Mixins equivalent to flex: row wrap */
        @apply(--layout-horizontal);
        @apply(--layout-wrap);
        /* Space between items */
        @apply(--layout-justified);
      }

Most paper elements have mixins defined. In this instance we use the --paper-card-header mixin to change the header’s background color, text color and alignment.

      paper-card {
        margin-bottom: 3em;
        height: 25%;
        width: 47.5%;
      }



As an added visual cue, I’ve removed the link underline by default and bring it back when the user hovers. I believe that this, combined with the different link color, is enough to tell users this is a link.

      a {
        text-decoration: none;
      }

      a:hover {
        text-decoration: underline;
      }

We use a media query to change the layout when the total width of the screen is smaller than 900 pixels (including the 300 pixels for the menu): we turn the layout vertical and make the paper-card elements 100% of the width of the containing element.

      @media screen and (max-width: 900px){
        .cards-container {
          @apply(--layout-vertical);
        }

        paper-card {
          width: 100%;
        }
      }

We use iron-ajax to load the project data from the JSON file.

The auto parameter will make the request fire automatically.

handle-as tells Polymer what type of response to expect.

last-response gives us the content of the response. We do an automatic data binding with the content so we can use it later in the element.

    <iron-ajax auto
      url="projects.json"
      handle-as="json"
      last-response="{{projects}}"></iron-ajax>

I use a repeating template (template is="dom-repeat") to create one instance of the template for each element in the source (in this case our JSON array). We use one-way bindings in this portion of the element. Since we’re only interested in displaying the data, not changing it, we’re OK with letting the data flow from the parent (iron-ajax) down into the template.

I’ve also aliased the projects array to project. This makes the code easier to read and reason through, both for myself-6-months-from-now and for other people interested in reading it.

    <div class="cards-container">
      <template is="dom-repeat" items="{{projects}}" as="project">
          <paper-card heading="[[project.name]]">
            <div class="card-content">
              <h3><strong>Project Stage: [[project.stage]]</strong></h3>

Description and project notes are written in Markdown. To accommodate this (I’m lazy and Markdown is much easier to write than HTML) I’ve included marked-element, a Polymer wrapper around the Marked Markdown library.


              <marked-element markdown="[[project.description]]">
                <div class="markdown-html"></div>
              </marked-element>

              <h3>Project Notes</h3>

              <marked-element markdown="[[project.notes]]">
                <div class="markdown-html"></div>
              </marked-element>

The links in the Links section use a different type of template that only stamps its content if the element is present. The dom-if template takes a single parameter (if) with the condition to test. If the condition is truthy (something that evaluates to true) then the content of the template is stamped, otherwise it’s skipped.

Because not all links have content, I wrap them in dom-if templates. If a link has content the condition returns true and the link shows up on the resulting card; if not, it’s left out.

              </div>

              <div class="card-actions">
                <template is="dom-if" if="{{project.url.code}}">
                  <paper-button><a href="[[project.url.code]]">Code</a></paper-button>
                </template>
                <template is="dom-if" if="{{project.url.writeup}}">
                  <paper-button><a href="[[project.url.writeup]]">Writeup</a></paper-button>
                </template>
                <template is="dom-if" if="{{project.url.other}}">
                  <paper-button><a href="[[project.url.other]]">Other</a></paper-button>
                </template>
              </div>
          </paper-card>
      </template>
    </div>



    <script>
      Polymer({
        is: 'project-list'
      });
    </script>

Next steps

Some of the things I’d like to do:

  • Research why iPad portrait mode displays weirdly. It may have to do with the media queries; the way I calculate the width to change the display may be incorrect
  • Move the backend to Firebase using the Polymerfire collection
  • Once the backend is moved to Firebase, wire in CRUD functionality

Case Study: Building Polymer Applications, Part 2


project-menu is a presentational element that only contains icons, links and linkable icon elements. It’s a good example of how we can compose custom elements with regular HTML content.

As with all our elements we import elements.html to make sure we have everything we need to get started. We then define our dom-module to be project-menu by assigning its ID.

<link rel="import" href="elements.html">

<dom-module id="project-menu">

Inside the template we define the styles for our elements.

      body {
        margin: 0;
        font-family: 'Roboto', 'Noto', sans-serif;
        background-color: #eee;
      }

      :host {
        display: block;
        --iron-icon-width: 48px;
        --iron-icon-height: 48px;
      }

      paper-item {
        --paper-item: {
          cursor: pointer;
        };
      }

      .menu-container {
        margin-left: 1em;
      }

      paper-item {
        display: block;
        padding-left: 1em;
        margin-bottom: 2vh;
      }

      paper-item a {
        text-decoration: none;
      }

    <div class="menu-container">

          <paper-item>
            <a href="">
              <iron-icon icon="link"></iron-icon>
            </a>
          </paper-item>


      <h4>Work Related Social Media</h4>

Each paper-item element has four components:

  • the paper-item element itself tells Polymer what type of custom element this is
  • a link to the correct site
  • an iron-icon element with either a built-in icon or an SVG icon located in the images folder
  • the text of the link

There may be better ways to compose this type of element but I’m comfortable with this one.

          <paper-item>
            <a href="">
              <iron-icon icon="link"></iron-icon>
              Publishing Project
            </a>
          </paper-item>
          <paper-item>
            <a href="">
              <iron-icon src="images/Google_plus.svg"></iron-icon>
              Google Plus
            </a>
          </paper-item>
          <paper-item>
            <a href="">
              <iron-icon src="images/linkedin.svg"></iron-icon>
              LinkedIn
            </a>
          </paper-item>
          <paper-item>
            <a href="">
              <iron-icon src="images/codepen-logo.svg"></iron-icon>
              Code Pen
            </a>
          </paper-item>
          <paper-item>
            <a href="">
              <iron-icon src="images/Twitter-Logo.svg"></iron-icon>
              Twitter
            </a>
          </paper-item>
          <paper-item>
            <a href="">
              <iron-icon src="images/Octicons-mark-github.svg"></iron-icon>
              GitHub
            </a>
          </paper-item>
    </div>

We then instantiate the element.


    <script>
      Polymer({
        is: 'project-menu'
      });
    </script>


This is about as simple as a menu can get. We don’t use animations and we don’t have sub menus to animate. Yet it gives a good initial idea of how to build a menu and provides a good starting point for enhancements.

ES6, Babel and You: Exploring the new Javascript

I haven’t been this excited about Javascript in a long time.

Javascript is becoming fun again: better async support with Promises and async/await, better modularity and reusability with classes and modules, a consistent and concise syntax for anonymous functions with arrows, and better support across Node.js and browsers.

Until support for ES6 (also known as ES2015) is complete across major browsers we still need to transpile the code to ES5, the version that is currently supported across browsers.

There are compatibility tables that guide you on native support for different features across ECMAScript implementations.


Before jumping into a more in-depth analysis of what we can do with Babel, let’s throw some terminology down to make our lives easier as we work.

  • ECMAScript: the Javascript standard maintained by Ecma International (formerly the European Computer Manufacturers Association). All Javascript implementations follow the ECMAScript standard
  • TC39: The Technical Committee in charge of the ECMAScript specification
  • ES4: A failed attempt at updating Javascript. There was no agreement between participants as to how many of the changes to implement, so the specification died and was never released. Some features from ES4 have made it into ES6
  • ES5: Released as a compromise after the ES4 debacle
  • ES6 / ES2015: The current standard version of ECMAScript, released in June 2015. Moves to incremental annual releases and staged features
  • ES7 / ES2016: The next release of the ECMAScript standard (async / await, originally slated for it, ultimately shipped in ES2017)
  • ESNext: My term for features that are currently at stage 3 on TC39’s proposal pipeline and, unless withdrawn, are likely to make it to stage 4 and release in the next major version of the specification

Enter Babel

Babel (formerly known as 6to5) is a transpiler. It takes ES6 or ESNext code and converts it to ES5 that runs natively in modern browsers. This makes it easier for developers to work with modern code without waiting for vendors to implement the feature you’re working with.
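As a rough sketch (not Babel’s literal output, and with names of my own choosing), the kind of transform involved looks like this:

```javascript
// ES6 input: an arrow function
const double = (x) => x * 2;

// Roughly what the es2015 preset produces: a plain ES5 function
var doubleES5 = function doubleES5(x) {
  return x * 2;
};

console.log(double(4));    // 8
console.log(doubleES5(4)); // 8
```

Both versions behave identically; the transpiled one simply avoids syntax that older engines don’t understand.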

Installing Babel

Babel is a Node application, so Node must be installed on your system. Then installing Babel’s CLI, ES2015 preset and stage-3 preset is as simple as the following command:

npm install -g babel-cli babel-preset-es2015 babel-preset-stage-3

To use the presets create a .babelrc file at the root of your working directory. It should look like this:

{
  "presets": [
    "es2015",
    "stage-3"
  ],
  "plugins": []
}

Each preset loads several plugins at once rather than making you load them individually.

Minimal Transpiler

I’ve also created a minimal-transpiler to automate the transpilation, linting and quality check for ES6 files.

npm install --save-dev babel-eslint babel-preset-es2015 \
babel-preset-stage-0 babel-preset-stage-3 babel-register \
eslint eslint-config-defaults gulp gulp-eslint \
gulp-jsdoc3 gulp-jshint gulp-load-plugins jshint

The Babel task (using ES6 syntax)

gulp.task("babel", () => {
  // $ is gulp-load-plugins; the size-reporting step is an
  // assumption based on the pretty/title options
  return gulp.src("app/es6/**/*.js")
    .pipe($.babel({
      presets: [
        "es2015"
      ]
    }))
    .pipe($.size({
      pretty: true,
      title: "Babel"
    }))
    .pipe(gulp.dest("app/js"));
});

Arrow syntax for anonymous functions

(Adapted from Exploring JS from Axel Rauschmayer)

My first foray into ES6 was with arrow functions, ES6’s new way of creating anonymous functions. I’ve always struggled with fat fingers and invariably write functoin and have to go back and fix it, so the less I have to write it the happier I am 🙂

Traditional function literals in ES5 are written like this:

// ES5
var selected = allJobs.filter(function (job) {
  return job.isSelected();
});
The “fat” arrow => (as opposed to the thin arrow ->) was chosen to be compatible with CoffeeScript, whose fat arrow functions are very similar. The example function above could be rewritten as follows:

// ES6 
var selected = allJobs.filter( (job) => {
  return job.isSelected();
});

It could be further reduced by eliminating the parens around the parameter and the {} around the return statement, as we’ll see below, but I don’t like the shorthand syntax as it is more error prone. It may be better for people who like shortcuts but I’m not one of them.
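As one example of how the shorthand can bite (my example, not from the original text): returning an object literal from an expression body needs wrapping parens, otherwise the braces are parsed as a function body instead of an object:

```javascript
// The extra parens around the object literal are required;
// without them the braces would be read as a block body.
const makePoint = (x, y) => ({ x: x, y: y });

console.log(makePoint(1, 2)); // { x: 1, y: 2 }
```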

Specifying parameters:

ES6 arrow syntax offers 3 ways to specify parameters:

  • If we have no parameters then we use () => {...}, with the empty parens indicating there are no parameters
  • With one parameter we can use x => {...} where x is the parameter specified on its own. In this case the parens are optional
  • More than one parameter requires us to use (x,y) => {...}; here the parens are required
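The three forms above can be sketched as follows (the variable names are mine):

```javascript
const zero = () => 42;        // no parameters: parens required
const square = x => x * x;    // one parameter: parens optional
const add = (x, y) => x + y;  // multiple parameters: parens required

console.log(zero());    // 42
console.log(square(3)); // 9
console.log(add(2, 3)); // 5
```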

Specifying a body:

x => { return x * x }  // block
x => x * x  // expression, equivalent to previous line

The statement block behaves like a normal function body. For example, you need return to give back a value. With an expression body, the expression is always implicitly returned.

Note how much an arrow function with an expression body can reduce verbosity. Compare:

const squares = [1, 2, 3].map(function (x) { return x * x });
const squares = [1, 2, 3].map(x => x * x)

Omitting the parentheses around the parameters is only possible if they consist of a single identifier:

> [1,2,3].map(x => 2 * x)
  [ 2, 4, 6 ]

As soon as there is anything else, you have to type the parentheses, even if there is only a single parameter. For example, you need parens if you destructure a single parameter:

> [[1,2], [3,4]].map(([a,b]) => a + b)
  [ 3, 7 ]

And you need parens if a single parameter has a default value (undefined triggers the default value!):

> [1, undefined, 3].map((x='yes') => x)
  [ 1, 'yes', 3 ]

The source of the this value is an important distinguishing aspect of arrow functions:

Traditional functions have a dynamic this; its value is determined by how they are called.

Arrow functions have a lexical this; its value is determined by the surrounding scope.

The complete list of variables whose values are determined lexically is:

  • arguments
  • super
  • this
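A small example (my own, not from the original article) of why a lexical this matters:

```javascript
const counter = {
  count: 0,
  addAll: function (items) {
    // The arrow function inherits `this` from addAll, so
    // this.count refers to the counter object. A traditional
    // function callback here would get its own `this`.
    items.forEach(() => { this.count += 1; });
    return this.count;
  }
};

console.log(counter.addAll([1, 2, 3])); // 3
```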

There are a few syntax-related details that can sometimes trip you up.

Syntactically, arrow functions bind very loosely. The reason is that you want every expression that can appear in an expression body to “stick together”; it should bind more tightly than the arrow function:

const f = x => (x % 2) === 0 ? x : 0;

As a consequence, you often have to wrap arrow functions in parentheses if they appear somewhere else. For example:

console.log(typeof () => {}); // SyntaxError
console.log(typeof (() => {})); // OK

On the flip side, you can use typeof as an expression body without putting it in parens:

const f = x => typeof x;

ES6 forbids a line break between the parameter definition and the arrow of an arrow function:

const func1 = (x, y) // SyntaxError
=> {
  return x + y;
};

const func2 = (x, y) => // OK
{
  return x + y;
};

const func3 = (x, y) => { // OK
  return x + y;
};

const func4 = (x, y) // SyntaxError
  => x + y;

const func5 = (x, y) => // OK
  x + y;

Line breaks inside parameter definitions are OK:

const func6 = ( // OK
  x,
  y
) => {
  return x + y;
};

The rationale for this restriction is that it keeps the options open regarding “headless” arrow functions in the future: if there are zero parameters, you’d be able to omit the parentheses.

Asynchronous code with promises

Promises are one alternative to callback hell. Creation is similar to using callbacks in that we use onload and onerror but this time we wrap them in a Promise as shown in the example below.

The promise API is surprisingly simple. I will only emphasize the elements I work with the most when creating promise-based async scripts. Mozilla’s MDN has a more in-depth article about promises you can refer to for more information.



Returns a promise that resolves when all of the promises in the iterable argument have resolved, or rejects as soon as one of the promises in the iterable argument rejects. This is also known as ‘fail fast.’

This method is useful when we need to group promises together or when we need something to happen only if all the required actions are successful. In the example below Promise.all will display the result of all the successful promises.

var p1 = Promise.resolve(42);
var p2 = 347;
var p3 = new Promise(function(resolve, reject) {
  setTimeout(resolve, 100, "Made it");
});

Promise.all([p1, p2, p3])
  .then(function(values) {
     console.log(values); // [42, 347, "Made it"]
  })
  .catch(function (error) {
     // Gets the first rejection among the promises
  });

If we change any of the values to a rejection, for example:

var p2 = Promise.reject("Sorry, can't do that");

The promise will reject with the message on p2 and will not wait for p3; the combined promise has already failed.

This is a similar technique to the ones used in Font Face Observer to handle multiple fonts:

var fontA = new FontFaceObserver('Family A');
var fontB = new FontFaceObserver('Family B');

Promise.all([fontA.load(), fontB.load()]).then(function () {
  console.log('Family A & B have loaded');
});


Promise.race() takes an array of promises (thenables and other values are converted to promises via Promise.resolve()) and returns a promise with the same value (resolve or reject) as the first settled promise.

We can use a promise race to create a timer. We first create a delay function to set up the time we will race against.

function delay(ms) {
  return new Promise(function (resolve, reject) {
    setTimeout(resolve, ms);
  });
}

We can then set up a race between a function we want to run and a delay. Say, for example, that we want to load a resource and make sure that it loads in less than the time we set in the delay function (2000 milliseconds in the example below).

If the httpGet promise resolves first, the race resolves and the then portion of the promise is executed.

If the delay promise resolves first then we throw an error; this will automatically reject the promise and the catch portion of the promise is executed.

Promise.race([
  // httpGet() returns a promise for the resource (the URL is a placeholder)
  httpGet('http://example.com/resource'),
  delay(2000).then(function () {
    throw new Error('Timed out');
  })
])
  .then(function (text) { ... })
  .catch(function (reason) { ... });


Returns a Promise object that is rejected with the given reason.

Promise.reject("Testing static reject").then(function(reason) {
  // not called
}, function(reason) {
  console.log(reason); // "Testing static reject"
});
This is the static equivalent to the catch statement.
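To make the equivalence concrete, here is the same rejection handled with catch instead of the second argument to then:

```javascript
Promise.reject("Testing static reject")
  .catch(function (reason) {
    // catch is shorthand for .then(undefined, onRejected)
    console.log(reason); // "Testing static reject"
  });
```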


Returns a Promise object that is resolved with the given value. This is the opposite of Promise.reject.

If the returned value is a thenable (i.e. has a then method), the returned promise will “follow” that thenable, adopting its eventual state; otherwise the returned promise will be fulfilled with the value.
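For example, Promise.resolve() will adopt the state of a hand-rolled thenable, a plain object with a then method (this object and its value are made up for illustration):

```javascript
var thenable = {
  then: function (resolve) {
    resolve('from thenable');
  }
};

// The returned promise "follows" the thenable and fulfills
// with the value the thenable resolves to.
Promise.resolve(thenable).then(function (value) {
  console.log(value); // "from thenable"
});
```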

Generally, if you don’t know whether a value is a promise or not, Promise.resolve(value) it and work with the return value as a promise.

Promise.resolve("Success").then(function(value) {
   console.log(value); // "Success"
}, function(value) {
   // not called
});


These are the two methods that you will see the most often when working with promises.


Adds fulfillment and rejection handlers to the promise, and returns a new promise resolving to the return value of the called handler, or to its original settled value if the promise was not handled (i.e. if the relevant handler onFulfilled or onRejected is undefined).

function loadImage(url) {
  return new Promise( (resolve, reject) => {
    var image = new Image();
    image.src = url;

    image.onload = function() {
      resolve(image);
    };

    image.onerror = function() {
      reject(new Error('Could not load image at ' + url));
    };
  });
}

We then define the functions that we’ll use in our code. Note that all the functions have an image parameter and that we return it at the end of every function. This will become important when we start working with the promise code later in the section.

For this example the functions just log to console. In a real application we would import a module like ImageMagick or GraphicsMagick to actually handle the image manipulation.

function scaleToFit(width, height, image) {
  console.log('Scaling image to ' + width + ' x ' + height);
  return image;
}

function watermark(text, image) {
  console.log('Watermarking image with ' + text);
  return image;
}

function grayscale(image) {
  console.log('Converting image to grayscale');
  return image;
}

When we create the pipeline we use the functions. There is a single catch statement at the end of the chain. This will catch any errors bubbling from the other functions in the chain.

// Image processing pipeline
function processImage(image) {
  return Promise.resolve(image)
    .then((image)  => {
       return scaleToFit(300, 450, image);
    })
    .then((image)  => {
       return watermark('The Real Estate Company', image);
    })
    .then((image)  => {
       return grayscale(image);
    })
    .catch((error) => {
       console.log('we had a problem in running processImage ' + error);
    });
}

The above can also be represented in a more concise manner with the following code:

function processImage(image) {
  // Image is always last parameter preceded by any configuration parameters
  var customScaleToFit = scaleToFit.bind(null, 300, 450);
  var customWatermark = watermark.bind(null, 'The Real Estate Company');

  return Promise.resolve(image)
    .then(customScaleToFit)
    .then(customWatermark)
    .then(grayscale);
}

Unless I have a very compelling reason (and I have yet to find one) I prefer the first syntax as it keeps me from jumping between functions to troubleshoot… Yes, it sacrifices the compactness of the second version but it saves me from binding the functions and makes my life easier in the long run.

Another API that uses promises extensively is the Fetch API, the replacement for XHR. The example below uses fetch to get an image from the network and insert it in an image tag.

var myImage = document.querySelector('img');

fetch('flowers.jpg')
  .then(function(response) {
    return response.blob();
  })
  .then(function(myBlob) {
    var objectURL = URL.createObjectURL(myBlob);
    myImage.src = objectURL;
  });


Appends a rejection handler callback to the promise, and returns a new promise resolving to the return value of the callback, or to its original fulfillment value if the promise is instead fulfilled.

Whatever you return in an error handler becomes a fulfillment value (not rejection value!). That allows you to specify default values that are used in case of failure:

// retrieveFileName() stands for any operation returning a
// promise for a file name
retrieveFileName()
  .catch(function () {
    // Something went wrong
    // Use this value instead
    return 'Untitled.txt';
  })
  .then(function (fileName) {
    // work with fileName here
  });

Modularity / Reusability with Classes

I was surprised to find classes as part of ES6. Ever since the language was created, developers have had to learn to work with prototypal inheritance: how to make the chain work for us and what pitfalls to avoid when doing so. The first question I asked when I started looking at ES6 was: what are classes in ES6?

They’re syntactic sugar on top of prototypal inheritance. Everything we’ll discuss in this section is built on top of the traditional prototypal inheritance framework, and the parser will use that under the hood while we do all our shiny work with classes.
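We can verify the sugar claim directly with a quick check (using a throwaway class of my own): a class is still a function, and its methods live on the prototype:

```javascript
class Greeter {
  hello() {
    return 'hi';
  }
}

// A class declaration creates a constructor function...
console.log(typeof Greeter); // "function"
// ...and its methods are placed on the prototype, just as if we
// had written Greeter.prototype.hello = function () {...} by hand
console.log(Greeter.prototype.hello.call({})); // "hi"
```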

This will be our example to look at how classes work. It defines a class Person with three default elements:

  • first name
  • last name
  • age

It also defines 2 methods:

  • fullName returning the first and last name
  • howOld returning the string {name} is {age} years old

Note that the methods use template literals with string interpolations.

class Person {
  constructor(first, last, age) {
    this.first = first;
    this.last = last;
    this.age = age;
  }

  fullName() {
    return `${this.first} ${this.last}`;
  }

  howOld() {
    return `${this.first} is ${this.age} years old`;
  }
}

To instantiate a new object of class Person we must use the new keyword like so:

let carlos = new Person('carlos', 'araya', 49);

Now we can use the class methods like this:

carlos.fullName(); // returns "carlos araya"
carlos.howOld();   // returns "carlos is 49 years old"

Subclasses, extends and super

Now let’s say that we want to expand on the Person class by assigning additional attributes to the constructor and add additional methods. We could copy all the material from the Person class into our new class, call it Engineer, but Javascript saves us from having to do so. The extends keyword allows us to use a class as the basis for another one.

To continue the example, we’ll extend Person to become an Engineer. We’ll say that an Engineer is a person with all the attributes and methods of the Person class plus 2 additions:

  • They belong to a department
  • They have a favorite language represented by the lang parameter

The Engineer class is presented below:

class Engineer extends Person {
  constructor(first, last, age, department, lang) {
    // super calls the parent class constructor with the indicated attributes
    super(first, last, age);
    this.department = department;
    this.language = lang;
  }

  departmentBelonging() {
    return `${this.first} is in the ${this.department} department`;
  }

  favoriteLanguage() {
    return `${this.first} favorite language is ${this.language}`;
  }
}

In the constructor we take the same three values as the Person class plus the two new ones. Rather than assigning the first three ourselves, we simply call the constructor of the parent class using the super keyword.

We then add the methods specific to the Engineer class: What department they belong to and what’s their favorite programming language.

The instantiation is the same as before

const william = new Engineer('William', 'Cameron', 49, 'engineering', 'C++');

Now the cool part. Even though we didn’t add the methods fullName and howOld, we get them for free because they are defined in the parent class. With william (defined above) we can do:

william.fullName(); // returns "William Cameron"

william.howOld();   // returns "William is 49 years old"

and we get the methods that are exclusive to our Engineer class that are not part of Person:

william.departmentBelonging(); // returns "William is in the engineering department"

william.favoriteLanguage();    // returns "William favorite language is C++"

To come

Modules are a static alternative to classes. Where you can instantiate a class directly you have to import modules. Think of modules as the ES6 native version of CommonJS and AMD modules.

I’m having a hard time getting transpiled modules to work in a browser so, until I do, I will hold off on writing about them. I want them to be useful now rather than a theoretical example.

Abstractions versus underlying structures

Maximiliano Firtman wrote Service Workers replacing AppCache: a sledgehammer to crack a nut where he makes a case for Service Workers not being ready to replace AppCache, regardless of how broken it is.

I happen to disagree with it for the same reasons Jake Archibald listed in Application Cache is a Douchebag and for reasons having to do with the extensible web.

I tried to create an Application Cache for making some of my content offline. I’ll kindly say I failed because App Cache did not deliver on what it promised. What good is it to have an offline experience that doesn’t work consistently or at all?

The second, and most worrisome, point is the question Maximiliano was asked:

My second alarm sign appeared a couple of weeks ago during a training in San Francisco. One of my students, after creating our first Service Worker with the basic AppCache code, asked me: “Ok, now tell me, where is the jQuery of Service Workers?”

Don’t get me wrong, I’ve grown to like jQuery and I’ve used it to add functionality to my projects. But we shouldn’t be teaching the abstraction before we teach the basics.

Yes, as you’ve seen before, creating Service Workers is tedious, but the code is highly reusable. Yes, you will find that most of the samples out there on the Web are the same code! But in my opinion, as long as you understand what the code is doing, it’s OK to have multiple copies of the same code.

When jQuery first came out there was a group of developers and users who thought jQuery was the perfect solution and never bothered to move from there. For some of them it might be enough but for others it isn’t and, worst of all, it hurts when people try to learn what’s underneath the abstraction.

This is a minimum viable replacement for App Cache using Service Workers (taken from HTML5 Rocks):

var CACHE_NAME = 'my-site-cache-v1';
var urlsToCache = [
  '/',
  '/styles/main.css',
  '/script/main.js'
];

self.addEventListener('install', function(event) {
  // Perform install steps
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(function(cache) {
        console.log('Opened cache');
        return cache.addAll(urlsToCache);
      })
  );
});

self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request)
      .then(function(response) {
        // Cache hit - return response
        if (response) {
          return response;
        }
        return fetch(event.request);
      })
  );
});

And this is a barebones sw-toolbox based implementation of the same script (adapted from the sw-toolbox demo) with, as Maximiliano puts it, jQuery for Service Workers.

(global => {
  'use strict';

  // Load the sw-toolbox library.
  // (the path is an assumption; adjust to where sw-toolbox lives)
  importScripts('bower_components/sw-toolbox/sw-toolbox.js');

  // List of files to precache (placeholder list). This should be automated.
  const FILES_TO_PRECACHE = [
    'index.html',
    'css/main.css',
    'js/main.js'
  ];

  // Turn on debug logging, visible in the Developer Tools' console.
  global.toolbox.options.debug = true;

  // precache the files in FILES_TO_PRECACHE
  global.toolbox.precache(FILES_TO_PRECACHE);

  // By default, all requests will use the toolbox.networkFirst cache
  // strategy, and their responses will be stored in the default cache.
  global.toolbox.router.default = global.toolbox.networkFirst;

  // Boilerplate to ensure our service worker takes control of the page
  // as soon as possible.
  global.addEventListener('install',
    (event) => event.waitUntil(global.skipWaiting()));
  global.addEventListener('activate',
    (event) => event.waitUntil(global.clients.claim()));
})(self);

Just like with jQuery, I am not against using the abstraction, but I’m very leery of people who only learn the abstraction without learning or understanding the underlying code.

Sure, the abstraction looks nice but how different is it really from the raw Service Worker? Sure, the actual process of learning how Service Workers and the Cache API work is more tedious and error prone but it helps with whatever is coming next.

sw-toolbox will not necessarily help when people move from basic caching to push notifications and background sync, so we need to learn how the basics work before moving to more advanced features.

If we don’t need the bells and whistles then sw-toolbox is the better solution. If we want to move to more advanced features it’s in our best interest to learn the basics so we don’t struggle with the more complex concepts later.

Application Shells and Service Workers: Links and Resources