Empathy In Design and Development

Empathy – the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another of either the past or present without having the feelings, thoughts, and experience fully communicated in an objectively explicit manner;

— Merriam-Webster

How many times have we told a teammate or a potential client “that’s easy” or “piece of cake” when it comes to something like building a WordPress theme or installing a web server from source? Even if we have the data to back up the assertion that it’s easy, we may still come across as arrogant know-it-alls, and they would probably be right.



I saw this video after a conversation about WordPress development in which I claimed that customizing a WordPress theme should not be complicated and pointed out that I’ve done multiple customizations using different techniques.

Then it made me think: how hard was setting up a WordPress installation in 2006, when I posted the first content to my blog? I remember why I moved away from Movable Type, and I realized that I take those beginner experiences for granted and dismiss them quickly because I’ve already done this a hundred times.

This is also what comes to mind when I hear Tal Oppenheimer and Bruce Lawson speak about the next billion users. We take connectivity for granted and believe we will always be able to optimize our applications to get the best speed possible.

There are cultural issues too: Bruce makes the point that there are some people in Thailand who use only one name; if we require a first and last name in a form (for example, to pay for an item), people will either lie or not use our application. What other similar gems hide behind your application’s users?

WWW: World Wide Web, not Wealthy Westerners’ Web

— Bruce Lawson

So how do we become more empathetic as designers?

I know this will make me sound like an arrogant asshole, but please bear with me.

The first step is to remember who we are doing this for. It may be you in six months, your circle of developer friends in your town, or developers halfway across the world. They are people, and they may do things differently than we do.

I remember that when I first started teaching how to build online courses, a trainer from the High Tech Center Training Unit of the California Community Colleges showed videos of users with disabilities working with online courses. It was an eye-opening and humbling experience. Things I took for granted when using keyboard and mouse were a whole different experience when the keyboard was the only navigation tool.

So we need to account for our users. We’ll look at it from different aspects:

  • research
  • design
  • accessibility
  • tools and process

Docker for front end development

I’ve dabbled in working with Docker ever since it first became available for the Mac as Boot2Docker, but I never actually did anything with it. Not too long ago I started working with the idea of creating a Docker app to store all the tools I use when writing or creating a front end application.

In building this container I will describe the basics of building a Dockerfile and using it to interact with your local filesystem. This will also go in a Github repo in case I need to duplicate the project at a later date.

Quick Docker overview

The idea of a container is that, like a virtual machine, it allows for full encapsulation of an application and its dependencies. For example, if we’re building a Node.js application with a MongoDB backend, you can configure the container to download and install Node and MongoDB, perform any needed configuration, and be ready to begin development. Furthermore, you should get the same results any time you build an image from the same Dockerfile.
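As a minimal sketch of what that looks like (a hypothetical example, not the image we’ll build later), a Dockerfile can be as small as two instructions:

FROM node:8.4.0
CMD ["node", "--version"]

Building it with docker build -t node-test . and running it with docker run --rm node-test should print the same Node version on any machine that builds from this file.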

Installing Docker

Linux

There is no graphical installer for Linux like the ones we’ll discuss later for Mac and Windows; you have two options: you can use the installer script or you can download a binary.

The install script is a shell script that will install Docker and associated dependencies. The process below uses Curl to download the script for your inspection and then runs it. We could combine the download and run into a single command, but it’s always a good idea to check that the script matches the original source before running it.

Make sure to verify that the contents of the script you downloaded match the contents of install.sh located at https://github.com/docker/docker-install before executing.

Curl

If using Curl, run the following commands to install Docker.

curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh

If you want to download and install a test version of Docker use this command instead.

curl -fsSL test.docker.com -o test-docker.sh
sh test-docker.sh

Wget

If you use Wget this is the command to download the production install script.

wget -qO- https://get.docker.com > get-docker.sh
sh get-docker.sh

And this is the command to download the test/development version.

wget -qO- https://test.docker.com > test-docker.sh
sh test-docker.sh

Downloading a binary

If you don’t want to use the installer, or would like to use a different version of Docker than the one in the installer, you can download a binary from the Docker store. The downside to this approach is that you will have to manually install updates.
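As a sketch, the manual process looks like this (the URL and version here are examples; pick the build you actually need from the Docker store):

# Download a static binary bundle, unpack it and copy the binaries onto the PATH
curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-17.06.1-ce.tgz -o docker.tgz
tar xzvf docker.tgz
sudo cp docker/* /usr/local/bin/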

Windows and Mac

Installing for Windows and Mac is as simple as downloading a binary package from the Docker store. Unlike the installer for Linux, the installers for Mac and Windows contain everything necessary to run Docker, build images and use other Docker tools.

Verifying successful installation

After installation, start Docker and then, from a terminal, type docker version. The result should be something similar to the block below.

$ docker version
Client:
 Version:      17.06.1-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 22:53:38 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.06.1-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 22:54:55 2017
 OS/Arch:      linux/amd64
 Experimental: true

You are good to go.

Building the Dockerfile

Rather than create an empty image for a project and start from scratch every time, we can create a Dockerfile and use it as the basis for our projects.

A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the very last one is read-only.

Each layer is only a set of differences from the layer before it. The layers are stacked on top of each other. When you create a new container, you add a new writable layer on top of the underlying layers. This layer is often called the “container layer”. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.

The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.

Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state.
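You can see these layers for yourself with docker history, which lists every layer in an image along with the instruction that created it and its size. For example, once we’ve built the image described below:

docker history front-end-dev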

In this case we want the container to do the following:

  1. Install an image from Bitnami that already has Node installed
  2. Install build tools, Bash, Curl, Git and Ruby
  3. Force the container to use Bash
  4. Install SASS and SCSS Lint using Ruby Gems
  5. Install node-gyp and node-pre-gyp
  6. Install a set of packages globally using Npm
  7. Make a directory to store our code (named app/code) and set its (Unix) permissions
  9. Copy a default package.json and gulpfile.js to our work area
  9. Install the packages specified in package.json
  10. Expose port 3000 from our container to the outside world
  11. Provide the user a bash shell to work from

To make the process easier to work through I’ve broken the Dockerfile into smaller chunks that roughly correspond to the items in the list. This will make the entire process easier to understand and reason through.

Install the image

The first step in any Dockerfile is to specify the base image we want to work from. In this case I’ve chosen to use the latest Bitnami image for Node rather than a bare operating system. I lose the ability to install multiple versions of Node with NVM, but I’d rather have one working version in the image than none at all, so I’m OK with the compromise for now.

FROM bitnami/node:8.4.0-r1

Install tools and utilities

Next we use the apt-get package manager to install tools and utilities for our container. These include: build-essential to handle compiling and linking in case a Node package has no binary version for Linux, curl to download files, the Bash shell, the Git version control system, and Ruby and Rubygems to handle two special cases.

There is a second block of libraries that I’m installing to support other libraries and gems: libffi6, libffi-dev and ruby-dev are needed to install a Gem dependency, and python and python-dev are needed by node-gyp.

# Fetch and install system tools
RUN apt-get update && apt-get -y -q --no-install-recommends install \
    build-essential \
    curl \
    bash \
    git \
    libffi6 \
    libffi-dev \
    python \
    python-dev \
    ruby \
    ruby-dev \
    rubygems

Force the container to use Bash

I prefer to work with Bash and don’t care for the standard shell that comes with Debian because it will not let me do completions, so I’ve added a layer to remove the default shell (/bin/sh) and create a symbolic link to Bash to make it the default shell.

Now calling either /bin/bash or /bin/sh will run Bash. Problem solved, I think.

RUN rm /bin/sh && ln -s /bin/bash /bin/sh

Install SASS and SCSS Lint

Now that we’ve installed Ruby and Rubygems we can install the Ruby packages (gems) that we need for some of our Gulp tasks later on.

I know that there is a version of SASS that works directly with Node and is written in C. I’ve used node-sass and the associated libsass but, despite what the authors say about functional parity between Ruby SASS and libsass, some aspects of my tasks did not work, so rather than go through the hassle of debugging I’ve chosen to stay with the Ruby version.

RUN gem install \
    sass \
    scss_lint

Make a directory and set its (Unix) permissions

Before we can install anything we need to create a directory in which to place our files. We use standard Unix commands to create the directory (mkdir) and set its permissions (chmod). Because this will be local to our image, I felt it was OK to make it publicly accessible (everyone can read, write and execute any file in the directory).

If you want to learn more about permissions, or check for values that will make these permissions more restrictive, see Unix – File Permission / Access Modes.

# Create the development directory
RUN mkdir /app/code/ && chmod 777 /app/code/

Copy defaults to work area

I hate reinventing the wheel so rather than create a new gulpfile and install packages manually for every project, I’ve copied my default gulpfile.js and package.json files to the root of my Docker development directory and use a layer to copy them into the container.

# If this works it should copy the package.json and gulpfile.js
# to the code directory
COPY package.json gulpfile.js /app/code/

Node related tasks

I’m using four layers to complete all the Node-related tasks for this container. Since I changed the base image I no longer need to install NVM or Node, but I also lose the ability to test code with different versions of Node. Until I figure out how to install NVM at build time, I’ll consider this a fair trade.

The first two layers make sure that the tree beneath /opt/bitnami/node/lib/ is publicly writeable, to avoid possible errors.

We install packages for Node in three stages. The first stage installs node-gyp and node-pre-gyp, which are used to install binary packages. At least two of the packages I install from package.json need them, so it’s best to install them now.

The next stage installs global packages. These will give me access to tools like Assemble, ESLint, Gulp and Netlify without having to create crazy npm script commands to make them work.

The task continues by installing a small set of packages globally so we get command line tools for the following programs:

  • NPM (a later version than the one installed with Node)
  • Assemble
  • ESLint
  • Gulp
  • Netlify

Polymer and Firebase were part of the list but they triggered errors in NPM when working inside Docker so I removed them. If I need them for a particular project I can always install them from the shell.

After copying package.json I can install the packages listed there by running npm install. One thing to do periodically is update versions and make sure they still work; other than that, the versions already there are known to work. This is the longest stage and the one where we could potentially find the most issues, especially with dependencies that need to be compiled.

# Make the tree under /opt/bitnami/node/lib/ publicly writeable
RUN /bin/bash -c "chmod -R 777 /opt/bitnami/node/lib/"
RUN /bin/bash -c "chmod -R 777 /opt/bitnami/node/lib/node_modules/"

# Install Gyp related tools for Node binary packages
RUN npm install -g \
    node-pre-gyp \
    node-gyp

# Install global packages
RUN npm install -g \
    assemble \
    eslint \
    gulp-cli \
    netlify-cli \
    npm

# Install packages from package.json
RUN npm install

Expose port from container to outside world

This layer uses EXPOSE to tell the world what port (or ports) the container will listen on. When we initialize the container we’ll use this port to connect the container to our local development environment. Because the development server I set up in Gulp uses port 3000 we’ll use the same port.

# Expose default gulp port
EXPOSE 3000

Provide a bash shell for user to work from

Because we are providing interactive tools I want to make sure that the user of the container has a way to interact with the provided tools. We set up a WORKDIR pointing to our code directory and then use CMD to set up our default shell.

# Run with bash
WORKDIR /app/code/
CMD ["/bin/bash"]

Now to test.

Building the Docker image, creating a Github repo and pushing the image to the Docker Registry

Now that we know what we want in our image we can actually build it. The command is simple: change to the directory where you put your Dockerfile and run the following command:

docker build -t front-end-dev -f Dockerfile.docker .

The flags I used are as follows:

  • -t front-end-dev assigns a name to the image.
  • -f Dockerfile.docker specifies the Dockerfile we’ll use to build the image.

The period at the end (.) indicates the build context, the location to build the image from.

By default Docker caches the layers used to create an image. During development, where there may be many changes, you may consider disabling caching using --no-cache=true.
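For example, using the same image name as the build command above:

docker build --no-cache=true -t front-end-dev -f Dockerfile.docker .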

We’ll also create a Github repository to store the files. Right now there are only three, but the number may increase as the project becomes more complex. Follow the standard procedure:

  1. Create the repository on Github
  2. In your local directory initialize an empty Git repo (git init)
  3. Add the project files (git add .) and commit them to the repo (git commit -am "message here")
  4. Push the files to the repository for the first time (git push -u origin master)

git init
git add .
git commit -am "initial message here"
git push -u origin master

The final stage is to push your image to the Docker Hub, a central repository of Docker-related images. This will also simplify building the container later on.

To make this easier I rebuilt the image with a namespace (my username in Docker Hub) and a tag name in addition to the image name, like so:

docker build -t elrond25/front-end-dev:1.0 -f Dockerfile.docker .

I logged into Docker Hub and created a project called front-end-dev.

Next, I logged in to Docker Hub from the command line:

docker login -u elrond25 -p <password>

and finally I pushed the image to the registry:

docker push elrond25/front-end-dev:1.0

The public URL for the image is https://hub.docker.com/r/elrond25/docker-front-end/
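With the image published, anyone (including future me on a different machine) can download it with a single command:

docker pull elrond25/front-end-dev:1.0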

Further ideas: Continuous builds

Rather than rebuild and upload the image every time I make changes, I’ve chosen to use Docker Hub’s ability to build the image on the Hub whenever an associated Git repository (Github or Bitbucket) changes.

Configuring automated builds is a two-step process. First you must link your Github and Docker Hub accounts:

  1. Log into Docker Hub
  2. Navigate to Profile > Settings > Linked Accounts & Services
  3. Click the service you want to link (Github or Bitbucket)
    • The system prompts you to choose between Public and Private and Limited Access. The Public and Private connection type is required if you want to use the Automated Builds.
    • Press Select under Public and Private connection type.
  4. The system prompts you to enter your service credentials (Bitbucket or GitHub) to login

After you grant access to your code repository, the system returns you to Docker Hub and the link is complete.

The second step is to create the automated build:

  1. Select Create > Create Automated Build from Docker Hub
  2. The system prompts you with a list of User/Organizations and code repositories
  3. Select from the User/Organizations
    • Optionally, type to filter the repository list
  4. Pick the project to build
  5. The system displays the Create Automated Build dialog
    • The dialog assumes some defaults which you can customize. By default, Docker builds images for each branch in your repository. It assumes the Dockerfile lives at the root of your source. When it builds an image, Docker tags it with the branch name
  6. Customize the automated build by pressing the Click here to customize this behavior link
  7. Specify which code branches or tags to build from. You can add new configurations by clicking the + (plus sign). The dialog accepts regular expressions.
  8. Click Create.

Create container for new project

We now have a working image pushed to the Docker Hub where other people can use it. All we have left is to actually build a container from the image. I’m doing this last because I want to leverage the Hub to download the image rather than use a locally built one.

docker run -v `pwd`:`pwd` -w `pwd` -it --rm elrond25/front-end-dev:1.0

This time it’s a single command, docker run, with the following parameters:

  • The -v flag mounts the current working directory into the container
  • The -w flag sets the working directory inside the container to the value returned by pwd
  • --rm automatically removes the container when it exits
  • -it instructs Docker to create an interactive bash shell in the container

So this combination executes the command using the container, but inside the current working directory. It will also provide a shell inside the container to run commands from.
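One caveat worth knowing: EXPOSE only documents the port, it does not publish it to the host. If you want to reach the Gulp development server on port 3000 from a browser on your machine, add the -p flag when creating the container:

docker run -v `pwd`:`pwd` -w `pwd` -p 3000:3000 -it --rm elrond25/front-end-dev:1.0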

Further ideas: Integrating MongoDB and other tools

So far we’ve only discussed building single-use containers. One of the biggest attractions of Docker is that you can create applications using multiple containers with a separate tool called Docker Compose.

In this example we’ll build a WordPress application using two images: one for WordPress 4.8.1 running PHP 7.0 on Apache, and one for MySQL 5.7. This will be similar to the example on the Docker samples site but tailored to run the latest versions of the software.

Create a stack.yaml file and copy the content below to it:

version: '3.1'

services:

  wordpress:
    image: wordpress:4.8.1-php7.0-apache
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_PASSWORD: example

  mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example

Then the command to bring this application up is:

docker-compose -f stack.yaml up

This will automatically download the images (if needed) and configure them to give you a WordPress installation ready for you to configure.
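Once the containers are up, the WordPress installer should be reachable at http://localhost:8080 (the ports entry maps host port 8080 to port 80 in the container). When you’re done, the matching command stops and removes the containers:

docker-compose -f stack.yaml down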

This image does not provide any additional PHP extensions or other libraries, even if they are required by popular plugins. There are an infinite number of possible plugins, and they potentially require any extension PHP supports. Including every PHP extension that exists would dramatically increase the image size.

If you need additional PHP extensions you’ll need to create your own image FROM this one. The documentation of the php image explains how to compile additional extensions, and the wordpress Dockerfile has an example of doing this. It looks like this:

# install the PHP extensions we need
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev \
  && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
    && docker-php-ext-install gd
RUN docker-php-ext-install mysqli

Closing thoughts

The two examples in this post are fairly simple as far as using Docker is concerned, but they give us an idea of what we can do and a starting point for further experimentation.

The next experiment is to create an image similar to front-end-dev but set up for Express and MongoDB development. Then we can compose that image with a MongoDB image from the Docker library and have a fully working development environment.

This also gives us the opportunity to play and experiment with applications we wouldn’t normally use. I’m wondering if Apache Cassandra would work better than MongoDB. Now I can create stacks with both databases and test the code.
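As a sketch of the idea (assuming the official cassandra image from the Docker library), swapping or adding a database is just another service entry in the stack file:

  cassandra:
    image: cassandra:latest
    ports:
      - 9042:9042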

The sky is the limit… with moderation 🙂


Quick Note: Loading images using JavaScript

I’m not a fan of doing this, but there are use cases where we need to retrieve an image from the network and place it in the page. One case that comes to mind is retrieving generated images from a real-time application to display in web pages or other parts of a web application.

In the example below I’ve added numbered comments to the specific lines I want to discuss.

fetch('https://rivendellweb.net/blog/wp-content/uploads/2016/10/IMG_0813.jpg') // 1
    .then(response => {
      return response.blob(); // 2
    })
    .then(imageBlob => {
      let img = document.createElement('img'); // 3
      img.src = window.URL.createObjectURL(imageBlob); // 4
    })
  // 5  Place the image
  .catch(err => { // 6
    console.log('There was an error loading the image ', err);
  });
  1. Fetch the image
  2. Return the response as a blob (binary object)
  3. Create an image element
  4. Create a src attribute by using URL.createObjectURL with the imageBlob as the parameter
  5. Place the image in the page. Left intentionally blank in the code because it depends on usage; see the sketch below
  6. Catch any errors and log them to the console
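For step 5, a minimal sketch of placing the image (assuming we simply want it appended to the end of the document body) would go inside the second then callback, since img is scoped there:

    .then(imageBlob => {
      let img = document.createElement('img');
      img.src = window.URL.createObjectURL(imageBlob);
      document.body.appendChild(img); // 5: place the image
    })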


Migrating projects to Gulp 4.0 and ES6

This post updates my April 2016 post, Gulp Workflow: Looking at Gulp 4, and provides additional insights into what has changed and how to update existing Gulp 3 files to Gulp 4.

Even though Gulp 4 is still at Alpha 2, I will begin moving my projects to it. I will use Babel first to make sure the projects work with older versions of Node, and will move away from Babel to @std/esm as support for ES6 becomes more widespread.

Yes, Gulp 4.0 is real. Yes, it has been in development for a long time, but it appears to be stable enough to work with. The biggest issue is the set of incompatible changes between versions 3 and 4: the format for Gulp tasks and commands has changed, and we need to change our build files as a result.

I will also take the time to make the gulpfile and the build system ES6 compliant by trying two approaches: running the build system through a Babel transformer, and using @std/esm as a way to work with modules using ES6 syntax.

The tasks to complete

I want to be able to complete the following tasks:

  • compile SCSS into CSS
  • run Autoprefixer on our CSS

We can use this as the basis for further work and refinement.

Running Gulp with ES6: Babel Core

Recent versions of Node have done a great job of implementing ES6+ features, and it’s been almost transparent for me to start using arrow functions and other ES6 features in my build scripts.

The one area that is lacking is modules. It’s a long story, but it boils down to this: Node and its ecosystem support CommonJS modules, and implementing ES6 modules has proved hard without breaking the thousands, if not millions, of lines of Javascript running in Node. The best description of the issues I’ve found is James M. Snell’s Node.js, TC-39, and Modules.

Until we have ES6 module support in Node we have to come up with workarounds. The easiest one is to use Babel to transpile and parse the gulpfile. This has the advantage that it will work in older versions of Node that lack ES6 support or lack full support for the specification.

To make Gulp work with Babel we need to complete the following steps:

  1. Make sure that babel-core is installed as a dependency
  2. Rename the gulpfile to gulpfile.babel.js
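One assumption worth making explicit: Babel also needs to be told how to transform the code. In the Babel 6 era that means a .babelrc file at the project root with a preset installed (babel-preset-es2015 in this sketch):

{
  "presets": ["es2015"]
}

With that in place the gulpfile looks like this: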
import gulp from "gulp";
// SASS: gulp-ruby-sass matches the sass(source, options) call style used below
import sass from "gulp-ruby-sass";
// PostCSS and plugins; autoprefixer is the plain PostCSS plugin, not the gulp wrapper
import postcss from "gulp-postcss";
import autoprefixer from "autoprefixer";
// Provides $.size and $.sourcemaps (assumes gulp-size and gulp-sourcemaps are in package.json)
import loadPlugins from "gulp-load-plugins";
const $ = loadPlugins();

const sassPaths = {
  src: "src/sass/**/*.scss",
  dest: "src/css/"
};

// Tasks Begin
gulp.task("sass", () => {
  return sass(`${sassPaths.src}`, {
    sourcemap: true, style: "expanded"
  })
  .pipe(gulp.dest(`${sassPaths.dest}`))
  .pipe($.size({
    pretty: true,
    title: "SASS"
  }));
});

gulp.task("processCSS", [sass], () => {
  // What processors/plugins to use with PostCSS
  const PROCESSORS = [
    autoprefixer({browsers: ["last 3 versions"]})
  ];
  return gulp
    .src("src/css/**/*.css")
    .pipe($.sourcemaps.init())
    .pipe(postcss(PROCESSORS))
    .pipe($.sourcemaps.write("."))
    .pipe(gulp.dest("site/static/css"))
    .pipe(
      $.size({
        pretty: true,
        title: "processCSS"
      })
    );
});

In this little example we’ve used template literals, arrow functions and module imports. It will work regardless of the Node version we’re working with, but it requires an additional dependency and may take longer to work on large codebases.

@std/esm

While using Babel is not hard, it’s repetitive, and Node 4 and later have increasing support for ES6 features. That’s where @std/esm comes in:

I’m excited to announce the release of @std/esm (standard/esm), an opt-in, spec-compliant, ECMAScript (ES) module loader that enables a smooth transition between Node and ES module formats with near built-in performance! This fast, small, zero dependency package is all you need to enable ES modules in Node 4+ today 🎉🎉🎉

— ES Modules in Node Today!

I’m really excited about this package as it, temporarily, fixes the only ES6 feature missing from Node, module imports, without requiring Babel.

To get @std/esm working you need to:

  1. install @std/esm as a dependency on your project
  2. require the module using require("@std/esm") before you import any modules

The example created to work with Babel looks like this when using @std/esm:

require("@std/esm")
import gulp from "gulp";
//SASS
import sass from "gulp-sass";
// Post CSS and Plugins
import postcss from "gulp-postcss";
import autoprefixer from "gulp-autoprefixer";

const sassPaths = {
  src: "src/sass/**/*.scss",
  dest: "src/css/"
};

// Tasks Begin
gulp.task("sass", () => {
  return sass(`${sassPaths.src}`, {
    sourcemap: true, style: "expanded"
  })
  .pipe(gulp.dest(`${sassPaths.dest}`))
  .pipe($.size({
    pretty: true,
    title: "SASS"
  }));
});

gulp.task("processCSS", [sass], () => {
  // What processors/plugins to use with PostCSS
  const PROCESSORS = [
    autoprefixer({browsers: ["last 3 versions"]})
  ];
  return gulp
    .src("src/css/**/*.css")
    .pipe($.sourcemaps.init())
    .pipe(postcss(PROCESSORS))
    .pipe($.sourcemaps.write("."))
    .pipe(gulp.dest("site/static/css"))
    .pipe(
      $.size({
        pretty: true,
        title: "processCSS"
      })
    );
});

The changes

OK, we’ve discussed how to run our gulpfiles as ES6; now let’s dive into the four changes from Gulp 3 to 4:

  1. gulp.series and gulp.parallel
  2. change of the task method signature
  3. async work
  4. change in how watchers work

Change #1: gulp.series and gulp.parallel

I’ve always had issues running Gulp tasks sequentially. run-sequence is a workaround that works most of the time, but I don’t think it’s a good idea to resort to plugins every time we need to do this.

Gulp 4 adds two new methods to address this need: gulp.series and gulp.parallel. As their names imply, series will run one or more tasks sequentially and parallel will run the tasks concurrently.

This task, taken from an old project, defines a set of tasks to run in the order written and one after the other. This will ensure that the results from earlier tasks will be available later in the process.

gulp.task('default', () => {
  runSequence('processCSS', 'build-template', 'imagemin', 'copyAssets');
});

The same task written for Gulp 4 would look like this:

gulp.task('default', gulp.series('processCSS', 'build-template', 'imagemin', 'copyAssets'));

We can also mix and match. Taking the previous example, let’s say that we want to run build-template and imagemin in parallel to make the task run faster. We can change the task like so:

gulp.task('default', gulp.series('processCSS', gulp.parallel('build-template', 'imagemin'), 'copyAssets'));

Change #2: Change of the task method signature

Gulp 4 has changed the signature of the gulp.task method to only accept two parameters, rather than two or three depending on whether the task had dependencies.

In the example below the default task depends on both scripts and styles running before it.

This is what the task looks like in Gulp 3:

gulp.task('default', ['scripts', 'styles'], function() {
    ...
});

And this is what it would look like in Gulp 4. Note the nested calls to series and parallel: we want Gulp to run the parallel tasks before the body of the function, and inside parallel we run scripts and styles concurrently, since we don’t expect their results to affect one another.

gulp.task('default', gulp.series(gulp.parallel('scripts', 'styles'), function() {
    ...
}));

When multiple tasks depend on a single task

I’ve worked on projects where multiple tasks depend on a common task as a prerequisite. In this example both styles and scripts require clean to run before them.

gulp.task('styles', ['clean'], () => {...});

gulp.task('scripts', ['clean'], () => {...});

gulp.task('clean', () => {...});

gulp.task('default', ['styles', 'scripts'], {...});

Gulp 3 is smart enough to run clean only once, preventing the task from running multiple times and deleting the work of one task while setting up for the next.

It would be tempting to write the Gulp 4 tasks like this:

gulp.task('styles', gulp.series('clean', () => {...}));
gulp.task('scripts', gulp.series('clean', () => {...}));

gulp.task('clean', () => {...});

gulp.task('build', ['styles', 'scripts'], () => {...});

And this wouldn’t work the way you want.

Because Gulp 4 changed the task method signature to remove dependencies, there is no way for Gulp to know that both tasks have the same dependency. To fix that we fall back on gulp.series and gulp.parallel and define a custom task to run our dependencies.

// The tasks don't have any dependencies anymore
gulp.task('styles', function() {...});
gulp.task('scripts', function() {...});

gulp.task('clean', function() {...});

// Per default, start scripts and styles
gulp.task('build',
  gulp.series('clean', gulp.parallel('scripts', 'styles'),
  function() {...}));

In this example we concentrate on the build task. We use gulp.series to run clean first and gulp.parallel to run scripts and styles after clean has completed.

Change #3: Async

In Gulp 3, if the code you ran inside a task function was synchronous, you needed no additional work. Gulp 4 is different: now you need to use the done callback, as in the sketch below, while for asynchronous tasks you had three options for making sure Gulp knew when your task finished.
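A minimal sketch (with a hypothetical task name) of the done callback with synchronous code looks like this:

gulp.task('sync-task', (done) => {
  // do the synchronous work here
  console.log('running synchronously');
  // explicitly tell Gulp the task has finished
  done();
});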

Callback

You can provide a callback parameter to your task’s function and then call it when the task is complete:

var del = require('del');
gulp.task('clean', function(done) {
    del(['.build/'], done);
});

Return a Stream

You can also return a stream, usually made via gulp.src or even by using the vinyl-source-stream package directly. This is how we do it today.

gulp.task('somename', function() {
    return gulp.src('client/**/*.js')
        .pipe(minify())
        .pipe(gulp.dest('build'));
});

Return a Promise

Node supports promises natively so this is a very helpful option. Just return the promise and Gulp will know when it’s finished:

import del from 'del';

gulp.task('clean', function() {
  return new Promise(function (resolve, reject) {
    del(['.build/'], function(err) {
      if (err) {
        reject(err);
      } else {
        resolve();
      }
    });
  });
});

or, if your library already supports promises:

var promisedDel = require('promised-del');

gulp.task('clean', function() {
  return promisedDel(['.build/']);
});

Gulp 4 provides two new ways of signalling a finished asynchronous task.

Return a Child Process

You can return a spawned child process.

import {spawn} from 'child_process';
import path from 'path';

gulp.task('clean', function() {
  return spawn('rm', ['-rf', path.join(__dirname, 'build')]);
});

Return an RxJS observable

Gulp 4 can return RxJS observables if that’s your cup of tea. I don’t use them and find it odd that they are part of a build system, but someone must have figured out a use for them.


import {Observable} from 'rx';

gulp.task('sometask', function() {
  return Observable.return(42);
});

Change #4: How watchers work

The last aspect that merits discussion is how watchers have changed. Since tasks are specified via series or parallel, which simply return a function, there’s no way to distinguish tasks from a callback, so the signature with a callback has been removed. A Gulp 3 watcher like this no longer works:

gulp.task('watch', () => {
  gulp.watch('src/md-content/*.md', ['build-template']);
});

Instead, as before, gulp.watch returns a “watcher” object that you can assign listeners to for different events:

const js_watcher = gulp.watch('js/**/*.js', gulp.series('scripts', 'jsdoc', 'uglify'));

js_watcher.on('change', function(path, stats) {
  console.log('File ' + path + ' was changed');
});

js_watcher.on('unlink', function(path) {
  console.log('File ' + path + ' was removed');
});

In the example, gulp.watch runs the function returned by gulp.series each time a file with the .js extension inside js/ is updated. It also logs when a matching file is changed or removed.

Conclusion

There’s more you can do with Gulp 4. I’ve concentrated on the things I need to change so my gulpfiles will continue to work. Upgrading is not required, and Gulp 4 has been in alpha for at least two years, but I think the advantages are worth the upgrade even in its current state.

The links in the following section provide a better overview of the API and the installation/upgrade process.

Links and Resources

Digital Storytelling: Conclusion: Are we ready for the change?

Does anyone remember Google Glass and its deployment nightmares? Stories like the Seattle restaurant that banned Google Glass citing privacy concerns present one of the first challenges to public AR and VR, but not the last or, by any means, the only one. For more information see this Huffington Post summary and this Guardian article outlining why Google Glass was banned in theaters.

But the changes go beyond physical comfort with the technology and reach deep into what these technologies can do to people’s real lives. In A Rape in Cyberspace, Julian Dibbell tells the story of a “virtual” rape and its consequences, virtual and real, for everyone involved: the two victims of the event, the perpetrator, the administrator (wizard, in MOO parlance) who took it upon himself to enact the ultimate punishment available in the MOO, and the writer himself.

The what and the how of the event have never been in question. The perpetrator, a character named Mr. Bungle, used a MOO object programmed to force other users to perform actions he created, in this case text-based representations of rape and lesbian sex, all done against the other players’ will and without their consent.

The perpetrator’s character was deleted from the MOO’s database by a wizard, as administrators in MOOs are known. However, the MOO didn’t ban people based on IP addresses or the email addresses used during registration, so it was easy for Mr. Bungle to create a new account.

Perhaps the most important thing, to me, is how real the event was for the participants even though it wasn’t real in the sense that it didn’t happen in real life; the consequences were just as serious as if it had happened in the real world.

As a thank you for reading this long series of posts, I’ve created an ebook (epub) format of the material in the series for you to download.
ebook download

Bibliography

Text based MUDs

Simnet

Habitat

Hololens

Creating stories for the Metaverse

Are we ready for the change?

Links, articles, books and concepts

Videos