Style Interaction

I’ve been working on a project where I want to convey the idea of pages spread out on a table or post-it notes pasted randomly on a wall or another surface. It took me a while to reason through, but I ended up happy with the result. It also shows the interaction between stylesheets and inline styles generated with JavaScript.

The HTML for the project is fairly simple. It doesn’t matter what’s inside the story elements; for the purpose of the movement we only care about the story elements themselves.

<div class="story-container">
  <div class="story"></div>

  <div class="story"></div>

  <div class="story"></div>

  <div class="story"></div>
</div> <!-- closes story-container -->

The first block of CSS provides styles for the container, the initial state for stories (the .story class declaration), and the state for clicked elements (the .absolute class declaration).

Some things to notice.

The container is absolutely positioned (position: absolute) and the stories are relatively positioned (position: relative). We did this to make sure that the stories’ locations on the page can be manipulated.

Elements with the absolute class will be placed at the top of the screen, covering any other elements.

.story-container {
  position: absolute;
  width: 90vw;
  margin: 0 auto;
}

/* Initial state */
.story {
  position: relative;
  background-color: lightgoldenrodyellow;
  color: black;
  border: 2px solid black;
  border-radius: 15px;
  width: 50%;
  padding: 1em;
  margin: 1em 0;
}

/* state when clicked */
.absolute {
  position: absolute;
  background-color: lightgoldenrodyellow;
  color: black;
  border: 2px solid black;
  border-radius: 15px;
  width: 80%;
  padding: 1em;
  margin: 1em 0;
  z-index: 100;
}

The second block is where we position our stories. The SCSS files use functions to generate random values for each of the three transforms we apply: rotation around the Z axis (rotateZ), movement on the X axis (translateX), and movement on the Y axis (translateY).

Because Sass calculates random numbers at compile time rather than runtime, we run the risk of the same numbers being applied to every instance. That’s why we create different rules for different stories. The full example has 7 different rules that place the corresponding elements in random locations.

/* Position of the stories */
.story:first-child {
  transform-origin: top;
  transform: rotateZ(-29deg) translateX(52px) translateY(50px);
  z-index: 26;
}

.story:nth-child(2n) {
  transform-origin: top;
  transform: rotateZ(-29deg) translateX(55px) translateY(25px);
  z-index: 24;
}
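Since Sass randomness is fixed at compile time, another option (a sketch, not part of the original project) is to generate the transforms at runtime with JavaScript, so every story lands somewhere different on each page load. The ranges below are illustrative:

```javascript
// Build a random transform string; the ranges are illustrative.
function randomTransform() {
  const angle = Math.floor(Math.random() * 60) - 30; // -30 to 29 degrees
  const x = Math.floor(Math.random() * 100);         // 0 to 99 pixels
  const y = Math.floor(Math.random() * 100);         // 0 to 99 pixels
  return `rotateZ(${angle}deg) translateX(${x}px) translateY(${y}px)`;
}

// Guarded so the helper can also be exercised outside a browser.
if (typeof document !== 'undefined') {
  document.querySelectorAll('.story').forEach((story) => {
    story.style.transformOrigin = 'top';
    story.style.transform = randomTransform();
  });
}
```

The trade-off is that the positions now live in inline styles rather than the stylesheet, which matters for the cascade discussion later in this post.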

JavaScript is where the magic happens.

We create an array of all the elements with class story (.story) using Array.from.

Once we have the array we walk through it using a for loop. In the loop we attach a click event handler that toggles both the story and absolute classes using classList.toggle.

Toggle adds a class when it’s not present and removes it when it is. Since all the elements start with the story class, it will be removed from the element we clicked on. No element has the absolute class to begin with, so it will be added at the same time the story class is removed.

const stories = Array.from(document.querySelectorAll('.story'));

for (let i = 0; i < stories.length; i++) {
  stories[i].addEventListener('click', function () {
    stories[i].classList.toggle('story');
    stories[i].classList.toggle('absolute');
  });
}

The last bit to remember: JavaScript doesn’t write to an existing stylesheet but adds styles through an inline style attribute. This is important if you have other CSS rules that apply to the element, since inline styles may cause cascade issues.

See Cascade and inheritance at MDN for a more thorough discussion of the cascade and inheritance in CSS.

Lazy loading images using intersection observers

Lazy loading allows you to delay loading images until the user actually scrolls the page to where the image or video becomes visible. This post describes why lazy loading is important, one way to lazy load images and videos, and alternatives for browsers that don’t support Intersection Observers.

This is a more polished version of Intersection Observers: Making it easier to lazy load content

Why is lazy loading important?

Images are the largest part of a web page, whether site or app. The median number of images requested per page, according to the HTTP Archive, is 32 requests for desktop and 28 for mobile. The HTTP Archive defines an image request as:

The number of external images requested by the page. An external image is identified as a resource with the png, gif, jpg, jpeg, webp, ico, or svg file extensions or a MIME type containing the word image.

HTTP Archive

The chart below shows the number of image requests per page for the period between December 2015 and December 2018.

HTTP Archive time series of the median number of images requested for crawled domains

So things are getting better, right? Fewer requests should make things better?

Sadly, it’s not the case. While we have fewer requests per page, the median weight of those requests is still huge: 930 KB for desktop and 491 KB for mobile… and this is the median, not the average; half of all pages weigh more than this and half weigh less. The HTTP Archive defines image bytes as:

The sum of transfer size kilobytes of all external images requested by the page. An external image is identified as a resource with the png, gif, jpg, jpeg, webp, ico, or svg file extensions or a MIME type containing the word image.

HTTP Archive

HTTP Archive time series of the median weight of images requested for crawled domains

Most of the time a web project is an exercise in compromises. Different stakeholders may have different and competing priorities that may impact the size of your images’ payload and your initial page load time.

With these numbers (weight and requests) on hand, we can make the case for not loading images until they are needed; that way we only load the things we need when we need them and not before and we prevent waste:

  • Wasted data. On limited data plans loading stuff the user never sees could effectively be a waste of their money
  • Wasted system resources like CPU, and battery. After a media resource is downloaded, the browser must decode it and render its content in the viewport. Rendering stuff that the user may not see is unnecessarily wasteful

The how

The code below is adapted from Jeremy Wagner’s article on Google Developers and builds on the script I used in Intersection Observers: Making it easier to lazy load content. The technology is now better supported in browsers, but Safari (desktop and iOS) and Edge still lag behind, so we’ll have to come up with a polyfill strategy or a way to undo the changes we made to our images to lazy load them.

Both the native and polyfilled versions require some changes to the way you mark up your images in HTML. If you’re using a single image:

  <figure>
    <img class="lazy" src="placeholder.jpg" data-src="image-to-lazy-load.jpg" alt="Alternative text to describe image.">
    <figcaption>Image description</figcaption>
  </figure>

If you’re using srcset attributes in your images:

  <figure>
    <img class="lazy" src="placeholder.jpg" data-srcset="image-to-lazy-load-2x.jpg 2x, image-to-lazy-load-1x.jpg 1x" data-src="image-to-lazy-load-1x.jpg" alt="Alternative text to describe image.">
    <figcaption>Image description</figcaption>
  </figure>

With the markup in place, we can now look at the code. It does the following things:

  1. Collect all the images with class lazy (img.lazy)
  2. Check whether the browser supports Intersection Observers. Because browsers may only partially support observers, we test for each individual feature that we want to use
  3. Create a new Intersection Observer object
  4. Loop through each entry the observer reports
  5. If an entry is intersecting, meaning that it’s in the observer’s range: set the src and srcset attributes to the values of the data-src and data-srcset attributes respectively, remove the lazy class, and unobserve the image
  6. Observe each image with the .lazy class
  7. If the browser doesn’t support Intersection Observers, set the src and srcset attributes to the values of the data-src and data-srcset attributes and remove the lazy class
document.addEventListener("DOMContentLoaded", function() {
  const lazyImages = [...document.querySelectorAll("img.lazy")]; // 1

  if ("IntersectionObserver" in window &&
    "IntersectionObserverEntry" in window &&
    "intersectionRatio" in window.IntersectionObserverEntry.prototype) { // 2
    let lazyImageObserver = new IntersectionObserver(function(entries, observer) { // 3
      entries.forEach(function(entry) { // 4
        if (entry.isIntersecting) { // 5
          let lazyImage = entry.target;
          lazyImage.src = lazyImage.dataset.src;
          lazyImage.srcset = lazyImage.dataset.srcset;
          lazyImage.classList.remove("lazy");
          lazyImageObserver.unobserve(lazyImage);
        }
      });
    });

    lazyImages.forEach(function(lazyImage) { // 6
      lazyImageObserver.observe(lazyImage);
    });
  } else { // 7
    lazyImages.forEach(function(lazyImage) {
      lazyImage.src = lazyImage.dataset.src;
      lazyImage.srcset = lazyImage.dataset.srcset;
      lazyImage.classList.remove("lazy");
    });
  }
});

This is an all-or-nothing approach: either the browser supports Intersection Observers and we use them, or it doesn’t and we provide a hard fallback.

Lazy loading images in CSS

One of the things I hadn’t seen before is how to lazy load images that are loaded from CSS. Take, for example, the code below that uses an image for the element’s background:

.lazy-background {
  /* Placeholder image */
  background-image: url("hero-placeholder.jpg");
}

We then add a second rule for elements that also have the visible class:

.lazy-background.visible {
  /* The final image */
  background-image: url("hero.jpg");
}

And finally we use JavaScript to add the visible class and swap in the final image. The script does the following:

  1. Create an array for all elements that have a CSS background
  2. Create a new Intersection Observer
  3. For every element in the array: Add the class visible and unobserve the element
  4. Observe all elements with the .lazy-background class
  5. If the browser doesn’t support Intersection observer then for each element in the lazyBackground array: Add the visible class
document.addEventListener("DOMContentLoaded", function() {
  const lazyBackgrounds = [...document.querySelectorAll(".lazy-background")]; // 1

  if ("IntersectionObserver" in window) { // 2
    let lazyBackgroundObserver = new IntersectionObserver(function(entries, observer) {
      entries.forEach(function(entry) { // 3
        if (entry.isIntersecting) {
          entry.target.classList.add("visible");
          lazyBackgroundObserver.unobserve(entry.target);
        }
      });
    });

    lazyBackgrounds.forEach(function(lazyBackground) { // 4
      lazyBackgroundObserver.observe(lazyBackground);
    });
  } else {
    lazyBackgrounds.forEach(function(lazyBackground) { // 5
      lazyBackground.classList.add("visible");
    });
  }
});

Again, this is an all-or-nothing approach: either we support observers and progressively enhance the page, or we don’t and skip the process altogether.


The polyfill

While we have a working version of our lazy loader, the all-or-nothing approach may not be what we need, particularly on image-heavy sites or on sites with fewer, larger images.

I’ve chosen yall.js (Yet Another Lazy Loader) as my polyfill. It saves me from having to make changes to the markup I already changed to get Intersection Observers working.

In order to use it at the most basic level you need to load and initialize the script like so:

<script src="js/yall.min.js"></script>
<script>
  document.addEventListener("DOMContentLoaded", function() {
    yall({
      observeChanges: true
    });
  });
</script>

When you initialize the library you can pass in an options object. The options currently available are:

  • lazyClass (default: “lazy”): The element class used by yall.js to find elements to lazy load
  • lazyBackgroundClass (default: “lazy-bg”): The element class used by yall.js to find elements to lazy load CSS background images for
  • lazyBackgroundLoaded (default: “lazy-bg-loaded”): When yall.js finds elements using the class specified by lazyBackgroundClass, it will remove that class and put this one in its place. This will be the class you use in your CSS to bring in your background image when the affected element is in the viewport
  • throttleTime (default: 200): When Intersection Observer isn’t supported, throttleTime controls how often the standard event handlers used as a replacement fire, in milliseconds
  • idlyLoad (default: false): If set to true, requestIdleCallback is used to optimize the use of browser idle time to limit monopolization of the main thread
    • This setting is ignored if set to true in a browser that doesn’t support requestIdleCallback
    • Enabling this could cause lazy loading to be delayed significantly more than you might be okay with
    • Test extensively, and consider increasing the threshold option if you set this option to true
  • idleLoadTimeout (default: 100): This option sets a deadline in milliseconds for requestIdleCallback to kick off lazy loading for an element
  • threshold (default: 200): The threshold (in pixels) for how far elements need to be within the viewport to begin lazy loading.
  • observeChanges (default: false): Use a Mutation Observer to examine the DOM for changes.
    • This is useful if you want to lazy load resources for markup injected into the page after initial page render
    • This option is ignored if set to true in a browser that doesn’t support Mutation Observer
  • observeRootSelector (default: “body”): If observeChanges is set to true, the value of this string is fed into document.querySelector to limit the scope in which the Mutation Observer looks for DOM changes
    • The <body> element is used by default, but you can confine the observer to any valid CSS selector (e.g., #main-wrapper)
  • mutationObserverOptions (default: {childList: true}): Options to pass to the MutationObserver instance. Read this MDN guide for a list of options.

Pay particular attention to the lazyClass, lazyBackgroundClass, and lazyBackgroundLoaded configuration parameters. These are the ones most likely to change.
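Putting a few of these together, a configuration might look like the object below. The option names come from the list above; the values are illustrative examples, not recommendations:

```javascript
// Illustrative yall.js options; tune the values for your own site.
const yallOptions = {
  lazyClass: "lazy",                    // matches the class used in the markup earlier
  threshold: 300,                       // start loading 300px before the viewport
  observeChanges: true,                 // watch for content injected after load
  observeRootSelector: "#main-wrapper"  // confine the Mutation Observer
};

// In the browser you would then call: yall(yallOptions);
```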

Things to be careful about

There are a few things to consider when lazy loading images and, depending on your images and your page, one or more may come back to bite you.


As unlikely as it is, we may still find instances where JavaScript is not enabled. To deal with these, use <noscript> to provide an alternative that will work without JavaScript:

<!-- An image that eventually gets lazy loaded by JavaScript -->
<img class="lazy"
     src="placeholder.jpg"
     data-src="image-to-lazy-load.jpg"
     alt="I'm an image!">
<!-- An image that is shown if JavaScript is turned off -->
<noscript>
  <img src="image-to-lazy-load.jpg" alt="I'm an image!">
</noscript>

Another way to deal with No JavaScript is to manually add a no-js class to the root of the page.

<html class="no-js">

And then use JavaScript to remove it when the page is loaded and we know JavaScript is working.

<!-- Remove the no-js class on the <html>
     element if JavaScript is on -->
<script>
  document.documentElement.classList.remove("no-js");
</script>

This script will remove the no-js class from the <html> element as the page loads, but if JavaScript is turned off, this will never happen. From there, you can add some CSS that hides elements with a class of lazy when the no-js class is present on the <html> element:

/* Hide .lazy elements if JavaScript is off */
.no-js .lazy {
  display: none;
}

It’s your decision which alternative to use. If you provide the images with <noscript>, users without JavaScript still get them at the cost of losing the benefits of lazy loading; if you hide them completely, those users lose content that may be important.

Take care of the all-mighty fold

We may be tempted to lazy load everything on the page using JavaScript, but we must resist the temptation: assets that appear above the fold are critical and should be loaded normally.

The reasoning is that we don’t want to delay loading critical assets. The lazy loading strategies we’ve covered so far wait until the DOM content is loaded and scripts have finished executing; that delay is acceptable for below-the-fold content, but not for the resources users will see first.

Loading above-the-fold content quickly becomes harder when the fold changes with the device. One way to address this is to let your analytics tools tell you what kinds of devices your users access your site with. CrossBrowserTesting gives an example of how this would work with Google Analytics.

Softening the lazy loading boundaries

You may want to change the conditions that trigger lazy loading. It may work better if you build a buffer zone so that images begin loading before the user scrolls them into the viewport.

The Intersection Observer API allows you to specify a rootMargin property in an options object when you create a new IntersectionObserver. This effectively gives elements a buffer that triggers lazy loading before the element enters the viewport:

let lazyImageObserver = new IntersectionObserver(function(entries, observer) {
  // Lazy loading image code goes here
}, {
  rootMargin: "0px 0px 256px 0px"
});

The value of rootMargin is similar to the values you’d specify for the CSS margin property. In this case, we’re broadening the bottom margin of the observer’s root by 256 pixels, so the callback executes when an image element is within 256 pixels of the viewport and the image begins loading before the user actually sees it.

Layout shifting and placeholders

Lazy loading media can cause shifting in the layout if placeholders aren’t used. These changes can be disorienting for users and trigger expensive DOM layout operations that consume system resources and contribute to jank. At a minimum, consider using a solid color placeholder occupying the same dimensions as the target image, or techniques such as LQIP or SQIP that hint at the content of a media item before it loads.

For <img> tags, src should initially point to a placeholder until that attribute is updated with the final image URL. Use the poster attribute in a <video> element to point to a placeholder image. Additionally, use width and height attributes on both <img> and <video> tags. This ensures that transitioning from placeholders to final images won’t change the rendered size of the element as media loads.

Image decoding delays

Loading large images in JavaScript and dropping them into the DOM can tie up the main thread, causing the user interface to be unresponsive for a short period of time while decoding occurs. Asynchronously decoding images using the decode method prior to inserting them into the DOM can cut down on this sort of jank, but beware: it’s not available everywhere yet, and it adds complexity to your lazy loading logic. If you want to use it, you’ll need to check for it. The code below shows how you might use Image.decode() with a fallback:

const newImage = new Image();
newImage.src = "my-awesome-image.jpg";

// imageContainer is the element that will receive the image
if ("decode" in newImage) {
  // Fancy decoding logic: decode first, then insert
  newImage.decode().then(function() {
    imageContainer.appendChild(newImage);
  });
} else {
  // Regular image load
  imageContainer.appendChild(newImage);
}

Thoughts about front end best practices

I posted this as an answer to a question on Quora and I thought I would post it here and expand on it a little bit with things I thought about after I wrote the answer.

This is not an exhaustive list of performance best practices. It’s what I use and how I use them. You may have others and some of these may not apply to you. I’d love to hear what works for you… you can contact me via Twitter (elrond25).

The question:

What are the best practices for optimizing resources (JavaScript, CSS, images) used by an HTML page?

  • In general
    • Use Lighthouse (available in Chrome as part of the DevTools audits menu or as an extension)
      • The performance score is a good sign of how your app/site is doing
      • There are other audits you can run separately or at the same time
    • Always try to serve content via HTTP2.
      • HTTP2 solves a lot of the performance issues in older versions of HTTP. See this article for the nerdy details
      • If you want to take the time to test it, http2 push may also help increase your site’s performance. It is imperative that you test this because, if poorly implemented, you can wreck performance with push
    • Use a CDN like Akamai or Cloudflare to host and serve your static assets. Even basic services are good enough based on my experience
    • Consider using a service worker even if you’re not creating a PWA.
      • Service workers will improve performance on second and subsequent visits because the browser will fetch content from the local cache
      • A service worker is the entry point to advanced features like web push notifications, background sync and background fetch among others
      • You can configure different caching strategies based on the needs of your site or app
      • If the browser doesn’t support service workers then it won’t get the performance boost and you will lose access to the advanced features but it will still display content for your users
    • Consider preloading resources
  • For your images
    • Use responsive images rather than a single version of the image. Using a single small image means that it’ll look like crap on Retina desktop displays and higher-end mobile devices
    • Create responsive images as part of your build process. I use Gulp and gulp-responsive
    • Serve WebP images to browsers that support them.
    • You can incorporate WebP support in your responsive images
    • WebP is significantly smaller than JPG or PNG but not all browsers support the format
    • If you can’t or don’t want to use responsive images you can compress the images with Imagemin. I also do this as part of my build process with Gulp and gulp-imagemin
    • Use an image CDN like Cloudinary or Photon if you use WordPress to host your assets. They’ll do all the work for you
  • For your scripts and stylesheets
    • Consider minimizing your scripts. I use gulp and uglify-es
    • If you use a lot of JavaScript consider using a bundler like Webpack or Rollup
    • I’m one of the few developers who doesn’t think you need to bundle all your assets (CSS, JavaScript, and images) when building your site or app
    • Test if a bundler improves performance for the content you’re using it for before you decide to adopt it
    • Consider minimizing your stylesheets. I use sass/scss and normally create a compressed version either from CLI or using gulp-sass during the build process
    • I don’t concatenate them because I cache on the client using service workers and they work better as separate items
  • HTML
    • I don’t normally minimize my HTML until the size hits 75K or so. I’m old school and a lot of what I learned when I first started working on the web was by looking at other people’s code, duplicating it locally and then tweaking it to see what happened. I think it’s still useful to learn that way.
    • With the performance optimizations for scripts, images and stylesheets I think I’ve made up for not removing whitespace from my HTML content
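As a companion to the service worker advice above, registration can be feature-detected so unsupported browsers are simply skipped. This is a sketch; the '/sw.js' path is a placeholder for your own worker script:

```javascript
// Register a service worker only when the browser supports it.
// Returns the registration promise, or null when unsupported.
function registerServiceWorker(nav) {
  if (nav && 'serviceWorker' in nav) {
    return nav.serviceWorker.register('/sw.js');
  }
  return null; // no support: the site still works, just without the boost
}

// In the browser: registerServiceWorker(navigator);
```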

Defensive Coding: Default Parameters

When working with JavaScript we can give our function parameters default values so the function will still work if we forget to pass arguments when calling it. In this post, we’ll discuss why we should and how to give default values to our function parameters.

Why give defaults to function parameters

The simplest reason, for me, is that I tend to forget to pass arguments when using a function I just declared. Take the function below that, in theory, should just fetch the file at the given URL and display it inside the body of the page:

async function getFile(url) {
  try {
    const response = await fetch(url);
    document.body.innerHTML = await response.text();
  } catch {
    console.log('There was an error retrieving the file');
  }
}

But what happens if we forget to give it the URL to fetch?


In my experiments getFile() without a URL gives the expected 404 error. How can I prevent this?

I think the best way is to assign a default value to the url parameter so, if we forget to give it a URL, it will go somewhere useful and not error out. The code now looks like this:

async function getFile(url = '') {
  try {
    const response = await fetch(url);
    document.body.innerHTML = await response.text();
  } catch {
    console.log('There was an error retrieving the file');
  }
}

Now, when we leave the URL out, the function will fall back to the default instead of erroring out.

Now for the caveat:

The request will obey CORS and CORB restrictions, so requests pointing to third-party sites may not work
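The default-value behavior itself is easy to verify without any network involved: the default kicks in only when the argument is undefined, not when it is null or any other value. A minimal illustration:

```javascript
// Defaults apply only when the argument is undefined.
function greet(name = 'world') {
  return `Hello, ${name}`;
}

greet();          // "Hello, world" (no argument: default used)
greet(undefined); // "Hello, world" (explicit undefined: default used)
greet(null);      // "Hello, null"  (null is a value: default NOT used)
```

This is why a default URL protects against a forgotten argument but not against passing a bad value on purpose.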

Starting a new Node Project

Most of the time, starting a Node project involves a lot of typing, copying and pasting, and filling in data for your repository. This post lists some ways to automate the process in the command line and via scripts.

Thanks to Phil Nash and Tienery Cyren for the information. 🙂

npx license mit uses the license package to download a license of your choice for the project, in this case, MIT

npx gitignore node uses the gitignore package to download the appropriate .gitignore file from GitHub

npx covgen uses covgen to generate the Contributor Covenant agreement and give your project a code of conduct.

npm init -y accepts all of the default options for npm init and creates a package.json file.

npx first became available with NPM 5 and it’s also available as a standalone package. It provides a way to run Node packages either from your local installation or from your global node repository, installing whatever packages it needs to run the command. This is awesome because it means you only need to install the packages you need like license or covgen once in the global scope rather than install them in each individual project.

Customizing the init file

Going back to npm init -y. Unless you’ve configured the defaults already, it’ll produce a mostly empty package.json file that you have to edit later. Better than not having it or having to create the file by hand, but it’s still a pain.

Until I read an article by Phil Nash I didn’t realize that you could customize the parameters npm init uses as defaults. They look like this:

npm set init.author.name "Your name"
npm set init.author.email "[email protected]"
npm set init.author.url ""
npm set init.license "MIT"
npm set init.version "1.0.0"

Once the parameters are configured, they will be used whenever you run the npm init command, whether it’s automated or not.
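With those defaults configured, running npm init -y produces a package.json along these lines (the values shown are illustrative, borrowing the placeholders from the commands above):

```json
{
  "name": "my-project",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Your name <[email protected]>",
  "license": "MIT"
}
```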

We can take this a step further by creating a shell script to automate the steps down to one command. Create a file with the code below and put it somewhere in your shell’s path.

git init
npx license $(npm get init.license) -o "$(npm get init.author.name)" > LICENSE
npx gitignore node
npx covgen "$(npm get init.author.email)"
npm init -y
npx eslint --init
git add -A
git commit -m "Initial commit"

This assumes a few things:

  • You want to put things in a Git repository
  • You’ve filled out the defaults for init parameters
  • You want to use the Contributor Covenant code of conduct
  • You want to use ESLint

So with this script, you have a one-liner to get your repository set up and ready to go. Some next steps may include additional tool configuration or populating package.json with other tools you normally use.