What is the modern view-source?

I started working on the web in 1994, and I’ve watched the web and its component technologies evolve and grow more complicated ever since.

The web was much simpler then: a limited set of tags, little CSS, and JavaScript that only changed the look of items on the page. If we wanted applications, our options were Perl or C to create CGI scripts, with a round trip to the server for each request.

We’ve improved considerably from those days to where the web is now. We have hundreds, if not thousands, of projects that we can learn from but that’s not the answer.

Frank Chimero’s Everything Easy is Hard Again presents the view of someone who left the web design business and returned a few years later to find out how much more complex the web had become and how much more we do for the sake of doing it.

It’s from this viewpoint that messages like this worry me.

“Can we agree that, in 2018, human-readable “View Source” is a constraint the web can discard? I benefitted from “View Source” too, but today we have an embarrassment of resources and open source examples I would have killed for as a kid.”— Tom Dale

Jonathan Snook and Christian Heilmann present interesting positions on the issue of having text-based renditions of our content in addition to what’s interpreted by the browser to provide the output on-screen.

Snook writes towards the end of his post:

The sites some build may be simple static sites, befitting of a simple View Source. The sites some build may be compiled and bundled and requiring tools that allow us to dig deeper. Just because you don’t need those tools doesn’t mean that somebody doesn’t need those tools.

Chris Heilmann makes three points that I think are interesting to look at:

Except for a few purist web sites out there, what you see in your current device isn’t the code of the web site.

The bytes sent to the browser may be different because the browser decided which image to load, or the JS engine decided which script to load when using modules with a fallback for older browsers. And if the loading process injected JavaScript, styles, or images into the page, the source of the document will no longer match what’s rendered.

Code sent to the web is often minified and bundled. Developer tools give you options to pretty-print it and make it much more understandable.

DevTools in all modern browsers work hard to show you what the code you’re viewing really looks like and to give you a human-readable version of it, within the limitations of the tool used to create the minified scripts and bundles:

  • Minified JavaScript expanded by Chrome DevTools is still nearly impossible to read when the tool that produced it also mangled variable names to single characters
  • Looking at the code generated by Webpack, you get more lines of Webpack boilerplate than bundled code, and it’s difficult to figure out which bundle contains the script you’re after

Of course, it is great that there is no barrier to entry if you want to know how something works. But the forgiving nature of HTML and CSS can also lead to problems.

I agree that HTML can cause problems, but it’s on us as developers, as standards organizations, and as people who teach about the web that it got to where it is. Some developers embraced tag soup markup (tolerated by browsers as a defensive measure to make sure the web kept working) and produced markup that would never validate as HTML in any strict validator… and we claimed it was OK and moved on. As a result, web browsers must be forgiving because developers took the shortest path to a result rather than the correct path to the solution.

We need to put the best examples front and center so that people who copy and paste code will find semantically correct HTML rather than tag soup garbage.
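As an illustrative sketch (hypothetical markup, not from any particular site): browsers will happily render both of the snippets below, but only the second conveys structure to validators, assistive technology, and anyone reading the source.

```html
<!-- Tag soup: divs for everything and unclosed tags; browsers recover silently -->
<div class="title">My post
<div class="text">Some text about the post

<!-- Semantically correct HTML: the structure is explicit -->
<article>
  <h1>My post</h1>
  <p>Some text about the post</p>
</article>
```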

But there are more fundamental questions than whether we should have view source for our web content or whether view source is a constraint to current web development.

The entry barrier

What do we need to learn in order to do basic web development? What APIs and what technologies? It’s not enough to know HTML, CSS, and JavaScript: you also have to make choices about build systems (either your own choice or whatever your project is using), which version of JavaScript to use, whether to use templates for your project and, if so, whether to use native templates or a templating engine. The choices keep increasing and get more complicated as new technologies and frameworks are released.

Pointing people to GitHub as a resource assumes that they know what they are looking for and that they are proficient enough in the target language, JavaScript in this case, to recognize the code and what it does. This wasn’t always the case for me, and it still isn’t for a beginner looking at the code. When I looked at a page it took me a while to figure out what the code did and, several times, I had to copy the code into a page of my own and play with it until I figured it out or made it do what I wanted.

But now, with all the minification and bundling of our code, it has become very hard, if not outright impossible, to do that kind of “learn by doing” because there is no easy way to identify mangled variables or to figure out how many Webpack-generated bundles we need to keep to make sure the code works.

Another of Tom’s quotes in the same Twitter thread makes me wonder if I’m missing something. The tweet in question:

I'll go a step further: insistence on human-readable formats on the web is a pretty intense display of Western privilege. Binary formats are important for reaching people with slower devices and capped data plans. I'll happily sacrifice my own nostalgia to achieve that goal. — Tom Dale

I don’t think a binary format will change the way we address the transfer size and weight of our web content; if nothing else, we’ll throw the same volume of material at the binary format, removing any advantage it offers.

It’s not just network time that’ll kill your app’s startup performance, but also the time spent parsing and evaluating your script, during which the browser becomes completely unresponsive.

On mobile, there are additional startup costs (cell modem startup and connection, communication between the cell tower and the Internet, potentially powering up the high-end CPU cores to do the heavy lifting of parsing your JavaScript) and those milliseconds rack up very quickly.

See this presentation from Alex Russell to get a better understanding of the challenges of the mobile web. It’s from 2016 but the underlying principles have not changed.

The following figures show how much the median number of requests and the median size in kilobytes have grown over the 3-year period from 2016 to March 2019 (data taken from the HTTP Archive’s state of the web report).

Timeseries of median total requests over a 3-year period
Timeseries of median total kilobytes over a 3-year period

I think that to solve the performance problem we’ve created, we have to become more restrictive about what we can and cannot do on the web. We can start by enforcing best practices from any one of the many performance patterns available… RAIL and PRPL offer actionable goals to pursue, but actually meeting the performance goals is up to you.

This is also about being serious in creating a performance culture in our organizations. Addy Osmani and Lara Hogan provide good introductions to performance budgeting.

Tools like Performance Budget Calculator, Performance Budget Builder and Browser Calories can help in building the budget once we figure out what a budget is and decide that we want to use one for our project.

Smashing magazine publishes an annual front-end performance checklist. The 2019 edition provides sensible and actionable steps for you to follow if you want to improve performance on your site or app.

Once we have the budget we need to enforce it. Webpack can warn (or error out) if you go over a predefined bundle size, and Pinterest has created an ESLint rule that disallows importing from certain packages.

How we address these performance requirements and how seriously we enforce them is up to us. But I see no other way to get out of the bloated mess we’ve turned our web applications into.

Using JavaScript to insert content into the DOM

JavaScript gives us multiple ways to insert content into the DOM: one generic and simple, and a more flexible and complicated alternative.

Inserting new elements: The simple version

The simple way to insert content into an existing element is to use appendChild to insert the element after the existing children of the specified element.

We’re using the following HTML element as the host of our new element.

<div id="container"></div>

In the JavaScript, we capture the container element in a variable and then create a button element.

Next, we assign attributes to the button we just created. An id and the text that will become the label of the button using innerHTML.

Finally, we attach the button element to the container div using appendChild.

const container = document.getElementById("container");
const button = document.createElement("button");

button.id = "clicky";
button.innerHTML = "Click Me";

container.appendChild(button);

  • You can’t currently add attributes to the element when it’s created; you must use setAttribute or a similar method elsewhere in the script
  • You may think about adding an id attribute to the element, but it’s unlikely that you need to do so. You already have a reference to the element when you create it.
    • Only reason why you may need an id or class is if you’ll reference the element from a separate script
  • Once you add the element to the page, it will follow all the rules you apply via CSS, whether from a stylesheet or from JavaScript

See the following pages on MDN for additional methods to manipulate DOM nodes.

Placing new elements: insertAdjacentHTML

One disadvantage of appending the child to an existing document is that you don’t have a way to choose where the child element is placed.

insertAdjacentHTML takes care of this. It takes two parameters: a position string and a string representing the element we want to position at the given location.

The position parameter can be:

  • beforebegin: Before the element itself.
  • afterbegin: Just inside the element, before its first child.
  • beforeend: Just inside the element, after its last child.
  • afterend: After the element itself.
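A quick sketch of how the four values land relative to a target element (the markup in the first comment and the inserted strings are illustrative):

```javascript
// Assumes this markup exists in the page: <p id="target">Middle</p>
const target = document.getElementById('target');

target.insertAdjacentHTML('beforebegin', '<p>Before the element</p>');
target.insertAdjacentHTML('afterbegin', '<span>First child. </span>');
target.insertAdjacentHTML('beforeend', '<span> Last child</span>');
target.insertAdjacentHTML('afterend', '<p>After the element</p>');

// The document now contains:
// <p>Before the element</p>
// <p id="target"><span>First child. </span>Middle<span> Last child</span></p>
// <p>After the element</p>
```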

So insertAdjacentHTML gives us the flexibility to put content exactly where we need it relative to the existing elements of the page, so we are safe when inserting content into existing elements.

The following pen illustrates how the four different values work:

So we’ve looked at different ways to insert HTML into an existing document using both appendChild and insertAdjacentHTML and discussed what they do differently. Which one works best will depend on your use case and how you’ve structured the document and the script that adds to it.

Using SVG as images

Most of the work I’ve done recently has used inline SVG, meaning the SVG is inserted directly into the document; this has advantages and disadvantages. In this post we’ll discuss why we would use SVG as images, what the advantages and disadvantages are, and a possible fallback using the Picturefill polyfill.

SVG is a very powerful vector graphics format that can be used either as an inline element or as a format for images on web pages. Which one you use will depend on a few things:

  • What browsers do we need to support?
  • What are we using the graphics for?
  • What SVG features do we need for the individual graphics?

For the following discussion, we’ll assume we need to support IE9 and newer plus all modern evergreen browsers. We won’t need animation baked into individual icons; if we need to animate, we’ll do so from CSS or using GSAP. We’ll use SVG to create a small set of social media icons to use on the page.
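To make the two usages concrete, here is a sketch of the same hypothetical icon used both ways (the file name and path data are illustrative):

```html
<!-- Inline SVG: part of the DOM, so CSS and scripts can reach inside it -->
<svg width="24" height="24" viewBox="0 0 24 24" role="img" aria-label="Home">
  <path d="M12 3 2 12h3v8h6v-6h2v6h6v-8h3z"/>
</svg>

<!-- SVG as an image: loaded like any other image; its internals are opaque to CSS -->
<img src="icons/home.svg" width="24" height="24" alt="Home">
```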

Advantages And Disadvantages Of SVG As An Image

Here are some advantages of working with SVG in images:

Smaller file size: SVG images are made of text describing the shapes in the image, so they will usually be smaller than equivalent raster images.

Scales easier: Because they are vector graphics, they scale up or down regardless of resolution. That means you only have to load one image for all the resolutions and pixel densities you want to support on the page.

Compresses better: SVG is text and, most of the time, text compresses better than binary data.

Not everything is rainbows and roses; there are a few disadvantages to working with SVG inside an image.

Cannot be styled with CSS: Most of the time you can style SVG with CSS, either inside the element itself or through an external stylesheet. That doesn’t work when the SVG is loaded as an image.

Does not work in older browsers: Not all browsers support SVG images, particularly IE8 and older; IE9 supports them but needs a workaround.

Next, we’ll explore how to provide fallbacks for non-supported browsers and a polyfill for making the job easier.

Providing fallbacks

The simplest way for this to work is to use the picture element, part of the Responsive Images additions to the HTML specification.

The example below shows one ideal way of providing a fallback for SVG images along with a default image to render when neither source is supported. This is a “first match wins” algorithm, similar to what browsers do for the video and audio elements.

In this example, the browser tests support for SVG images and loads and renders the SVG if supported; if not, the browser checks whether it can render WebP images and, if it can’t, it falls back to the img element, which all browsers can render. I’ve used a single src attribute for the image; we could also add srcset and sizes attributes to further enhance the responsiveness.

For larger line drawings or diagrams below the fold, we could also incorporate lazy loading (native and through polyfill).

  <picture>
    <source srcset="examples/images/large.svg" type="image/svg+xml">
    <source srcset="examples/images/large.webp" type="image/webp">
    <img src="examples/images/large.png" alt="Large Image Of Cats">
  </picture>

Working with Picturefill

The problem is that older browsers are not likely to follow the ideal case. For browsers that don’t support the picture element, we’ll have to use a polyfill to make sure that the image will load regardless of the browser we’re using.

I’ve chosen to work with Picturefill, a polyfill for responsive images. It’s stable and works in the cases and for the browsers we wanted to tackle when defining the project.

To run the polyfill, we first trick older versions of IE into accepting the picture element before the polyfill has loaded, and then add the polyfill to the page using a script tag with the async attribute.

  <script>
    // Picture element HTML5 shiv
    document.createElement( "picture" );
  </script>
  <script async src="picturefill.min.js"></script>

This makes the responsive images element (picture) and attributes (srcset and sizes) available to the page.

Now we move to the fallback solution for SVG images (finally!).

The final code looks pretty close to our ideal example, except for the IE-specific conditional comments that load a video element wrapper only in IE9 (this addresses an issue with IE9’s handling of source elements inside picture).

  <picture>
    <!--[if IE 9]><video style="display: none;"><![endif]-->
    <source srcset="examples/images/large.svg" type="image/svg+xml">
    <source srcset="examples/images/large.webp" type="image/webp">
    <!--[if IE 9]></video><![endif]-->
    <img src="examples/images/large.png" alt="…">
  </picture>

And that’s it. We have a way to display SVG images and provide multiple fallbacks for browsers that do not support them and a default image that will be supported everywhere.

Paginating or infinite scrolling web content

The web has always been a scrolling medium, but there are reasons and motivations that will push people toward one approach or the other. There is no perfect ‘one-size-fits-all’ solution and you will have to evaluate which solution works best for your project. In this post, we’ll discuss the advantages and disadvantages of pagination and scrolling and suggest one combined solution that may work well as a generic solution when working outside of frameworks.

In Pagination vs. Scrolling: The Great Website Debate ThriveHive presents the advantages and disadvantages of both pagination and scrolling for web sites. They come to the same conclusions most developers have: it depends.

But just like the Nielsen Norman Group reminds us that Infinite Scrolling Is Not for Every Website the same case can be made for pagination.


Scrolling

Scrolling, in this context, refers both to the regular scrolling of a web page and to infinite scrolling: a technique where the browser loads more content as the user reaches the bottom of the screen, so the content appears to scroll continually as long as there is new content available to display.
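A minimal sketch of the technique using an IntersectionObserver (the `#sentinel` element and the `loadMoreItems()` function are hypothetical; `loadMoreItems()` stands in for whatever fetches and appends the next batch of content):

```javascript
// Assumes an empty <div id="sentinel"></div> sits after the last item in the list
const sentinel = document.getElementById('sentinel');

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    // The sentinel scrolled into view: the user is near the bottom
    if (entry.isIntersecting) {
      loadMoreItems(); // hypothetical: fetch and append the next page of content
    }
  });
});

observer.observe(sentinel);
```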

Vue, React and Angular

All the frameworks I’ve reviewed have some sort of virtual/infinite scrolling tool available. These are the ones I found, and I’m pretty sure there are more:

Web Platform

Surprisingly, the web platform doesn’t provide built-in mechanisms for infinite scrolling, and several items necessary for successful infinite scrolling are not part of the web platform yet.

One incomplete example is this pen from Werner Beroux that uses vanilla JavaScript to generate an infinitely scrolling list.

At the 2018 Chrome Dev Summit, Gray Norton presented a new tool called the virtual scroller; a set of custom elements that will do the heavy work of creating virtual scrolling for you.

The GitHub repository has two different concepts for virtual scrolling. Check the project’s readme for more information about the branches and what they accomplish.


Pagination

Pagination takes a different approach. Rather than providing infinite scrolling, it breaks the content into “pages” and provides a means to navigate the content.

According to the Interaction Design Foundation:

Pagination is the process of splitting the contents of a website, or a section of contents from a website, into discrete pages. This user interface design pattern is what we designers use to save site visitors from being overwhelmed by a mass of data on one page – we take that ‘continental’ chunk and splinter it sensibly into ‘islands’, literally distinct pages which users will be able to devote their attention to without sighing in exasperation.

Depending on the type of content that you’re working with, pagination may or may not be the best solution. In 2013 Jakob Nielsen warned us that “listings might need pagination by default, but if users customize the display to View All list items, respect that preference.”

Working with long-form content works differently. We definitely want to paginate books or long essays… but we still need to be mindful of how we organize the pagination for this type of content.


Someone has thought about pagination for each of the frameworks I look at regularly, so implementing the pattern in any of them shouldn’t be a problem.

Web Platform

A possibility is to use CSS scroll snap as a way to navigate between sections of content.

An example of scroll snap shows how it works with mouse events. A next step would be to convert the mouse events into pointer events to cover both desktop and mobile devices.
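A minimal sketch of the CSS side of that idea (the class names are arbitrary): each section snaps to the top of the scroll container as the user moves between sections.

```css
/* The scroll container: one 'page' per viewport, snapping on the vertical axis */
.pages {
  height: 100vh;
  overflow-y: scroll;
  scroll-snap-type: y mandatory;
}

/* Each section of content snaps to the top of the container */
.pages > section {
  height: 100vh;
  scroll-snap-align: start;
}
```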

Something worth researching is whether it’s possible to copy the pagination tools from frameworks like Foundation or Bootstrap 4 without having to use the whole framework.

So which one do we use?

As with many things dealing with the web, it depends. It depends on the type and quantity of material that we want to display and what our target devices are.

For book-like content it would be ideal to paginate the content at the chapter level and, inside the chapter, let the content scroll as necessary. This will give us the best of both worlds in situations where physical books would just add pages.

Native lazy loading in Chrome

Addy Osmani posted a note on Twitter about native lazy loading support that will, hopefully, appear in Chrome 75 (in Canary builds as I write this). This is awesome, and I hope other browsers will implement it as a native feature, but it introduces complexities that we need to evaluate before we use it in development.

Before we start

Native lazy loading works for both images and iframes, but it’s behind two flags. Surprisingly, neither of them is the Experimental Web Platform features flag.

Go into chrome://flags and enable the following flags if you want both elements to work with lazy loading:

  • enable-lazy-image-loading
  • enable-lazy-frame-loading

Restart your browser and you’re good to go.

The basics

  • loading="lazy" Lazy loads an offscreen image when the user scrolls near it
  • loading="eager" Loads an image right away instead of lazy-loading.
    This is the same as not using the attribute at all
  • loading="auto" lets the browser decide whether or not to lazy load the element
<img src="building1.jpg" loading="lazy" alt=".."/>
<img src="building1.jpg" loading="eager" alt=".."/>
<img src="building1.jpg" loading="auto" alt=".."/>

The lazy loading feature will also work with the picture element as long as you add the loading attribute to the fallback img element. If I understand it correctly, the img element drives the display of any image inside the picture element so, if you add loading to it, whichever image loads will be lazy loaded.

  <picture>
    <source media="(min-width: 40em)" srcset="big.jpg 1x, big-hd.jpg 2x">
    <source srcset="small.jpg 1x, small-hd.jpg 2x">
    <img src="fallback.jpg" loading="lazy">
  </picture>

The same thing happens if the image has a srcset attribute. As long as the image has the loading attribute set to lazy, the image will be lazy loaded.

<!-- Lazy-load an image that has srcset specified -->
<img src="small.jpg"
     srcset="large.jpg 1024w, medium.jpg 640w, small.jpg 320w"
     sizes="(min-width: 36em) 33.3vw, 100vw"
     alt="A rad wolf" loading="lazy">

The one example I haven’t seen elsewhere is for iframes. The example below shows a YouTube iframe embed set up for lazy loading. The same values apply here as for images.

  <iframe   loading="lazy"
            width="560" height="315"
            src="https://www.youtube.com/embed/VIDEO_ID"
            allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"
            allowfullscreen></iframe>

Feature Detection and Cross-Browser Lazy Loading

I’m using the yall lazy loading library in this example and initializing it when we determine the browser doesn’t support native lazy loading. We load the library with something like the following:

<script defer src="scripts/yall.js"></script>

This will load the library every time we load the page. I’m willing to take the extra 2KB (and potentially the additional weight of an Intersection Observer polyfill) if it means the page will load faster because not all images are loaded when the page loads.

Once we’ve loaded yall.js, we check whether the browser supports native lazy loading ('loading' in HTMLImageElement.prototype).

If it does, we store all the images we want to work with (the ones with a lazy class) in a variable and then, for each of those images, we copy the data-src attribute to the source (src) attribute. This will lazy load the images.

If the browser doesn’t support native lazy loading, then we initialize yall. The script will take the data-src attribute and use it to lazy load the images instead.

(async () => {
  if ('loading' in HTMLImageElement.prototype) {
    const images = document.querySelectorAll('img.lazy');
    images.forEach(img => {
      img.src = img.dataset.src;
    });
  } else {
    // Make sure the library is already loaded
    // Initialize yall
    document.addEventListener("DOMContentLoaded", yall);
  }
})();

Using the script above, we can use the following to load an above-the-fold image immediately.

<!-- Let's load this in-viewport image normally -->
<img src="hero.jpg" alt=""/>

And this is the code we use to lazy load an image. Note how it doesn’t have a src attribute because we’ll set it programmatically.

<!-- Let's lazy-load the rest of these images -->
<img  data-src="image1.jpg"
      loading="lazy" alt="">
I’ve got a working example in this pen.