Compiling Chromium From Source

Note:
This post discusses the compilation process on a Mac. Instructions for other operating systems can be found at Get the code.

A few months ago questions started popping up on Quora about the difference between Chromium and Chrome. A lot of the questions assumed that it would be easy to add the missing features to Chromium, start it up, and have a clone of Chrome without any further hassle.

Sadly this is not true. The majority of the features require compile-time flags to be enabled or disabled, and some, like Widevine DRM support, require contacting the Widevine team at Google and requesting keys to enable the feature. They don’t license keys to open-source software, and that’s what your own build of Chromium is.

But, come on, how hard would it be to compile Chromium from source? Considering that I’ve compiled several tools I use on my Mac I thought it wouldn’t be too hard and I could quickly get the browser up and running.

This is a report of what I did and how I did it.

Prerequisites

We need the following things installed and running on the Mac where we want to compile and run Chromium:

  • Xcode, for the macOS SDK and command-line tools
  • depot_tools, Chromium’s collection of build and checkout scripts, available in your PATH

We also need

  • A lot of disk space
  • Time and patience

Get the code

Ensure that Unicode filenames aren’t mangled by HFS:

git config --global core.precomposeUnicode true

Prepare the directories and download code

mkdir chromium && cd chromium

Run the fetch tool from depot_tools to check out the code and its dependencies.

fetch chromium

This command took about 30 minutes on my 2018 MacBook Pro. It may take significantly longer on slower connections.

If you don’t need the full repo history, you can save time by using fetch --no-history chromium. You can call git fetch --unshallow to retrieve the full history later.

When fetch completes, it will have created a hidden .gclient file and a directory called src in the working directory. The remaining instructions assume you have switched to the src directory:

cd src

Optional: You can also install API keys if you want your build to talk to some Google services, but this is not necessary for most development and testing purposes. I’ve chosen not to do it.

Setting up the build

Chromium uses Ninja as its main build tool along with a tool called GN to generate .ninja files. You can create any number of build directories with different configurations. To create a build directory:

gn gen out/Default

Things to note:

  • You only have to run this once for each new build directory; Ninja will update the build files as needed.
  • You can replace Default with another name, but it should be a subdirectory of out.
  • For other build arguments, including release settings, see GN build configuration. The default will be a debug component build matching the current host operating system and CPU.
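
For example, to generate a release build you can pass GN arguments when creating the directory. This is a sketch using two standard GN args (is_debug and is_component_build); run gn args out/Default --list to see everything available:

gn gen out/Release --args='is_debug=false is_component_build=false'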

For more info on GN, run gn help on the command line or read the quick start guide.

Actually Build Chromium

Build Chromium (the “chrome” target) with Ninja using the command:

autoninja -C out/Default chrome

autoninja is a wrapper that automatically provides optimal values for the arguments passed to ninja.

This command took over six hours to complete for a fresh compilation. Incremental rebuilds after updates should be significantly faster.

Run Chromium

Once it is built, you can simply run the browser:

out/Default/Chromium.app/Contents/MacOS/Chromium

Unless you have a developer account with Apple you will not be able to sign the app and run it normally.

Portion of About Chromium showing the version and indicating it’s a developer’s build

What we don’t get with Chromium

Before we move any further let’s look at the things you don’t get.

  • Access to the Google client libraries. If any tool you’re used to working with breaks, this may be the reason why
  • No EME playback, so no Netflix and no other application that plays encrypted media streams (audio or video)
  • No MP3 audio; it requires a license that Google provides for Chrome
  • No MP4 and no AAC audio; they too require licenses that Google provides

If you want to get the client keys to recover that functionality, and you understand that Google will be able to track some of your actions online while using them, you can follow the instructions on this page to get them.

Out of the four items I mentioned, the one that worries me the most is EME. You must have a license for one of the available CDMs, and the easiest one to work with, Widevine (owned by Google), will not work with open-source projects, as documented by Samuel Maddock and then blown out of proportion by the media. I wish whoever handled the issue on the Widevine side had done so more gracefully, but I would have been surprised had the project been approved; then again, we would never have heard about it in the first place.

Further thoughts on performance

In the time I’ve been researching performance I’ve become even more convinced that it’s not just an issue with JavaScript, CSS, images, HTML, or what framework you’re using, although any one of those, if not used carefully, can have a negative impact on your users’ experience.

We have to think deeper than just the tools and ask ourselves why we are using these technologies and how we can leverage them to improve the user experience of our sites and applications.

What technology stack to use

I will not preach one technology over another one, particularly frameworks. I have an opinion and, if you know me, you know what it is. However, there are things to consider when starting a new project or when rearchitecting for performance.

Don’t pay much attention to the latest and greatest tool. As long as you get the results you want and are happy maintaining the tool, you’re doing just fine. The only exception might be a bundler that provides code splitting, tree shaking, and a good plugin ecosystem.

Besides images, JavaScript is usually one of the largest components of a page. To stay within a budget that still covers the critical-path HTML/CSS/JavaScript, all the resources, and the app logic necessary for the route the bundle will handle, we need to be extra careful when creating the bundle and mindful of the costs (network, transfer, parse/compile, and runtime) of the components we choose to use.

Not every project needs a framework, and not every part of a SPA needs to load the framework.

Be deliberate in your choices. Be thorough in evaluating third-party JS regarding these areas:

  • features
  • accessibility
  • stability
  • performance
  • package ecosystem
  • community
  • learning curve
  • documentation
  • tooling
  • track record
  • team
  • compatibility
  • security

Pick your framework battles wisely. Make sure that your chosen framework has all the features you need for your project, and understand the way it works.

If you make multiple calls to an API or use multiple APIs in your app, they might become a performance bottleneck. Explore ways to reduce your dependencies on external APIs.

Depending on how much dynamic data you have, you may be able to move your content to a static site and serve it through a content delivery network. Make sure that your static site generator has plugins to do what you need to increase performance (e.g. image optimization in terms of formats, compression, and resizing at the edge), support for service workers, and other tasks you’ve identified as necessary.

Optimize your build

Run an inventory of all of your assets (JavaScript, images, fonts, third-party scripts, and “expensive” modules on the page) and break them into groups:

  • The basic core experience (fully accessible core content for legacy browsers)
  • An enhanced, full experience for capable browsers
  • Extra assets that are nice to have and can therefore be lazy-loaded

Define what “cutting-the-mustard” means for your site.

Either use feature detection to send features only to those browsers that support them, or use ES2015 modules to split the code between the core and enhanced experiences.

If you haven’t seen it before, feature detection asks the browser’s JavaScript engine whether it supports a given feature.

if ('querySelector' in document) {
  console.log('Yay!')
} else {
  console.log('Boo')
}

We can have multiple items in the if statement to give us finer control over how we cut the mustard for the enhanced experience.

This example uses the logical AND operator (&&), which only returns true if both operands are true, meaning that the browser must support both features for the test to pass.

if (('querySelector' in document) && ('serviceWorker' in navigator)) {
  console.log('Yay!')
} else {
  console.log('Boo')
}

Modules in browsers

Another, coarser way to cut the mustard is to use ES2015 modules and the type="module" attribute on the script tag when loading JavaScript. Modern browsers will interpret the script as a JavaScript module and run it as expected, while legacy browsers won’t recognize the attribute’s value and will ignore the script.

Be aware that cheap Android phones will cut the mustard despite their limited memory and CPU capabilities, so consider detecting the Device Memory API and using it to decide what features you send the user, combined with feature detection to make sure those features are supported.

if ('deviceMemory' in navigator) {
  const memory = navigator.deviceMemory
  console.log(`This device has at least ${memory}GiB of RAM.`)
}

Remember that parsing JavaScript is expensive, so keep it small. Look for modules and techniques to speed up the initial rendering time (loading, parsing, and rendering). Be mindful that this can take significantly longer in low-end mobile devices like those used in emerging markets.

Reducing the size of your payload

Use tree-shaking, scope hoisting, and code-splitting to reduce payloads where appropriate. I’m not against tree shaking and minimizing, as long as we can understand the code when we expand the minimized files.

  • Tree-shaking is a way to clean up your build process by only including code that is actually used in production
  • Code-splitting splits your code base into “chunks” that are loaded on demand
  • Scope hoisting detects where import chaining can be flattened and converted into one inlined function without compromising the code.

Make use of these techniques via your bundler of choice.
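
As a sketch of code splitting in practice, modern bundlers turn a dynamic import() into a separate chunk that is only fetched when that code path runs (chart.js, renderChart, and the element IDs are hypothetical names):

// The chart code becomes its own chunk and is only downloaded on demand
document.querySelector('#show-chart').addEventListener('click', async () => {
  const { renderChart } = await import('./chart.js');
  renderChart(document.querySelector('#chart'));
});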

Figure out if you can offload JavaScript

Offloading expensive computations to a worker (running in a separate thread) or to WebAssembly improves your site’s performance and frees the main thread for UI work and user interaction.

WebAssembly provides several performance benefits when working alongside JavaScript.
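
As a minimal sketch of the worker approach (heavy-math.js and the message shape are assumptions):

// main.js: hand the data to a worker and keep the main thread free
const worker = new Worker('/scripts/heavy-math.js');
worker.postMessage({ values: [1, 2, 3, 4] });
worker.onmessage = (event) => {
  console.log('Result computed off the main thread:', event.data);
};

// heavy-math.js: do the expensive work and post the result back
self.onmessage = (event) => {
  const result = event.data.values.reduce((total, value) => total + value, 0);
  self.postMessage(result);
};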

Trim unused CSS/JavaScript: Differential loading and using smaller packages

You can also serve different code to browsers based on the features that they support (called differential loading). This is different from using type="module" on script tags.

Use babel-preset-env to only transpile ES2015+ features unsupported by the modern browsers you are targeting. Then set up two builds, one in ES6 and one in ES5.
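
A sketch of what the modern half of that setup could look like in a babel.config.js; the MODERN environment variable is an assumption used to switch between the two builds:

// babel.config.js
module.exports = {
  presets: [
    ['@babel/preset-env', {
      // ES2015+ output for browsers that understand <script type="module">,
      // ES5 output otherwise
      targets: process.env.MODERN ? { esmodules: true } : '> 0.25%, not dead',
    }],
  ],
};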

<script type="module" src="path/to/module.mjs"></script>
<script async nomodule src="path/to/es5/script.js"></script>

Browsers that support modules natively will load the first script element and ignore the nomodule script.

Browsers that don’t support modules will ignore the first script and load the second.

Tools like Autoprefixer allow you to write clean CSS and only add the prefixes needed for your target browsers, similar to what babel-preset-env does for JavaScript.

Chrome DevTools’ Coverage panel helps identify the parts of your CSS and JavaScript that are not used and can be moved to separate files and lazy loaded.

If you work with libraries like Moment or Lodash you can load only the methods you actually use rather than load the entire library.

Using Lodash as an example, instead of loading the complete library like this:

// Load the full build.
const _ = require('lodash');
// Load the core build.
const _ = require('lodash/core');

You can load method categories:

const array = require('lodash/array');
const object = require('lodash/fp/object');

Or even individual methods:

// Cherry-pick methods for smaller bundles.
const at = require('lodash/at');
const curryN = require('lodash/fp/curryN');

Loading individual methods guarantees the smallest possible build. Depending on what you use you may still end up loading a fair amount of code, but likely less than the full library.

Moment.js is very heavy and it doesn’t seem to have a way to load individual methods or categories. You may want to look at date-fns as an alternative.

const formatDistance = require('date-fns/formatDistance')
// Require english and spanish locales
const en = require('date-fns/locale/en-US')
const es = require('date-fns/locale/es')

const resultES = formatDistance(
  new Date(2016, 7, 1),
  new Date(2015, 0, 1),
  {locale: es} // Pass the locale as an option
)

const resultEN = formatDistance(
  new Date(2016, 7, 1),
  new Date(2015, 0, 1),
  {locale: en} // Pass the locale as an option
)

console.log(resultES)
//más de 1 año

console.log(resultEN)
// over 1 year

A lot of the functionality of libraries like Moment or date-fns can be done natively using the Intl object and its methods in modern browsers.

const date = new Date(Date.UTC(2019, 5, 20, 3, 0, 0));

let dateUS = new Intl.DateTimeFormat('en-US').format(date);
console.log(dateUS)
// -> 6/19/2019

let dateGB = new Intl.DateTimeFormat('en-GB').format(date);
console.log(dateGB)
// -> 19/06/2019

Restrict third-party code from loading additional assets

Too often a single third-party script ends up calling many additional scripts that add little or no value to your page or its content. Establish a Content Security Policy (CSP) to restrict the impact of third-party scripts, e.g. disallowing the download of audio or video. Embed third-party scripts via an iframe and sandbox them, so the scripts don’t have access to the DOM and run only with the permissions you assign them.
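
A sketch of the sandboxed iframe approach; the widget URL is a placeholder. With sandbox="allow-scripts" the embedded script can run, but it gets a unique origin and no access to the embedding page’s DOM:

<iframe src="https://third-party.example/widget.html"
        sandbox="allow-scripts"
        title="Third-party widget"></iframe>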

Make sure cache headers are set correctly

Assuming that you have access to your server’s configuration file, double-check that expires, cache-control, max-age, and other HTTP cache headers are set properly.

In general, resources should be cacheable either for a very short time (if they are likely to change) or indefinitely (if they are static). Use cache-control: immutable, designed for fingerprinted static resources, to avoid revalidation.
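
As a sketch of what this can look like, assuming a Node server using Express (the paths are placeholders):

const express = require('express');
const app = express();

// Fingerprinted assets never change, so cache them "forever"
app.use('/static', express.static('dist/static', {
  maxAge: '1y',
  immutable: true, // Cache-Control: public, max-age=31536000, immutable
}));

// HTML is likely to change, so make the browser revalidate it
app.get('/', (req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.sendFile(`${__dirname}/dist/index.html`);
});

app.listen(3000);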

Check that you aren’t sending unnecessary headers or headers that may expose your server to potential hacks.

Evaluate if service workers are a good solution

Consider using a Service Worker to optimize future visits. Service workers, inside or outside a PWA, give you finer control over how long items stay in the cache without any new server configuration. Libraries like Workbox make the job easier.

Treat service workers as a progressive enhancement. If the browser doesn’t support them or has JavaScript disabled, the browser will not cache the resources and the site will not work offline, so plan accordingly.
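
A minimal sketch of that progressive enhancement; /sw.js is a hypothetical service worker file:

// Only register when the browser supports service workers
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker.register('/sw.js')
      .then((registration) => console.log('SW registered:', registration.scope))
      .catch((error) => console.log('SW registration failed:', error));
  });
}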

Optimize assets

Compressing plain-text assets (HTML, CSS, and JavaScript) provides good results for little effort. Use Brotli or Zopfli in addition to Gzip for compressing text files.

Evaluate what compression strategy works best for your content. Usually you can compress static assets with Brotli and Gzip ahead of time at the highest level, and compress HTML on the fly with Brotli at a lower level.

Use responsive images with srcset, sizes, and the <picture> element. Make use of the WebP format by serving WebP images inside <picture> with a JPEG fallback.

Note: Users might see an actual image faster with JPEG files although WebP images might travel faster through the network.
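
A sketch of the markup; the browser picks the first source it can decode and falls back to the img element otherwise (file names are placeholders):

<picture>
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" alt="Description of the image">
</picture>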

Tools like gulp-responsive will generate files for your responsive images as part of your build process. This assumes that you have large, high density, images to use as your source.

Make sure you optimize images, as much and as often as possible. Even if you don’t use responsive images you owe it to your users to optimize your images.

Tools like Imagemin, whether standalone, as a Gulp plugin, or as a plugin for your favorite build system, automate the compression process.

Video formats and compression

Likewise, we need to ensure that videos are properly encoded. Use WebM or HEVC-encoded MP4 video instead of animated GIFs.

Evaluate what codecs will work best for your video. WebM and MP4 have wide support, and AV1 has finally stabilized and is gaining adoption both in browsers (see the Caniuse.com AV1 support matrix for more information) and in hardware.

Test your video in all the formats your target audience can play and choose which one is the most efficient for you to encode and for your users to play. Prioritize user experience over encoding speed.

Optimizing fonts

Be very careful when working with web fonts. They carry a lot of extra baggage that you may not need.

Subsetting fonts shrinks them to only the glyphs you’re actually using, and depending on the function of the font it may reduce the number of glyphs (characters) considerably. For example, if you’re using a different font only for your pages’ headings, that font is a prime candidate for subsetting.

Where possible, choose WOFF2 as your primary font format and fall back to WOFF. These two formats produce smaller files for the same content, and some tools use Zopfli compression, giving you better overall compression.
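
A sketch of how a subset font might be declared; the paths are placeholders, and unicode-range tells the browser to download the file only when characters in that range appear on the page:

@font-face {
  font-family: 'Heading Font';
  src: url('/fonts/heading-latin.woff2') format('woff2'),
       url('/fonts/heading-latin.woff') format('woff');
  unicode-range: U+0000-00FF; /* basic Latin subset */
  font-display: swap;
}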

Variable fonts offer another way to save on font size and downloads. Consolidating multiple axes of variation (normal to italic, normal to bold) means a single, smaller font file can do double duty for both italics and bold.

We can subset variable fonts but, and this is important, we can’t subset the custom axes. This matters when we start working with fonts that have many axes.

I love Roboto and I particularly love its variable font implementation.

Roboto VF provides twenty named axes, each producing a different visual effect, and you pay for all of them whether you use them or not. The weight of this version of Roboto VF is 1MB as a WOFF2 file. Not small, but not overly large either.

In Improving Font Performance: Subset fonts using Glyphhanger I discuss how to use Glyphhanger to create smaller subsets of fonts.

As good as they are, I have a problem with variable fonts: you can’t subset the variations to keep only those you need for a specific project.

Jason Pamental reports on a promising development on this front. The W3C’s Web Fonts Working Group charter has been extended with the mandate to explore improving the performance of web fonts, particularly in light of new Variable Fonts and their potential impact on overall page performance.

According to Jason:

A simplistic description would be something like ‘font streaming’ but in truth that wouldn’t actually solve the problem: users would still be constantly downloading entire font files even if they only needed a small portion to render the one or two pages they might view on a given site. The problem with existing subsetting solutions is that either the subset is thrown away with each page view or the solution requires a proprietary server resource, thereby greatly reducing the usefulness of the subset while increasing the complexity and resource requirements on the server.

The ideal solution would combine the benefits of both of these approaches: subset a font request to what’s necessary for a given page, but add to the original font asset on subsequent content requests, thereby enabling the gradual enrichment of the font file. Adobe has been doing something like this for a while with their own custom implementation, which shows it’s possible to preserve the enriched font’s cacheability and greatly enhances the viability of using web fonts with very large character sets like Arabic and CJK.

Responsive Web Typography — Progressive Font Enrichment: reinventing web font performance

So, hopefully, in the not-so-distant future we should have smaller and faster web fonts, where size would be less of an issue and where variable fonts become practical even for very large character sets.

Still, I think that Variable Fonts are the best solution to handle font size and bloat.

Optimize delivery

Use progressive enhancement as a default.

Design and build the core experience first, and then enhance the experience with advanced features for capable browsers, creating resilient experiences.

If your website runs fast on a slow machine with a poor screen in a poor browser on a sub-optimal network, then it will only run faster on a fast machine with a good browser on a decent network.

As developers, we have to explicitly tell the browser not to wait and to start rendering the page. The defer and async attributes of the script element handle this.

Which attribute you use will depend on what you need for the specific script on the page, and on whether you need the scripts to execute in a specific order.
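
A quick sketch of the difference; the file names are placeholders:

<!-- async: fetch in parallel, execute as soon as it arrives, order not guaranteed -->
<script async src="analytics.js"></script>

<!-- defer: fetch in parallel, execute after parsing, in document order -->
<script defer src="ui-setup.js"></script>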

Use IntersectionObserver

Lazy-load all expensive components, such as heavy JavaScript, videos, iframes, widgets, and potentially images, using Intersection Observer, native lazy loading (supported in Chromium browsers behind a flag), or libraries like yall.js.
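
As a minimal Intersection Observer sketch, here is one way to upgrade images as they approach the viewport (the lazyload class and data-src attribute follow the convention used later in this section):

const io = new IntersectionObserver((entries, observer) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // start the real download
      observer.unobserve(img);   // each image only needs to be upgraded once
    }
  });
}, { rootMargin: '200px' });     // begin loading a bit before it's visible

document.querySelectorAll('img.lazyload').forEach((img) => io.observe(img));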

Having some browsers support lazy loading natively and some not introduces some interesting conundrums. We need to feature detect native support and use it if available and load a library if it’s not supported natively.

Following Addy Osmani’s example, we can do something like this. In the markup we do two things:

  • We set up a class for the assets we want to lazy load
  • We add the loading attribute to flag the lazy-loading behavior we want when native support exists
<!-- Let's load this in-viewport image normally -->
<img src="hero.jpg" alt=".."/>

<!-- Let's lazy-load the rest of these images -->
<img data-src="unicorn.jpg"
     loading="lazy"
     alt=".."
     class="lazyload"/>
<img data-src="cats.jpg"
     loading="lazy"
     alt=".."
     class="lazyload"/>
<img data-src="dogs.jpg"
     loading="lazy"
     alt=".."
     class="lazyload"/>

The script does the following:

  1. Detects if the browser supports lazy loading natively
  2. Collects all the images we want to lazy load
  3. For each of those images, copies data-src into src
  4. Loads a lazy-loading library and initializes it
(async () => {
  // 1
  if ('loading' in HTMLImageElement.prototype) {
    // 2
    const images = document.querySelectorAll("img.lazyload");
    // 3
    images.forEach(img => {
      img.src = img.dataset.src;
    });
  } else {
    // 4: yall.js is assumed to expose a global `yall` function
    await import('/scripts/yall.js');
    // Initialize yall once the DOM is ready
    document.addEventListener("DOMContentLoaded", yall);
  }
})();

Push critical CSS quickly

Collect all of the above-the-fold CSS required to start rendering the first visible portion of the page and inline it in the <head> of the page.

Experiment with regrouping your CSS rules into purpose-specific modules or queries for individual breakpoints and import them as needed.

Make sure you don’t place <link rel="stylesheet" /> before async scripts.

Cache inlined CSS with a service worker and experiment with in-body CSS.
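
A sketch of the pattern, using the widely used preload-and-swap trick (styles.css is a placeholder):

<head>
  <style>
    /* critical, above-the-fold rules inlined here */
  </style>
  <link rel="preload" href="/css/styles.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/styles.css"></noscript>
</head>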

Consider using client hints and Network Information API to customize your users’ experiences

The Save-Data client hint request header allows us to customize the application and the payload to cost- and performance-constrained users.

  • Serve low-resolution images to devices that request it
  • Omit non-essential imagery
  • Omit non-essential web fonts
  • Opt out of server pushes

See Delivering Fast and Light Applications with Save-Data for more specifics. Of course, your project’s needs may differ.
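
A sketch of honoring the hint on the client; the save-data class is a hypothetical hook that stylesheets and scripts can key off:

if (navigator.connection && navigator.connection.saveData) {
  // Let CSS and scripts opt into lighter assets
  document.documentElement.classList.add('save-data');
}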

Another tool supported in some browsers is the Network Information API. The API enables web applications to access information about the network connection in use by the device; you can then use this information to decide what assets to serve based on the reported network conditions from the users.

navigator.connection.addEventListener('change', logNetworkInfo);

function logNetworkInfo() {
  // Network type that browser uses
  console.log('type: ' + navigator.connection.type);

  // Effective bandwidth estimate
  console.log('downlink: ' + navigator.connection.downlink + 'Mb/s');

  // Effective round-trip time estimate
  console.log('rtt: ' + navigator.connection.rtt + 'ms');

  // Upper bound on the downlink speed of the
  // first network hop
  console.log('downlinkMax: ' + navigator.connection.downlinkMax + 'Mb/s');

  // Effective connection type determined using a
  // combination of recently observed rtt and
  // downlink values
  console.log('effectiveType: ' + navigator.connection.effectiveType);

  // True if the user has requested a reduced
  // data usage mode from the user agent.
  console.log('saveData: ' + navigator.connection.saveData);
}

logNetworkInfo();

Be aware that the information the API provides can change drastically and without warning, even on desktop machines.

Service Workers

Service Workers provide a programmatic way to cache content on the client, intercept requests, and provide custom fallbacks and offline pages. You can select the types of assets you cache and how long to cache each of them.

Be aware that Service Workers will not help performance on the first load. A service worker’s precached content doesn’t exist until the page has fully loaded the first time, so other techniques for improving first-load performance (such as preconnect and preload) are still necessary.

Libraries like Workbox make working with Service Workers easier.

Service Workers are also the basis for a series of progressive enhancements: Background Sync (both one-off and periodic), push notifications and others.

Stay consistent in the user experience

Isolate expensive components with CSS containment so that the rendering engine will not traverse their children when doing layout, paint, or style work.

Where possible, make sure that there is no lag when scrolling the page or when an element is animated, and that you’re consistently hitting 60 frames per second. If that’s not possible, then making the frames per second consistent is at least preferable to a mixed range of 60 to 15.

Use CSS will-change to inform the browser ahead of time of what kinds of changes you are likely to make to an element, so it can set up the appropriate optimizations before they’re needed and avoid a non-trivial start-up cost that can hurt the responsiveness of a page. Only use will-change for the aspects of an element that you know will change; if you optimize everything, you lose the benefits of the optimization.
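
A short sketch of both properties; the class names are placeholders:

/* Keep layout and paint work inside the widget's own box */
.widget {
  contain: layout paint;
}

/* Hint the browser just before the animation starts; remove the class
   (and the hint) when the animation ends */
.menu.is-animating {
  will-change: transform;
}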

Perceived performance is important

Don’t underestimate the role of perceived performance. While loading assets, try to always be one step ahead of the customer, so the experience feels swift while there is quite a lot happening in the background. To keep the customer engaged, use skeleton screens instead of loading indicators and add transitions and animations.

Server configuration

Yes, I know we’re talking about front-end development, but the server you host your content on is also important; knowing the basics about your server and how it’s configured will help you make the sites it hosts more performant.

Most, if not all, modern browsers support HTTP/2 and take advantage of its features to boost performance, and in most cases you’re better off using it. Still, test your site’s performance over HTTP/2 with mobile clients: HTTP/2 is often slower on networks with a noticeable packet loss rate, so mobile users may be adversely affected.

Some of the differences between HTTP/2 and HTTP/1.x:

  • HTTP/2 is binary, instead of textual. This allows for better compression
  • HTTP/2 can send multiple requests for data in parallel over a single TCP connection
  • It compresses headers for more efficient communication
  • It allows servers to “push” responses proactively into client caches instead of waiting for a new request for each resource. Take this with a grain of salt.
  • It reduces additional round trip times (RTT), making your website load faster without any optimization

HTTP/1.1 (the previous version of HTTP) encouraged packaging all your page/app resources into as few bundles as possible so as to optimize performance; you could also shard your content across different domains or subdomains so resources came from different origins and weren’t subject to the per-origin download restrictions. With HTTP/2, domain sharding and asset concatenation are no longer needed.

You need to find a fine balance between packaging modules and loading many small modules in parallel to take advantage of HTTP/2.

Break down your entire interface into many small modules; then group, compress and bundle them. Test how different combinations of individual and bundled files work for your application. Sending around 6–10 packages seems like a decent compromise (and isn’t too bad for legacy browsers).

There is no one-size-fits-all solution.

Keeping ourselves honest

Once we have a budget we need to enforce it. Webpack has a built-in tool that will warn (or error out) if you go over a pre-defined bundle size, without needing an additional plugin. The following Webpack configuration snippet shows how:

module.exports = {
  //...
  performance: {
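    // Sizes are in bytes; 100000 is roughly 100 KB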
    maxAssetSize: 100000,
    maxEntrypointSize: 100000,
    hints: "warning"
  }
};

See Setting performance budgets with webpack for more information.

While we’re on the subject of Webpack: Contentful published a series of articles on how to put your Webpack bundle on a diet and make your gzipped bundle smaller than 100KB.

The techniques discussed in the series may or may not be applicable to your individual needs but they point the way to how to improve the performance of your bundled content.

There are ESLint rules that disallow importing from certain packages or modules based on your team’s criteria (you may have standardized on a specific package).

The Lighthousebot tool provides ways to run Lighthouse for every pull request on your project, and you can choose to reject the PR if the Lighthouse run doesn’t match your criteria.

How we address these performance requirements and how seriously we enforce them is up to us. But I see no other way to really get out of this bloated mess we’ve turned our web content into.

Closing

Because Performance Matters, we should all work towards improving performance and the overall user experience.

Echoing the words from Why we focus on front-end performance:

Performance is therefore an integral part of the service we provide and every member of a service team should be involved in the optimisation process. Even minor changes can make a huge difference for our users.

A new baseline for front-end development

In 2012 Rebecca Murphey wrote A baseline for front-end developers, a minimum of what people should know to work in front-end development back in the day.

I went back to the article and was surprised at how well it holds up seven years after it was written; the tools may have changed but we need to keep the discussion going. What does it take to do front-end development in 2019? What tools do we need to do it?

JavaScript

I won’t advocate specific frameworks or technologies. I believe we should all start from a basic understanding of the language in question so we can build basic interactivity and move to advanced concepts and frameworks at a later time if we need to.

If you need to work through some of the basics, these are some good resources:

Eloquent JavaScript: A wonderful book (also available as a PDF, an EPUB, and a print purchase) that takes you back to JavaScript basics without being overbearing.

While Eloquent JavaScript provides a strong foundation, it doesn’t cover all the changes brought into the language in newer versions. JavaScript for impatient programmers provides a more recent introductory JavaScript text, covering the language up to its most recent update (ES2019). It also skips portions of the language that only work in the browser.

If you feel you have a good background in the language, Exploring ES6 and the updates Exploring ES2016 and ES2017 and Exploring ES2018 and ES2019 provide information only on the new features released in those versions of the language specification.

Babel

Front-end developers should know what Babel is, how it works, and how to use it to create bundles based on what the browser supports using @babel/preset-env.

Another important thing to know when working with Babel is how to build a Babel configuration file that converts modern ES2015+ code so it will run in current browsers.

For a refresher, I suggest A short and simple guide to Babel.

CSS and pre-processors

CSS preprocessors like Sass and Less, frameworks like Compass (and associated libraries like Bourbon, Neat, Bitters, and others), and post-processors like PostCSS have greatly improved the way we work with CSS. But whatever tool we use, we need to remember that we still need to know our CSS basics; none of these tools will build CSS for us, they will only enhance what’s already there.

Sites like MDN’s Learn to style HTML using CSS are a good starting point for learning the what and the how about CSS.

The last thing in this area is to know about vendor prefixes, what they are, how they work and how to implement them with tools like Autoprefixer.

Modularity

The way we create modular content has changed. From AMD and CommonJS we’re moving to CommonJS in Node.js plus native ES modules and, sometime in the not-so-distant future, to native ES modules alone.

As front-end developers we should be aware of the differences between CommonJS and ES modules, and where and when we would use each of them.

We should learn how to optimize the loading of production code using differential loading.

Understand what the following snippet does:

<script type="module" src="myModules.js"></script>
<script nomodule src="myScript.js" defer></script>

Git and GitHub

Most development happens in Git repositories, either private, hosted on GitLab, or hosted on GitHub (which has lost some users since the Microsoft acquisition).

Git has become essential to development, front end or back end, so we need to know the basics of how the software works.

Some suggested starting points.

  • Create repositories on your local machine and on your Git host
  • Set up your local Git repository to sync with a remote server like GitHub, GitLab, or another Git host
  • Create and use a branching strategy for your team to collaborate on a project

There are plenty of resources for working with Git.

Build and Process Automation

Building our web content has gotten more complicated both in terms of what we’re requiring our build systems to do and the number of build systems available on the market.

Pick a system and learn it well enough so you can use it to build the tooling that you need for your projects.

Understand the differences and similarities between build systems like Gulp and Grunt versus bundlers like Webpack and Rollup, and when you would use one or the other.

Mulyana’s Gulp 4 tutorial provides both a good overview of how to implement plugins and examples of the varied things you can do with the tool.

The Web Performance Optimization with webpack series gives you a good starting point for how to use Webpack.

Browser Dev Tools

The developer tools built into modern browsers give you a lot of power to inspect your site/app. Learn what DevTools can do and how to best use them… for example, learn how to use DevTools to debug a web application on a device.

Different browsers’ DevTools excel in different areas, so it pays to learn what the differences are and where each of them shines.

Chrome has particularly good PWA debugging and gives you an easy way to clean up the contents of your caches when you’re testing your application.

Firefox’s CSS Grid Inspector and Shapes Editor make it easier for you to work with grids and CSS shapes, respectively.

Your web server matters

The web server we use to serve our content may also have an impact on how we build and package it. Be aware of the differences between HTTP/1.1 and HTTP/2 in terms of performance and how they change the way we package and deliver our front-end content.

See Getting Ready For HTTP2: A Guide For Web Designers And Developers and HTTP2 is here, let’s optimize as starting points for understanding the differences and how they affect front-end practices.

Differences between CSS Custom Properties and Houdini Properties and Values

CSS Custom Properties, also known as CSS Variables, allow you to do awesome things. In this post we’ll explore the different types of CSS custom properties: what they are, how they work, and which one to use in what circumstances.

The current version

The current version of CSS Custom Properties is a W3C Candidate Recommendation that defines a way to set custom properties for our CSS content that we might want to change dynamically or reuse throughout the stylesheet.

What they are and how they work

The idea behind custom properties is to give developers the ability to create reusable properties and a way to use them. As explained in the Introduction to the specification:

Large documents or applications (and even small ones) can contain quite a bit of CSS. Many of the values in the CSS file will be duplicate data; for example, a site may establish a color scheme and reuse three or four colors throughout the site. Altering this data can be difficult and error-prone, since it’s scattered throughout the CSS file (and possibly across multiple files), and may not be amenable to Find-and-Replace.

This module introduces a family of custom author-defined properties known collectively as custom properties, which allow an author to assign arbitrary values to a property with an author-chosen name, and the var() function, which allows an author to then use those values in other properties elsewhere in the document. This makes it easier to read large files, as seemingly-arbitrary values now have informative names, and makes editing such files much easier and less error-prone, as one only has to change the value once, in the custom property, and the change will propagate to all users of that variable automatically.

CSS Custom Properties for Cascading Variables Module Level 1 — Introduction

This sounds like a mouthful so let’s unpack it.

This API allows developers to create custom properties to use in their stylesheets. It also introduces the var() function to make use of those custom properties.

The example below defines two custom properties in the :root element for the stylesheet and later uses the properties as the value of the var function.

:root {
  --main-color: #06c;
  --accent-color: #006;
}

/* The rest of the CSS file */
#foo h1 {
  color: var(--main-color);
}

This little example shows the basic syntax, but you can imagine larger stylesheets where we use --main-color in multiple places. Now let’s assume that marketing is changing the company colors. We only have to change the value of --main-color once and the change will automatically propagate everywhere the property is used.

If we want to override a specific instance we can just redeclare the custom property where we want to use a different value for the same variable:

:root {
  --main-color: #06c;
  --accent-color: #006;
}

/* The rest of the CSS file */
#foo h1 {
  color: var(--main-color);
}

#foo2 h1 {
  --main-color: rebeccapurple;
  color: var(--main-color);
}

#foo3 h1 {
  color: var(--main-color);
}

Houdini

Houdini is a joint effort by the W3C TAG and the W3C CSS Working Group to create APIs that will allow developers to tap into the internals of the browser to get work done. Check Is Houdini Ready Yet? for the status of the different APIs and their implementation across browsers.

The differences

As good as they are, custom properties as defined in CSS have several significant drawbacks, which Houdini addresses as explained in the sections below.

Inheritance

All custom properties inherit down the cascade. There are times when you don’t want this inherited behavior.

Houdini custom properties let you choose whether the value inherits or not, allowing for better encapsulation of styles.

Values

All custom properties defined in CSS are strings, regardless of the values we set them to. This makes them harder to work with in JavaScript, where we have to convert them to the actual values we need.

Houdini properties allow you to define specific value types for your properties. These are the same value types used elsewhere in CSS, so they will work the same throughout your stylesheet.

Animatable

Because they are strings, CSS custom properties either don’t animate at all or produce unexpected results when you try.

Because Houdini properties use values defined in CSS specifications, the browser can figure out how to animate those properties and if it’s possible to animate them or not.

Validation

Because all CSS custom properties are treated as strings, it’s impossible to validate that they hold an appropriate value.

Houdini properties, on the other hand, have defined value types that make validation possible and easy.

How they work

The first step in using custom properties is to define them in JavaScript. We use CSS.registerProperty to register the property with the CSS parser. It is always a good idea to check if the browser supports CSS.registerProperty before using it; this allows for fallbacks if it’s not supported.

if ('registerProperty' in CSS) {
  CSS.registerProperty({
    name: '--my-custom-prop',
    syntax: '<color>',
    inherits: true,
    initialValue: 'black'
  });
} else {
  console.log('registerProperty is not supported');
}

There are four values that we need to pass to registerProperty:

Name

The name is what we’ll use to reference the property. The two dashes at the beginning should be familiar from CSS variables and are required. This is how we’ll distinguish our custom variables and properties from what the CSS WG does and will do in the future.

Syntax

Indicates the possible syntax for the property. Level 1 of the spec supports values like <length>, <number>, <percentage>, <length-percentage>, <color>, <image>, <url>, <integer>, <angle>, <time>, <resolution>, <transform-function>, and <custom-ident>, matching the corresponding units in CSS Values and Units Module Level 3 (check that specification and CSS Properties and Values API Level 1 for the full list).

You can create fairly complex syntax for your custom properties but, until we become familiar with them, I advocate for the KISS (Keep It Simple, Silly) principle.

inherits

Tells the CSS parser whether this custom property should propagate down the cascade. Setting it to false gives us more power to style specific elements without fear of messing up elements further down the chain.

initialValue

Use this to provide a sensible default for the property. We’ll analyze why this is important later.

That’s it… we now have a custom property.

Using custom properties

To demonstrate how to use custom properties we’ll reuse the --bg-color example, registered in JavaScript, and use it in several different elements.

CSS.registerProperty({
  name: '--bg-color',
  syntax: '<color>',
  inherits: false,
  initialValue: 'red'
});

The CSS is no different than if we were using plain CSS variables, but what the registered property does for free is much more interesting.

First we define common parameters to create 200px by 200px squares using div elements.

The examples below use SCSS syntax to make the code easier to read. It is also important to note that SCSS variables are not the same as CSS or Houdini custom properties, so having both in the same stylesheet will not cause any problems.

div {
  border: 1px solid black;
  height: 200px;
  width: 200px;
}

.smoosh1 and .smoosh2 set up colors other than the initial value, and each has a different color to change to on hover.

.smoosh1 {
  --bg-color: rebeccapurple;
  background: var(--bg-color);
  transition: --bg-color 0.3s linear;
  position: absolute;
  top: 50vh;
  left: 15vw;

  &:hover {
    --bg-color: orange;
  }
}

.smoosh2 {
  --bg-color: teal;
  background: var(--bg-color);
  transition: --bg-color 0.3s linear;
  position: absolute;
  top: 20em;
  left: 45em;

  &:hover {
    --bg-color: pink;
  }
}

.smoosh3 was set up with the wrong type of value (1 is not a valid CSS color). In normal CSS the declaration would be ignored and there would be no background color. Because we gave the property an initial value, it takes that value instead of failing or needing a fallback.

.smoosh3 {
  --bg-color: 1;
  background: var(--bg-color);
  transition: --bg-color 0.3s linear;
  position: absolute;
  top: 5em;
  left: 35em;

  &:hover {
    --bg-color: lightgrey;
  }
}

When would you use which?

This is a tricky question. Most of the time you’ll want to use the Houdini version, which gives you tighter control over the individual properties.

But we have to consider that only recent versions of some modern browsers support the JavaScript API (partial support in Safari Technology Preview and Chrome, and under development in Firefox).

So, in the end, it depends on what you need. Most of the time your production code should use the CSS-only version of custom properties or use Houdini properties with a CSS-only fallback like the code below. This way, if the browser doesn’t support Houdini we still have a custom property that the CSS code can use.

if ('registerProperty' in CSS) {
  CSS.registerProperty({
    name: '--my-custom-prop',
    syntax: '<color>',
    inherits: true,
    initialValue: 'black'
  });
} else {
  console.log('registerProperty is not supported');
  console.log('reverting to old-style properties');
  const sheet = document.styleSheets[0];
  sheet.insertRule(":root { --my-custom-prop: #000000 }");
}

Understanding the CSS box model

One of the things I’ve always had a problem understanding is the CSS box model. I’m writing this, based on content from MDN to try and understand it better.

When laying out a document, the browser treats each element as a rectangular box according to the standard CSS basic box model. CSS determines the size, position, and properties of these boxes.

Every box is composed of four areas: content, padding, border, and margin.

Different box models in CSS

The content area contains the “real” content of the element, such as text, an image, or a video player.

Its dimensions are the content-box width and the content-box height. It often has a background color or background image.

The padding area extends the content area to include the element’s padding. Its dimensions are the padding-box width and height.

The thickness of the padding is determined by the padding-top, padding-right, padding-bottom, padding-left, and shorthand padding properties.

The border area extends the padding area to include the element’s borders. Its dimensions are the border-box width and the border-box height.

The thickness of the borders is determined by the border-width and shorthand border properties.

If the box-sizing property is set to border-box, the border area’s size can be explicitly defined with the width, min-width, max-width, height, min-height, and max-height properties.

The margin area, bounded by the margin edge, extends the border area to include an empty area used to separate the element from its neighbors. Its dimensions are the margin-box width and the margin-box height.

The size of the margin area is determined by the margin-top, margin-right, margin-bottom, margin-left, and shorthand margin properties. When margin collapsing occurs, the margin area is not clearly defined since margins are shared between boxes.

So the idea is that, by default, when we have the following CSS declaration:

.my-box {
  width: 1200px;
  height: 900px;
  margin-top: 10px;
  margin-left: 20px;
  margin-bottom: 20px;
  margin-right: 200px;
}

The dimensions of the box containing the .my-box element are:

width: 1420px (1200px content width + 20px margin-left + 200px margin-right)
height: 930px (900px content height + 10px margin-top + 20px margin-bottom)

Box sizing

If the element has any border or padding, this is then added to the width and height to get the final size of the box that’s rendered on the screen. When you set width and height, you have to adjust the value you give to allow for any border or padding that may be added.

The box-sizing property can be used to adjust this behavior:

content-box gives you the default CSS box-sizing behavior.

If you set an element’s width to 100 pixels, then the element’s content box will be 100 pixels wide, and the width of any border or padding will be added to the final rendered width.

.my-box {
  box-sizing: content-box;
  width: 800px;
  height: 800px;
  border: 10px solid #5B6DCD;
  padding: 5px;
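  /* rendered box: 800px + (2 × 5px padding) + (2 × 10px border) = 830px wide */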
}

border-box tells the browser to account for any border and padding in the values you specify for an element’s width and height.

If you set an element’s width to 100 pixels, that 100 pixels will include any border or padding you added, and the content box will shrink to absorb that extra width. This typically makes it much easier to size elements.

When there is a background-color or background-image set on a box, it extends to the outer edge of the border (i.e. it extends underneath the border in z-ordering). This default behavior can be altered with the background-clip CSS property.

.my-box {
  box-sizing: border-box;
  width: 800px;
  border: 10px solid #5B6DCD;
  padding: 5px;
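  /* rendered box stays 800px wide; the content box shrinks to
     770px (800 - 20 border - 10 padding) */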
}

If you’re using box-sizing on an element that will be reused in multiple places, make sure the same value for the property is used everywhere in the app.