What is a front-end developer

I’m an old school developer. The first website I made dates back to 1995 when a friend and I built our school’s first website for independent study credit.

The web has come a long way from those early days, but with all the good have come divisions and arguments that never seem to end. The latest one is about the role of a front-end developer and what front-end development is as a discipline.

The rest of the post will work on the following assumptions:

  • Front-end developers should be equally proficient with HTML, CSS, and (vanilla) Javascript. I don’t mean they should be experts, but they should know at least enough to formulate a good question for Stack Overflow or to search MDN
  • Javascript knowledge should not be tied to particular frameworks or libraries. We should separate the basic Javascript used for interactivity from the tooling and frameworks we use to create and package the content
  • CSS knowledge should not be tied to frameworks or depend on forward-looking technologies

Javascript, since its inception in 1995, has become the must-have, must-use tool for web developers, used in all kinds of projects that touch the web in some capacity.

From serverless to CSS-in-JS to bundling, all newer technologies now revolve around Javascript, and I think that’s where the problem starts but not where it ends.

Javascript, CSS, UX, oh my! or How to be a front-end developer

Over the years the job of creating web content has become more and more specialized. It’s not enough to know the basics anymore; we’ve added piles of stuff to the mix that may or may not be strictly front end.

I think that the first thing we need to do as front-end developers is to create a basic set of competencies that we all agree front-end developers should have… I know it’s opening a can of worms to want everyone to agree on what the core competencies for the front end are, but I think it’s essential that we all “play from the same playbook”.

I did a search on Indeed.com for the words “front end developer” and was somewhat surprised at the specificity of what most of these jobs were looking for in a front end person and also how little some of it has to do with the crafting of a web application front-end.

Some of the things that bothered me about what I found begin with the fact that there is no uniform description of what makes for a front-end web developer. I would expect some basic competencies, for example, how many years of HTML, CSS, and Javascript do you expect a front-end developer to have for a position in your company?

I would also like to see common-sense and realistic expectations of experience based on how old and/or complex a given framework is. How do recruiters, and the people who tell HR what they need for a given position, expect to get quality applicants if they want a React developer with 7 years of experience (in a framework that has been publicly available for 6)?

Where do we draw the line?

The arguments over front-end versus back-end versus full stack are not new; it’s the tools that have changed. Whether we like it or not, our front-end experiences have become Javascript-heavy, and we’re paying the price for it.

But that raises two questions:

When you work in large Javascript codebases that produce user-facing content programmatically, are you a front-end developer or a programmer?

Conversely, if you’re great at CSS but may not know Javascript or HTML that well, are you really a front-end developer? Do you create the markup you style? How do you create interactions that require Javascript?

One of the comments in Chris’ article that caught my attention was Steven Davis’:

I think we need to move away from the term myself. PHP/ASP developers are dying off. Those people are moving into mobile development or JS. We should split old school “front end” into UX Engineers and Javascript Engineers IMO. They are different mindsets. Most people are not amazing at both JS and CSS. It’s usually either/or. Let UX Engineers work closely with UX/Design to create great interactions, prototypes, interaction events, etc and let JS Engineers handle all the data part. Just my 2 cents. So sick of being great at CSS but being forced into JS. I’m not a programmer!

From a comment on What makes a good front-end developer?

It left me with more questions than it answered. If we were to break things up as the comment suggests… How do you prototype interactions for your web content? What role do we expect UX engineers to take in the coding of the prototype? Are you expecting to have a Javascript engineer as part of your UX team? Or are you just handing off static mocks with instructions for the JS engineers to work with?

But along with those questions comes concern about the community, how we’re onboarding new people as front-end developers, and what we expect from them.

How do we move forward as a community?

In 2012 Rebecca Murphey wrote A baseline for front-end developers, a minimum of what people should have known to work in front-end development at the time it was written.

I went back to the article and was surprised at how well it holds up 7 years later; the tools may have changed but the sentiment that we should all have a common base to start the conversation definitely has not.

We should all come to an agreement on what we want the basics to be. What are the basics that someone should know for a front-end developer position?

I have opinions and have written about them elsewhere. But I’m one voice with one opinion; I would love to hear more about what other people think and how we can build a dialogue about this.

Links and resources

New Promise methods

There have been new additions to the promise arsenal that warrant a deeper look, so we’re ready to use them when they are stable enough to do so.

What we started with

Promise.all returns a single Promise that resolves when all of the promises passed to it resolve. It rejects with the reason of the first promise that rejects.

Promise.race returns a promise that fulfills or rejects as soon as one of the promises in an iterable fulfills or rejects, with the value or reason from that promise.
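
The two combinators can be sketched side by side. This is an illustration, not production code; the delay() helper is hypothetical and defined here only to simulate asynchronous work:

```javascript
// Hypothetical helper: a promise that resolves with `value` after `ms` milliseconds
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

// Promise.all waits for every promise and preserves input order
Promise.all([delay(200, 'slow'), delay(100, 'fast')])
  .then((results) => console.log(results)); // ['slow', 'fast']

// Promise.race settles as soon as the first promise settles
Promise.race([delay(200, 'slow'), delay(100, 'fast')])
  .then((winner) => console.log(winner)); // 'fast'
```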

What we got recently


Promise.prototype.finally ensures that code will run once, regardless of how the promise settles (fulfilled or rejected). This makes sure that any cleanup code will happen and that developers don’t need to remember to put the code in multiple places.

In the first example below we have to call the hideLoadingSpinner method in both the then and the catch blocks.

const fetchAndDisplay = ({ url, element }) => {
  showLoadingSpinner();
  fetch(url)
    .then((response) => response.text())
    .then((text) => {
      element.textContent = text;
      hideLoadingSpinner();
    })
    .catch((error) => {
      element.textContent = error.message;
      hideLoadingSpinner();
    });
};

fetchAndDisplay({
  url: someUrl,
  element: document.querySelector('#output')
});
We can instead leverage the finally method and place the call there, knowing that it will run regardless of how the promise settles and hide the spinner.

const fetchAndDisplay = ({ url, element }) => {
  showLoadingSpinner();
  fetch(url)
    .then((response) => response.text())
    .then((text) => {
      element.textContent = text;
    })
    .catch((error) => {
      element.textContent = error.message;
    })
    .finally(() => {
      hideLoadingSpinner();
    });
};

We can also use async/await to do the same thing with full try/catch/finally blocks, taking into account that we still want to call hideLoadingSpinner only once.

const fetchAndDisplay = async ({ url, element }) => {
  showLoadingSpinner();
  try {
    const response = await fetch(url);
    const text = await response.text();
    element.textContent = text;
  } catch (error) {
    element.textContent = error.message;
  } finally {
    hideLoadingSpinner();
  }
};

The new and shiny

There are two new methods of the Promise object that are making their way through the TC39 process: Promise.allSettled is at stage 3 and Promise.any is at stage 1.


Promise.allSettled returns a promise that is fulfilled with an array of promise state snapshots, but only after all the original promises have settled, i.e. become either fulfilled or rejected.

A common use case for this combinator is wanting to take action after multiple requests have completed, regardless of their success or failure. Other promise combinators (Promise.all and Promise.race) can short-circuit, discarding the results of input values that lose the race to reach a certain state.

Promise.allSettled will always wait for all of its input values.

Here we are only interested in the promises which failed, and thus collect the reasons. allSettled allows us to do this.

const promises = [
  fetch('/api/works'),          // hypothetical URLs, for illustration only
  fetch('/api/does-not-exist')
];

const results = await Promise.allSettled(promises);
const errors = results
  .filter(p => p.status === 'rejected')
  .map(p => p.reason);


Promise.any accepts an iterable of promises and returns a promise that is fulfilled by the first given promise to be fulfilled, or rejected with an array of rejection reasons if all of the given promises are rejected.

This is different from Promise.race and Promise.all in that only one promise has to succeed for the returned promise to fulfill (unlike Promise.all), but all of them must fail for it to reject.

Promise.any([
  fetch('/').then(() => 'home'),          // hypothetical endpoints
  fetch('/web-dev').then(() => 'web dev'),
  fetch('/docs').then(() => 'docs')
]).then((first) => {
  // Any of the promises was fulfilled.
  console.log(first);
  // → 'home'
}).catch((error) => {
  // All of the promises were rejected.
  console.log(error);
});

Native Internationalization

Making web content work across locales, each with its own way to display dates and times, is a challenge. Most of the time, when I hear about internationalization or locale-aware data manipulation, I hear about Moment.js or date-fns.

Both libraries are awesome; they let you programmatically control how certain portions of text are presented to the user based on their locale (either a default or one they’ve provided).

However, there is also a built-in way to do these presentations. The Intl object is the namespace for the ECMAScript Internationalization API, which provides language-sensitive string comparison, number formatting, date and time formatting, and other language-sensitive functions.

In order to work with this API, we have to learn more about locales. First of all, let’s give a definition.

A locale is an identifier that refers to a set of user preferences such as:

  • dates and times
  • numbers and currencies
  • translated names for time zones, languages, and countries
  • measurement units
  • sort-order (collation)

A locale identifier is not case-sensitive; using uppercase letters in some subtags is only a convention.

The locale must be a string holding a BCP 47 language tag, with all subtags separated by hyphens.
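
As a quick illustration of that casing convention, Intl.getCanonicalLocales (part of the same Intl namespace) normalizes BCP 47 tags to their conventional form:

```javascript
// getCanonicalLocales canonicalizes tags, including subtag casing
console.log(Intl.getCanonicalLocales('EN-us'));
// ['en-US']

console.log(Intl.getCanonicalLocales(['zh-hans-cn']));
// ['zh-Hans-CN']
```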

Relative Time Format

I wrote about Intl.RelativeTimeFormat in an earlier blog post so I won’t cover it in detail here, just enough to give you an idea of what it does.

The first step is to set up one or more relative time format objects for the locales we need. In this case we set up a locale for English.

const rtf = new Intl.RelativeTimeFormat('en', {
  localeMatcher: 'best fit',
  style: 'long',
  numeric: 'auto',
});
Once we have the relative time format object, we use its format method with two parameters: the value and the unit we want to use. Positive values indicate times in the future; negative values represent the past.

rtf.format(3.14, 'second');
// 'in 3.14 seconds'

rtf.format(-15, 'minute');
// '15 minutes ago'

rtf.format(8, 'hour');
// 'in 8 hours'

rtf.format(-2, 'day');
// '2 days ago'

List format

Intl.ListFormat enables language-sensitive list formatting.

Different locales use different words to separate the last item in a list, and different words to indicate a conjunction (all the items together) or a disjunction (one item from the list).

The example below defines a default locale and a list of objects to work with.

const defaultLocale = 'en-US';
const list = ['Motorcycle', 'Bus', 'Car'];

Then we create new list format objects with different locales and types to show the difference in use and how the different locales (American English, Canadian French and Chilean Spanish) handle the different use cases.

console.log(new Intl.ListFormat(defaultLocale, {
  style: 'long',
  type: 'conjunction'
}).format(list));
// > Motorcycle, Bus and Car

console.log(new Intl.ListFormat(defaultLocale, {
  style: 'short',
  type: 'disjunction'
}).format(list));
// > Motorcycle, Bus or Car

console.log(new Intl.ListFormat('fr-CA', {
  style: 'long',
  type: 'conjunction'
}).format(list));
// > Motorcycle, Bus et Car

console.log(new Intl.ListFormat('fr-CA', {
  style: 'short',
  type: 'disjunction'
}).format(list));
// > Motorcycle, Bus ou Car

console.log(new Intl.ListFormat('es-CL', {
  style: 'long',
  type: 'conjunction'
}).format(list));
// > Motorcycle, Bus y Car

console.log(new Intl.ListFormat('es-CL', {
  style: 'short',
  type: 'disjunction'
}).format(list));
// > Motorcycle, Bus o Car

DateTime Format

Intl.DateTimeFormat enables language-sensitive date and time formatting.

The example below creates a new date object.

const defaultLocale = 'en-US';
const date = new Date('December 17, 1995');

Then we format the date in our default locale (American English), Canadian French, and Chilean Spanish. The expected result is shown in a comment under each command.

console.log(new Intl.DateTimeFormat(defaultLocale).format(date));
// expected output: "12/17/1995"

console.log(new Intl.DateTimeFormat('fr-CA').format(date));
// expected output: "1995-12-17"

console.log(new Intl.DateTimeFormat('es-CL').format(date));
// expected output: "17-12-1995"
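
Beyond the locale, DateTimeFormat also accepts an options object that controls which date components appear and how verbose they are. A small sketch:

```javascript
// Spell out the weekday, month, day, and year in American English
const fmt = new Intl.DateTimeFormat('en-US', {
  weekday: 'long',
  year: 'numeric',
  month: 'long',
  day: 'numeric'
});

console.log(fmt.format(new Date('December 17, 1995')));
// "Sunday, December 17, 1995"
```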

Format Range

formatRange is an extension to Intl.DateTimeFormat that lets you format a range of dates, for example, from January 10th to the 20th.

We first set the dates that we want to work with. I set three dates to provide different examples.

let date1 = new Date(2007, 0, 10);
let date2 = new Date(2007, 0, 15);
let date3 = new Date(2007, 0, 20);

We then create a DateTimeFormat object. Because we’re using all three values (year, month, date) the results will also incorporate all three.

let fmt2 = new Intl.DateTimeFormat("en", {
  year: 'numeric',
  month: 'short',
  day: 'numeric'
});

I test by logging to the console. The results are in comments after each particular test.

console.log(fmt2.format(date1));
// Jan 10, 2007
console.log(fmt2.formatRange(date1, date2));
// Jan 10 – 15, 2007
console.log(fmt2.formatRange(date1, date3));
// Jan 10 – 20, 2007

If we know that we’re working on a single year we can eliminate the year field. The new DateTimeFormat object looks like this.

let fmt3 = new Intl.DateTimeFormat("en", {
  month: 'short',
  day: 'numeric'
});

And the results omit the year since we didn’t include it in the object we are using to format the ranges.

console.log(fmt3.format(date1));
// Jan 10
console.log(fmt3.formatRange(date1, date2));
// Jan 10 – 15
console.log(fmt3.formatRange(date1, date3));
// Jan 10 – 20

Once we get better browser coverage, using the native internationalization APIs will reduce app payload by removing libraries like Moment or date-fns. Until then, you’ll have to feature-detect support for the specific APIs and provide a fallback where the native APIs are not supported.
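
A minimal feature-detection sketch for one of these APIs might look like this; the fallback branch is where you would load a library like Moment.js or date-fns:

```javascript
// Detect support for Intl.RelativeTimeFormat before using it
function supportsRelativeTime() {
  return typeof Intl !== 'undefined' && 'RelativeTimeFormat' in Intl;
}

if (supportsRelativeTime()) {
  const rtf = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
  console.log(rtf.format(-1, 'day')); // 'yesterday'
} else {
  // Load Moment.js, date-fns, or a polyfill here instead
  console.log('Intl.RelativeTimeFormat not supported; using a fallback');
}
```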

Noto Fonts: Same family, multiple languages

One of the limitations of the World Wide Web is the lack of fonts for all the world languages. Even if we were to find fonts for all the languages we want to use it’s unlikely that we’ll find fonts that work well together.

Google has developed and released as open source a family of fonts that cover all the languages covered in the Unicode standard. They are not quite up to the intended goal of full Unicode support but they are getting close.

The idea is to have a set of fonts for Unicode languages that work well together and look good together.

This is a different problem from the one variable fonts solve; variable fonts interpolate font attributes such as weight, slant, and custom attributes along axes, meaning we need fewer font files to represent the same group of characters. However, this doesn’t cover all the characters we need for languages outside Western Europe; for those, we still need multiple files.

Quick note about licenses

As we start diving into the Noto family we need to look at the license for the fonts. For a while, Google licensed fonts under the Apache 2 license, just like their other software, but as of 2015 they moved to the SIL Open Font License 1.1.

Using the fonts

I’m working with web content so I’ll concentrate on that aspect of font usage and not worry about local use or bundling them with an app.

Make sure that the fonts are available as web fonts. If you declare a font in your CSS but don’t load it as a web font, the browser will immediately move to the next option in the stack.

Compress the fonts using either WOFF or WOFF2 depending on your browser support needs. The same suggestions outlined in Improving Font Performance: Work to control font loading and Improving Font Performance: Subset fonts using Glyphhanger apply when working with multiple fonts of the same family.

I strongly recommend using font-display to improve performance and keep yourself from a Flash of Invisible Text or a Flash of Unstyled Text.
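
As a sketch, a @font-face rule that loads a Noto face with font-display might look like the following; the file paths are hypothetical and would point at fonts served from your own domain:

```css
@font-face {
  font-family: "Noto Sans";
  /* hypothetical paths; WOFF2 first, WOFF as a fallback */
  src: url("/fonts/NotoSans-Regular.woff2") format("woff2"),
       url("/fonts/NotoSans-Regular.woff") format("woff");
  font-weight: 400;
  font-style: normal;
  /* show fallback text immediately, swap when the font arrives */
  font-display: swap;
}
```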

Serve your fonts from the same domain as your app. Some Noto fonts are available at Google Web Fonts Early Access but they may not be the latest version of Noto or it may only cover a subset of the language.

In addition, be aware that the network latency for large fonts, such as Noto Sans CJK, or when using multiple languages, can be significant. Google Fonts uses font-display: swap to mitigate the problem, but that may still not be ideal in all cases, particularly on slow devices or in areas with poor connectivity.

See Zach Leat’s Google Fonts Is Adding font-display 🎉, and this twitter thread for more in-depth analysis of how to best use Google Fonts.

Using Noto fonts in the CSS font-family property

Because fonts in the Noto family may overlap, there are some considerations, as written by the Google Fonts team:

Put fonts for the languages/scripts you care about most at the very beginning.

This should go without saying, but don’t include fonts you don’t need; fewer fonts improve Time to First Byte (TTFB), Time To Interactive (TTI), and other performance metrics.

It is recommended to retain “Noto Sans” in the list since other Noto fonts usually do not cover Latin letters, digits, or punctuation. If your content doesn’t require any Latin characters you can forego Noto Sans on your list.

Put “Noto Sans” before “Noto Sans CJK”. Currently, the Latin characters in the CJK fonts are from Adobe’s Source Sans Pro so they may not look the same as the other sans serif fonts in the document.

Wherever possible use “Noto Sans CJK {JP, KR, SC, TC}” rather than “Noto Sans {JP, KR, SC, TC}” (note the difference in the file names).

Each of the font families “Noto Sans CJK {JP, KR, SC, TC}” supports all four languages but has a different default language, so choose the font based on the primary language you’re targeting.


These examples assume that you’ve already loaded the fonts using @font-face or, if you’re happy with the restrictions Google Fonts may place on the fonts it delivers, through Google Fonts. They also assume that the names in the @font-face declarations match the names used in the font-family rules.

For a Japanese website:

font-family:  "Noto Sans",
              "Noto Sans CJK JP",
              sans-serif;

For a website targeting Hindi and then Tamil users:

font-family:  "Noto Sans Devanagari",
              "Noto Sans Tamil",
              "Noto Sans",
              sans-serif;

For an Arabic website that needs to use a UI font for UI elements, such as buttons and tabs, that have more strict vertical space:

font-family:  "Noto Naskh Arabic UI",
              "Noto Sans UI",
              sans-serif;

For a website targeting Armenian and Georgian users who prefer serif style:

font-family:  "Noto Serif Armenian",
              "Noto Serif Georgian",
              "Noto Serif",
              serif;

Why bother?

I like to think that fonts like Noto, and other fonts that target specific non-Latin (especially ethnic, endangered, or minority) languages, give people who wouldn’t normally have one a voice on the web. They let people communicate in their own language and help make the promise of a truly world wide web come true.

When is an é not an é? Working with Unicode on the web

There are times when working with non-English languages on the web can be a real problem. We’ll look at the historical background of the issue, how different encodings have tried to solve it, and how Unicode addresses the problem.

The web, and the larger Internet, started in the US, in English, and as such needed only 7 bits per character to represent the English alphabet. This character set was known as ASCII (American Standard Code for Information Interchange).

While the web was primarily an American endeavor written in English everything was fine, but as the web became international the 7-bit ASCII character set was no longer enough to represent the characters of non-English languages.

That led to the introduction of the ISO 8859 family of 8-bit character sets and encodings, each representing a group of related languages or dialects.

The different language groups needed their own encodings to display characters appropriately in supporting fonts. This worked well, but there was no way to mix and match encodings to display multiple languages together on a web page.

Also worth considering: two encodings in the ISO 8859 family could use the same code point (the number for a given character) for two different characters, which led to all sorts of confusion and wrong characters appearing on the page.

And that brings us to Unicode.

Rather than work with language-specific encodings, Unicode provides a unique identifier (called a code point) for each character in most of the world’s languages. The only limiting factor is whether the device has fonts capable of displaying the character.

Unicode is divided into 17 planes, and each plane contains one or more blocks, with each block typically representing a script or a set of related characters.

But, as good as Unicode is at letting us work with multiple languages and character sets in the same document, it’s not free of issues and pitfalls.

Chief among the issues we need to keep in mind is that there is more than one Unicode encoding: UTF-8 and UTF-16 (we’ll ignore UTF-32 for now since it’s not supported in browsers and developers are discouraged from using it).

UTF-8, what most of us use when thinking about Unicode, uses between 1 and 4 bytes to represent all characters it supports. It’s a superset of ASCII, so the first 128 characters are identical to those in the ASCII table.

UTF-16 (which Javascript uses to represent characters internally) uses either 2 or 4 bytes. This difference in how many bytes an element takes makes the two encodings incompatible.
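
A quick sketch of what the UTF-16 representation means in practice: characters outside the Basic Multilingual Plane take two UTF-16 code units (a surrogate pair), so Javascript’s length counts them as 2:

```javascript
const emoji = '\u{1F600}'; // 😀, a single code point above U+FFFF

console.log(emoji.length);        // 2 (two UTF-16 code units)
console.log([...emoji].length);   // 1 (string iteration walks code points)
```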

But even if we stick to UTF-8 exclusively, there’s more than one way to represent a character. For example, the letter é could be represented using either:

  • A single code point U+00E9
  • The combination of the letter e and the acute accent, for a total of two code points: U+0065 and U+0301

While both render the same accented character, they are not the same in terms of equality or character length, as demonstrated below:

console.log('\u00e9') // => é
console.log('\u0065\u0301') // => é
console.log('\u00e9' == '\u0065\u0301') // => false
console.log('\u00e9'.length) // => 1
console.log('\u0065\u0301'.length) // => 2

The same thing happens to characters with accents or other diacritical marks.

  • n + ˜ = ñ
  • u + ¨ = ü

You may be wondering why this is important. When you’re doing string matching or measuring lengths (is this password 6 characters or longer?), some representations will give false positives or return a value that is larger or shorter than the actual string.

So, how do we solve the problem?

Since ES2015/ES6 there is a method on the string prototype called normalize with the following signature: String.prototype.normalize([form]). The form argument is a string identifying the normalization form to use; it can take one of four values:

  • NFC — Normalization Form Canonical Composition. This is the default if no form is provided
  • NFD — Normalization Form Canonical Decomposition.
  • NFKC — Normalization Form Compatibility Composition.
  • NFKD — Normalization Form Compatibility Decomposition.
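
To see what the compatibility forms add over the canonical ones, here is a small sketch with the fi ligature (U+FB01), which NFKC folds into the two plain letters while NFC leaves untouched:

```javascript
const ligature = '\uFB01'; // ﬁ, a single "compatibility" code point

console.log(ligature === 'fi');                   // false
console.log(ligature.normalize('NFC') === 'fi');  // false (canonical form keeps the ligature)
console.log(ligature.normalize('NFKC') === 'fi'); // true  (compatibility form folds it)
```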

Rather than bore you with the details of what each form means and how they work, I’ll refer you to Unicode Standard Annex #15: Unicode Normalization Forms.

Going back to our é example, we’ll add normalization to the mix. We run each version of the string through the normalize method and then test whether they are equal (they are), what their lengths are (both 1), and whether their lengths are equal (they are).

const e1 = '\u00e9';
const e1n = e1.normalize();

const e2 = '\u0065\u0301';
const e2n = e2.normalize();

console.log(e1n == e2n) // true
console.log(e1n.length) // 1
console.log(e2n.length) // 1
console.log(e1n.length == e2n.length) // true

So, when working with international characters, particularly if you’re working with user-provided input, you should take normalization into account to make sure that the characters are the same throughout the application and that you won’t get unexpected results when using that data as part of your results.
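
One way to apply this, sketched below with a hypothetical normalizeInput helper, is to normalize user input once, at the boundary of your application, so every later comparison and length check behaves consistently:

```javascript
// Hypothetical helper: normalize all user-provided strings to NFC on entry
function normalizeInput(str) {
  return str.normalize('NFC');
}

const typed = normalizeInput('\u0065\u0301'); // é entered as e + combining accent
const stored = normalizeInput('\u00e9');      // é stored as a single code point

console.log(typed === stored); // true
console.log(typed.length);     // 1
```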

Links and resources