
JS Goodies: Nullish Coalescing and Optional Chaining

One of the things I like about the annual release schedule for JavaScript is that the learning curve for new versions is much smaller than it was for the jump from earlier versions to ES2015/ES6.

Two of the features that I’m most interested in are optional chaining and nullish coalescing.

Optional Chaining

Optional chaining allows you to check for the existence of a property, or short-circuit and return undefined if any property in the chain doesn’t exist.

In the example below, we define a zoo object with animal types and their names.

const zoo = {
  name: 'Alice',
  bird: {
    name: 'Hawkeye',
  },
  dog: {
    name: 'Fluffy'
  },
  cat: {
    name: 'Dinah'
  }
};

We can then query for properties at any point down the chain. The dog object doesn’t have a breed property, so zoo.dog?.breed returns undefined because the property doesn’t exist. The real protection comes when an object in the middle of the chain is missing: a plain lookup like zoo.lion.name throws a TypeError because there is no lion object, while zoo.lion?.name short-circuits and returns undefined instead of erroring.

const dogBreed = zoo.dog?.breed;
console.log(dogBreed);
// Outputs undefined

const dogName = zoo.dog?.name;
console.log(dogName);
// Outputs Fluffy

const birdType = zoo.bird?.type;
console.log(birdType);
// Outputs undefined

This makes it easier to query long chains of parent/child elements and avoid fatal errors in our applications.
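
To see the short-circuit in action, we can query the nonexistent lion object directly:

const lionName = zoo.lion?.name;
console.log(lionName);
// Outputs undefined

// Without the ?. operator the same lookup throws,
// because zoo.lion is undefined:
// const lionName = zoo.lion.name;
// TypeError: Cannot read properties of undefined (reading 'name')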

Nullish coalescing operator

The nullish coalescing operator (??) addresses an interesting shortcoming of the logical OR operator (||) when it comes to setting up default values for an application.

const mySetting = '' || 'setting 1';

If the left-hand value is truthy (can be converted to true), then that’s what the application will use; otherwise the value on the right-hand side will be used.

These expressions evaluate to false in JavaScript:

  • false
  • null
  • NaN
  • 0
  • empty string ("" or '' or ``)
  • undefined

But there’s a problem with this method of setting values for preferences. There are times when an empty or otherwise falsy value (other than null or undefined) is perfectly acceptable for the setting that we want to work with.

That’s where the nullish coalescing operator comes into play. It produces the right-hand value only if the left-hand value is null or undefined, and the left-hand value otherwise.

In the first example, the value of foo will be 'default string' because the left-hand value is null. In this case the behavior is the same as with the logical OR operator.

const foo = null ?? 'default string';
console.log(foo);
// expected output: default string

const foo2 = null || 'default string';
console.log(foo2);
// expected output: default string

In the second example the value of baz will be 0. The left-hand value is not null or undefined, so the constant takes the left-hand value.

Compare the result with the baz2 constant where, using the logical OR operator, we get the value 42: 0 is a falsy value, so the right-hand value becomes the value of the constant.

const baz = 0 ?? 42;
console.log(baz);
// expected output: 0

const baz2 = 0 || 42;
console.log(baz2);
// expected output: 42
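
The two operators also combine well. Reusing the zoo object from the previous section, we can query a property that may not exist and fall back to a default (the 'unknown' fallback here is just an illustration):

const dogBreed = zoo.dog?.breed ?? 'unknown';
console.log(dogBreed);
// expected output: unknown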

The differences are subtle and can lead to annoying bugs when the operators don’t produce the value you expect. It’s up to you which one you use, as long as you’re OK with the results you get.


pre-commit hooks (an update)

This is a different take on hooks from Pre Commit Hooks: Combating Laziness, written two years ago.

There are times when it would be awesome if we could force ourselves (or our development team) to perform some actions before committing code to the project’s repository.

Git has a set of tools called hooks that can help with this enforcement process.

The idea behind hooks is that they run at set points in Git’s lifecycle and execute the code you specify in the hook file. You can use hooks to run linting and accessibility checks before you commit code, and reject the commit if any of the checks fail.

In this example we want to run gulp axe and gulp eslint before each and every commit, and fail the commit if either command returns errors.

We’ll leverage the pre-commit hook to accomplish this. This hook runs first, before you type a commit message. It’s used to inspect the snapshot that’s about to be committed and to complete tasks before the commit happens. Exiting non-zero from this hook aborts the commit.

Move this code into the hooks/pre-commit file inside your .git directory and make the file executable (chmod +x .git/hooks/pre-commit); Git ignores hooks that aren’t executable.

#!/bin/sh

# Stash non-committed changes so we only test
# what's about to be committed
git stash -q --keep-index
# If the node_modules directory doesn't exist,
# run npm install
if [ ! -d "node_modules" ]
then
    echo "Directory node_modules not found"
    npm install
fi
# Run gulp axe to check accessibility
gulp axe
AXE_RESULT=$?
# Run gulp eslint to check for syntax errors
gulp eslint
LINT_RESULT=$?
# Restore the stashed changes
git stash pop -q
# Abort the commit if either check failed
[ $AXE_RESULT -ne 0 ] && exit 1
[ $LINT_RESULT -ne 0 ] && exit 1
# Otherwise exit successfully
exit 0

This example makes the following assumptions:

  • You’ve added node_modules to .gitignore
  • You’ve added axe and eslint as gulp tasks

The downside of hooks is that they are not copied when you clone a repository. If your intent with these scripts is to enforce a policy, you’ll want to do that on the server side. The Git Book provides examples of server-side scripts to enforce Git policies.


prefers-color-scheme in CSS and Javascript

prefers-color-scheme is geared towards accommodating user preferences.

With prefers-color-scheme we can control the color scheme we use based on the user’s operating system preferences. It supports three values:

  • no-preference: The user has not specified a preference. This keyword value evaluates as false in a boolean context
  • light: The user has indicated that they prefer a light theme (dark text on a light background)
  • dark: The user has indicated that they prefer a dark theme (light text on a dark background)

The example below, taken from prefers-color-scheme: Hello darkness, my old friend, shows one way to use prefers-color-scheme to prioritize the download and use of a given color scheme stylesheet. We’re guaranteed to get a light scheme if the media query is not supported.

The browser will load the light or dark stylesheet based on which media query matches. They are mutually exclusive, so only one will be active at a time.

<script>
  if (window.matchMedia('(prefers-color-scheme: dark)').media === 'not all') {
    document.documentElement.style.display = 'none';
    document.head.insertAdjacentHTML(
        'beforeend',
        '<link rel="stylesheet" href="/light.css" onload="document.documentElement.style.display = \'\'">'
    );
  }
</script>

<link rel="stylesheet" href="/css/dark.css" media="(prefers-color-scheme: dark)">
<link rel="stylesheet" href="/css/light.css" media="(prefers-color-scheme: no-preference), (prefers-color-scheme: light)">
<!-- The main stylesheet will always load -->
<link rel="stylesheet" href="/css/style.css">

Then we have the traditional CSS way of styling with media queries. In this example the light color scheme is the default and we don’t need to change it manually: whenever the browser detects that the operating system switched to dark mode, it automatically changes the color scheme to match.

.circle {
  height: 100px;
  width: 100px;
  border-radius: 50%;
  background-color: yellow;
  border: 3px solid black;
}

@media (prefers-color-scheme: dark) {
  .circle {
    background-color: black;
    border: 5px solid red;
  }
}
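
The same media query is also available from JavaScript through matchMedia, which the loading script above already uses. With it we can react when the user flips the operating system preference while the page is open. A minimal sketch, assuming our dark styles hang off a dark-theme class (the class name is an assumption, not part of the original example):

const darkQuery = window.matchMedia('(prefers-color-scheme: dark)');

// Toggle the (hypothetical) dark-theme class to match
// the current preference
function applyScheme(prefersDark) {
  document.documentElement.classList.toggle('dark-theme', prefersDark);
}

// Apply once on load, then update whenever the OS preference changes
// (newer browsers also support darkQuery.addEventListener('change', ...))
applyScheme(darkQuery.matches);
darkQuery.addListener((event) => applyScheme(event.matches));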

For more information see prefers-color-scheme: Hello darkness, my old friend.


Chrome DevTools Network Tab

The idea is that, using this tab, we can check how our page is loading and troubleshoot loading issues as they happen, with a visual representation of the loading process.

  1. Make sure that you are running in Incognito Mode to prevent extensions from causing any problems
  2. Open DevTools (Command + Option + I on Macintosh or Control + Shift + I or F12 on Windows)
  3. Go to the Network tab

The image below shows the result of running the Network tab in Chrome 78 (Canary when the article was written).

Network Panel Showing Results of a run

The Network panel provides the following information for every resource loaded for the page:

  1. Method: The HTTP method used to retrieve the resource; usually GET
  2. Status: The status code of the response; usually 200 for successful responses
  3. Protocol: The HTTP protocol of the server handling the request
  4. Type: The type of resource, expressed as a MIME type
  5. Initiator: What triggered the loading of the resource
  6. Size: How big the resource is
  7. Time: How long it took to load the resource
  8. Priority: The priority the browser used to fetch the resource
  9. Waterfall: Different time metrics for the resource. We’ll revisit the waterfall in a later section

Things we can do

There are some additional things that we can do when in the network panel.

If we check Disable cache, we get a fresh download, just like a user visiting the site for the first time.

Online is a pull-down menu that gives us the option to throttle our connection speed to one of the available presets, or to customize the way the connection behaves.

The two arrows on the far right allow you to import (arrow pointing up) and export (arrow pointing down) HAR files, a cross-browser format for recording and reviewing performance data.


The result

Below the waterfall, we get several aggregate results for the page. These may give a first impression of why the page may be experiencing performance issues.

How many requests succeeded and, if the two numbers differ, the total number of requests for the page.

How much data was transferred over the network to download the page’s resources.

How much all the resources for the page weigh in total. This number may be larger than the transferred weight because it includes the resources that the browser cached in prior visits.

The final three numbers are indicators of speed. Going from left to right (the sketch after this list shows how to observe the last two events from the page):

  • Finish indicates how long the page took to load
  • DOMContentLoaded shows how long the browser took to fire the DOMContentLoaded event. The DOMContentLoaded event fires when the initial HTML document has been completely loaded and parsed, without waiting for subresources to finish loading
  • Load shows how long the browser took to fire the load event. This event fires when the whole page has loaded, including all dependent resources such as stylesheets and images
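
If you want to correlate those last two numbers with your own code, both events can be observed from the page itself. Here’s a minimal sketch using standard DOM APIs:

// Log how long after the page started loading each event fires
window.addEventListener('DOMContentLoaded', () => {
  console.log(`DOMContentLoaded at ${performance.now().toFixed(0)} ms`);
});

window.addEventListener('load', () => {
  console.log(`load at ${performance.now().toFixed(0)} ms`);
});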

The waterfall in detail

Rather than trying to explain in detail what each possible element is, I’ll pick some items from the Timing breakdown phases section of Google Developers’ Network Analysis Reference and refer you to the full document for further information:

  • Queueing. The browser queues a request when:
    • There are higher priority requests.
    • There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
    • The browser is briefly allocating space in the disk cache
  • Stalled. The request could be stalled for any of the reasons described in Queueing.
  • DNS Lookup. The browser is resolving the request’s IP address.
  • Proxy negotiation. The browser is negotiating the request with a proxy server.
  • Request sent. The request is being sent.
  • ServiceWorker Preparation. The browser is starting up the service worker.
  • Request to ServiceWorker. The request is being sent to the service worker.
  • Waiting (TTFB). The browser is waiting for the first byte of a response. TTFB stands for Time To First Byte. This timing includes one round trip of latency and the time the server took to prepare the response (the sketch after this list shows how to approximate it from the page).
  • Content Download. The browser is receiving the response.
  • Receiving Push. The browser is receiving data for this response via HTTP/2 Server Push.
  • Reading Push. The browser is reading the local data previously received.
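
Some of these phases can also be measured from the page itself. As a rough illustration, the Waiting (TTFB) phase for the main document corresponds approximately to the gap between requestStart and responseStart in the Navigation Timing API:

// Approximate the Waiting (TTFB) phase for the main document
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  const ttfb = nav.responseStart - nav.requestStart;
  console.log(`Waiting (TTFB): ${ttfb.toFixed(0)} ms`);
}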

This is not everything you can do in DevTools’ Network tab, but it’s a good starting point. A good place to continue is Chrome DevTools on the Google Developers website.


Thoughts on digital books: A reply to Wendy Reid

I’m writing this post to reply to a series of tweets from Wendy Reid, chair of the W3C Publishing WG, presenting questions about the future of digital publishing.

Rather than replying via tweets, I’ve consolidated my thoughts here and broken them down according to the tweet in the stream that I think they address.

[[Comment on tweet 1]]

I think you should start by asking why epub is the predominant format outside Amazon and Apple, and why there’s no desire to even explore another format and its implications.

We already use (X)HTML in epub, so why are we so afraid that we’ll lose discoverability or, heaven forbid, the ability to encrypt our books?

The work that Web Publications did is awesome as a starting point, but it’s only the beginning… it needs to be more web than epub, and that might be the issue.

[[Comment on tweet 3]]

From an outside perspective on both the WG and the W3C overall, I’ve always questioned the real purpose of the merger.

Yes, epub uses HTML, SVG, JavaScript, CSS, and other technologies, but it uses them in ways that make them epub-only, either by restricting what you can do or by adding them to the manifest. This has always made it harder to work with epub as a unit.

The root cause is that epub is predicated on isolated, file-system-based individual containers: the zipped book. This imposes limitations on the way JavaScript rendering engines can work with scripts.

These restrictions are not applicable to content that will run inside a browser. Existing (Service Workers) and upcoming (web packaging) specifications help solve the offline and download portions of books-as-web-content.

I’m sure there’s a lot more research to be done if there is interest, including possible ways to provide backward compatibility.

[[Comment on Tweet 4]]

I have the same questions about web publications and their support, or lack thereof. Web Publications tried to be epub without being epub, and without really being web because it was too much epub.

What would people see as a compromise?

[[Comment on Tweet 8]]

While I agree that specifications like epub should go through the full W3C specification cycle, we should also understand the importance (or lack thereof) of ratification as a W3C Recommendation and what the consequences of such adoption are.

Considering the different levels of support among readers, I think it would be hard to spec the different levels of conformance in a way that makes sense to both reader implementors and content publishers. I don’t know whether W3C Recommendations can have optional sections beyond the wording of RFC 2119, or how enforceable the SHOULD and SHOULD NOT sections of a specification are when they describe processes rather than algorithms.

[[Comment on Tweet 9]]

If the community does those things incredibly well, then what’s the danger in bringing in outside experts to validate the work the community has done? I would love to see more about the perceived problems with backward compatibility and whether it’s worth keeping that compatibility moving forward.

I understand the need to keep backward compatibility, but epub already broke that promise when it moved from epub2 to epub3. Sure, you can read an epub2 book on an iPad but, as far as I understand it, that’s the extent of the compatibility.

I want the full compatibility of the web where my current Chrome can do 99% of the things I coded on a web page in 1995.

[[Comment on Tweet 10]]

Does the epub community overall know how to give this type of feedback? Would they? Would the feedback mean the same to all members of the community, given that epub readers range from e-ink devices to iBooks and everything in between?

I think this is a deeper question. Who is the epub community and what are members expecting to get out of it?

[[Comment on Tweet 11]]

Opening the epub spec is one side of the coin. How to bring other people fully into the epub fold is the other.

I don’t think that epub, as it exists now, is a good host for the basic tools of the web, and the work it takes to make sure an engine works across readers is staggering in its insanity (I respect Jimmy Panoz and the work he’s done… I could never have done it).

The need for this compatibility work mirrors where browsers were in the late 1990s and early 2000s: either proprietary tools, tags, and APIs, or only partially implemented standards with little or no accountability for spec compliance.

So, to me, it again boils down to the container. Is the epub file the best delivery mechanism, or can we leverage the larger web ecosystem to deliver content that matches both designers’ views and readers’ needs?

[[Comment on Tweet 12]]

What would it take to fix the things that drive us mad, and could we do it while keeping backward compatibility? If we have to choose one or the other, which one wins, and why?

Yes, the TAG would give awesome feedback on how to integrate into the web platform, assuming that’s what the community wants.

My concerns in this area are:

  • The level of technical expertise and commitment to the extended feedback loop
  • How ready are we to implement TAG architectural recommendations when/if they conflict with “the way we did things” before we became part of this large organization?