Better Links for Print

When we print web pages, links show up as underlined text, either black or, if we printed on a color printer, blue or whatever color we made our links. That doesn’t tell us what the actual link destinations are.

A possible solution is to use generated content for paged media and the ::after pseudo-element to insert the text of the URL after the link.

The code

We’re saving all the code in this section in a print.css file. Why we’re doing this will become clear when we use the styles.

The easiest way to start is to tell the browser to add the content of the href attribute after all anchor elements (a).

a::after {
  content: " (" attr(href) ") ";
  font-size: 90%;
}

There is a problem: this won’t work with relative links, those pointing to pages within the same site that don’t use a full URL, or with anchors within the same document. We can use the ^= attribute selector to target only those links that start with http. This also limits the rule to external links, whether they are http or https; they both start with http 🙂

So now our code looks like this:

a[href^="http"]::after {
  content: " (" attr(href) ") ";
  font-size: 90%;
}

The final problem to deal with is overcrowding. If a paragraph has many links, the text of the URLs will make the paragraph hard to read, particularly if you have many long links. This will require case-by-case testing and decisions, but I’ve created a special case to remove the URL.

If you add the no-print class to a link it will not print the URL.

That code looks like this:

a.no-print::after {
  content: "";
}

So now we have both a way to print the URL of a link in parentheses after the link and a special case to avoid printing the URL when it makes sense.
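Put together, the whole of print.css is a sketch like this (the no-print class name comes from the special case above):

```css
/* print.css */

/* Print the URL after external links only */
a[href^="http"]::after {
  content: " (" attr(href) ") ";
  font-size: 90%;
}

/* Special case: suppress the URL for links marked no-print */
a.no-print::after {
  content: "";
}
```

In the HTML, opting a link out is then just a matter of adding the class: `<a href="https://example.com/" class="no-print">like this</a>`.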

Using it

Using the styles we created is easy: link to the stylesheet with a link element in the head of the page, using the media="print" attribute to make sure it only gets applied when we print the page.

<link rel="stylesheet" href="main.css">
<link rel="stylesheet" media="print" href="print.css">
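An alternative worth knowing: the same rules could live inside main.css wrapped in a media query; the separate print.css file just keeps print concerns isolated and out of the main stylesheet.

```css
/* Equivalent approach inside main.css */
@media print {
  a[href^="http"]::after {
    content: " (" attr(href) ") ";
    font-size: 90%;
  }
}
```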

Generating screenshots

I’m documenting behaviors that I take for granted and then forget just when I need them. The question is: how do I generate screenshots of what I’m working on?

The answer is: it depends.

How you generate screenshots will depend on what tool you’re using. Both Windows and Macintosh systems offer ways to save screenshots to files and, to my surprise, Chrome DevTools lets you do it too.

Windows 10

Windows 10 gives you two options to capture screenshots. Which one you use will depend on whether you need to keep a file of the screenshot for later use.

Pressing Print Screen copies the current screen to the clipboard and allows you to paste into the program you need it in (I’ve used this in Word and Photoshop).

Press the Windows + Print Screen buttons on your keyboard to save the screen to a file. You can then open the file in Photoshop or insert it into a Word document or where you need it.


Macintosh

Macintosh also provides two different ways to do screen captures. They are not the same as the Windows versions, as they will both generate a file.

Command + Shift + 3 will generate a full-screen shot of whatever is visible on the screen at the time. Make sure you hide applications or any material that you don’t want to see in the screenshot.

Command + Shift + 4 will give you a selection cursor that you can use to select the area you want to capture.

Both commands will generate PNG files and save them to the desktop by default. You can then process them as needed.
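For completeness (this is not covered in the post itself), macOS also exposes the same captures through the screencapture command-line tool, which is handy for scripting; the output paths below are illustrative:

```shell
# Full-screen capture to a file, like Command + Shift + 3
screencapture ~/Desktop/screen.png

# Interactive selection, like Command + Shift + 4
screencapture -i ~/Desktop/selection.png
```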

Chrome DevTools

Much to my surprise, Chrome DevTools also provides ways to generate screenshots, some of them exclusive to how a browser works.

This also has the advantage that it’s cross-platform: it’ll work everywhere Chrome does.

Open DevTools (Control + Shift + I on Windows / Command + Option + I on Mac), then open the command palette (Control / Command + Shift + P) and type screenshot. You will get something similar to the image below.

Capture area screenshot is similar to the Macintosh Command + Shift + 4 in that it lets you select the area of the screen to capture.

Capture full size screenshot is a little counterintuitive. It will not capture a screen-sized shot of your page but the entire page, no matter how long it is.

Capture node screenshot will make a screenshot of whatever node you have selected in the elements panel.

Finally, Capture screenshot will do a full-screen screenshot as we would expect.
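These DevTools commands can also be scripted. As a sketch (Puppeteer is my assumption here; it isn’t mentioned in the post), the equivalent of Capture full size screenshot looks like this:

```javascript
// Sketch: automating "Capture full size screenshot" with Puppeteer.
// The target URL and output path are illustrative.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com', { waitUntil: 'networkidle2' });

  // fullPage captures the entire page, no matter how long it is
  await page.screenshot({ path: 'fullpage.png', fullPage: true });

  await browser.close();
})();
```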

Measuring your site’s performance with Lighthouse

As developers, we are fixated on performance, and rightly so. But we also tend to put web performance in a vacuum and not worry about how our site’s performance got to where it is today.

In this post, we’ll talk about using Chrome DevTools to measure performance using Lighthouse from the Audits panel.

The idea is that, through testing, you’ll get actionable items to fix for performance improvement as you start and later you can measure the impact of specific loading strategies and pieces of your content.

Make sure that you run the test in Incognito Mode or create a blank Chrome profile just for testing. This will prevent extensions from interfering with the results of the test.

Lighthouse Audits

The first thing we should do when looking at getting reports from DevTools is to use the built-in Lighthouse reports from the Audits panel.

We run Lighthouse first because we get more immediately actionable results from it than we do from the other panels. Both the Network and Performance panels require deeper analysis and knowledge.

To launch the Lighthouse audit:

  1. Make sure that you are running in Incognito Mode to prevent extensions from causing any problems
  2. Open DevTools (Command + Option + I on Macintosh or Control + Shift + I or F12 on Windows)
  3. Go to the Audits tab

You will see something similar to the image below.

Getting ready to launch Lighthouse from DevTools Audit Panel

Lighthouse offers 4 areas of configuration:

  1. Device: Either mobile or desktop
  2. Audits: What audits to run. You can run one or more of these at the same time. The audits are:
    • Performance measures different performance aspects of the page you’re testing
    • PWA checks whether the different components of a Progressive Web App are present and working
    • Best Practices evaluates different best practices in front end web development
    • Accessibility uses axe-core to do automated evaluation testing. It is impossible to do a complete accessibility evaluation programmatically; there will be plenty of things you will have to check manually
    • SEO checks basic search engine optimization requirements
  3. Throttling: Whether to throttle the connection for the tests or not
  4. There is a checkbox right above the Run Audit button for clearing the browser storage. It is not checked by default

For this example we will run the audit with the following configuration:

  • Mobile Device
  • Performance
  • No Throttling
  • Clear Cache checked

Once the configuration is set click the blue Run Audit button.
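If you prefer the command line, the same configuration can be approximated with the Lighthouse CLI (an npm package, separate from DevTools); the flags below are my best mapping of the options above, and the URL is illustrative:

```shell
# Install the Lighthouse CLI once
npm install -g lighthouse

# Performance category only, no simulated throttling,
# JSON report saved so it can serve as a baseline
lighthouse https://example.com \
  --only-categories=performance \
  --throttling-method=provided \
  --output=json \
  --output-path=./baseline.json
```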

The results

Lighthouse rates every category but PWA on a scale of 0 to 100 with higher values being better. Our example page scored 98 in the performance category. Let’s look at specifics.

The top of the performance section repeats the score and gives you basic metrics for the run on your page.

Top of the Lighthouse Results Run Performance section

The performance metrics Lighthouse reports are:

Bottom of the Lighthouse Results Run Performance section

The next part of the screen shows a filmstrip of the loading of the page. From left to right it’ll show the different stages of the loading process. The fewer empty frames you have, the faster the page loads.

The last two portions of the performance section show opportunities (things we can manually change to improve performance) and diagnostics (things about our site that might help improve performance but are primarily informational at this point).

For our example site, there are two opportunities: removing render-blocking resources (in this case, stylesheets that can be inlined) and reducing server response time.

Your results will be different based on your content, how it’s structured, and what changes you are willing to make to get your site to load faster.

Final Notes

The first time you run Lighthouse, make sure to save the results to use as your baseline for later comparison. This way, when you run the tests after making changes, you will have concrete numbers to measure, not just subjective impressions.

Test both on mobile and desktop. I worry more about mobile performance, but that doesn’t mean we should throw desktop aside.

Throttle the connection before you test. While this is nowhere close to testing on an actual device (always preferred), it will give you a closer approximation to what your users will see when visiting your site.

Trying to understand DRM on the web…

We’ve had DRM in browsers for a while now, in the shape of EME and its associated technologies, but we haven’t asked ourselves what it does or whether it works.

To help frame the issue, I’ve taken a section of Henri Sivonen’s article What is EME?


EME is a JavaScript API that is part of a larger system for playing DRMed content in HTML video/audio. EME doesn’t define the whole system. EME only specifies the JS API that implies some things about the overall system. A DRM component called a Content Decryption Module (CDM) decrypts, likely decodes and perhaps also displays the video. A JavaScript program coordinating the process uses the EME API to pass messages between the CDM and a server that provides decryption keys. EME assumes the existence of one or more CDMs on the client system but it doesn’t define any or even their exact nature (e.g. software vs. hardware). That is, the interesting part is left undefined.


Major Hollywood studios require that companies that license movies from them for streaming use DRM between the streaming company and the end-user. Traditionally, in the Web context, this has been done by using the Microsoft PlayReady DRM component inside the Silverlight plug-in or the Adobe Access DRM component inside Flash Player. As the HTML/CSS/JS platform gains more and more capabilities, the general need to use Silverlight or Flash becomes smaller and smaller, such that soon the video DRM capability will be the only thing that Silverlight and Flash have but the HTML/CSS/JS platform doesn’t.

Proposals have been written to augment […]

What is EME?

With a basic understanding of what technologies are involved let’s dive into EME and what all the controversy is about.

EME and W3C

I was disappointed when I saw that EME had been published as a recommendation, but I was even more disappointed that there were no exceptions for researchers and archival tools. The excuse of “if we don’t implement DRM on the web, companies that need/want it will go somewhere else” badly underestimates the reach of piracy and ignores that the web is not necessarily a driver for it. Encryption on the web doesn’t stop people outside the web ecosystem from contributing to piracy and making the content available shortly after it becomes available.

Because the Digital Millennium Copyright Act (DMCA, passed in 1998 in the US) and the EU Copyright Directive (passed in 2001) include provisions to prevent circumvention of DRM, it is impossible to implement DRM tools, and therefore EME support, in open source products. Likewise, security researchers, people who need to modify encrypted content to enhance its accessibility, and people working to archive media content are not allowed to circumvent the EME encryption and don’t have access to the source material.

So yes, we can watch Netflix in the browser but, at what cost?

So, how does EME work?

Other than my philosophical opposition to DRM in general, my biggest problem with EME is that it leaves a lot of behavior up to the implementors of the CDM. The theory of how EME works is relatively simple.

These steps are taken from Sam Dutton’s What is EME? article in Google Developers.

  1. A web application attempts to play audio or video that has one or more encrypted streams.
  2. The browser recognizes that the media is encrypted and fires an encrypted event with metadata obtained from the media about the encryption.
  3. The application handles the encrypted event:
    • If no MediaKeys object has been associated with the media element, first select an available Key System to check what Key Systems are available, then create a MediaKeys object for an available Key System via a MediaKeySystemAccess object. The initialization of the MediaKeys object should happen before the first encrypted event. Getting a license server URL is done by the app independently of selecting an available key system. A MediaKeys object represents all the keys available to decrypt the media for an audio or video element. It represents a CDM instance and provides access to the CDM, specifically for creating key sessions, which are used to obtain keys from a license server.
    • Once the MediaKeys object has been created, assign it to the media element, so that its keys can be used during playback, i.e. during decoding.
  4. The app creates a MediaKeySession. This creates a MediaKeySession, which represents the lifetime of a license and its associated key(s).
  5. The app generates a license request by passing the media data obtained in the encrypted handler to the CDM.
  6. The CDM fires a message event: a request to acquire a key from a license server.
  7. The MediaKeySession object receives the message event and the application sends a message to the license server (via XHR, for example).
  8. The application receives a response from the license server and passes the data to the CDM
  9. The CDM decrypts the media using the keys in the license. A valid key may be used, from any session within the MediaKeys associated with the media element
  10. Media playback resumes.
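The steps above map onto the EME API roughly as follows. This is a sketch, not production code: the key system, codec string, and license-server URL are all assumptions for illustration.

```javascript
const video = document.querySelector('video');

video.addEventListener('encrypted', async (event) => {
  // Steps 2-3: the browser fired `encrypted`; pick an available
  // key system and create a MediaKeys object for it.
  const access = await navigator.requestMediaKeySystemAccess(
    'org.w3.clearkey', // illustrative key system
    [{
      initDataTypes: ['cenc'],
      videoCapabilities: [
        { contentType: 'video/mp4; codecs="avc1.42E01E"' }
      ]
    }]
  );
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  // Step 4: create a session representing the license and its keys.
  const session = mediaKeys.createSession();

  // Steps 6-8: the CDM asks for a license; forward the request to
  // the license server (URL assumed) and hand the response back.
  session.addEventListener('message', async (messageEvent) => {
    const response = await fetch('https://license.example.com', {
      method: 'POST',
      body: messageEvent.message
    });
    await session.update(await response.arrayBuffer());
    // Steps 9-10: the CDM decrypts and playback resumes.
  });

  // Step 5: generate the license request from the media's init data.
  await session.generateRequest(event.initDataType, event.initData);
});
```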

How does the browser know that media is encrypted?

This information is in the metadata of the media container file, which will be in a format such as ISO BMFF or WebM. For ISO BMFF this means header metadata, called the protection scheme information box. WebM uses the Matroska ContentEncryption element, with some WebM-specific additions. Guidelines are provided for each container in an EME-specific registry.

Note that there may be multiple messages between the CDM and the license server, and all communication in this process is opaque to the browser and application: messages are only understood by the CDM and license server, although the app layer can see what type of message the CDM is sending. The license request contains a proof of the CDM’s validity (and trust relationship) as well as a key to use when encrypting the content key(s) in the resulting license.

But what do CDMs do?

An EME implementation does not in itself provide a way to decrypt media: it simply provides an API for a web application to interact with Content Decryption Modules.

What CDMs do is not defined by the EME spec, and a CDM may handle decoding (decompression) of media as well as decryption. From least to most robust, there are several potential options for CDM functionality:

  • Decryption only, enabling playback using the normal media pipeline, for example via a <video> element.
  • Decryption and decoding, passing video frames to the browser for rendering.
  • Decryption and decoding, rendering directly in the hardware (for example, the GPU).

There are multiple ways to make a CDM available to a web app:

  • Bundle a CDM with the browser.
  • Distribute a CDM separately.
  • Build a CDM into the operating system.
  • Include a CDM in firmware.
  • Embed a CDM in hardware.

How a CDM is made available is not defined by the EME spec, but in all cases, the browser is responsible for vetting and exposing the CDM.

EME doesn’t mandate a particular Key System; among current desktop and mobile browsers, Chrome supports Widevine, IE11 and Edge (before its migration to Chromium) support PlayReady, and Safari supports FairPlay Streaming. This will become important when we look at gatekeepers.

Is it or isn’t it?

I think that the debate over EME hinges too much on semantics. Does it matter how the web got its DRM? Do we care about the process and how many people think the process was circumvented?

Adrian Roselli makes the following point when criticizing Cory Doctorow’s Boing Boing coverage:

[…] what we are trying to do with EME is provide a clean API to integrate these solutions with the HTML Media Element.

And that’s the crux of what the W3C is doing with DRM — developing a standard API so browsers can access content that will be locked down with or without their participation anyway.

This is part of the issue. While EME doesn’t provide DRM directly, it still enables it and, with it, all the baggage that DRM brings. By leaving many aspects of the technology unspecified, it makes it possible for multiple competing products to restrict content in different ways, requiring one or more licenses for the content to play in the browser at all, or requiring browsers to provide incompatible solutions that are at the mercy of content producers.

One of the things that worries me about the process that got EME to recommendation status is the unwillingness of the W3C leadership (and its largest members) to extend IPR protections to security researchers and people who need to break DRM to provide services to users.

Existing copyright legislation in the US already forbids circumvention and gives copyright owners every legal justification to take you to court for doing so, even if you paid for the Blu-ray and even if you just want to make a copy so you don’t have to carry the external DVD player around.

The moment you crack DRM (Digital Rights Management) to rip the DVD, you’ve violated Title I of the Digital Millennium Copyright Act. 17 U.S.C. 1201 prohibits circumvention of DRM… Some courts have tried to leaven this rather harsh rule, but most have not. While it’s typically hard to detect small-scale circumvention, the question is whether bypassing DRM is legal. The statute sets up some minor exceptions, but our ripper doesn’t fall into any of them. So, the moment a studio protects the DVD with DRM, it gains both a technical and a legal advantage—ripping is almost certainly unlawful.

Is It Legal to Rip a DVD That I Own?

If I understood this correctly: if I, as a security researcher, publish a paper or a blog post on issues with a CDM, then it’s up to the copyright owner and the CDM vendor to decide whether they will sue me on DMCA grounds, and it would be up to me to prove that I did this in good faith and with no ulterior motive.

So why would I want to open myself up to this risk? Under the DMCA, security researchers and academics have been arrested for violating section 1201. Why wouldn’t the people enabling DRM on the web want to protect me while I help hold CDM vendors accountable?

The nature of the beast

There is no single way to implement DRM on the web and there never was. Before EME you had to decide which plugin you wanted to use to encrypt your content. Both Silverlight and Flash provided a whole development ecosystem, but the plugins have been superseded by native HTML, CSS and JavaScript technologies. Now, if you decide that you want to encrypt your video, browsers need EME to play it and you need to get multiple licenses from the different encryption providers for different browsers, but there’s still no guarantee that your users will be able to play the content or that the content is fully protected.

There are further restrictions that content distributors and movie studios can “request” from CDM vendors and EME implementors. If they don’t comply, they won’t get to play encrypted content using that vendor’s CDM.

Furthermore, the people who may be in the best position to ensure the safety and reliability of DRM systems, security researchers, can’t really reverse engineer these systems to work through flaws and provide a fair evaluation of what the CDM (or any DRM implementation) is doing. Unless given permission, that would be circumvention, and why would DRM vendors allow people to reverse engineer their DRM products?

What’s Next?

I still worry about which medium will be next to claim its own copy protection scheme based on EME.

It falls to us as developers to decide whether this is a technology that we want to support and whether we believe it has a place in the web ecosystem.

I don’t believe it does.


Combining Houdini APIs

I’ve written about Houdini before and how awesome it is.

But, because it’s not widely deployed and not all APIs have an equal level of support in the browsers where they work, it’s hard to get something that works well without having to rely on writing two versions of the code.

It wasn’t until I saw Design System Magic with CSS Houdini that I realized that you can combine the different APIs and make fully working designs with them. It also prompted me to start looking at combinations of the different APIs and how to provide API fallbacks for browsers that have not implemented them.

In CSS Houdini & The Future of Styling, Una Kravets makes an interesting case for using Houdini custom properties and the Houdini APIs to style the web now, and for how much power these APIs can add to your styles and design systems.


Most of the Houdini APIs take CSS properties as input, so we can leverage Houdini custom properties. Here’s an example, taken from the specification.

The body of the page contains the following content. In the head of the document we add the styles:

  #example {
    --circle-color: deepskyblue;

    background-image: paint(circle);
    font-family: sans-serif;
    font-size: 36px;
    transition: --circle-color 1s;
  }

  #example:focus {
    --circle-color: purple;
  }

In the body of the document we add the textarea element we’ll be working with and a script that will register the custom property, using CSS.registerProperty, and load our paint worklet.

We feature test that the methods are available before we run them. If they are not available we log the fact to the console; in a production application, we may want to add the custom property via CSS and load a polyfill for the Paint API.

<textarea id="example">CSS is awesome.</textarea>

if ('registerProperty' in CSS) {
  CSS.registerProperty({
    name: '--circle-color',
    syntax: '<color>',
    initialValue: 'deepskyblue',
    inherits: false,
  });
  console.log('property successfully registered');
} else {
  console.log('Houdini custom properties not supported');
}

if ('paintWorklet' in CSS) {
  // the worklet file name is illustrative
  CSS.paintWorklet.addModule('circle.js');
  console.log('paint worklet added successfully');
} else {
  console.log('Paint API not supported or not working properly');
}

The paint worklet for this example registers input properties that we’ll take from the page’s existing properties and custom properties. The browser doesn’t care how we created the custom property, only that it exists.

The syntax available inside a paint worklet is a subset of the Canvas API: text rendering methods are missing and, for security reasons, you cannot read back pixels from the canvas.

registerPaint('circle', class {
  static get inputProperties() {
    return ['--circle-color'];
  }

  paint(ctx, size, properties) {
    // Get fill color from property
    const color = properties.get('--circle-color');

    // Determine the center point and radius.
    const xCircle = size.width / 2;
    const yCircle = size.height / 2;
    const radiusCircle = Math.min(xCircle, yCircle) - 2.5;

    // Draw the circle \o/
    ctx.arc(xCircle, yCircle, radiusCircle, 0, 2 * Math.PI);
    ctx.fillStyle = color;
    ctx.fill();
  }
});

Other worklets you may find in the wild will use inputArguments instead. I’m still researching how to use input arguments; the examples I’ve found don’t work in Chrome (stable or Canary).


Houdini is awesome when it works, but what do we do when it doesn’t?

Different areas of the Houdini universe have different ways to polyfill the APIs and not all APIs have been implemented to the point where having a polyfill works or makes sense.

There is a CSS Paint Polyfill from Jason Miller.

PostCSS Register Custom Property works by taking CSS-based custom property definitions (basically writing Houdini properties in CSS) using the syntax below:

@property --theme {
  syntax: '<color>+';
  initial-value: #fff;
  inherits: true;
}

and converting it to JavaScript:

if ("registerProperty" in CSS) {
  CSS.registerProperty({
    name: "--theme",
    syntax: "<color>+",
    initialValue: "#fff",
    inherits: true
  });
}

Packaging Ideas together

Another way to support Houdini APIs is to package them for consumption like Una Kravets did with Extra.css.

Rather than providing a do-it-yourself framework where you’re responsible for all the details, it provides ready-to-use examples that you just link to from your page.

The following example illustrates the process.

In the HTML document, we load the paint worklet as a JavaScript file.

<h1>Hello<br/> World</h1>
<p>content goes here</p>

<!-- This is where we include the worklet -->

The CSS portion is where the magic happens. The CSS Paint API allows you to define custom paints, defined in the paint worklet, that you can use anywhere you can use an image.

We wrap our CSS in a @supports statement to make sure the browser supports the feature before we use it. We can also leverage the cascade to make sure we have something that works, whether that’s CSS variables, Houdini variables and APIs, or something else.

@supports (background: paint(something)) {
  h1 {
    --extra-crossColor: #fc0;
    --extra-crossWidth: 3;

    background: paint(extra-crossOut);
    line-height: 1.5;
  }

  span {
    --extra-crossColor: #d4f;
    background: paint(extra-crossOut);
  }
}