Categories
Uncategorized

Feature Policies

Some of these policies may be under Origin Trials or may not be available in browsers at the time the article was written. Check Chromestatus for a list of policies that are either active or under consideration.

Feature policies are a way to restrict which web APIs are available to a given web context (page or iframe), reducing the risk from malicious third-party code and from footguns of our own making, such as misusing APIs or using too many of them.

With Feature Policy, you opt in to a set of “policies” for the browser to enforce on specific features used throughout your site. These policies restrict which APIs the site can access, or modify the browser’s default behavior for certain features.

Policies are a contract between developer and browser. They tell the browser what our intent as developers is, and they keep us honest when our app goes off the rails and tries to do something it’s not allowed to. If the site or embedded third-party content violates any policy, the browser overrides the behavior with better UX or blocks the API altogether.

The full set of APIs we can restrict with feature policies is listed below. For more information about the policies, browser support, and discussion of how they work (for the feature policies that have been implemented), see featurepolicy.info.

  • accelerometer
  • ambient-light-sensor
  • autoplay
  • camera
  • document-domain
  • document-write
  • encrypted-media
  • font-display-late-swap
  • fullscreen
  • geolocation
  • gyroscope
  • layout-animations
  • lazyload
  • legacy-image-formats
  • magnetometer
  • microphone
  • midi
  • oversized-images
  • payment
  • picture-in-picture
  • speaker
  • sync-script
  • sync-xhr
  • unoptimized-images
  • unsized-media
  • usb
  • vertical-scroll
  • vr
  • wake-lock

Set headers on the server

The best way to set feature policies globally is to set the headers in your server configuration.

These examples use image-related feature policies to tighten the type of images that get served to your users.

Apache

Apache 2.4.7 and later let you set a header only if it hasn’t already been set, using the setifempty directive. These are normal headers that can be set anywhere Apache would normally allow you to do so.

<Location />
  Header setifempty Feature-Policy "unsized-media 'none'; \
    oversized-images 'self'(2.0) *(inf); \
    unoptimized-lossy-images 'self'(1) *(inf); \
    unoptimized-lossless-images 'self'(1) *(inf); \
    unoptimized-lossless-images-strict 'self'(1) *(inf)"
</Location>

Nginx

In Nginx I’ve added multiple policies in the same header. The result is the same.

location / {
  add_header Feature-Policy "unsized-media 'none';
    unoptimized-lossy-images 'self'(0.5) *(inf);
    unoptimized-lossless-images 'self'(1) *(inf);
    unoptimized-lossless-images-strict 'self'(1) *(inf);";
}

Using Feature Policy in iframes

Another way we can use feature policy for our content is inside the allow attribute of an iframe element. The example below, taken from YouTube, shows how it works.

<iframe width="560" height="315"
src="https://www.youtube.com/embed/ht_HDdtyy9s"
frameborder="0"
allow="accelerometer; fullscreen; autoplay;
encrypted-media; gyroscope;
picture-in-picture" allowfullscreen></iframe>

The allow attribute contains a list of all the feature policies that are allowed for that specific iframe.

To make sure we can handle older browsers, we also add the old-style attribute. In the example, we use allow to set up the fullscreen feature policy for the iframe, and we keep the older allowfullscreen standalone attribute for browsers that don’t support Feature Policy or where it hasn’t been implemented.

If both the feature policy and the equivalent attribute are present and the values conflict, the more restrictive of the two will win.
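For instance, in this hypothetical embed (the URL is made up), the allow attribute sets fullscreen to 'none' while the allowfullscreen attribute grants it; since 'none' is the more restrictive value, fullscreen stays blocked:

```html
<!-- Conflicting signals: the allow attribute denies fullscreen,
     while allowfullscreen grants it. The more restrictive value
     wins, so the embedded page cannot go full screen. -->
<iframe src="https://example.com/embed"
  allow="fullscreen 'none'"
  allowfullscreen></iframe>
```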

Javascript API

To help in our code we can use document.featurePolicy to query which policy features are available, whether the page allows a given feature, whether a given origin is allowed to use a feature, and which origins the page has allowed to use a feature through a feature policy.

// Lists feature policies allowed by the page.
document.featurePolicy.allowedFeatures();

// True if the page allows the feature.
document.featurePolicy.allowsFeature('geolocation');

// True if the origin allows the feature.
document.featurePolicy.allowsFeature('geolocation', 'https://devsite-v2-prod.appspot.com/');

// List of feature policies allowed
// by the browser regardless if they're active
document.featurePolicy.features();

// Lists origins on the page allowed
// to use the feature
document.featurePolicy.getAllowlistForFeature('geolocation');

The idea is that we can use these methods to tailor the code based on what features are allowed or not.

The example below checks if the client supports geolocation and if the feature is allowed for the page it’s hosted on.

if (("geolocation" in navigator) &&  document.featurePolicy.allowsFeature('geolocation')) {
  console.log('Geolocation supported and allowed');
} else {
  console.log('Geolocation not supported or not allowed');
}

We could get more detailed information by nesting the tests to know if it’s not supported or not allowed but, for most cases, one test is enough.
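A sketch of that nested version: the two browser checks (the same navigator and document.featurePolicy calls used above) are passed in as booleans so the reporting logic stands on its own:

```javascript
// Returns a distinct message for each combination, so we know
// whether the problem is support, permission, or both.
function geolocationStatus(supported, allowed) {
  if (supported && allowed) {
    return 'Geolocation supported and allowed';
  }
  if (supported) {
    return 'Geolocation supported but blocked by policy';
  }
  if (allowed) {
    return 'Geolocation allowed but not supported';
  }
  return 'Geolocation not supported and not allowed';
}

// In the browser we would call it like this:
// geolocationStatus(
//   'geolocation' in navigator,
//   document.featurePolicy.allowsFeature('geolocation')
// );
console.log(geolocationStatus(true, false));
```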

Use Cases

So, we know how Feature Policy works but why would we use them?

I can think of two use cases where having feature policies will help improve our applications.

Image Performance

The first case is guarding against image bloat and image-caused text jumps. Using feature policies, we can ensure that all images on the page have height and width attributes set, and that we don’t send images to clients that are too large to display and will take too long to load.

The example below will only work with Apache 2.4.7 and later, where the setifempty header directive was introduced.

The feature policy directives that we use in this case are:

  • oversized-images: On a web page, the number of pixels in a container determines the resolution of the image served inside it. It is unnecessary to use an image that is much larger than what the viewing device can actually render. The example will trigger on, and block, images that are more than twice as large as their rendered dimensions
  • unsized-media: Enforces explicit dimensions for images and videos. If dimensions aren’t specified on the element, the browser sets a default size of 300×150 when this policy is active
  • unoptimized-lossy-images: Requires the data size (in bytes) of images using lossy compression to be no more than X times bigger than their rendering area (in pixels). If an image is larger than the desired size, the browser renders a placeholder instead

    A lossy <img /> element should not exceed a byte-per-pixel ratio of X, with a fixed 1KB overhead allowance. For a W x H image, the file size threshold is calculated as W x H x X + 1024 (where X is specified in the policy). Any image whose file size exceeds the limit will be blocked.

  • unoptimized-lossless-images: Requires the data size (in bytes) of images using lossless compression to be no more than X times bigger than their rendering area (in pixels). If an image is larger than the desired size, the browser renders a placeholder instead

    A lossless <img /> element should not exceed a byte-per-pixel ratio of X, with a fixed 1KB overhead allowance. For a W x H image, the file size threshold is calculated as W x H x X + 1024 (where X is specified in the policy). Any image whose file size exceeds the limit will be blocked.
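The threshold formula above is easy to sanity-check with a few lines of JavaScript (a hypothetical helper, not part of any browser API):

```javascript
// File size threshold for the unoptimized-* image policies:
// width x height x byte-per-pixel ratio, plus a 1KB overhead allowance.
function imageByteThreshold(width, height, ratio) {
  return width * height * ratio + 1024;
}

// A 100x100 image under unoptimized-lossless-images 'self'(1)
// may be at most 100 * 100 * 1 + 1024 = 11024 bytes.
console.log(imageByteThreshold(100, 100, 1));
```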

<Location />
  Header setifempty Feature-Policy "unsized-media 'none'; \
    oversized-images 'self'(2) *(inf); \
    unoptimized-lossy-images 'self'(0.5) *(inf); \
    unoptimized-lossless-images 'self'(1) *(inf)"
</Location>

These policies also help keep me honest in case I forget to resize or compress images. They keep individual developers and design teams honest too, by not rendering images that don’t match the criteria that have, hopefully, been agreed upon.

Third Party Privacy

Another aspect of feature policy that I find intriguing is using them to control what browser and computer features sites have access to.

This set of feature policies disables access to the features listed. Some are user-facing privacy considerations, like not granting access to the camera, microphone, or geolocation.

Others have to do with older web features that have security implications, like being able to programmatically write content to the page and wipe existing content.

<Location />
  Header setifempty Feature-Policy "geolocation 'none'; \
    camera 'none'; \
    microphone 'none'; \
    usb 'none'; \
    document-domain 'none'; \
    document-write 'none'"
</Location>

This should disable the features for all sites, including our own.

If we want to disable third-party access but retain the ability to use the features on our own site, we can change none to self.

To grant access to third-party sites, replace none with the URL of the site or sites you want to give permission to, or add the URL after self if you’ve kept permission for your own site.
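As a sketch, using example.com as a stand-in for a real third-party origin, the geolocation line could take either of these forms:

```apacheconf
<Location />
  # Keep geolocation for our own origin only:
  Header setifempty Feature-Policy "geolocation 'self'"

  # Or keep it for our origin plus one trusted third party:
  # Header setifempty Feature-Policy "geolocation 'self' https://example.com"
</Location>
```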

Conclusion

Feature policy offers an interesting way to keep ourselves honest and limit the damage a rogue site can do to our users.

Support is spotty and uneven; check https://featurepolicy.info/ and the caniuse.com Feature Policy entry for up-to-date information about the policy directives and browser support.



HTTP Headers and Responsive Images

This is an old one but it still worries me and made me search for a possible solution in the context of web APIs without requiring a Node package manager or a Node infrastructure.

No, no one wants to write code like what Brad posted in 2015, but that’s the reality when using responsive images, or is it?

If we don’t mind doing the work on the server rather than the client we can do something like the code below, taken from Adapting to Users with Client Hints, to load WebP images for browsers that support them and JPG/PNG for browsers that don’t.

<?php
// Check Accept for an "image/webp" substring.
$webp = stristr($_SERVER["HTTP_ACCEPT"], "image/webp") !== false ? true : false;

// Set the image URL based on the browser's WebP support status.
$imageFile = $webp ? "whats-up.webp" : "whats-up.jpg";
?>

<img src="<?php echo($imageFile); ?>" alt="I'm an image!">

We can then shrink our responsive image markup by removing the unnecessary formats using server-side code, in this case, PHP.

<?php
// Check Accept for an "image/webp" substring.
$webp = stristr($_SERVER["HTTP_ACCEPT"], "image/webp") !== false ? true : false;

$name = "company-photo";
$format = $webp ? ".webp" : ".jpg";
?>

<picture>
  <source srcset="<?php echo($name); ?>-256w<?php echo($format); ?> 256w,
    <?php echo($name); ?>-512w<?php echo($format); ?> 512w,
    <?php echo($name); ?>-768w<?php echo($format); ?> 768w,
    <?php echo($name); ?>-1024w<?php echo($format); ?> 1024w"
    sizes="(min-width: 560px) 251px, 88.43vw"
    type="<?php echo($webp ? 'image/webp' : 'image/jpeg'); ?>">
  <img src="company-photo-256w.jpg"
    sizes="(min-width: 560px) 251px, 88.43vw"
    alt="The Company Photo!">
</picture>

The code first checks if the browser supports WebP by testing whether the string image/webp is included in the Accept header, and records the result.

We then create two variables. One with the name of the file and the other one with the file extension that we use based on WebP support (or lack thereof).

Then for every image in the srcset attribute, we compose it using PHP echo statements and the variables that we created.

This works, but it requires a server-side script, it mixes business logic with the HTML, and it gets cumbersome with more than a few images.

Looking at the client hints article I thought that they may be the solution to writing something like the PHP code in the previous example. Unfortunately, Client Hints have no equivalent to the Accept HTTP header.


Centering with Flexbox

One of the most frustrating things for me is how to center things using CSS.

There are many ways to do it and everyone disagrees with everyone else. It is also true that there is no one-size-fits-all solution because what works with modern browsers doesn’t necessarily work in older versions.

That said, my favorite way to center content is using Flexbox.

For example, the code below will center its children both vertically and horizontally on the available screen size (100vw by 100vh).

Because we’re using a left-to-right, top-to-bottom language, we’ve set the direction to column; otherwise, the second and subsequent elements would appear next to each other.

We can choose whether to align vertically or horizontally and we can also choose the size of the area where we want to center content if using it as a standalone element.

.container {
  width: 100vw;
  height: 100vh;
  display: flex;
  flex-direction: column;
  /* Horizontal alignment (cross axis) */
  align-items: center;
  /* Vertical alignment (main axis) */
  justify-content: center;
}

Using this code as a starting point we can experiment with what it would take to center content that is nested or inside grid cells.
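As one possible starting point for the grid case (the class name is made up for the example), place-items centers content inside every cell:

```css
.grid-container {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  /* Shorthand for align-items: center plus justify-items: center,
     centering the content of each grid cell */
  place-items: center;
}
```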


New code for old browsers: Babel

One of the coolest things that, in my opinion, has happened in front-end development is transpilation. You can transpile third-party languages like Dart or TypeScript, or you can take modern ES2015+ code (currently ES2019, soon to be ES2020) and make it work in older browsers that don’t support the latest ECMAScript features.

In this post, we’ll worry about converting ES2020 to ES5 using Babel.

Setting up Babel

Babel requires a build system. For this post, I’ve chosen Gulp which is what I normally use.

I assume you already have a package.json in your project directory. If you don’t, run npm init --yes to get a quick-start version; you can edit it later.

Run the following command to install Gulp, Babel, and related plugins:

npm i -D gulp \
gulp-load-plugins \
gulp-sourcemaps \
@babel/core \
@babel/preset-env \
gulp-babel

Create a gulpfile.js file and copy the following code into it.

The code takes all the matching files and does the following:

  • Processes them through Babel using the Babel env preset
  • Creates a sourcemap for each script that gets transpiled
  • Puts the resulting scripts and sourcemaps in src/js

// Require Gulp first
const gulp = require('gulp');
// Lazy load plugins
const $ = require('gulp-load-plugins')({
  lazy: true,
});

function runBabel() {
  return gulp.src('scripts/*.js')
    .pipe($.sourcemaps.init())
    .pipe($.babel({
      presets: ['@babel/env'],
    }))
    .pipe($.sourcemaps.write('.'))
    .pipe(gulp.dest('src/js/'))
}

exports.babel = runBabel;
exports.default = runBabel;

The idea is that we’ll write the code once and then let Babel convert it to code that will run in older browsers.

Preset-env

For a while, Babel forced people to transpile all features, whether a browser supported modern JavaScript or not. The Babel team introduced preset-env as an alternative.

The idea is that, given a list of target browsers you provide, Babel will only transpile those parts of your code that your target browsers don’t support, making the resulting code slimmer.
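A minimal sketch of that configuration in a .babelrc file; the browser list shown here is just an example, not a recommendation:

```json
{
  "presets": [
    ["@babel/preset-env", {
      "targets": {
        "browsers": ["last 2 versions", "not dead"]
      }
    }]
  ]
}
```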

Preset-modules

After I originally finished writing this, the Babel team introduced @babel/preset-modules, which provides a better way to generate optimized builds for browsers that support ES modules.

ECMAScript Modules

As Jake Archibald points out, the module/nomodule pattern is one of the best ways to serve code to both modern browsers (that support all the latest and greatest features) and older browsers, without having to write unnecessary code or code that will only run on one version or the other.

Because we used Babel to target specific browsers for transpilation, we can be confident that newer browsers will work with our module script while the nomodule version relies on older technologies.

The HTML looks like this.

<script type="module" src="module.mjs"></script>
<script nomodule src="fallback.js" defer></script>

type="module" indicates that the associated script will be treated as an ES2015 module and will be ignored by browsers that don’t support them.

nomodule will run the code as a traditional script. Browsers that support modules also know to ignore any script tag that has the nomodule attribute.


Learning to query and read CrUX data

I’ve decided to take another look at BigQuery in the context of the Chrome User Experience Report or CrUX.

The idea is that Google, through the Chrome team and the tools they make available to developers, has collected real user metrics (RUM) for millions of origins around the world and grouped them by country.

Warning

In the BigQuery free tier, users can only query 1TB worth of data per month. Beyond that, the standard rate of $5/TB applies. So when BigQuery tells you how many gigabytes of data will be processed by each query, pay attention.

This is why I picked a smaller country to experiment with rather than use the US or the full dataset… it can get really expensive if you’re not careful.

The first example uses BigQuery to search for all unique origins in Chile as of December 2019.

SELECT
  COUNT(DISTINCT origin)
FROM
  `chrome-ux-report.country_cl.201912`

Let’s unpack the query, referencing Rick Viscomi’s Using the Chrome UX Report on BigQuery.

SELECT COUNT(DISTINCT origin) means querying for the number of unique origins in the table. Roughly speaking, two URLs are part of the same origin if they have the same scheme, host, and port.
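The URL API makes the same-origin rule easy to see in code (the URLs are made up for the example):

```javascript
// A URL's origin is its scheme, host, and port taken together.
const a = new URL('https://example.com/page-one');
const b = new URL('https://example.com:443/page-two');
const c = new URL('http://example.com/page-one');

// a and b share an origin: 443 is the default port for https,
// and the path doesn't matter. c uses a different scheme,
// so it is a different origin.
console.log(a.origin === b.origin); // true
console.log(a.origin === c.origin); // false
```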

FROM `chrome-ux-report.country_cl.201912` specifies the address of the source table, which has three parts:

  • The Cloud project name chrome-ux-report, within which all CrUX data is organized
  • The dataset country_cl, representing data from Chile (country code CL)
  • The table 201912, representing December 2019 in YYYYMM format

With slight modifications, we can get a list of the URLs for the unique origins: we remove COUNT and leave the select statement as DISTINCT origin.

SELECT
  DISTINCT origin
FROM
  `chrome-ux-report.country_cl.201912`

You can see a detailed preview of the data in BigQuery. Note that BigQuery is moving to the Google Cloud Console and that the URL may change as a result.

Of particular importance are the performance metrics available as part of the report.

The data for each metric is organized as an array of objects that we can access as [metric].histogram.bin.

The following example will pick up the sum of all First Contentful Paint density values and assign it to the variable fcp_density.

It will pull the data from the December 2019 report and flatten all the values in first_contentful_paint.histogram.bin.

The matching conditions are: the origin has to be https://www.vidasecurity.cl/ and the fcp.start value has to be greater than 10000 milliseconds, meaning we’re summing the density of page loads where the first paint took longer than 10 seconds.

SELECT
  SUM(fcp.density) AS fcp_density
FROM
  `chrome-ux-report.country_cl.201912`,
  UNNEST(first_contentful_paint.histogram.bin) AS fcp
WHERE
  origin = 'https://www.vidasecurity.cl/' AND
  fcp.start > 10000

The following example, unlike the other ones, selects multiple elements from a specific table to get more fine-grained information from the data.

This example asks the question: in the country_cl.201912 report, which origins took more than 20 seconds (20000 milliseconds) to first paint on a phone? The results are listed by origin.

SELECT
  origin,
  fcp
FROM
  `chrome-ux-report.country_cl.201912`,
  UNNEST(first_contentful_paint.histogram.bin) AS fcp
WHERE
  form_factor.name = 'phone' AND
  fcp.start > 20000
ORDER BY
  origin

And this is a similar query for desktop, this time averaging the density across every monthly table for first paints taking more than 20 seconds (20000 milliseconds).

SELECT
  _TABLE_SUFFIX AS yyyymm,
  AVG(fcp.density) AS fast_fcp
FROM
  `chrome-ux-report.country_cl.*`,
  UNNEST(first_contentful_paint.histogram.bin) AS fcp
WHERE
  form_factor.name = 'desktop' AND
  fcp.start > 20000
GROUP BY
  yyyymm
ORDER BY
  yyyymm

With these two queries, we can start making comparisons and predictions about the data before we jump into more in-depth queries.

Some of the questions I had about the data:

  • How much do numbers change over the years?
  • Is there a significant difference between desktop and mobile values?

Using these questions as a starting point we can dig deeper into the general data or query specific sites for more information. If we want, we can save the data as JSON and use a visualization library like D3 to generate graphical representations of the data or save it as CSV to manipulate on Excel or Google Sheets.

Once you’ve got all the answers you can get out of the CrUX dataset you can move to the HTTP Archive dataset. This dataset is far more comprehensive, both in the breadth of the data it collects and the frequency with which it collects it.

For more information on how to use the HTTP Archive BigQuery dataset see Getting Started Accessing the HTTP Archive with BigQuery by Paul Calvano.