Graceful Degradation or Progressive Enhancement?

Why does it matter?

As a designer I’ve been around the block long enough to have suffered the nightmare of having to code defensively for multiple browsers. Back when Javascript was new and we were still trying to figure out best practices, it was not uncommon to see code that branched according to browser vendor and version. We ended up with CSS and Javascript files that were insanely hard to read and brittle to maintain, and we were at the mercy of vendors who might introduce incompatibilities and new features without caring whether other vendors supported them, or supported them differently.

Who remembers code like the example below? (It is not 100% accurate, but it illustrates the idea.)

[javascript]
// define the browsers we will use
var ie = navigator.userAgent.indexOf("MSIE") !== -1;
var n4 = navigator.userAgent.indexOf("Mozilla/4") !== -1 && !ie;
// Netscape and IE do things differently, take that into account in the code below

if (ie) {
// Do the IE specific code here
}

if (n4) {
// Do the Netscape thing here
}

// Do the stuff they both support equally
[/javascript]

We have moved very far from those beginnings.

Defining the terminology

Before we delve too deeply into the discussion of which one is better, let’s define the terms we will use.

Progressive enhancement

Progressive enhancement starts with a base template and adds features depending on whether they are supported by each individual browser. This may involve alternate scripting or alternate ways of displaying content.

In Progressive Enhancement (PE) the strategy is deliberately reversed: a basic markup document is created, geared towards the lowest common denominator of browser software functionality, and then the designer adds in functionality or enhancements to the presentation and behavior of the page, using modern technologies such as Cascading Style Sheets or JavaScript (or other advanced technologies, such as Flash or Java applets or SVG, etc.). All such enhancements are externally linked, preventing data unusable by certain browsers from being unnecessarily downloaded.

PE is based on a recognition that the core assumption behind “graceful degradation” — that browsers always got faster and more powerful — was proving itself false with the rise of handheld and PDA devices with low-functionality browsers and serious bandwidth constraints. In addition, the rapid evolution of HTML and related technologies in the early days of the Web has slowed, and very old browsers have become obsolete, freeing designers to use powerful technologies such as CSS to manage all presentation tasks and JavaScript to enhance complex client-side behavior.

From: Wikipedia
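To make the strategy concrete, here is a minimal sketch of progressive enhancement in practice, assuming a hypothetical .site-nav element and an unspecified enhancement: the markup works on its own, and the script adds behavior only when the browser passes a feature test.

[javascript]
// Progressive enhancement sketch: the page is fully usable without
// this script; we only add behavior when the browser supports it.
if (document.querySelector && document.addEventListener) {
  var nav = document.querySelector(".site-nav"); // hypothetical class name
  if (nav) {
    nav.addEventListener("click", function (event) {
      // Enhanced behavior goes here; without this script the plain
      // links inside .site-nav still work as regular navigation.
    }, false);
  }
}
[/javascript]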

Graceful degradation

Graceful degradation takes the opposite approach. Rather than starting from a base template, it starts with all the bells and whistles and provides ways for browsers that do not support those features to get as close as possible to the original experience, with the understanding that it will not be the same as the experience in modern or cutting edge browsers.

The main difference between the graceful degradation strategy and progressive enhancement is where you start your design. If you start with the lowest common denominator and then add features for more modern browsers for your web pages, you’re using progressive enhancement. If you start with the most modern, cutting edge features, and then scale back, you’re using graceful degradation.

From: About.com
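Going the other way, here is a minimal sketch of graceful degradation, with a hypothetical canvas chart standing in for the bells and whistles: we assume the rich experience first and scale back when a feature is missing.

[javascript]
// Graceful degradation sketch: start from the full experience and
// fall back when a feature is not available.
var canvas = document.createElement("canvas");
if (canvas.getContext && canvas.getContext("2d")) {
  // Modern path: draw the rich, interactive chart on the canvas
} else {
  // Degraded path: swap in a static image or a plain HTML table
  // carrying the same data so the content stays usable
}
[/javascript]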

So, which one do we use?

The answer, as with many other things on the web, is: It Depends.

From a coder’s perspective we always want access to the latest and greatest technology we’ve worked so hard to develop; but the browser market is still too fragmented in its support of standards for designers and developers to assume that the newest technologies are the only ones that will be used in a given project.

If we use Progressive Enhancement we build a basic site and then, as necessary, add the advanced features that make the site look the way we want, linking additional CSS and Javascript files to accomplish the tasks we want to enhance.
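A minimal sketch of that linking step, assuming a hypothetical enhancements.css file and a simple capability test: the extra stylesheet is only requested when the test passes, so older browsers never download it.

[javascript]
// Sketch: load an enhancement stylesheet only after a capability test
// passes; the file name and the test are illustrative assumptions.
if ("classList" in document.documentElement) {
  var link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = "enhancements.css";
  document.getElementsByTagName("head")[0].appendChild(link);
}
[/javascript]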

If we go the Graceful Degradation route we build the page as we want it to look in modern browsers and then make sure that the reduced functionality in older browsers will not make the site unusable.

As I research Progressive Enhancement and Graceful Degradation further I will post my findings and the results of that research.

OOCSS: What it is and how to apply it to CSS creation

Looking at my own CSS over the years I’ve come to realize how messy it is and how much (extra) work it takes when the time comes to clean it up. This came to light again when I was working on a default set of stylesheets for Sunshine and had to do some serious refactoring… It was a real pain in the ass.

An intriguing concept that I’ve been following for a few months is Object Oriented CSS (OOCSS). It is about structuring your CSS around objects and breaking common properties into generic classes so we can reuse them rather than code the same thing over and over.

The OOCSS project provides a framework (HTML, CSS, Javascript and related assets) to give developers a common starting point for large scale web development. While ebooks, with few exceptions, will never fall into the large scale development model suggested by OOCSS, the clean separation of CSS functionality into objects makes the creation of format specific styles (ePub, Kindle, others) easier and it keeps you DRY (Don’t Repeat Yourself).

Take for example the following set of CSS declarations:

[css]
h1 {
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
font-size: 2.5em; /* 40px if the default font size is 16px */
color: #ccc;
}

.header h1 {
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
font-size: 2.5em; /* 40px if the default font size is 16px */
color: #ccc;
}

.footer h1 {
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
font-size: 2.5em; /* 40px if the default font size is 16px */
color: #ccc;
}
[/css]

I see the following issues with the code above:

  • If the development team decides to change any of the parameters, we need to change it in every instance of the h1 tag
  • Adding a class to define semantic elements (like all the h1 tags that belong in a header) only makes sense if we are making each of those h1 tags different, which we are not doing in this case
  • Avoiding duplication makes the CSS file smaller in size and therefore faster to download

Consider the following corrected CSS, which is equivalent to the code above:

[css]
html {
font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}

h1 {
font-size: 2.5em; /* 40px if the default font size is 16px */
color: #ccc;
}

.header h1 {
/* Define styles that apply only to h1 elements that are part of the header.
Delete if not needed.
*/
}

.footer h1 {
/* Define styles that apply only to h1 elements that are part of the footer.
Delete if not needed.
*/
}
[/css]

The code above is structured in such a way that the following happens:

  • All the text in the page has the same font (and we don’t need to declare the font for every element unless we’re making a change from the default)
  • All h1 elements look the same throughout the document
  • We only make changes where we need to and can delete the empty style declarations where we don’t need them

Because ebooks vary so much in the CSS they support, and in how designers must tweak it, we can create device specific classes and then select only the classes that are relevant, either by grouping them in multiple files or by using SASS as described in SASS, SCSS, CSS and Modular Design.

Why XHTML is still the best answer for web publishing

Brad Neuberg wrote an article about making ePub3 “play nice” with HTML5. I read it and initially agreed with it, but upon second reading and some thought I came to disagree with the essence of his argument.

Brad opens his argument with the following two bullet points:

  • The first is a version of HTML5 that uses XML’s rules: everything has to be well formed and there can’t be any mistakes in how you create your markup. This is commonly known as XHTML5, to indicate the HTML5 standard but based on XML. XHTML5 tends to be used in specialized back end systems to do text processing as there are fairly sophisticated tools for working with it, and it also turns out to be what EPUB3 uses.
  • The second is tag soup HTML5: you don’t have to close your tags and processing is much looser and closer to the older HTML4 standard. Tag soup HTML5 dominates the open web; it’s probably 99.9% of all the content you would access with a conventional web browser.

He further argues that because tag soup is the dominant use of HTML5, all tools should be using it and XHTML5 should only be used in back end systems.

In a Twitter conversation he argues, erroneously in my opinion, that it was XHTML’s strictness and the restrictions it placed on content authors that caused the split with the WHATWG and the creation of HTML5. It was the move to an XML-only environment and the creation of a completely separate mime type (application/xhtml+xml) for XHTML applications that caused the split.

Very few, if any, authors actually publish XHTML. They publish HTML in XHTML compatibility mode without actually changing the mime type of their HTML documents or setting up the appropriate default namespace as required for XHTML compliance.
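A quick way to see the difference is to ask the browser how the current page was actually parsed. This sketch uses document.contentType, a standard DOM property, though support for it has historically varied between browsers:

[javascript]
// Sketch: a page is only treated as XHTML when it is served with the
// application/xhtml+xml mime type; otherwise it is parsed as HTML.
if (document.contentType === "application/xhtml+xml") {
  alert("This document was parsed as XHTML");
} else {
  alert("This document was parsed as HTML: " + document.contentType);
}
[/javascript]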

First, the future is creating high quality markup in HTML5 and then publishing it into different channels.

In that we agree.

We need a common base to create our content, but let’s also remember that HTML or XHTML cannot do everything.

Creating good coding habits

If namespaces were not an issue, I would still have a problem with tag soup HTML: it promotes laziness and it makes the code hard to read and understand.

Look at the following example:

[html]
<html>
<title>Sample document</title>

<h1>Sample document

<p>Lorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem <em> and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random text</p>

<p>Lorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem <em> and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random text

<p>Lorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem <em><b>and some random text</em></b> Lorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random text
[/html]

The example above is a more or less classic example of a tag soup web document. In it we can see the following issues:

  • There is no distinction between the head and the body of the document
  • The charset of the document is not defined
  • There are multiple instances of incorrectly nested tags

HTML5 provides mechanisms for user agents to work around the inconsistencies outlined above. It does not encourage authors to use this recovery system as a license to write malformed HTML content.

Why does this new HTML spec legitimise tag soup?

Actually it doesn’t. This is a misconception that comes from the confusion between conformance requirements for documents, and the requirements for user agents.

Due to the fundamental design principle of supporting existing content, the spec must define how to handle all HTML, regardless of whether documents are conforming or not. Therefore, the spec defines (or will define) precisely how to handle and recover from erroneous markup, much of which would be considered tag soup.

For example, the spec defines algorithms for dealing with syntax errors such as incorrectly nested tags, which will ensure that a well structured DOM tree can be produced.

Defining that is essential for one day achieving interoperability between browsers and reducing the dependence upon reverse engineering each other.

However, the conformance requirements for authors are defined separately from the processing requirements. Just because browsers are required to handle erroneous content, it does not make such markup conforming.

For example, user agents will be required to support the marquee element, but authors must not use the marquee element in conforming documents.

It is important to make the distinction between the rules that apply to user agents and the rules that apply to authors for producing conforming documents. They are completely orthogonal.

From: wiki.whatwg.org/wiki/FAQ#Why_does_this_new_HTML_spec_legitimise_tag_soup.3F

Specific requirements for authoring HTML5 documents can be found here: Conformance requirements for authors. It again reinforces the idea that just because it is possible to create malformed tag soup documents does not mean you should.
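To watch the error recovery the FAQ describes, we can feed the misnested markup from the earlier example to the browser’s own HTML5 parser and inspect the tree it builds. The exact serialization may vary slightly between browsers:

[javascript]
// Sketch: let the HTML5 parser recover from misnested tags and look
// at the well structured DOM it produces.
var div = document.createElement("div");
div.innerHTML = "<em><b>and some random text</em></b>";
// Typically serializes as: <em><b>and some random text</b></em>
alert(div.innerHTML);
[/javascript]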

Elements from other XML-based namespaces

We cannot add these elements to a tag soup-based HTML document. Whether this is a major issue will depend on your project needs, but unless the WHATWG/W3C provide an alternative, namespaces are the way to go.

ePub 3 uses elements from the following namespaces in order to add functionality to ePub-based publications.

Prefix                                    Namespace URI
epub                                      http://www.idpf.org/2007/ops
m (MathML)                                http://www.w3.org/1998/Math/MathML
pls                                       http://www.w3.org/2005/01/pronunciation-lexicon
ssml (Speech Synthesis Markup Language)   http://www.w3.org/2001/10/synthesis
svg (Scalable Vector Graphics)            http://www.w3.org/2000/svg

An example of XML vocabularies used in ePub 3 documents. Taken from EPUB Content Documents 3.0

Javascript support for namespaced content

While it would technically be possible to convert epub:type to epub-type, I fail to see why we should. Both Javascript and CSS allow us to query for ePub namespaced (or any other kind of) attributes. We can either query by the class of the element that carries the attribute:

[javascript]
// Select the first element that has the "content" class name
var content = document.getElementsByClassName("content")[0];
// Query the epub:type attribute by its namespace URI
var a = content.getAttributeNS("http://www.idpf.org/2007/ops", "type");
alert(a); // shows the value of epub:type for that element
[/javascript]

We can also wrap our code in ePub specific containers. ePub makes the navigator.epubReadingSystem property available, and we can use it to execute all our ePub related code without affecting the rest of our script.

[javascript]
// If the user agent is an ePub Reading System
if (navigator.epubReadingSystem) {
  // Test for the touch events feature
  var feature = "touch-events";
  var conformTest = navigator.epubReadingSystem.hasFeature(feature);
  // Report whether the feature is supported
  alert("Feature " + feature + " supported?: " + conformTest);
}
[/javascript]

See http://code.google.com/p/epub-revision/wiki/epubReadingSystem for more information.

CSS

As far as CSS is concerned, namespace support was introduced as a draft document in June of 2013. Support is not yet widespread, but when it is we’ll be able to do something like this:

@namespace "http://www.w3.org/1999/xhtml"; /* This defines the default namespace*/
@namespace epub "http://www.idpf.org/2007/ops"; /* this defines the namespace "epub" */

table {
  border: 1px solid black
}
epub|type {
  display: block;
}