Visualizing CSS properties

One of my earliest experiments in data visualization was to create a visualization of the CSS properties as they are documented in the Web Platform Documentation project. Now that it’s in a shape where I’m willing to let it out into the wild, it’s time to write about it and explain the rationale and the technology.


I’m a visual person. Rather than search for something that I may or may not know exactly what it is, I’d rather look at something that, I hope, will make it easier for me to find what I’m looking for.

I’m also lazy. Instead of looking for a property in one place and then manually typing its full URL in the Web Platform Documentation project, I’d rather add the URLs for all properties directly to the visualization so that, when I find the property I’m looking for, I can go directly to it from the visualization tree.

Building the visualization

The data

The first thing I did was to pull the data from the Web Platform Documentation project using their API to generate an initial JSON file. I then had to edit the file manually to produce something closer to the JSON format that I was looking for:

    {
      "name": "CSS",
      "children": [
        {
          "name": "Alignment",
          "children": [
            { "size": 1000, "name": "align-content", "url": "" },
            { "size": 1000, "name": "align-items", "url": "" },
            { "size": 1000, "name": "align-self", "url": "" },
            { "size": 1000, "name": "alignment-adjust", "url": "" },
            { "size": 1000, "name": "alignment-baseline", "url": "" }
          ]
        }
      ]
    }

Every time I edited or made a change to the JSON file (the resulting full file is about 2,500 lines) I ran it through JSON lint to make sure that the resulting content was valid JSON. I haven’t always done this and it has been a constant source of problems: the page appears blank, only part of the content is displayed, and other annoyances that took forever to correct.

Once we have the JSON file working, we can move into the D3 code.

The technology

D3.js is a JavaScript library for manipulating documents based on data. D3 helps you bring data to life using HTML, SVG and CSS. D3’s emphasis on web standards gives you the full capabilities of modern browsers without tying yourself to a proprietary framework, combining powerful visualization components and a data-driven approach to DOM manipulation.

What this means is that we can build visual content based on data we have collected or arbitrary data we have available. In this case we are visualizing an arbitrary grouping of CSS properties from the Web Platform Documentation project; all properties are listed, but the grouping may change as the group comes to a consensus regarding the categories.

D3 follows a fairly straightforward process. We start by defining all our variables at the top of the script to avoid surprises from JavaScript variable hoisting. The code looks like this:

// Starting values were:
// width: 2140 - margin.right - margin.left
// height: 1640 - - margin.bottom
var margin = {top: 20, right: 120, bottom: 20, left: 120},
    width = 1070 - margin.right - margin.left,
    height = 820 - - margin.bottom;

var i = 0,
    duration = 750,
    root;

var tree = d3.layout.tree()
    .size([height, width]);

var diagonal = d3.svg.diagonal()
    .projection(function(d) { return [d.y, d.x]; });

We create the SVG-related elements that we need in order to display the visualization data. The steps in the code below are:

  • Select the body of the document
  • Append the svg element
  • Set up the width and height attributes with default values
  • Create an SVG group (indicated by the <g> tag) and translate it (move it by the amount indicated by the top and left margins)
var svg ="body").append("svg")
    .attr("width", width + margin.right + margin.left)
    .attr("height", height + + margin.bottom)
  .append("g")
    .attr("transform", "translate(" + margin.left + "," + + ")");

We load the JSON file using D3’s JSON loader and set up the root element and its position.

The collapse function makes sure that nodes below the first level start out collapsed when we first open the visualization page. I wanted to make sure that users would not be overwhelmed with all the information available in the visualization and had a choice as to which items they would click and which information they’d access.

Preventing children from automatically displaying also prevents clutter in the tree. If too many children are open the vertical space gets reduced and it becomes hard to distinguish which item we are clicking on.

I’ve also set a default height for all elements… 100px sounds good at this stage.

d3.json("json/css-data.json", function(error, css) {
  root = css;
  root.x0 = height / 2;
  root.y0 = 0;

  function collapse(d) {
    if (d.children) {
      d._children = d.children;
      d._children.forEach(collapse);
      d.children = null;
    }
  }

  root.children.forEach(collapse);
  update(root);
});"height", "100px");
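Stripped of its D3 context, collapse is just a recursive move of a node's children into a `_children` holding property, from which they can later be restored. A standalone sketch on plain objects:

```javascript
// Recursively hide a node's children by stashing them in _children.
// The click handler later restores them from the same property.
function collapse(d) {
  if (d.children) {
    d._children = d.children;
    d._children.forEach(collapse);
    d.children = null;
  }
}

var node = {
  name: "Alignment",
  children: [{ name: "align-items", children: [{ name: "align-self" }] }]
};

collapse(node);
// node.children is now null; the whole subtree lives in node._children
```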

Because each click changes the nature of the layout and the number of visible elements, we need to update the layout every time the user clicks on a valid element. This involves hiding old elements and showing new nodes in the tree.
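At the heart of update() is D3's data join: comparing the list of nodes that should now be visible against what is already on screen, to decide what enters and what exits. Conceptually (a plain-JavaScript sketch of the idea, no D3):

```javascript
// Given the keys currently on screen and the keys that should now be
// visible, compute which nodes enter (new) and which exit (removed).
// This is, conceptually, what selectAll().data(nodes, key) computes.
function diff(current, next) {
  return {
    enter: next.filter(function (k) { return current.indexOf(k) === -1; }),
    exit: current.filter(function (k) { return next.indexOf(k) === -1; })
  };
}

var result = diff(["css", "alignment"], ["css", "alignment", "align-items"]);
// result.enter -> ["align-items"]; result.exit -> []
```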

function update(source) {

// Compute the new tree layout.
  var nodes = tree.nodes(root).reverse(),
      links = tree.links(nodes);

// Normalize for fixed-depth.
  nodes.forEach(function(d) { d.y = d.depth * 200; });

// Update the nodes…
  var node = svg.selectAll("g.node")
      .data(nodes, function(d) { return || ( = ++i); });

// Enter any new nodes at the parent's previous position.
  var nodeEnter = node.enter().append("g")
      .attr("class", function(d) {
        if (d.children) {
          return "inner node";
        } else {
          return "leaf node";
        }
      })
      .attr("transform", function(d) { return "translate(" + source.y0 + "," + source.x0 + ")"; })
      .on("click", click);

Using D3’s Enter/Append/Exit system we go back into the nodes we created, append a new circle, and set its radius and color.

Next I add the text for each node and set up the X and Y coordinates of the text node. I’ve aligned the text using a D3 trick where setting the dy value to .35em centers the text vertically.

For each leaf node I set up a link, as only elements without children have URL attributes. We do this in two steps:

  • Append an SVG anchor element (svg:a), which is different from our regular HTML anchor
  • Add an XLink, the XML vocabulary for defining links between resources, using the xlink:href syntax

Finally, we set up the text-anchor attribute for each node in such a way that nodes with children display their text to the left of the assigned circle and nodes without children display the text to the right of the circle.

  nodeEnter.append("circle")
      .attr("r", 1e-6)
      .style("fill", function(d) { return d._children ? "lightsteelblue" : "#fff"; });

  nodeEnter.append("svg:a")
      .attr("xlink:href", function(d) { return d.url; })
    .append("text")
      .attr("x", function(d) { return d.children || d._children ? -10 : 10; })
      .attr("dy", ".35em")
      .attr("text-anchor", function(d) { return d.children || d._children ? "end" : "start"; })
      .style("fill-opacity", 1)
      .text(function(d) { return; });

Most of the remaining work is transitioning elements to and from their current positions. This would be much easier if we were using a library such as jQuery or Dojo, but the result is worth the additional code.

The duration for all transitions is hardcoded to 750 milliseconds. Whether duration affects the user experience is an area to look further into.

// Transition nodes to their new position.
  var nodeUpdate = node.transition()
      .duration(duration)
      .attr("transform", function(d) { return "translate(" + d.y + "," + d.x + ")"; });"circle")
      .attr("r", 4.5)
      .style("fill", function(d) { return d._children ? "lightsteelblue" : "#fff"; });"text")
      .style("fill-opacity", 1);

// Transition exiting nodes to the parent's new position.
  var nodeExit = node.exit().transition()
      .duration(duration)
      .attr("transform", function(d) { return "translate(" + source.y + "," + source.x + ")"; })
      .remove();"circle")
      .attr("r", 1e-6);"text")
      .style("fill-opacity", 1e-6);

// Update the links…
  var link = svg.selectAll("path.link")
      .data(links, function(d) { return; });

// Enter any new links at the parent's previous position.
  link.enter().insert("path", "g")
      .attr("class", "link")
      .attr("d", function(d) {
        var o = {x: source.x0, y: source.y0};
        return diagonal({source: o, target: o});
      });

// Transition links to their new position.
  link.transition()
      .duration(duration)
      .attr("d", diagonal);

// Transition exiting links to the parent's new position.
  link.exit().transition()
      .duration(duration)
      .attr("d", function(d) {
        var o = {x: source.x, y: source.y};
        return diagonal({source: o, target: o});
      })
      .remove();

// Stash the old positions for transition.
  nodes.forEach(function(d) {
    d.x0 = d.x;
    d.y0 = d.y;
  });
}
The final bit of magic is to use D3’s click event to toggle the display of our content.

// Toggle children on click.
function click(d) {
  if (d.children) {
    d._children = d.children;
    d.children = null;
  } else {
    d.children = d._children;
    d._children = null;
  }
  update(d);
}

Where to go next?

There are some areas I want to further explore as I move forward with the visualization and learn more about how to visualize data:

  • Does the length of the transitions change the way people react to the data?
  • How can we control the space between items when too many of them are open?

I will post the answers to these questions as I find them :-)

HTML as a single source format

In this essay I will take what may be an unpopular position: HTML written with XML syntax (XHTML) is currently the best format to put your content in, because it is easier to convert from XHTML/CSS to pretty much any other format. In making this case we’ll explore several areas and answer the questions that come up most often.


When we speak about XHTML in this document we refer to an HTML document using XHTML syntax. I will not change the MIME type on the server to fully comply with XHTML restrictions.


The two main reasons I advocate XHTML as an authoring format are:

XHTML enforces code clarity and authoring discipline

XHTML limits the freeform structure of HTML5. Documents conforming to XHTML specifications must have, at a minimum:

  • A DOCTYPE declaration
  • An HTML element
  • A HEAD element
  • A TITLE element
  • A BODY element

The structure written as XHTML tags looks like this:

<!DOCTYPE html>
<html>
  <head>
    <title>Title Goes Here</title>
  </head>
  <body>
    <h1>Content Area</h1>
  </body>
</html>

This minimal structure must comply with the requirements below.

All XHTML tag names & attribute names must be in lowercase

The following elements are not legal XHTML:

<DIV CLASS="chapter">Chapter 1</div>

<Div Class="chapter">Chapter 1</div>

All XHTML elements must close

All elements must be closed. This applies both to our standard tags, such as the paragraph tag:

<p>This is a paragraph</p>

and to empty elements such as images and form input elements:

<img src="images/test.png" height="800" width="600" alt="Test image" />

<input type="submit" value="Submit" />

All XHTML elements must be properly nested

XHTML insists on proper nesting of the elements in our content. The following is no longer legal:

<p>This is the content of a paragraph

<p>This is our second paragraph

And it should be written like this:

<p>This is the content of a paragraph</p>

<p>This is our second paragraph</p>

All XHTML attribute values must be quoted

In addition to being lowercased, attributes must be quoted. Rather than:

<div class=chapter>Chapter 1</div>

It has to be written like this:

<div class="chapter">Chapter 1</div>
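Because these rules make XHTML machine-checkable, even a toy script can flag violations. Here is a rough well-formedness sketch in JavaScript; it only checks that lowercase tags balance and nest properly, and a real pipeline would use an actual XML parser instead:

```javascript
// Toy check: every opening tag must have a matching closing tag,
// properly nested. Self-closed tags ("/>") are skipped. Only
// lowercase tag names match, in keeping with XHTML's rules.
// Not a real parser -- illustration only.
function isBalanced(markup) {
  var stack = [];
  var re = /<(\/?)([a-z][a-z0-9]*)[^>]*?(\/?)>/g;
  var m;
  while ((m = re.exec(markup)) !== null) {
    if (m[3] === "/") continue;          // self-closing: <img ... />
    if (m[1] === "/") {                  // closing tag
      if (stack.pop() !== m[2]) return false;
    } else {
      stack.push(m[2]);                  // opening tag
    }
  }
  return stack.length === 0;
}

isBalanced("<p>This is a paragraph</p>");            // true
isBalanced("<p>First paragraph<p>Second paragraph"); // false
```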

Because it is structured, we can use transformation tools to convert to/from XHTML

A lot of the discussions I’ve had with people seem to focus on the drawbacks of the XHTML format for end users. One of the strengths the W3C cited when moving to XHTML as the default format for the web was how easy it was for machines to read it and convert it to other formats.

I’ll cover two examples using Markdown: a straight transformation and a conversion of Markdown into templated XHTML; then an example of using XSLT 1.0 to convert one flavor of XHTML into another with xsltproc.

From Markdown to HTML, straight up

One of the goals of Markdown is to “allow you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML).” The original tool and all its translations to other languages are built to allow this conversion; where they differ is in the number of extensions to the core Markdown language and the language the tools themselves are written in.

For these examples I chose Python Markdown, mostly because it’s the language and the tool I’m familiar with. We will use the Markdown file for the Markdown home page at Daring Fireball.

Below is a portion of the resulting XHTML code:


<p>Markdown is a text-to-HTML conversion tool for web writers. Markdown
allows you to write using an easy-to-read, easy-to-write plain text
format, then convert it to structurally valid XHTML (or HTML).</p>

<p>Thus, "Markdown" is two things: (1) a plain text formatting syntax;
and (2) a software tool, written in Perl, that converts the plain text
formatting to HTML. See the <a href="/projects/markdown/syntax">Syntax</a> page for details pertaining to
Markdown's formatting syntax. You can try it out, right now, using the
online <a href="/projects/markdown/dingus">Dingus</a>.</p>

<p>The overriding design goal for Markdown's formatting syntax is to make
it as readable as possible. The idea is that a Markdown-formatted
document should be publishable as-is, as plain text, without looking
like it's been marked up with tags or formatting instructions. While
Markdown's syntax has been influenced by several existing text-to-HTML
filters, the single biggest source of inspiration for Markdown's
syntax is the format of plain text email.</p>

<p>The best way to get a feel for Markdown's formatting syntax is simply
to look at a Markdown-formatted document. For example, you can view
the Markdown source for the article text on this page here:
<a href=""></a></p>

The conversion process itself is simple. Using the Perl version it looks like this:

markdown content/ > test.html
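Under the hood the conversion is a text transformation. The sketch below, in JavaScript, handles only Markdown's paragraph rule (blank-line-separated blocks become <p> elements); real converters such as Python Markdown or the Perl original do far more:

```javascript
// Toy converter: split on blank lines, wrap each block in <p>...</p>.
// Real Markdown implementations also handle headings, lists,
// emphasis, links and much more.
function paragraphs(markdown) {
  return markdown
    .split(/\n\s*\n/)                                   // blank lines separate blocks
    .map(function (block) { return block.trim(); })
    .filter(function (block) { return block.length > 0; })
    .map(function (block) { return "<p>" + block + "</p>"; })
    .join("\n");
}

paragraphs("First paragraph.\n\nSecond paragraph.");
// -> "<p>First paragraph.</p>\n<p>Second paragraph.</p>"
```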

From Markdown to templated XHTML

As part of my sunshine-markdown project I’ve researched ways to convert Markdown to XHTML. A verbose run of the tool looks like this:

[10:30:54] carlos@rivendell sunshine-markdown 4826$ ./sunshine --verbose
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/
processing:  content/

Sunshine is hardcoded to put the content of each markdown file into a template that looks something like this:

&lt; ?xml version="1.0" encoding="UTF-8"?>
<html xmlns="" xml:lang="en" lang="en"
<!-- this file is auto-generated from %(src_file_name)s. Do not edited directly -->
Copyright 2014, Carlos Araya
See: for details on the license and copyright release
  <meta charset="utf-8"/>

<section epub:type="chapter">


Using XSLT to convert XHTML into ePub-ready XHTML

One of the things we forget is that, because XHTML is structured content, we can use XSLT and XPath to convert it to other XML-based dialects, such as the XHTML dialect required for ePub3 conformance. A basic template to convert a div into a section with the proper attributes for ePub work may look something like this:

&lt; ?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="" version="1.0">
  <!-- Since we are expecting one chapter per file we can match the root element to begin -->
  <xsl:template match="/">
    <div class="featured">
      <xsl:apply -templates match="div[@epub:type='chapter']"></xsl:apply>
  <!-- The second template matches div with epub:type attribute of chapter -->
  <xsl:template match="div[@epub:type='chapter']">
    <xsl:apply -templates></xsl:apply>

Flex Boxes and the Holy Grail

Content and images taken from Mozilla Developer Network Flexbox Page and A Complete Guide to Flexbox

One of the hardest things to do in web design is to create a fluid 3-column layout, also known as the holy grail. I could talk for hours about how it was good enough for the technologies available when the model was first developed, how much of a pain it is to implement correctly, how it did not support mobile and small-form-factor devices, and how long developers have wanted a solution.

Example of Holy Grail Layout. Courtesy of Litesite

Instead I will talk about a solution that has finally become widely available. With the release of Firefox 28, the last of the big 4 browsers now supports the full flexible box layout specification. This means we can do some awesome, crazy things with layouts that, until now, we could only dream about.

In this article I will explain what Flexboxes are, the terminology, syntax, new CSS required to make it work and end with reworking the Holy Grail Layout using Flexboxes.

Getting things setup

Flexboxes don’t use the same terminology as other CSS elements. We need to discuss this terminology before we can fully understand and use the new layout in our own work.

Flex Container

The outermost container for our Flex Layout. We create this by assigning flex or inline-flex as the value of the display property, depending on whether you want the Flex Container to be a block or inline element.

.flex-container {
  display: flex; /* if you want the container to be a block element */
}

.container {
  display: inline-flex; /* if you want the container to be an inline element */
}

Each Flex Box container defines two axes:

  • A main axis that defines how the elements flow in the container
  • A cross axis that is perpendicular to the main axis


flex-direction

The flex-direction property defines the main axis (start, direction, end) for the container. It can take one of 4 values:

  • row: The flex container’s main axis is the same as the text direction. The elements in the container flow in the same direction as the page content (left to right in the case of English)
  • row-reverse: The flex container’s main axis is the same as the text direction, but the elements in the container flow in the opposite direction to the page content (right to left in the case of English)
  • column: The flex container’s main axis is the same as the block axis. The main-start and main-end points are the same as the before and after points of the writing mode (top to bottom in English)
  • column-reverse: The flex container’s main axis is the same as the block axis, with the main-start and main-end points reversed from the before and after points of the writing mode (bottom to top in English)

.content {
  flex-direction: row;
}


flex-wrap

The flex-wrap property controls whether the flex items stay on a single line or wrap onto multiple lines. It can take one of 3 values:

  • nowrap (default): single-line / left to right in ltr; right to left in rtl
  • wrap: multi-line / left to right in ltr; right to left in rtl
  • wrap-reverse: multi-line / right to left in ltr; left to right in rtl

.content {
  flex-direction: row;
  flex-wrap: wrap;
}


flex-flow

This is a shorthand for flex-direction and flex-wrap, which together define both the main and cross axes. The default value is:

.container {
  flex-flow: row nowrap;
}


justify-content

justify-content takes on a special meaning when used in flex boxes: it tells the browser how to arrange the content of the container along its main axis. It can take the following values:

  • flex-start (default): items are packed toward the start line
  • flex-end: items are packed toward the end line
  • center: items are centered along the line
  • space-between: items are evenly distributed in the line; the first item is on the start line, the last item on the end line
  • space-around: items are evenly distributed in the line with equal space around them

.content {
  flex-direction: row;
  justify-content: space-around;
}
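The arithmetic behind these keywords is simple to state. For example, space-between pins the first item to the start, the last to the end, and divides the leftover space into equal gaps. A numeric sketch of that rule in JavaScript (ignoring margins and wrapping):

```javascript
// Compute item start positions along the main axis for
// justify-content: space-between. All sizes are in pixels.
function spaceBetween(containerSize, itemSizes) {
  var used = itemSizes.reduce(function (a, b) { return a + b; }, 0);
  var gaps = itemSizes.length - 1;
  var gap = gaps > 0 ? (containerSize - used) / gaps : 0;
  var positions = [];
  var x = 0;
  itemSizes.forEach(function (size) {
    positions.push(x);
    x += size + gap;
  });
  return positions;
}

spaceBetween(500, [100, 100, 100]); // -> [0, 200, 400]
```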


align-items

align-items controls the positioning of items along the cross axis. It is the cross-axis counterpart to justify-content. It can take the following values:

  • flex-start: the cross-start margin edge of the items is placed on the cross-start line
  • flex-end: the cross-end margin edge of the items is placed on the cross-end line
  • center: items are centered on the cross axis
  • baseline: items are aligned such that their baselines align
  • stretch (default): stretch to fill the container (still respecting min-width/max-width)


align-content

align-content controls the use of the remaining space along the cross axis within a flex container. It has no effect if there is only one line of flex items.

  • flex-start: lines packed to the start of the container
  • flex-end: lines packed to the end of the container
  • center: lines packed to the center of the container
  • space-between: lines evenly distributed; the first line is at the start of the container while the last one is at the end
  • space-around: lines evenly distributed with equal space between them
  • stretch (default): lines stretch to take up the remaining space

Flex Item

Every element inside our Flex Container becomes a Flex Item. When we put text on the page as a direct child of the flex container, it’ll be wrapped in an implicit flex item. There is no need to declare these elements explicitly.


order

The order property indicates the order (an integer, without a unit) in which the items in a row are displayed. The default is to display them in source order.

.item {
  order: 1;
}


flex-grow

flex-grow indicates the proportion by which an item can grow if necessary.

If all items have flex-grow set to 1, every child will be sized equally inside the container. If you were to give one of the children a value of 2, that child would take up twice as much space as the others.

In the example below, item2 will take twice as much space as item1 and item3 if more space becomes available:

#item1 {
  flex-grow: 1;
}

#item2 {
  flex-grow: 2;
}

#item3 {
  flex-grow: 1;
}


flex-shrink

flex-shrink indicates the proportion by which an item can shrink if necessary.

If all items have flex-shrink set to 1, every child will shrink equally when the container runs out of space. If you were to give one of the children a value of 2, that child would shrink by twice as much as the others.


flex-basis

flex-basis defines the default size of an element before the remaining space is distributed.

Supported values:

  • length
  • auto

The default is auto.


flex

This is the shorthand for flex-grow, flex-shrink and flex-basis. The flex-shrink and flex-basis parameters are optional.

Default is 0 1 auto.
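A toy parser shows how the shorthand decomposes. This sketch handles only the full three-value form; real CSS parsing has more special cases (flex: auto, flex: none, omitted values):

```javascript
// Split the three-value "grow shrink basis" form of the flex
// shorthand into its parts. Illustration only -- not a CSS parser.
function parseFlex(value) {
  var parts = value.trim().split(/\s+/);
  return {
    grow: parseFloat(parts[0]),
    shrink: parseFloat(parts[1]),
    basis: parts[2]
  };
}

parseFlex("0 1 auto"); // -> { grow: 0, shrink: 1, basis: "auto" }
```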


align-self

If this property is set, it overrides the align-items value of the parent for this one item.

  • flex-start: cross-start margin edge of the items is placed on the cross-start line
  • flex-end: cross-end margin edge of the items is placed on the cross-end line
  • center: items are centered in the cross-axis
  • baseline: items are aligned such as their baselines align
  • stretch (default): stretch to fill the container (still respect min-width/max-width)

The Holy Grail with Flex Boxes

Working example in Codepen:

See the Pen First Attempt at Flexbox by Carlos Araya (@caraya) on CodePen.

Browser compatibility

Taken from Mozilla Developer Network Flexbox Page

Desktop Browsers

Feature                             | Firefox (Gecko)                                      | Chrome      | Internet Explorer | Opera                | Safari
Basic support (single-line flexbox) | 18.0 (18.0)-moz (behind a pref) [2], 22.0 (22.0) [2] | ?           | 11 [3]            | 12.10, 15-19 -webkit | 6.1-webkit [1]
Multi-line flexbox                  | 28.0 (28.0)                                          | 21.0-webkit | 11 [3]            | 12.10, 15-19 -webkit | 6.1-webkit [1]

Mobile Browsers

Feature                             | Firefox Mobile (Gecko)                               | Android | IE Phone | Opera Mobile  | Safari Mobile
Basic support (single-line flexbox) | 18.0 (18.0)-moz (behind a pref) [2], 22.0 (22.0) [2] | ?       | ?        | 15-19 -webkit | 7-webkit [1]
Multi-line flexbox                  | 28.0 (28.0)                                          | ?       | ?        | 15-19 -webkit | 7-webkit [1]



[1] Safari up to 6.0 (6.1 for iOS) supported an old, incompatible draft version of the specification. Safari 6.1 (7 for iOS) has been updated to support the final version.

[2] Up to Firefox 22, to activate flexbox support the user has to change the about:config preference “layout.css.flexbox.enabled” to true. From Firefox 22 to Firefox 27 the preference is true by default; the preference was removed in Firefox 28.

[3] Internet Explorer 10 supports an old incompatible draft version of the specification; Internet Explorer 11 has been updated to support the final version.

Getting better at Javascript

I first came across this deck while browsing Rebecca’s blog a few months ago. It made me think a lot about how I can write more JavaScript and how I can build projects around my code writing.

Being at Fluent has made me think a lot about the projects I’m working on and how best to continue leveraging my coding with those things I’m interested in.

I’ll copy the expanded quote from Paul Graham that Rebecca uses in her presentation:

It takes confidence to throw work away. You have to be able to think, there’s more where that came from. When people first start drawing, for example, they’re often reluctant to redo parts that aren’t right; they feel they’ve been lucky to get that far, and if they try to redo something, it will turn out worse. Instead they convince themselves that the drawing is not that bad, really– in fact, maybe they meant it to look that way.

Dangerous territory, that; if anything you should cultivate dissatisfaction. In Leonardo’s drawings there are often five or six attempts to get a line right. The distinctive back of the Porsche 911 only appeared in the redesign of an awkward prototype. In Wright’s early plans for the Guggenheim, the right half was a ziggurat; he inverted it to get the present shape.

Mistakes are natural. Instead of treating them as disasters, make them easy to acknowledge and easy to fix. Leonardo more or less invented the sketch, as a way to make drawing bear a greater weight of exploration. Open-source software has fewer bugs because it admits the possibility of bugs.

It helps to have a medium that makes change easy. When oil paint replaced tempera in the fifteenth century, it helped painters to deal with difficult subjects like the human figure because, unlike tempera, oil can be blended and overpainted.

Creating the right kind of wow

I had to see this presentation multiple times yesterday and today before I realized what she actually meant. I’ve been guilty of wanting the WOW!, the immediate surprise at the latest and greatest bells and whistles.

But that’s not what she meant and that’s not what we should be concentrating on. We should be concentrating on the wow that makes people change their view of a subject or topic.

Graceful Degradation or Progressive Enhancement?

Why does it matter?

As a designer I’ve been around the block long enough to have suffered the nightmare of having to code defensively for multiple browsers. Back when JavaScript was new and we were still trying to figure out best practices, it was not uncommon to see code that branched according to browser vendor and version. We ended up with insanely hard-to-read and brittle CSS and JavaScript files, and we were at the mercy of vendors who might introduce incompatibilities and new features without caring that other vendors might not support them, or might support them differently.

Who remembers code like the example below? (It’s not 100% accurate, but it illustrates the idea.)

// define the browsers we will use
var ie = navigator.userAgent.indexOf("MSIE") > -1;
var n4 = navigator.userAgent.indexOf("Mozilla/4") > -1 && !ie;
// Netscape and IE do things differently; take that into account in the code below

if (ie) {
  // Do the IE-specific code here
}

if (n4) {
  // Do the Netscape thing here
}

// Do the stuff they both support equally

We have moved very far from those beginnings.
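Today user-agent branching has largely given way to feature detection: test for the capability itself instead of guessing from the vendor string. A sketch of the pattern (the style objects are stubs standing in for a real element's `style`, so the example is self-contained):

```javascript
// Feature detection: probe for the property instead of sniffing the
// browser name. Libraries like Modernizr industrialized this pattern.
function supportsFlexbox(style) {
  return "flexWrap" in style ||
         "webkitFlexWrap" in style ||
         "msFlexWrap" in style;
}

// Stub style objects standing in for
var modern = { flexWrap: "" };
var legacyWebkit = { webkitFlexWrap: "" };
var ancient = {};

supportsFlexbox(modern);       // true
supportsFlexbox(legacyWebkit); // true
supportsFlexbox(ancient);      // false
```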

Defining the terminology

Before we delve too deep into the discussion of which one is better, let’s define the terms we will use.

Progressive enhancement

Progressive enhancement starts with a base template and adds features depending on whether they are supported by each individual browser. This may involve alternate scripting or alternate ways of displaying content.

In Progressive Enhancement (PE) the strategy is deliberately reversed: a basic markup document is created, geared towards the lowest common denominator of browser software functionality, and then the designer adds in functionality or enhancements to the presentation and behavior of the page, using modern technologies such as Cascading Style Sheets or JavaScript (or other advanced technologies, such as Flash or Java applets or SVG, etc.). All such enhancements are externally linked, preventing data unusable by certain browsers from being unnecessarily downloaded

PE is based on a recognition that the core assumption behind “graceful degradation” — that browsers always got faster and more powerful — was proving itself false with the rise of handheld and PDA devices with low-functionality browsers and serious bandwidth constraints. In addition, the rapid evolution of HTML and related technologies in the early days of the Web has slowed, and very old browsers have become obsolete, freeing designers to use powerful technologies such as CSS to manage all presentation tasks and JavaScript to enhance complex client-side behavior.

From: Wikipedia

Graceful degradation

Graceful degradation takes the opposite approach. Rather than starting from a base template, it starts with all the bells and whistles and provides ways for browsers that do not support the features to get as close as possible to the original experience, with the understanding that it will not be the same as the experience in modern or cutting-edge browsers.

The main difference between the graceful degradation strategy and progressive enhancement is where you start your design. If you start with the lowest common denominator and then add features for more modern browsers for your web pages, you’re using progressive enhancement. If you start with the most modern, cutting edge features, and then scale back, you’re using graceful degradation.


So, which one do we use?

The answer, as with many other things on the web, is: it depends.

From a coder’s perspective we always want access to the latest and greatest technology we’ve worked so hard to develop, but the browser market is still too fragmented in its support of standards for designers and developers to build a project on cutting-edge technologies alone.

If we use Progressive Enhancement, we build a basic site and, as necessary, add the advanced features that make the site look the way we want, linking additional CSS and JavaScript files to accomplish the tasks we want to enhance.

If we go the Graceful Degradation route, we build the page as we want it to look in modern browsers and then make sure that reduced functionality will not make the site unusable.

As I research more about Progressive Enhancement and Graceful Degradation, I will post my findings and results.