Graceful Degradation or Progressive Enhancement?

Why does it matter?

As a designer I’ve been around the block long enough to have suffered the nightmare of coding defensively for multiple browsers. Back when JavaScript was new and we were still trying to figure out best practices, it was not uncommon to see code that branched according to browser vendor and version. We ended up with hard-to-read, brittle CSS and JavaScript files, and we were at the mercy of vendors who might introduce incompatibilities and new features without caring that other vendors might not support them, or might support them differently.

Who remembers code like the example below? (It’s not 100% accurate, but it illustrates the idea.)

[javascript]
// Sniff for the browsers we need to special-case
// (illustrative only; the real checks were even more convoluted)
var ie = navigator.userAgent.indexOf("MSIE") !== -1;
var n4 = navigator.userAgent.indexOf("Mozilla/4") !== -1 && !ie;
// Netscape and IE do things differently, take that into account in the code below

if (ie) {
  // Do the IE-specific code here
}

if (n4) {
  // Do the Netscape thing here
}

// Do the stuff they both support equally
[/javascript]

We have moved very far from those beginnings.
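Today the same problem is solved with feature detection rather than browser sniffing. The sketch below is illustrative only (the function names are my own, not from any library): we test for the capability itself and branch on that, so the code keeps working when a new browser appears.

```javascript
// Test whether a capability exists on a host object,
// instead of asking which browser we are running in.
function supportsFeature(scope, name) {
  return typeof scope === 'object' && scope !== null && name in scope;
}

// Hypothetical usage: pick a storage strategy based on capability,
// not on the vendor string.
function pickStorage(scope) {
  return supportsFeature(scope, 'localStorage') ? 'localStorage' : 'cookies';
}
```

In a browser you would call `pickStorage(window)`; the branch taken depends only on what the browser actually supports.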

Defining the terminology

Before we delve too deep into the discussion of which one is better, let’s define the terms we will use.

Progressive enhancement

Progressive enhancement starts with a base template and begins to add features depending on whether they are supported by each individual browser. This may involve alternate scripting or alternate ways of displaying content.

In Progressive Enhancement (PE) the strategy is deliberately reversed: a basic markup document is created, geared towards the lowest common denominator of browser software functionality, and then the designer adds in functionality or enhancements to the presentation and behavior of the page, using modern technologies such as Cascading Style Sheets or JavaScript (or other advanced technologies, such as Flash or Java applets or SVG, etc.). All such enhancements are externally linked, preventing data unusable by certain browsers from being unnecessarily downloaded.

PE is based on a recognition that the core assumption behind “graceful degradation” — that browsers always got faster and more powerful — was proving itself false with the rise of handheld and PDA devices with low-functionality browsers and serious bandwidth constraints. In addition, the rapid evolution of HTML and related technologies in the early days of the Web has slowed, and very old browsers have become obsolete, freeing designers to use powerful technologies such as CSS to manage all presentation tasks and JavaScript to enhance complex client-side behavior.

From: Wikipedia

Graceful degradation

Graceful degradation takes the opposite approach. Rather than starting from a base template, it starts with all the bells and whistles and provides ways for browsers that do not support those features to get as close as possible to the original experience, with the understanding that it will not be the same as the experience in modern or cutting-edge browsers.

The main difference between the graceful degradation strategy and progressive enhancement is where you start your design. If you start with the lowest common denominator and then add features for more modern browsers for your web pages, you’re using progressive enhancement. If you start with the most modern, cutting edge features, and then scale back, you’re using graceful degradation.

From: About.com

So, which one do we use?

The answer, as with many other things on the web, is: It Depends.

From a coder’s perspective we always want access to the latest and greatest technology we’ve worked so hard to develop; but the browser market is still too fragmented in its support of standards for designers and developers to assume that the newest technologies are the only ones that will be used in a given project.

With Progressive Enhancement we build a basic site and then, as necessary, add the advanced features that make the site look the way we want, linking additional CSS and JavaScript files to accomplish the tasks we want to enhance.

If we go the Graceful Degradation route, we build the page as we want it to look in modern browsers and then make sure that reduced functionality does not make the site unusable.
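A classic, minimal example of graceful degradation in CSS (the class name and colors here are illustrative): declare a simple fallback first and the enhanced version second. Browsers that don’t understand the second declaration simply keep the first.

```css
.banner {
  /* Fallback: a solid color every browser understands */
  background: #336699;
  /* Enhancement: browsers that support gradients override the fallback */
  background: linear-gradient(#336699, #224466);
}
```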

As I research Progressive Enhancement and Graceful Degradation further, I will post my findings.

OOCSS: What it is and how you apply it to CSS creation

Looking at my own CSS over the years I’ve come to realize how messy it is and how much extra work it takes to clean it up. This came to light again when I was working on a default set of stylesheets for Sunshine and had to do some serious refactoring… It was a real pain in the ass.

An intriguing concept I’ve been following for a few months is Object Oriented CSS (OOCSS). It refers to structuring your CSS around objects and breaking common properties into generic classes so we can reuse them rather than having to code the same thing over and over.

The OOCSS project provides a framework: HTML, CSS, JavaScript and related assets that give developers a common starting point for large-scale web development. While ebooks, with few exceptions, will never fall into the large-scale development model suggested by OOCSS, the clean separation of CSS functionality into objects makes the creation of format-specific styles (ePub, Kindle, others) easier, and it keeps you DRY (Don’t Repeat Yourself).

Take for example the following set of CSS declarations:

[css]
h1 {
  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
  font-size: 2.5em; /* 40px if the default font size is 16 */
  color: #ccc;
}

.header h1 {
  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
  font-size: 2.5em; /* 40px if the default font size is 16 */
  color: #ccc;
}

.footer h1 {
  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
  font-size: 2.5em; /* 40px if the default font size is 16 */
  color: #ccc;
}
[/css]

I see the following issues with the code above:

  • If the development team decides to change any of the parameters we need to change it in every iteration of the h1 tag
  • Adding a class to define semantic elements (like all the h1 tags that belong in a header) only makes sense if we are making each of the h1 tags different, which we are not in this case.
  • Avoiding duplication makes the CSS file smaller and therefore faster to download

Consider the following corrected CSS, which is equivalent to the code above:

[css]
html {
  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}

h1 {
  font-size: 2.5em; /* 40px if the default font size is 16 */
  color: #ccc;
}

.header h1 {
  /* Define styles that apply only to h1 elements in the header.
     Delete if not needed. */
}

.footer h1 {
  /* Define styles that apply only to h1 elements in the footer.
     Delete if not needed. */
}
[/css]

The code above is structured in such a way that the following happens:

  • All the text in the page has the same font (and we don’t need to declare the font for every element unless we’re making a change from the default)
  • All h1 elements look the same throughout the document
  • We only make changes where we need to and can delete the empty style declarations where we don’t need them.

Because ebooks vary so much in the CSS they support, and in how designers must tweak it, we can create device-specific classes and then select only the classes that are relevant, either by grouping them in multiple files or by using SASS as described in SASS, SCSS, CSS and Modular Design.
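One possible sketch of that device-specific approach (the class names below are hypothetical, not from any framework): keep the shared object the same everywhere and scope format-specific tweaks under a per-device class, so a build step or SASS can pull in only the relevant rules.

```css
/* Shared object: identical across formats */
.chapter-title {
  font-size: 2em;
}

/* Hypothetical device-specific overrides, kept in separate files
   or SASS partials so each build includes only what it needs */
.kindle .chapter-title {
  font-size: 1.5em;
}

.epub3 .chapter-title {
  font-size: 2.5em;
}
```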

Why XHTML is still the best answer for web publishing

Brad Neuberg wrote an article about making ePub3 “play nice” with HTML5. I read it and initially agreed with it but, upon second reading and some thought, I disagree with the essence of his argument.

Brad opens his argument with the following two bullet points:

  • The first is a version of HTML5 that uses XML’s rules: everything has to be well formed and there can’t be any mistakes in how you create your markup. This is commonly known as XHTML5, to indicate the HTML5 standard but based on XML. XHTML5 tends to be used in specialized back end systems to do text processing as there are fairly sophisticated tools for working with it, and it also turns out to be what EPUB3 uses.
  • The second is tag soup HTML5: you don’t have to close your tags and processing is much looser and closer to the older HTML4 standard. Tag soup HTML5 dominates the open web; it’s probably 99.9% of all the content you would access with a conventional web browser.

He further argues that because tag soup is the dominant form of HTML5, all tools should use it and XHTML5 should only be used in back-end systems.

In a Twitter conversation he argues, erroneously in my opinion, that it was XHTML’s strictness and the restrictions it placed on content authors that caused the split with the WHATWG and the creation of HTML5. In fact, it was the move to an XML-only environment and the creation of a completely separate MIME type (application/xhtml+xml) for XHTML applications that caused the split.

Very few, if any, authors actually publish XHTML. They publish HTML in XHTML compatibility mode without actually changing the MIME type of their HTML documents or setting up the appropriate default namespace as required for XHTML compliance.

First, the future is creating high quality markup in HTML5 and then publishing it into different channels.

In that we agree.

We need a common base to create our content, but let’s also remember that HTML or XHTML cannot do everything.

Creating good coding habits

If namespaces were not an issue, I would still have an issue with tag soup HTML: it promotes laziness and makes the code hard to read and understand.

Look at the following example:

<html>
<title>Sample document</title>

<h1>Sample document

<p>Lorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem <em> and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random text</p>

<p>Lorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem <em> and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random text

<p>Lorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random textLorem <em><b>and some random text</em></b> Lorem Ipsum and some random textLorem Ipsum and some random textLorem Ipsum and some random text

In the example above, a more or less classic example of a tag-soup web document, we can see the following issues:

  • There is no distinction between the head and the body of the document
  • The charset of the document is not defined
  • There are multiple instances of incorrectly nested tags
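For contrast, here is a well-formed version of the same document (text shortened for brevity) that addresses all three issues:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- The head and charset are now explicit -->
  <meta charset="utf-8">
  <title>Sample document</title>
</head>
<body>
  <h1>Sample document</h1>
  <p>Lorem Ipsum and some random text <em>and some random text</em>
    Lorem Ipsum and some random text</p>
  <p>Lorem Ipsum and some random text <em><b>and some random text</b></em>
    Lorem Ipsum and some random text</p>
</body>
</html>
```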

HTML5 provides mechanisms for user agents to work around inconsistencies like those outlined above, but it does not encourage authors to rely on this error handling when writing HTML content.

Why does this new HTML spec legitimise tag soup?

Actually it doesn’t. This is a misconception that comes from the confusion between conformance requirements for documents, and the requirements for user agents.

Due to the fundamental design principle of supporting existing content, the spec must define how to handle all HTML, regardless of whether documents are conforming or not. Therefore, the spec defines (or will define) precisely how to handle and recover from erroneous markup, much of which would be considered tag soup.

For example, the spec defines algorithms for dealing with syntax errors such as incorrectly nested tags, which will ensure that a well structured DOM tree can be produced.

Defining that is essential for one day achieving interoperability between browsers and reducing the dependence upon reverse engineering each other.

However, the conformance requirements for authors are defined separately from the processing requirements. Just because browsers are required to handle erroneous content, it does not make such markup conforming.

For example, user agents will be required to support the marquee element, but authors must not use the marquee element in conforming documents.

It is important to make the distinction between the rules that apply to user agents and the rules that apply to authors for producing conforming documents. They are completely orthogonal.

From: wiki.whatwg.org/wiki/FAQ#Why_does_this_new_HTML_spec_legitimise_tag_soup.3F

Specific requirements for authoring HTML5 documents can be found here: Conformance requirements for authors. It again reinforces the idea that just because it is possible to create malformed tag-soup documents doesn’t mean you should.

Elements from other XML-based namespaces

We cannot add these elements to a tag-soup HTML document. Whether this is a major issue will depend on your project’s needs, but unless the WHATWG/W3C provide an alternative, namespaces are the way to go.

ePub 3 uses elements from the following namespaces in order to add functionality to ePub-based publications.

Prefix Namespace URI
epub http://www.idpf.org/2007/ops
m (MathML) http://www.w3.org/1998/Math/MathML
pls http://www.w3.org/2005/01/pronunciation-lexicon
ssml (Speech Synthesis Markup Language) http://www.w3.org/2001/10/synthesis
svg (Scalable Vector Graphics) http://www.w3.org/2000/svg

An example of XML vocabularies used in ePub 3 documents. Taken from EPUB Content Documents 3.0

Javascript support for namespaced content

While technically it would be possible to convert epub:type to epub-type, I fail to see why we should. Both JavaScript and CSS allow us to query for ePub-namespaced (or any other kind of) attributes. We can query by a class name and then read the namespaced attribute:

// Select the first element that has the "content" class name
var content = document.getElementsByClassName("content")[0];
// Read the epub:type attribute through its namespace URI
var a = content.getAttributeNS('http://www.idpf.org/2007/ops', 'type');
alert(a); // shows the value of epub:type for that element

We can also wrap our code in ePub-specific containers. ePub makes available the navigator.epubReadingSystem property, which we can use to execute all our ePub-related code without affecting the rest of our script.

// If the user agent is an ePub Reading System
if (navigator.epubReadingSystem) {
  var feature = "touch-events";
  // Test for the touch events feature
  var conformTest = navigator.epubReadingSystem.hasFeature(feature);
  // Output whether the feature is supported
  alert("Feature " + feature + " supported?: " + conformTest);
}

See http://code.google.com/p/epub-revision/wiki/epubReadingSystem for more information.

CSS

As far as CSS is concerned, namespace support was introduced as a draft document in June of 2013. Support is not widespread, but when it is, we’ll be able to do something like this:

@namespace "http://www.w3.org/1999/xhtml"; /* This defines the default namespace */
@namespace epub "http://www.idpf.org/2007/ops"; /* This defines the namespace "epub" */

table {
  border: 1px solid black;
}
/* Select elements that carry an epub|type attribute */
[epub|type] {
  display: block;
}

WebGL: The New Frontier (for some)

Not too long ago there was a demo of the Epic Citadel OpenGL demo translated to JavaScript and played inside a web browser. The code was ported from C++ to asm.js, a subset of JavaScript optimized for performance, and then fed into a web browser for rendering and display.

Even leaving the conversion from C++ to JavaScript aside for the moment: you’re running a high-end game inside a web browser… without plugins!

Think about it. We can now do 3D animations that take advantage of your computer’s GPU to render high quality 3D objects, animations and transitions that rival work done with professional 3D software (and you can even import files from Maya, Blender and others into a WebGL environment).

What is WebGL?

WebGL is a cross-platform, royalty-free web standard for a low-level 3D graphics API based on OpenGL ES 2.0, exposed through the HTML5 Canvas element as Document Object Model interfaces. Developers familiar with OpenGL ES 2.0 will recognize WebGL as a Shader-based API using GLSL, with constructs that are semantically similar to those of the underlying OpenGL ES 2.0 API. It stays very close to the OpenGL ES 2.0 specification, with some concessions made for what developers expect out of memory-managed languages such as JavaScript.
From: http://www.khronos.org/webgl/

WebGL is a JavaScript API that allows us to implement interactive 3D graphics, straight in the browser. For an example of what WebGL can do, take a look at this WebGL demo video (viewable in all browsers!)
WebGL runs as a specific context for the HTML <canvas> element, which gives you access to hardware-accelerated 3D rendering in JavaScript. Because it runs in the <canvas> element, WebGL also has full integration with all DOM interfaces. The API is based on OpenGL ES 2.0, which means that it is possible to run WebGL on many different devices, such as desktop computers, mobile phones and TVs.
From: http://dev.opera.com/articles/view/an-introduction-to-webgl/

Why should we care?

Besides the obvious implications of having 3D graphics online without plugins, imagine how much more expressive we can make our content if we can mix 2D and 3D content. WebGL means that we’re no longer tied to plugins to create AAA-game-type experiences for the web platform (look at Epic Citadel for a refresher), to incorporate 3D-rendered content into 2D web content (product configurators like the 360 Car Visualizer come to mind), to build web-native architectural or anatomical visualizations, or just to create awesome animation work like thisway.js, a WebGL remake of an older Flash-based demo, or Mr. Doob’s extensive library of demos and examples.

As the WebGL technology stack matures we’ll see more and more examples and creative uses of the technology. I hope we’ll see more editors and tools that will make it easier for non-experts to create interactive content.

Before we start

Before you try creating your own WebGL content, it’s a good idea to visit http://analyticalgraphicsinc.github.io/webglreport/ to test whether your browser and computer support WebGL.
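The same test can be done in code. A minimal sketch, assuming a browser environment (the helper name is my own): ask a canvas for a WebGL context, falling back to the older experimental name that early implementations used.

```javascript
// Try to obtain a WebGL context from a canvas element.
// getContext returns null when a context type is unsupported,
// so a null result means WebGL is unavailable.
function getGLContext(canvas) {
  return canvas.getContext('webgl') ||
         canvas.getContext('experimental-webgl') ||
         null;
}
```

In a page you would call it as `getGLContext(document.createElement('canvas'))` and check for null before rendering anything.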

Building WebGL content

There are two ways to create WebGL content. They each have their advantages and drawbacks, which will be pointed out as we progress through the different creation processes.

The hard way

The hard way is to create content using WebGL primitives and coding everything ourselves, as shown in https://developer.mozilla.org/en-US/docs/Web/WebGL/Adding_2D_content_to_a_WebGL_context and https://developer.mozilla.org/en-US/docs/Web/WebGL/Animating_objects_with_WebGL. You can see the amount of code and the specialized items we need to build in order to create the content.

Take a look at the Mozilla WebGL tutorial as an illustration of the complexities involved in creating WebGL with the raw libraries and interfaces.

If you’re interested in the full complexity of writing raw WebGL code, look at the source for https://developer.mozilla.org/samples/webgl/sample7/index.html. We’ll revisit parts of the examples above when we talk about custom CSS filters.

The pen below also illustrates the complexities of writing bare WebGL

See the Pen HLxbB by Carlos Araya (@caraya) on CodePen.

For a more complete look at the workings of WebGL, see Mike Bostock’s WebGL Tutorial series.

If you are a game programmer or a game developer, this is the level you’ll be working at, and other platforms, libraries and frameworks will not be necessary for your work. If you’re not comfortable coding to the bare metal or working with matrix algebra, there are solutions for the rest of us.

The easy way

Fortunately there are libraries that hide the complexities of WebGL and give you a more user-friendly interface to work with. I’ve chosen to work with Three.js, a library developed by Ricardo Cabello, also known as Mr.doob.

The following code uses Three.js to create a wireframe animated cube:

[codepen_embed height=”516″ theme_id=”2039″ slug_hash=”JlEfG” default_tab=”js”]
var camera, scene, renderer;
var geometry, material, mesh;

init();
animate();

function init() {
  camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 1000);
  camera.position.z = 800;

  scene = new THREE.Scene();

  geometry = new THREE.CubeGeometry(200, 200, 200);
  // We have to use MeshBasicMaterial so we can see the color
  // Otherwise it's all black and waits for the light to hit it
  material = new THREE.MeshBasicMaterial({
    color: 0xAA00AA,
    wireframe: true
  });

  mesh = new THREE.Mesh(geometry, material);
  scene.add(mesh);

  renderer = new THREE.CanvasRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);

  document.body.appendChild(renderer.domElement);
}

function animate() {
  // note: three.js includes requestAnimationFrame shim
  requestAnimationFrame(animate);

  // Play with the rotation values
  mesh.rotation.x += 0.01;
  mesh.rotation.y -= 0.002;

  renderer.render(scene, camera);
}

See the Pen Basic WebGL example using Three.js by Carlos Araya (@caraya) on CodePen.
[/codepen_embed]

As you can see, Three.js abstracts away all the matrix/linear algebra and shader programming, giving you just the elements you need to create content. This has both advantages and drawbacks.

Mike Bostock writes in his introduction to WebGL tutorial:

Every introductory WebGL tutorial I could find uses utility libraries (e.g., Sylvester) in efforts to make the examples easier to understand. While encapsulating common patterns is desirable when writing production code, I find this practice highly counterproductive to learning because it forces me to learn the utility library first (another layer of abstraction) before I can understand the underlying standard. Even worse when accompanying examples use minified JavaScript.

On the other hand many people, myself included, just want something that works. I am less interested in the underlying structure (that may come in time as I become more proficient with the library way of doing WebGL) than I am in seeing results happen.

See the section WebGL resources for a partial list of other WebGL libraries.

Building a 3D scene

For this example we will use Three.js to build an interactive scene where we will be able to use the keyboard to move a cube around the screen. We will break the script into sections to make it easier to explain and work with.

The script goes at the end of the <body> tag so we can be sure the body has loaded before the script runs.

Basic setup

The first thing we do is define the element where our WebGL content is going to live. We follow that with linking to the scripts we will use in the project:

  • three.min.r58.js is the minimized version of Three.js. In development I normally use the uncompressed version
  • OrbitControls.js controls the movement of the camera used in the scene
  • Detector.js is the WebGL detection library
  • threex.keyboardstate.js will help with key down event detection
  • threex.windowresize.js automatically redraws the rendering when the window is resized
<!-- This is the element that will host our WebGL content -->
  <div id="display"></div>

<!-- Script links -->
<script src="lib/three.min.r58.js"></script>
<script src="lib/OrbitControls.js"></script>
<script src="lib/Detector.js"></script>
<script src="lib/threex.keyboardstate.js"></script>
<script src="lib/threex.windowresize.js"></script>

Variables, scene and camera setup

Now that we’ve set up the links to the scripts we’ll use in this application, we can declare variables and call the functions that will do the heavy lifting for us: init() and animate().


// General three.js variables
var camera, scene, renderer;
// Model specific variables
// Floor
var floor, floorGeometry, floorTexture;
// light sources
var light, light2;
// Space for additional material
var movingCube, movingCubeGeometry, movingCubeMaterial;
// Window measurement
var SCREEN_WIDTH = window.innerWidth;
var SCREEN_HEIGHT = window.innerHeight;
// additional variables
var controls; // instantiated in init() once we have a camera and renderer
var clock = new THREE.Clock();
var keyboard = new THREEx.KeyboardState();

init();
animate();

Initializing our scene

The first step in rendering our scene is to set everything up. In the code below:

Step 1 does two things: it sets up the scene, the root element for our model, and it sets up the camera, the object that will act as our eyes into the rendered model.

For this particular example I’ve chosen a perspective camera which is the one used most often. This tutorial covers perspective as used in WebGL and Three.js; it is a technical article but I believe it provides a good overview of what is a complex subject.

Once the camera is created, we set its position (camera.position.set) and where the camera is pointing (camera.lookAt).

function init()
{
  // Step 1: Add the scene
  scene = new THREE.Scene();
  // Now we can set the camera using the elements defined above
  // We could also reduce the number of variables by putting the values directly into the camera constructor below
  camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 0.1, 10000);
  scene.add(camera);
  // The position.set value for the camera sets up the initial field of vision for the model
  camera.position.set(0, -450, 150);
  camera.lookAt(scene.position);

Step 2 sets up the renderer for our model. Using Detector.js we test whether the browser supports WebGL. If it does, we use the WebGLRenderer; if not, we fall back to the CanvasRenderer, one of the many renderers available through Three.js.

Also note that even a modern browser can fail to support WebGL if it’s running on older hardware or if the browser is not configured properly (older versions of Chrome required you to enable WebGL support through flags in order to display WebGL content and versions of Internet Explorer before 11 did not support WebGL at all).

At this point we also set up the THREEx.WindowResize using both the renderer and the camera. Behind the scenes, WindowResize will recompute values to resize our rendered model without us having to write code.

// Step 2: Set up the renderer
// We use the WebGL detector library we referenced in our scripts to test whether WebGL is supported
// If it is supported then we use the WebGL renderer with antialiasing
// If there is no WebGL support we fall back to the canvas renderer
if ( Detector.webgl ) {
  renderer = new THREE.WebGLRenderer( {antialias:true} );
} else {
  renderer = new THREE.CanvasRenderer();
}
// Set the size of the renderer to the full size of the screen
renderer.setSize(SCREEN_WIDTH, SCREEN_HEIGHT);
// Set up the container element in Javascript
var container = document.getElementById( 'display' );
THREEx.WindowResize(renderer, camera);
// CONTROLS
controls = new THREE.OrbitControls( camera, renderer.domElement );
// Add the renderer's DOM element as the container's child element
container.appendChild( renderer.domElement );

Step 3 is where we actually add the objects to our rendering process. They all follow the same basic process:

  1. We define a material (texture) for the object
  2. We define a geometry (shape) it will take
  3. We combine the material and the geometry to create a mesh
  4. We add the mesh to the scene

The textured floor has some additional characteristics:

  1. We load an image to use as the texture
  2. We set the texture to repeat 10 times on each plane.

The cube mesh in step 3b is positioned on the Z axis.

// Step 3 : Add  objects
// Step 3a: Add the textured floor
floorTexture = new THREE.ImageUtils.loadTexture( 'images/checkerboard.jpg' );
floorTexture.wrapS = THREE.RepeatWrapping;
floorTexture.wrapT = THREE.RepeatWrapping;
floorTexture.repeat.set( 10, 10 );
floorMaterial = new THREE.MeshPhongMaterial( { map: floorTexture } );
floorGeometry = new THREE.PlaneGeometry(1000, 1000, 10, 10);
floor = new THREE.Mesh(floorGeometry, floorMaterial);
scene.add(floor);

// Step 3b: Add model specific objects
movingCubeGeometry = new THREE.CubeGeometry( 50, 50, 50, 50);
movingCubeMaterial = new THREE.MeshBasicMaterial( {color: 0xff00ff} );
movingCube = new THREE.Mesh(movingCubeGeometry, movingCubeMaterial);
movingCube.position.z = 35;
movingCube.position.x = 0;

scene.add(movingCube);

In step 4, we add lights to the model in a way very similar to how we added the objects in step 3.

  1. Create the light and assign a color upon creation
  2. Set the position of the light using an X, Y, Z coordinate system
  3. Add the lights to the scene

// Step 4: Add light sources
// Add 2 light sources.
light = new THREE.DirectionalLight( 0xffffff );
light.position.set( 0, 90 ,450 ).normalize();
light2 = new THREE.DirectionalLight( 0xffffff);
light2.position.set( 0, 90, 450 ).normalize();
scene.add(light);
scene.add(light2);
}

Keyboard Interactivity

Right now we’ve built a cube over a checkered background plane… but we can’t move it. That’s what the animate function will do… it will set up the keyboard bindings.

Animate is a wrapper around two additional functions: render and update. The render() function just calls our renderer to draw a frame of our application. The update() function is where the heavy-duty work happens.

function animate() {
  requestAnimationFrame( animate );
  render();
  update();
}

function render() {
  renderer.render( scene, camera );
}

The update() function sets up two variables: delta is a time-based value used to calculate the distance of any movement that happens later in the function, and moveDistance uses delta to calculate the exact distance of each movement (200 * delta).

Now that we’ve set up the value for the distance moved, we test which key was pressed, change the corresponding coordinate value (x, y or z), and then prepare for the next update cycle.

The reset is simpler: we return the cube to its starting position.

function update() {
  var delta = clock.getDelta(); // seconds.
  var moveDistance = 200 * delta; // 200 pixels per second
  //var rotateAngle = Math.PI / 2 * delta;   // pi/2 radians (90 degrees) per second

  // global coordinates
  if ( keyboard.pressed("left") )
    movingCube.position.x -= moveDistance;
  if ( keyboard.pressed("right") )
    movingCube.position.x += moveDistance;
  if ( keyboard.pressed("up") )
    movingCube.position.y += moveDistance;
  if ( keyboard.pressed("down") )
    movingCube.position.y -= moveDistance;
  if ( keyboard.pressed ("w"))
    movingCube.position.z += moveDistance;
  if ( keyboard.pressed ("s"))
    movingCube.position.z -= moveDistance;
  // reset (let's see if this works)
  if ( keyboard.pressed ("z")) {
    movingCube.position.x = 0;
    movingCube.position.y = 0;
    movingCube.position.z = 50;
  }
  controls.update();
}

Adding audio

In a future project I want to experiment with adding audio to our models. The best audio API I’ve found is the Web Audio API, submitted by Google to the World Wide Web Consortium.

Getting started with the WebAudio API discusses my experiments with the WebAudio API.

Playing with editors

Three.js editors and authoring tools are just starting to appear. From my perspective some of the most promising are:

Examples

WebGL Libraries and Frameworks

From: Dev.Opera’s Introduction to WebGL article and Tony Parisi’s Introduction to WebGL at SFHTML5 Meetup

Three.js (Three Github repo)
Three.js is a lightweight 3D engine with a very low level of complexity — in a good way. The engine can render using <canvas>, <svg> and WebGL. This is some info on how to get started, which has a nice description of the elements in a scene. And here is the Three.js API documentation. Three.js is also the most popular WebGL library in terms of number of users, so you can count on an enthusiastic community (#three.js on irc.freenode.net) to help you out if you get stuck with something.
PhiloGL (PhiloGL Github repo)
PhiloGL is built with a focus on JavaScript good practices and idioms. Its modules cover a number of categories from program and shader management to XHR, JSONP, effects, web workers and much more. There is an extensive set of PhiloGL lessons that you can go through to get started. And the PhiloGL documentation is pretty thorough too.
GLGE (GLGE Github repo)
GLGE has some more complex features, like skeletal animation and animated materials. You can find a list of GLGE features on their project website. And here is a link to the GLGE API documentation.
J3D (J3D Github repo)
J3D allows you not only to create your own scenes but also to export scenes from Unity to WebGL. The J3D "Hello cube" tutorial can help you get started. Also have a look at this tutorial on how to export from Unity to J3D.
tQuery (Github repo)
tQuery is a library that sits on top of Three.js and is meant as a lightweight content and extension creation tool. The author, Jerome Etienne, has created several extensions to Three.js that are available on his Github repository (https://github.com/jeromeetienne)
SceneJS (Github repo)
This library is an extensible WebGL-based engine for high-detail 3D visualisation using JavaScript.
BabylonJS (Github repo)
The library produces high quality animations. If you’re comfortable being limited to Windows machines to create your libraries and customizations, this may be the library for you.
VoodooJS (Github repo)
Voodoo is designed to make it easier to integrate 2D and 3D content in the same page.
Vizi (Github repo)
The framework that got me interested in WebGL frameworks. Originally developed by Tony Parisi in conjunction with his book Programming 3D web applications with HTML5 and WebGL, it presents a component-based view of WebGL development. It suffers from a lack of documentation (it has yet to hit a 1.0 release), but otherwise it’s worth the learning curve.