Javascript-video-scrubber Demo
by Gary Hepting

[Screenshot: Nike Sustainability Project]

Turn your web page into a responsive video scrubber

This year we were tasked with creating a unique and engaging user interface for the Nike Corporate Responsibility Report website. We had to come up with something that would intrigue and engage its visitors, push the limits of what the web can do and help Nike show the world the company's dedication to sustainable business practices.

With a team of some of the most creative and ambitious UX designers, digital strategists and developers, we began our search for inspiration, collaborating on ideas and analyzing what made us all sit back and say "Wow" while browsing the Web (such as http://360langstrasse.sf.tv/page/, http://kyan.com/, http://www.socialsummit.cz/en and http://davegamache.com/). A few specific technologies instantly stood out to us all as shoe-ins for Nike (pun intended): frame-by-frame video (JPEG sequence) scrubbing, parallax effects and, of course, responsive design.

Once we had our goal and a rough draft of functionality requirements, we began the process of creating our first working prototype. Our road to the initial alpha version was almost a single step. Within a day we had extracted a video, converted each frame to a compressed JPEG file and set up our canvas to stretch the first image to completely fill the viewport with a method called zoom cropping (zooming to the point of filling the viewport while maintaining aspect ratio). We turned the web page into a virtual timeline and wrote Javascript code that would animate the frames based on how far down the page you traveled. It was a short time between this first working prototype and getting the support and approval of the creative and executive teams at Nike. Ambition was high, our goals were clear and we were setting out to build one of the most amazing experiences we could imagine.
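
The zoom-cropping math itself is simple: scale the frame by whichever viewport-to-image ratio is larger, then center it so the overflow clips evenly on both sides. Here is a minimal sketch of that idea (not our production code; the #video element and the 1280x720 frame size are placeholders):

// Zoom-crop sketch: size an image so it always fills the viewport
// while keeping its native aspect ratio.
function zoomCrop($img, nativeWidth, nativeHeight) {
    var viewportWidth  = $(window).width();
    var viewportHeight = $(window).height();
    // Scale by whichever ratio is larger so both dimensions cover the viewport
    var scale  = Math.max(viewportWidth / nativeWidth, viewportHeight / nativeHeight);
    var width  = Math.ceil(nativeWidth  * scale);
    var height = Math.ceil(nativeHeight * scale);
    $img.css({
        position: 'fixed',
        width:  width  + 'px',
        height: height + 'px',
        left: Math.round((viewportWidth  - width)  / 2) + 'px', // center the horizontal overflow
        top:  Math.round((viewportHeight - height) / 2) + 'px'  // center the vertical overflow
    });
}

// Example usage: re-crop whenever the window is resized
$(window).on('resize', function() {
    zoomCrop($('#video'), 1280, 720); // assumes 1280x720 source frames
});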

The road was long, and we tried a wide variety of approaches to animating the video frames before we found something that would work reliably in all of the browsers we were targeting. Whenever you push the limits of technology, you must constantly compromise on how you want to do something in order for it to actually work.

We owe the success of our final version to an article by Paul Irish (http://paulirish.com/2011/requestanimationframe-for-smart-animating/). With this method of animation, "the browser can optimize concurrent animations together into a single reflow and repaint cycle, leading to higher fidelity animation", which is exactly what we needed to make this work while reducing CPU load and extending battery life for our users!

Here's a code snippet:

// Cross-browser shim for requestAnimationFrame (falls back to a 60fps setTimeout)
window.requestAnimFrame = (function(){
    return window.requestAnimationFrame       ||
           window.webkitRequestAnimationFrame ||
           window.mozRequestAnimationFrame    ||
           window.oRequestAnimationFrame      ||
           window.msRequestAnimationFrame     ||
           function( callback ){
               window.setTimeout(callback, 1000 / 60);
           };
})();

var step = 1;       // frame we are currently displaying
var targetStep = 1; // frame we want to arrive at

(function animloop(){
    requestAnimFrame(animloop);
    // getYOffset() returns the current vertical scroll position;
    // every 30 pixels of scrolling maps to one frame of video
    targetStep = Math.max( Math.round( getYOffset() / 30 ), 1 ); // what frame to animate to
    if(targetStep != step) { step += (targetStep - step) / 5; }  // ease the step toward the target step
    changeFrame();
})();

function changeFrame() {
    var thisStep = Math.round(step); // calculate the frame number
    if(images.length > 0 && images[thisStep]) { // if the image exists in the preloaded array
        if(images[thisStep].complete) { // if the image is downloaded and ready
            $('#video').attr('src', images[thisStep].src); // change the source of our placeholder image
        }
    }
}
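
The loop above assumes an images array of preloaded Image objects and a getYOffset() helper for the current scroll position, neither of which is shown in the snippet. A minimal sketch of what they might look like (the frame count and file naming are placeholders, not the actual project assets):

// Hypothetical preloader sketch: fill the images array used by changeFrame()
var images = [];
var totalFrames = 240; // placeholder frame count

for (var i = 1; i <= totalFrames; i++) {
    var img = new Image();
    img.src = 'frames/frame_' + i + '.jpg'; // placeholder file naming scheme
    images[i] = img;                        // index matches the step number
}

// Cross-browser current scroll position, as assumed by the animation loop
function getYOffset() {
    return window.pageYOffset || document.documentElement.scrollTop || 0;
}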

I believe the technological progress humans have made is due in large part to the process of sharing knowledge and information with others. The development team and executives here at Emerge Interactive share this philosophy and greatly support open-source communities, so we decided to create this demonstration to share with you and others who are interested. A special thanks also goes to Paul Irish for the quick study he provided; that seemingly small article he posted on paulirish.com has brought the capabilities of the web forward by leaps and bounds.

We hope you've enjoyed reading our findings and that you will answer this one question as you reverse engineer and adapt the code we've shared with you.

Download Sample Files (ZIP)
Fork on GitHub

Note: There are several things to keep in mind as you attempt to modify this example to use your own JPEG image sequence and customize it to fit your implementation:

That's it for now! Watch, like, follow or digg Emerge Interactive for more demonstrations, lessons and code snippets that we'll surely be releasing for your enjoyment.
