Originally a guest post on the Google Developer Blog (James Christian and Scott Seaward, March 2011)
In this post we share our experiences developing for this platform, and highlight special considerations for end-users and our video-production process.

We’re going to cover:

  • The Concept
  • Our Technical Approach and Challenges (For the devs)
  • Our Design Evolution (For the designers/UX)
  • Video Production Considerations (For the producers)

Have a Play

Point your Google TV device to

You can also try it out in Chrome via the Web Store


Here’s a demo video:

(Link if the video plays up)

So, why NET-A-PORTER TV? And why Google TV?

Our users want to engage with us wherever it suits them, and on whatever medium. We have a video section on the website that was designed for the desktop, but it is awkward to use with a pointing device at a distance. Internet-connected TV allows us to showcase our high-definition, beautifully produced content in full-screen, and supports new T-commerce opportunities. We have found that targeting specific media to the most suitable devices has led to an improved user experience, and analytics have shown increased engagement.

From our appraisal of the Internet TV landscape, Google TV proved to be one of the best-performing, standards-based TV platforms. Our in-house HTML5, CSS3 and JavaScript experts were able to produce a compelling user-experience, incorporating animated graphics and UI elements that could be laid over high-definition H.264 streaming video. As web-developers and designers, we’ve moved on from the ugly old days of WebTV and felt confident we could produce something that incorporates the company’s design-led style.

The Concept

We wanted to create a full-screen experience that is true to TV, with a linear, passive playback mode for the living-room as well as lean-forward, category-based navigation. Importantly, user interaction is optimized for the ‘10-foot’ experience by focusing on simpler, larger UI elements and efficient arrow-key navigation.

Each video is ‘shop-able’ and featured products can be viewed in detail before committing to purchase. T-commerce is supported with a visual stream of related products synchronized to cue-points within the video. Once the user chooses to interact with a product, its details are displayed in a specially designed details page.

Technical Approach and Challenges

The app is split into two major sections – the video and the user interface. The video portion is handled by a lightweight JavaScript MVC wrapper around a chrome-less Brightcove player running as a Flash plugin. The Model is concerned with fetching video information, the View with managing the video’s playback state, and the Controller with tying the two together.

The User Interface is a separate beast altogether, written mostly using jQuery to provide the custom event system and wealth of DOM traversal methods needed to make navigating the app with a keyboard easier.

Handling Keyboard Input

There are a few ways to interpret keyboard input, and none of them are head-and-shoulders above the others when it comes to designing a web app.

One approach is to treat the direction keys a user presses very literally, performing a sort of “visual search” from the element that currently has focus to any elements that may be selectable in the direction the user pressed. Another way is to give each selectable element specific instructions about where the arrow keys will take a user from that point.
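To illustrate the second approach (which is not the one we shipped), each selectable element can carry its own direction map; the element ids and key names below are purely illustrative:

```javascript
// Hypothetical sketch of per-element navigation instructions:
// every selectable element declares where each arrow key leads.
var navMap = {
  playButton:  { rightKey: "pauseButton", downKey: "videoList" },
  pauseButton: { leftKey: "playButton",   downKey: "videoList" },
  videoList:   { upKey: "playButton" }
};

// Resolve the next element id for a key press; stay put when the
// current element has no mapping for that key.
function nextFocus(currentId, keyName) {
  var entry = navMap[currentId];
  return (entry && entry[keyName]) || currentId;
}
```

The cost of this approach is maintenance: every new element means updating the map, which is partly why we preferred dispatching events to the selected element and its parent menu instead.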

We adopted a core keydown handler attached to the global window object that then interprets the key’s code and dispatches the relevant “upKey”, “downKey”, “leftKey”, “rightKey” and “enterKey” custom events to the currently selected element. If the currently selected element doesn’t have a specific behaviour attached to it for the user’s key press, then the event can be passed to the active parent (which is the menu the selected element is in). This way, a general rule can be written for a menu full of selectable elements, then specific elements can be singled out and given their own custom logic if they need it. It’s worth mentioning that, at the time we were building our app, Google had not yet released the Google TV jQuery UI Library.

This body of code makes up the core of our keyboard dispatcher:

// Capture a keydown and map it to one of the custom
// keyboard events. Trigger any default behaviours and
// then see if the selected element or active menu wants
// to handle it.
var handleKey = function(e) {
    var $currentActive = $(".active");
    var $currentSelected = $(".selected");

    switch (e.which) {
        case 37: // left arrow
            e.type = "leftKey";
            break;
        case 38: // up arrow
            e.type = "upKey";
            break;
        case 39: // right arrow
            e.type = "rightKey";
            break;
        case 40: // down arrow
            e.type = "downKey";
            break;
        case 13: // enter / ok
            e.type = "enterKey";
            break;
        case 178: // stop
        case 32:  // spacebar
        case 179: // play / pause toggle key
        case 176: // skip next key
        case 177: // skip back key
            break;
    }

    // Check if the currently "selected" element wants to handle
    // the keyboard event, and trigger it if so...
    if ("events" in $currentSelected.data() &&
            e.type in $currentSelected.data("events")) {
        $currentSelected.trigger(e);
    }
    // ...otherwise, see if the "active" menu wants
    // to handle it instead.
    else if ("events" in $currentActive.data() &&
            e.type in $currentActive.data("events")) {
        $currentActive.trigger(e);
    }
};

$(window).bind("keydown", handleKey);

One of the lessons we took from developing for Google TV was that, when there are no generally accepted names for the UI concepts you’re talking about, it’s a good idea to label them with something and stick to it. For example, there is no “hover” state when the user is working solely with a keyboard. Instead, we say that the current element which the user is on is the “selected” element and the menu in which the selected element sits is “active”. Getting those kinds of naming conventions down earlier rather than later made the process of creating CSS classes and selector-based JavaScript queries much clearer, and helped us to talk more precisely about the app as we built it.

Handling Mouse Input

Once the keyboard has been nicely mapped out across the app, mouse input needs to be rethought. Remember, there’s no such thing as a :hover pseudo-selector for keyboard input, and we’re largely dealing with custom events dispatched through jQuery, so mouse hovers and clicks haven’t yet been accounted for. Additionally, a mouse is a direct-pointing device and can often skip steps that normally have to be taken with a keyboard. For example, a user moving with the arrow keys to get to the top menu from the bottom of the video list would need to press up four or five times (or hit the escape key), whereas a user with a mouse can just drag their pointer across the screen in a single motion and click. Consequently, we needed a way of handling mouse clicks that would account for the three or four key presses a user would otherwise make to perform the same action with the keyboard. To achieve this, mouse clicks are treated as a sequence of keyboard events: clicking on an item in the top menu sends the “escapeKey” custom event followed by the “enterKey” custom event to the menu item that was clicked.
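A minimal sketch of that click-to-keys mapping (the helper name and flag are ours, not from the app's source):

```javascript
// Translate a mouse click into the keyboard-event sequence the app
// already understands (names here are illustrative).
function clickToKeySequence(target) {
  if (target.inTopMenu) {
    // Behave like "escape back to the top menu, then confirm
    // the item under the pointer".
    return ["escapeKey", "enterKey"];
  }
  // A click inside the currently active menu is just a confirm.
  return ["enterKey"];
}

// With jQuery, each name in the sequence would then be dispatched
// to the clicked element, e.g.:
//   clickToKeySequence({ inTopMenu: true }).forEach(function(name) {
//     $(clickedElement).trigger(name);
//   });
```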

CSS Transitions

One of the benefits of working with a modern browser like Chrome is the availability of CSS3 transitions. By transitioning the standard CSS position properties (i.e. top, right, left, bottom), it’s possible to deliver some good-looking animation. Opacity must be animated sparingly, and ideally only on small elements, but it still works quite well. The product ribbon on the right side of the app is a great example of CSS transitions helping the user experience. All of the scrolling behaviour, from the image resizing to the border thickness to the element’s position, is handled by CSS transitions.

This tiny snippet of code enables most of the animation seen on the product ribbon:

-webkit-transition-property: opacity, width, height, border-width, margin;
-webkit-transition-duration: 0.5s;
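For a fuller picture, a hypothetical rule for a ribbon thumbnail might look like the following; the class names are illustrative, and the transition fires whenever the second class is toggled on:

```css
/* Illustrative ribbon thumbnail; class names are ours, not the app's. */
.ribbon-item {
  width: 120px;
  height: 160px;
  border: 1px solid #ccc;
  opacity: 0.6;
  -webkit-transition-property: opacity, width, height, border-width, margin;
  -webkit-transition-duration: 0.5s;
}

/* Toggling this class animates the item to its featured size. */
.ribbon-item.selected {
  width: 180px;
  height: 240px;
  border-width: 3px;
  opacity: 1;
}
```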

One gotcha we did run into toward the end of the project was in animating HTML Video elements. While it’s entirely possible to do so, the element will stutter and skip as it moves. A possible solution to this that we investigated was to copy the current frame of the video on to a canvas element behind the video, then hide the video and move the canvas up and out of the user’s focus. This worked great on desktop Chrome, but unfortunately wasn’t possible on the Google TV boxes at the time we released the app. Hopefully, this functionality will be added to Chrome on Google TV in the near future.
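The frame-copy experiment looked roughly like this (a sketch, assuming an HTML5 video element and a sibling canvas; as noted, it worked on desktop Chrome but not on the Google TV boxes of the time):

```javascript
// Copy the video's current frame onto a canvas, hide the video,
// then animate the canvas instead of the (stuttering) video element.
function swapVideoForCanvas(video, canvas) {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  // drawImage() accepts a <video> element and grabs its current frame.
  canvas.getContext("2d").drawImage(video, 0, 0, canvas.width, canvas.height);
  video.style.visibility = "hidden";
  canvas.style.visibility = "visible";
  // ...the CSS transition can now be applied to the canvas element...
}
```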

Other Technical Considerations

We use the Brightcove video platform to manage and deliver our content through a customized, full-screen version of their player. On a resource-constrained platform such as Google TV, we would have preferred not to rely on a browser plug-in, but instead use an HTML5 <video> element. However, at present only the Flash version of the Brightcove player offers rich, second-by-second analytics and advertising support out-of-the-box.

Trying to stay true to the platform, many of our product details pages prominently feature a video of a model wearing the item that plays immediately; on our desktop site, this only plays back when the user asks it to. As highlighted in the Google TV documentation, the nature of Staged video means you cannot control the z-index of overlaid video elements. To work around this, when we display a product details screen, we pause the main TV content and temporarily shift the player element off the screen by manipulating its horizontal location in CSS.
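A sketch of that workaround (the function name and pixel value are illustrative; the pause call belongs to the Brightcove player's own API):

```javascript
// Compute the CSS needed to push the player out of the viewport to
// the left; z-index can't be used because of how the video is staged.
function offscreenStyle(viewportWidth) {
  return { left: (-viewportWidth) + "px" };
}

// Illustrative usage with jQuery:
//   player.pause();                          // main TV content on hold
//   $("#player").css(offscreenStyle(1920));  // shift off-screen
//   $("#player").css("left", "0");           // restore on return
```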

We hit a strange issue where our product videos (H.264 files with no audio track, featured in the product details screen) played back at double speed on certain Google TV devices. Re-encoding them with a silent, stereo audio track rectified the playback issue.

The Design Evolution

The First Design

Reviewing established UI paradigms for embedded TV systems (PVR menus, digital TV applications, media centers, etc.) helped shape our approach. We were particularly impressed with designs that immediately expose users to full-screen playback (which instantly feels more like TV) while only expecting the user to learn one new, simple behaviour: “navigate UP to search and DOWN to drill-down to other videos and categories”.

We identified three core user-interactions for our site; “I’m watching and controlling playback of the current video”, “I’m looking for other videos” and “I’m looking for related product details”. We needed to make it quick and easy for our users to jump between these modes, so we designed a two-dimensional navigation system that was explained at startup:

LEFT for more video options, RIGHT to interact with products and DOWN to control the current video. Seems simple enough, right? Well, we also wanted to allow the user to optionally hide the products when viewing the video, and only show the video controls if actually viewing the video. Our mental model looked something like this, where the blue blocks represent the screen mode and the green arrows show the directional arrow-key presses required to navigate between the screens:

Image: Current Video & Related Products Screen

Image: Current Video Controls

Image: Other Videos Navigation Screen

Our product details screen contains a lot of textual information (designer, fit, available sizes, etc.) that helps to better inform our customers before they purchase. This large volume of text, combined with a larger font intended to be read at a distance, meant that we had to rethink the page design to avoid the clumsy experience of scrolling on a TV. Instead, we adopted a collapsible accordion component for the text content and used the considerable width of the screen to include a scrollable set of product videos and images.

Results of User Testing

User testing proved challenging as we first had to educate the users on how to use this new device and controller. (Surprisingly, less-technical folk seemed to get to grips with it faster.)

Users found navigating between screens difficult as they often forgot where they were, and had no easy way to return to a ‘home’ screen. Importantly, once users had finally made it to their desired screen, they found the UI intuitive and simple enough to comprehend quickly and use efficiently.

Related products were displayed in a ribbon overlaid to the right of the video. As the video progressed, the products would switch out to the next set of products. Often, users didn’t make the link between the product and the video, and sometimes didn’t notice the ribbon at all. People often gave up trying to navigate to more information about the featured products because they were lost in the UI (major commercial fail!)

Also, when browsing to our site after surfing the plain-old web, users found it jarring to be forced to use the keyboard controls for navigation. None of our user-testers wanted to access the on-screen video controls, as the short-form nature of our content limited its usefulness.

The Revised Design and Subsequent User-Testing Results

Reducing the number of screen states and dropping unused functionality simplified navigation significantly.

A consistent, annotated navigation menu was added to the top of every screen. This bolstered user confidence when exploring the site, and allowed them to easily return to a known ‘home’ state. During user testing, we witnessed some users hunting down the Escape key to return ‘home’, so we mapped that key as a shortcut to this menu.

The unused video control bar was dropped and replaced by simple on-screen notification icons that momentarily confirmed playback controller presses. The product ribbon was redesigned to prominently feature the currently visible product, and product transitions were animated to catch the user’s attention. Every interactive element was made to be operational with the mouse.

Here’s a screen grab, featuring all these additions, that shows when the user has pressed the Pause button on their controller:

This site supports a passive, linear-playback mode that automatically plays the next video in the category when the current video ends. We also allow the user to take control of the related products ribbon to scroll back and forth between items that take their interest. Naturally, it would be a bad user experience to remove the products from screen on transition to the next video if the user is interacting with them at that moment. So, we added a smart feature that presents a special menu in place of the video if we detect this scenario. This allows the user to start the next video when they are ready.
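The decision at the end of a video can be sketched like this (the state flag and return values are hypothetical names, not from the app's source):

```javascript
// When a video ends: auto-advance only if the user isn't currently
// browsing the product ribbon; otherwise offer a "next video" menu.
function onVideoEnded(state) {
  if (state.ribbonActive) {
    return "showNextVideoMenu"; // don't yank products off the screen
  }
  return "playNextInCategory";  // passive, linear playback
}
```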

Considerations for Video Production

While we always try to adhere to established broadcast standards in our video-production process, the transition to television really highlighted the importance of consistency in loudness levels, white-balance, flashing imagery, and so on. The web, with its small video players and laptop speakers, is a forgiving place; only when you’ve had to jump for the mute button, as an exceptionally loud video causes the dog to bolt, will you know my pain! The traditional broadcasters have been doing this for years, so I’d recommend browsing the resources of bodies like the European Broadcasting Union.

The ‘title-safe’ and ‘action-safe’ areas in a video frame are well-established broadcasting concepts that ensure any graphical or text elements included in the video are not chopped off at the edges, or overlaid with a broadcaster’s own screen furniture. We’ve never really had to consider this on the web, as we control the framing of our players. While it is rare for modern displays to suffer from title-safe concerns, we now consider how our overlaid interactive elements (such as our product ribbon and top-navigation menu items) may obscure full-screen video content. I think we just invented the term ‘app-safe’ ;)

Looking Forward

One key improvement to navigation would be to animate the transition between pages – perhaps sliding them on from the sides – to better illustrate our intended mental model, and to improve familiarity.

The current library of videos available on our Google TV site is only a small selection of what we have available, so we’re working on bringing a more dynamic set of content to the device as part of a standardised publishing work-flow. Importantly, we’re constantly reviewing exactly what kind of content we produce to delight our customers. Internet TV offers a well-suited outlet for traditional broadcaster-style, long-form programming, supported by an advertising model that takes advantage of web ad-placement targeting and campaign analytics.
