
Insights about the Firefox Tweet Machine

This journal entry was filed under Sharing.

Firefox Tweet Machine

Introduction

As some of our friends and followers might already know, our Tori’s Eye labs project has received - and still receives - a lot of attention. The good guys over at Mozilla stumbled upon it and challenged us to present them with something both aesthetically and technically inspiring.

Although Tori’s Eye (also a Twitter visualization) was the initial inspiration, it was clear from the beginning that this new project had much higher stakes.

Mozilla’s initial briefing asked for a visualization that “captures the unique and multi-dimensional nature of our community, their conversations, and the energy that flows through their tweets”.

The result is the Firefox Tweet Machine, an experiment built exclusively with open web tools and technologies that works in all modern browsers supporting HTML5 and CSS3.

Conceptual challenge

Right from the start we had the idea that information would flow into the screen through specific filters, allowing the user to control what tweets are ultimately displayed.

This is where we started playing with the usage of pipes representing the channels through which data flows, partly influenced by Mozilla’s use of pipes in their graphic language.

After some initial experiments we pursued the idea of a central machine where all pipes converge, showing the actual tweets in the form of bubbles floating across the screen.

The bubbles themselves could then vary in size, color or even in the data they show, making room for relevant, beautiful or simply interesting information.

In later versions, after discussing the project scope and timing, it was decided to reduce the filtering ability to the search box alone. The need for one pipe per filter (four filters had been planned initially) therefore disappeared, but we kept the graphical concept since it was working really well.

For the artwork we took inspiration from the handcrafted illustration style of the 60s and 70s, which is close to the style Mozilla uses throughout most of its communication.

DOM or Canvas/SVG

There was a big question that took us some time and some experiments to answer: bring the machine to life with regular DOM animation (as we had previously done with Tori’s Eye) or go a step further and do it all in one big Canvas or SVG element (Raphaël was the option to build the whole thing in SVG).

Relevant context to keep in mind is that, although experimental, this visualization should still perform relatively smoothly on less state-of-the-art processors. In a way, we wanted to find a compromise between experimenting with new, usually more processor-intensive technologies on the one hand, and keeping the requirements low enough that an acceptable number of visitors would still be able to experience the result on the other.

Arguments in favor of Canvas or SVG were: experimenting with new frameworks, smoother animations, and having one single big canvas on which to draw and animate everything, instead of various DOM elements including images and SVG objects.

There were two main reasons why we ended up going for DOM animation. First, user interaction with the bubbles (hovering, flipping, selecting text and clicking links) and the relatively complex text rendering are straightforward in the DOM but hard to achieve using either Canvas or SVG. Second, we found out that there are considerable performance issues when drawing large Canvas/SVG objects with a lot of complex shapes, forms, text rendering and high frame rates at resolutions as big as Full HD.

Physics engine

Most gravity and collision scripts we found would work well with both DOM animation and Canvas/SVG drawing, since it’s all abstract math. Basically the whole 2D world is processed in the scripts, and frame by frame you just loop through all the objects making up your 2D simulation and draw/update them in whatever way you want, be it updating each DOM element’s CSS position or redrawing the whole canvas.
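
To make that loop concrete, here is a minimal sketch of the idea (the world, body and bubble objects are hypothetical stand-ins, not the actual FTM code): the simulation advances in abstract coordinates, and the DOM acts purely as the renderer.

// Minimal illustrative sketch: the physics runs in abstract 2D coordinates,
// the DOM elements are just a rendering of the current state.
var FRAME_MS = 1000 / 30; // roughly 30 frames per second

function tick() {
    world.step(FRAME_MS / 1000); // hypothetical engine call: advance the simulation

    for (var i = 0; i < bubbles.length; i++) {
        var body = bubbles[i].body;   // simulated body
        var el = bubbles[i].element;  // matching DOM element (the bubble)

        // Copy the simulated position into the element's CSS position.
        el.style.left = Math.round(body.x - el.offsetWidth / 2) + 'px';
        el.style.top = Math.round(body.y - el.offsetHeight / 2) + 'px';
    }
}

setInterval(tick, FRAME_MS);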

At an early stage we started writing a very simple and stripped-down gravity and collision prevention script of our own (prevention as opposed to collision detection), which basically cut the whole stage into a large matrix and kept track of which squares were currently occupied by objects. But the lack of smoothness, collision impacts, acceleration, friction and other basic physical behaviours made it unfit for duty.
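
For illustration, the idea behind that first script was roughly the following (a reconstruction of the concept, not the original code): the stage is divided into a grid of cells, and a bubble may only move into a cell that is currently free.

// Reconstruction of the grid-based collision prevention idea (illustrative only).
var CELL = 20;                             // cell size in pixels
var cols = Math.ceil(stageWidth / CELL);   // stageWidth/stageHeight are assumed to exist
var rows = Math.ceil(stageHeight / CELL);
var occupied = [];

for (var c = 0; c < cols; c++) {
    occupied[c] = [];
    for (var r = 0; r < rows; r++) occupied[c][r] = false;
}

function cellFree(x, y) {
    var c = Math.floor(x / CELL), r = Math.floor(y / CELL);
    return c >= 0 && r >= 0 && c < cols && r < rows && !occupied[c][r];
}

function moveBubble(bubble, x, y) {
    // If the target cell is taken, simply don't move: no bounce, no friction,
    // no acceleration - which is exactly why this approach felt too stiff.
    if (!cellFree(x, y)) return;
    occupied[Math.floor(bubble.x / CELL)][Math.floor(bubble.y / CELL)] = false;
    occupied[Math.floor(x / CELL)][Math.floor(y / CELL)] = true;
    bubble.x = x;
    bubble.y = y;
}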

That’s when we had to choose between Box2DJS and Blob Sallad. The first is a port to javascript (by ANDO Yasushi) of the famous Box2D C++ physics engine, also used by Mr. Doob in a Chrome Experiment. Blob Sallad (by the Swedish developer Björn Lindberg), on the other hand, had only been applied to drawing directly on canvas, and seemed to us more of a (very enlightening and well documented) academic experiment than a ready-to-use script. So we went for Box2DJS, which allowed us to build a quick prototype using large parts of the code from Mr. Doob’s Ball Pool Chrome Experiment.
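
For reference, setting up a Box2DJS world looks roughly like the snippet below. This is a sketch from memory of the Ball Pool-era API, so take the exact names and values as approximate rather than as our production code; stageWidth and stageHeight are assumed to exist.

// Approximate Box2DJS setup in the spirit of the Ball Pool experiment.
var worldAABB = new b2AABB();
worldAABB.minVertex.Set(-200, -200);
worldAABB.maxVertex.Set(stageWidth + 200, stageHeight + 200);

var world = new b2World(worldAABB, new b2Vec2(0, 300), true); // gravity pointing down

function createBubbleBody(x, y, radius) {
    var circle = new b2CircleDef();
    circle.radius = radius;
    circle.density = 1.0;
    circle.restitution = 0.3; // some bounce on collisions
    circle.friction = 0.8;

    var bodyDef = new b2BodyDef();
    bodyDef.AddShape(circle);
    bodyDef.position.Set(x, y);

    return world.CreateBody(bodyDef);
}

// Each frame: advance the simulation, then sync the DOM positions as sketched earlier.
world.Step(1 / 30, 1);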

SD/HD switch

The FTM has an SD and an HD version that switch automatically depending on the screen size. This was needed to allow it to work both on regular notebook screens and as a wallpaper on an HD TV, while still being readable from a reasonable distance. The scenario is, for example, an HD TV showing the visualization in a conference reception room. The SD version is what most users will see on their laptops.

CSS3 Media Queries could have been used instead of creating two resolution-specific CSS files, but since the physics had to be reset and reconfigured when switching resolutions, we opted for standard javascript-based resolution detection, which then loads the resolution-specific settings and CSS files.
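
In practice the switch boils down to checking the window size and loading the matching stylesheet and settings; a simplified sketch follows (the breakpoint, file names and settings objects are placeholders, not the actual FTM values).

// Simplified javascript-based SD/HD detection (all names are placeholders).
var HD_MIN_WIDTH = 1600; // hypothetical breakpoint for the HD version

var width = window.innerWidth || document.documentElement.clientWidth;
var mode = width >= HD_MIN_WIDTH ? 'hd' : 'sd';

// Load the resolution-specific stylesheet...
var link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '/css/machine-' + mode + '.css';
document.getElementsByTagName('head')[0].appendChild(link);

// ...and pick the matching settings, since the physics world has to be
// rebuilt for the new stage dimensions anyway.
var settings = (mode === 'hd') ? hdSettings : sdSettings;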

HTML5, CSS3

HTML usage follows the most recent HTML5 guidelines. On some of our projects we use html5shiv to make the newest DOM elements compatible with IE, and most of the CSS3 properties would at least fail gracefully enough to allow for an acceptable experience on IE. But the reason to leave IE out of the supported browser list was the animation process and its underlying javascript. Not that it wouldn’t be doable, but at a certain point the effort involved was far bigger than the need to support less modern browsers. After all, the objective of this specific project was to make use of modern browsers’ capabilities.

When we first tested the HD version on a Full HD screen driven by a standard laptop, the animation performance was terribly sluggish. After some debugging we learned that this was due to the relatively low performance of the background-size property when heavily used (as in: a dozen big objects re-rendering at a high frame rate).

Plugins & tools used

We were desperately looking for an excuse to use Raphaël JS (part of Sencha Labs) on one of our projects. And the little gauge right below the search box, indicating the level of tweet activity for the current search, was the perfect reason. Whenever you do a new search, the gauge (a simple png image) smoothly animates through Raphaël’s SVG magic.
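
A rough idea of how such a gauge can be driven with Raphaël (based on our memory of the Raphaël 1.x rotation attribute; the container id, shape and numbers are made up for illustration, and the real gauge is based on a png image):

// Hypothetical gauge needle drawn and animated with the Raphaël 1.x API.
var paper = Raphael('gauge', 100, 60);    // assumes a <div id="gauge"> below the search box
var needle = paper.path('M50,55L50,15');  // a simple needle line
needle.attr({ stroke: '#c13832', 'stroke-width': 3, rotation: '-90 50 55' });

// Map tweet activity (0..1) to an angle and animate the needle towards it.
function updateGauge(activity) {
    var angle = -90 + 180 * activity;     // -90 = idle, +90 = maximum activity
    needle.animate({ rotation: angle + ' 50 55' }, 800, '>');
}

updateGauge(0.7); // e.g. after a new search returns its first results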

jQuery is our javascript framework of choice. Even though the physics script doesn’t rely on it, jQuery is what allows easy coding of all the interactions. And it is what allows most of the following plugins to be used.

ColorBox (by Color Powered) is one of our favorite jQuery plugins to handle light-boxes. Although there are probably hundreds of light-box plugin alternatives, this one is, to our knowledge, the most well-written and robust, and allows for full and easy styling.
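
Hooking it up is essentially a one-liner (the selector below is hypothetical):

// Turn every matching link into a ColorBox light-box (hypothetical selector).
$('a.open-in-lightbox').colorbox();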

Whenever you do a custom search on the machine, a corresponding hash is added to the URL and therefore to the browser history. This is achieved using the history plugin by mikage, and it allows you not only to go back and forward through your search history, but also to share a link to the visualization containing your specific search. By looking at tweet mentions of the FTM we noticed that a lot of the shared URLs are actually links to custom searches.
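
Conceptually the wiring looks like the sketch below, following the plugin’s init/load pattern (the callback body, form id and search function are placeholders, not the actual FTM code):

// Hash-based search history (illustrative; runSearch and the form id are made up).
$(function () {
    // Called on page load and whenever the hash changes (back/forward buttons).
    $.history.init(function (hash) {
        if (hash) runSearch(decodeURIComponent(hash));
    });

    // A new search pushes its term into the URL hash, which both feeds the
    // browser history and makes the resulting link shareable.
    $('#search-form').submit(function () {
        $.history.load(encodeURIComponent($('#search-input').val()));
        return false;
    });
});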

Preload CSS Images by the Filament Group is a jQuery plugin we use to make sure all images are loaded before you get to see the visualization. The “loading” wheel you see at the beginning is actually an animated background image on the body element. This loading process was only added at the end of the project - until then you would see all images gradually load, and bubbles floating around without a background. Funny, but not optimal!

Timeago by Ryan McGeary is a jQuery plugin used on the tweets’ datetime information. We used it previously on Tori’s Eye, and it does what it promises: in goes a standard datetime, out comes human-readable relative date info (X minutes ago).
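
Usage is as simple as it sounds; the markup comment shows the expected input format (the example datetime is ours):

// <abbr class="timeago" title="2010-10-02T12:00:00Z">2nd October 2010</abbr>
// The plugin rewrites the text to something like "4 minutes ago" and keeps it updated.
$('abbr.timeago').timeago();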

Topsy provides a simple API and a good data set to keep track of all mentions of a specific URL (and most of its shortened versions) on Twitter, giving full control over the styling. We initially tried using tweetmeme, but their retweet functionality requires Twitter authentication through their service, and their mentions page is not as straightforward as Topsy’s.

Google Font API provides open-source fonts and code, and serves the files directly from Google’s servers. It had just been announced by the time we were making our choices, and we had to try it out. We’re happy with the results, even if we didn’t use it extensively: Kaffeesatz by the German designer Yanone is used here and there, and since we were already looking at the Google Font API directory we chose the Lobster font by Pablo Impallari for the logo.

Server-side

All the data required for the visualization, except custom searches, is obtained from a server that delivers both the settings (which information to show on the countdown, default search, keyword to highlight) and the data (@firefox timeline, default search, number of Twitter mentions, Facebook shares, @firefox followers) in a single json request.
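
As an illustration of what that single request can look like on the client side (the URL, field names and helper functions are hypothetical, not the actual FTM payload):

// Fetch settings and data in one json request (all names are illustrative).
$.getJSON('/api/data.json', function (payload) {
    var settings = payload.settings; // default search, keyword to highlight, countdown info
    var data = payload.data;         // @firefox timeline, follower/mention/share counts

    startCountdown(settings.countdown);     // hypothetical helpers
    runSearch(settings.defaultSearch);
    updateCounters(data.followers, data.mentions, data.facebookShares);
});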

The server also acts as a proxy to allow on-demand requests to Twitter (to fetch each tweet’s author profile info) using OAuth. Server-side development was done in PHP, using memcached to cache common requests sent to third-party servers.

A simple management console allows the configuration of various settings: default search term, keywords to highlight, keywords to censor, datetime or @firefox followers countdown target and description, versions (allowing to force a refresh on all open visualizations), etc.

The PHP proxy script is based on Ben Alman’s open source php-simple-proxy, the Twitter OAuth requests are made using Jaisen Mathai’s twitter-async php lib, and we are using YAML (via YAML Loader) to save the configuration options.

Conclusions

Throughout the project we had the opportunity to work with dozens of scripts, techniques and resources, always in the realm of Open Source, which allowed us to mash them all up in - we hope - an effective, elegant and beautiful way.

These kinds of projects, experimental by nature and pushing forward the capabilities of modern browsers, have a relatively limited life-span as such - but if you put the technical aspects aside, you’ve still got a beautiful and inspiring aesthetic experience.

There’s constant room for improvement, and as we write this we’re still twisting and tweaking things here and there. And that’s what makes this project so interesting: pushing the boundaries on what is doable means you’ll never reach a point of sitting back and saying “this is perfect, let’s stop here”.

We hope you enjoy the final result, and we’re interested in your feedback: don’t hesitate to share your thoughts and ideas for improving what is already done.

Last but not least, we had a lot of fun working on such an inspiring project with a great team on Mozilla’s side. All our clients’ projects are unique, but doing one for such a good friend of ours as Firefox is an awesome honor. Special thanks to Tara Shahian and Michael Morgan for their inspiration and constant support.

http://firefoxtweetmachine.com

Date
Saturday, 2nd October 2010
Author
Leo Xavier
Topics
visualization, twitter, javascript, html5, css3