PhotoSynth Shows Vast Potential to Fundamentally Change Information Navigation

Following on from yesterday’s post about interactive television, I was delighted to see the BBC picked up on the PhotoSynth project. I first heard about this via Alex Barnett’s blog and downloaded the Channel 9 video and the high-res Live Labs promo package. Clever stuff from the effervescent chaps in Seattle; it must be all that caffeine. Once again, as with the previous post, the technology itself is only the first step; it is the sheer scale of the possibilities it opens up that interests me.

The tool, such as it is, takes collections of photographs or images with a common reference point and links those points together, taking into account angle, distance, scale, resolution, ambient light and so on, rather like assembling a jigsaw, to create a navigable 3D virtual world. The environments presented so far are tourist spots, navigated at high speed (thanks, I think, to progressive-resolution encoding) and almost seamlessly. I am not explaining it brilliantly, but the videos do a great job, so take a look at them.
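To make the jigsaw idea concrete, here is a toy sketch (my own illustration, not PhotoSynth’s actual algorithm): each photo is reduced to a set of visual “features” standing in for the angle- and scale-invariant descriptors a real matcher would extract, and photos sharing enough features are linked into a navigable graph.

```python
# Toy sketch: link photos that share common reference points.
# Feature identifiers here are hand-labelled stand-ins for real
# image descriptors; the structure, not the vision, is the point.

from itertools import combinations


def link_photos(photos, min_shared=2):
    """Build an adjacency map linking photos that share features.

    photos: dict mapping photo name -> set of feature identifiers.
    Two photos are linked when they share at least min_shared features.
    """
    graph = {name: set() for name in photos}
    for a, b in combinations(photos, 2):
        if len(photos[a] & photos[b]) >= min_shared:
            graph[a].add(b)
            graph[b].add(a)
    return graph


if __name__ == "__main__":
    photos = {
        "duomo_front.jpg": {"dome", "facade", "steps"},
        "duomo_side.jpg": {"dome", "facade", "bell_tower"},
        "river_view.jpg": {"bridge", "water"},
    }
    print(link_photos(photos))
```

The resulting graph is what makes the high-speed navigation possible: moving between views is just walking the links between overlapping photos.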

It does not take a genius to see the power this technology has for information architecture on the web. The algorithm effectively links commonalities and provides a space that communities can delve into and enrich even further. Imagine, if you will, that pictures are tagged (as in Flickr) or given hotspot areas, so that when you submit your photos to the engine for recognition, suddenly the random picture of a statue you took in Florence comes alive in a 3D context with information about the sculptor, the location, the social context and so on…

Imagine too piecing together a crime scene from shots taken by members of the public at a given time, or interrogating a terrorist propaganda video for common visual landmarks to place it accurately. It’s a case of blending Flickr and Google Earth with Wikipedia and SETI, and suddenly you’re really unleashing distributed computing power and social networking to provide a virtual earth. At this point, I am sure people are considering the protection and privacy issues but, for the moment, I’m nothing but excited to see where and when I can get my hands on PhotoSynth and start adding every photo I’ve ever taken to the project.


Using Ambient Audio To Serve-Up Social and Interactive Television

What I plan to do in the forthcoming days is post a regular sequence of smaller posts, each of which will hopefully cover a user-centred topic. In the past couple of weeks I’ve amassed a Targus case full of printed blog articles (I know, I know, think of the trees…), magazine columns and jotted notes about podcasts, all of which I’ve intended to post about.

Regular readers and colleagues will know I’m not the greatest ‘completer-finisher’ on the planet and am full of good intentions, so here’s my chance to produce a consistent, topic-focussed output.

Xan passed me an article this month that (despite being a little too CompSci for my simple brain) introduced me to the work of Michael Fink (Hebrew University of Jerusalem) and Michele Covell and Shumeet Baluja (Google Research). The premise, in a nutshell, is to use your laptop’s on-board microphone to record the audio of the TV programme you’re watching and match it against a database of broadcast material, then serve up relevant media content from the web. This is quite clever stuff: in the same way that you can point your mobile at a speaker and send music audio to Shazam for identification, this system identifies broadcast content in real time.

There are a host of privacy issues and technical problems that the paper covers, and I’m not going to. I wanted to think more about the social element of this tool, because there’s a host of personalisation, tagging and social-networking stuff that could be thrown into the mix here. Imagine, for example, watching your favourite sitcom/drama/film and discovering that at that moment 20 other people were watching it too; you could chat about it via Skype or instant messaging, or you could add tags to the content to identify places, objects, fashion and so on. ABC’s drama ‘Lost’ is a great example: the online experience for the show is vast, and tying it directly to the action feels neater in this model than it would via existing interactive TV channels (red-button etc.).

Have a read of their paper and add your comments to this post by clicking ‘comments’ below.
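The matching step can be sketched roughly like this (a toy of my own, not the paper’s method — the function names, window size and scoring are all assumptions): the laptop summarises short windows of ambient audio into compact fingerprints, and each fingerprint is looked up in an index built from known broadcast material.

```python
# Toy sketch of ambient-audio matching: hash coarse sample windows
# as stand-in "fingerprints" (real systems use robust acoustic
# features) and score broadcasts by fingerprint overlap.

import hashlib


def fingerprint(samples, window=4):
    """Reduce a sample sequence to a list of per-window hashes."""
    prints = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = bytes(s % 256 for s in samples[i:i + window])
        prints.append(hashlib.sha1(chunk).hexdigest()[:8])
    return prints


def best_match(ambient, index):
    """Return the broadcast sharing the most fingerprints with the
    ambient recording, or None if nothing overlaps at all."""
    scores = {
        show: len(set(fingerprint(ambient)) & set(prints))
        for show, prints in index.items()
    }
    show, score = max(scores.items(), key=lambda kv: kv[1])
    return show if score > 0 else None


if __name__ == "__main__":
    index = {
        "sitcom_ep1": fingerprint([1, 2, 3, 4, 5, 6, 7, 8]),
        "drama_ep9": fingerprint([9, 9, 9, 9, 8, 8, 8, 8]),
    }
    print(best_match([1, 2, 3, 4, 5, 6, 7, 8], index))
```

Once the programme is identified this way, everything social — the chat, the tags, the ‘Lost’-style companion content — can be keyed to that match rather than to a set-top box.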