After officially relaunching last week, the entire Reporters’ Lab team traveled to St. Louis over the weekend for the 2012 Computer-Assisted Reporting conference, where we got the opportunity to share our progress so far with investigative journalists and developers from around the world (see the presentation slides embedded in this post).
Two of the tools we demonstrated — TimeFlow and Reporters’ Lab Reviews — are already available on the site. But our Friday session afforded us the first opportunity to showcase another exciting project still in the works.
Something that, at least for now, we’ve dubbed the Video Notebook.
The lab’s lead developer, Charlie Szymanski, is heading up the project. His goal is to create an application to index, search and analyze recorded video by syncing notes and data feeds from sources like Twitter, Storify and live blogs.
Essentially, it will let reporters skip the hours normally spent wading through video by jumping right to the segments they’re looking for. We’re hoping a tool like this will be especially helpful to reporters planning to live-tweet recorded events, from city council meetings to political stump speeches.
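To make the concept concrete, here’s a minimal sketch in Python of how that kind of lookup could work. Everything in it is hypothetical (the Note class, the function names, all of it); this isn’t the Video Notebook’s actual code, just the shape of the idea: timestamped notes become searchable pointers into a recording.

```python
# A minimal sketch of the core idea. Every name here is hypothetical
# (this is not the Video Notebook's actual code): map each timestamped
# note or tweet onto an offset in the recording, then search the notes
# to find the segments worth watching.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Note:
    timestamp: datetime  # wall-clock time the note or tweet was posted
    text: str            # what the user wrote

def video_offset(note: Note, recording_start: datetime) -> float:
    """Seconds into the recording at which this note was written."""
    return (note.timestamp - recording_start).total_seconds()

def find_segments(notes: list[Note], recording_start: datetime,
                  query: str) -> list[float]:
    """Video offsets (in seconds) of all notes matching a keyword."""
    return [video_offset(n, recording_start)
            for n in notes
            if query.lower() in n.text.lower()]

# Example: notes taken during a 7 p.m. city council meeting.
start = datetime(2012, 2, 24, 19, 0, 0)
notes = [
    Note(datetime(2012, 2, 24, 19, 12, 30), "Council takes up zoning ordinance"),
    Note(datetime(2012, 2, 24, 19, 45, 5), "Mayor responds to budget question"),
]
print(find_segments(notes, start, "budget"))  # [2705.0] -> seek to 45:05
```

In other words, a search for “budget” against notes taken during a 7 p.m. meeting would return the offset, in seconds, where the matching note was written, and a player could seek straight to it.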
It’s still early in the development cycle, so there’s a great deal of functionality we’re looking to add (and if you’ve got suggestions on how technology like this can help your reporting, make sure to share them with us). Expect to hear more in the coming months.
What I think is most exciting about this project is its potential to tackle the problem of indexing video from a completely different angle.
We’re very interested in ongoing research to make digital video, one of the fastest-growing sources of data on the Web, more searchable and thus more accessible. Think of what we could learn, for example, if we could effectively search the 60 hours of video uploaded to YouTube every minute (try visualizing that staggering amount of data, by the way).
A lot of this research focuses on voice transcription or on optical character recognition that pulls text from signs and overlay graphics. But by syncing recorded video with information like Twitter streams, we can tap indexing data users are already supplying, complete with time stamps, to approximate what’s happening in the frame.
It won’t ever be a perfect solution. Even 140-character messages take a little time to craft, meaning there will always be a slight lag between an event or a statement and when someone can describe it or quote it. But it can get us close. And because this project, like all of the tools we create, will be open source and free to the world, our hope is that it will be one more step toward a solution.
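To illustrate that lag problem, here’s one more hypothetical sketch, again with made-up names: subtract a rough, adjustable estimate of how long a tweet takes to compose before mapping its timestamp onto the video timeline.

```python
from datetime import datetime

# Assumed average time to compose a tweet; a tunable guess, not a measured value.
TYPING_LAG_SECONDS = 15.0

def adjusted_offset(posted_at: datetime, recording_start: datetime,
                    lag: float = TYPING_LAG_SECONDS) -> float:
    """Approximate the video offset (seconds) of the moment a tweet describes."""
    raw = (posted_at - recording_start).total_seconds()
    return max(0.0, raw - lag)  # clamp so we never point before the recording starts
```

A fixed offset like this is crude (typing speed varies, and so does the length of the moment being described), but for dropping a viewer roughly the right spot into a council meeting, rough is often enough.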