For a while now, Arizona State University journalism professor Steve Doig has been on the hunt for a statistical black box of sorts.
Feed in data, the kind reporters commonly confront on the beat, and Doig’s long-sought solution would automatically apply all the relevant analysis a journalist would need to ferret out story ideas. It would replace careful statistical modeling — a barrier to entry for many — with sheer brute-force calculation.
That’s exactly what Wolfram|Alpha Pro is built to do. The newest product from Wolfram Research, available for $4.99 a month, allows users to input about 60 different types of files, from images and audio files to spreadsheets, and run them through a statistical wringer. The result, in theory, is to “derive the most important conclusions from data, and generate an organized report for the user,” according to CEO Stephen Wolfram. More from his blog post on the product:
The concept is to imagine what a good data scientist would do if confronted with your data, then just immediately and automatically do that—and show you the results.
After putting it through some initial tests with real newsroom data, Doig and I found the software has a long way to go before it could be a practical part of a journalist’s toolbox. But Wolfram developers say this iteration is only the beginning.
As it stands, Wolfram|Alpha Pro just isn’t designed to handle large datasets — at least not datasets of the size investigative journalists often deal with.
Developers set the file-size limit at 1MB, but spreadsheets we uploaded under that threshold still ran into trouble, depending on the number of rows and columns.
When I tried testing Wolfram|Alpha with a dataset from the federal New Market Tax Credits program — a spreadsheet that resulted in this Bloomberg piece by the late David Dietz — the limit varied from about 107 rows (with 12 columns) to 1015 rows (with 3 columns). Anything beyond that threw error messages.
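For now, the practical workaround is to trim a spreadsheet below those limits before uploading it. A minimal sketch in Python, using a synthetic stand-in for the NMTC spreadsheet (the column names and figures here are made up for illustration):

```python
import csv
import io

# Build a synthetic 4,000-row CSV as a stand-in for the real NMTC data.
raw = io.StringIO()
w = csv.DictWriter(raw, fieldnames=["state", "award_amount", "year", "lender"])
w.writeheader()
for i in range(4000):
    w.writerow({"state": "NC", "award_amount": 1000 + i, "year": 2005, "lender": "x"})
raw.seek(0)

MAX_ROWS = 1000                                  # roughly what parsed reliably at 3 columns
KEEP_COLS = ["state", "award_amount", "year"]    # drop columns to stay under the limit

out = io.StringIO()
reader = csv.DictReader(raw)
writer = csv.DictWriter(out, fieldnames=KEEP_COLS)
writer.writeheader()
for i, row in enumerate(reader):
    if i >= MAX_ROWS:
        break  # stop before exceeding the row count Wolfram|Alpha could handle
    writer.writerow({c: row[c] for c in KEEP_COLS})

trimmed = out.getvalue().splitlines()
print(len(trimmed))  # header plus 1,000 data rows
```

With real files you would read from and write to disk instead of `io.StringIO`, but the trimming logic is the same.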
When I narrowed down the data enough, I was able to get a nice heat map of the distribution of loans and investments by state, which Wolfram|Alpha automatically mapped based on the two-digit state code. You could then quickly normalize those findings by state population or gross state product from a pulldown list without having to do the math.
That could lead reporters to ask questions about why some states are getting disproportionately more funding than others. Of course, without running the full dataset (which is more than 4,000 rows long), it’s hard to draw those kinds of conclusions.
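The normalization Wolfram|Alpha offers from that pulldown is simple division, which a reporter can also sanity-check by hand. A toy per-capita example (the dollar and population figures below are invented, not drawn from the NMTC data):

```python
# Made-up state totals and populations, for illustration only.
funding = {"CA": 500_000_000, "WY": 60_000_000}
population = {"CA": 37_000_000, "WY": 560_000}

# Per-capita funding: total dollars divided by state population.
per_capita = {state: funding[state] / population[state] for state in funding}

# Wyoming's smaller total works out to far more per resident than California's,
# which is exactly the kind of disparity the normalized heat map surfaces.
```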
Before cutting down the number of columns, Doig said he saw similar errors with school testing data he uses to teach statistics at boot camps with Investigative Reporters and Editors.
Taliesin Beynon, a developer in Wolfram|Alpha’s advanced research group, said this bottleneck has more to do with the system’s ability to make guesses about the data than its ability to perform analysis.
“We have the unenviable task of trying to teach our software what’s interesting in the data,” Beynon said in a phone interview.
In short, because Wolfram|Alpha Pro is designed to deal with messy data by hunting for linguistic cues, he said it gets tricky when the data is both messy and large.
“If you exceed a couple of thousand rows, then we start to not reliably parse it in the time limit,” Beynon said, noting that they’re aiming to return results in around 10 seconds.
However, he said they’re moving toward a framework that will allow users to wait a little longer for larger datasets.
No corrections yet
For all its focus on guessing the nature of the data, there are still plenty of situations where Wolfram|Alpha gets things wrong. That’s not surprising, but there’s no way yet for users to make post-processing fixes by hand or to specify what they’re looking for.
That’s the problem Doig ran into when trying a regression analysis of the school data. It flubbed the data’s poverty variable, leaving Doig searching for a way to correct it.
“I can find no documentation for how [Wolfram|Alpha Pro] makes such choices, but I thought it might have something to do with the order of the variables. So I moved Poverty, uploaded again — and got back the ‘Wolfram|Alpha doesn’t know how to interpret your input’ message,” Doig said in an email exchange.
Beynon said the development team hopes to add the ability to edit the results in the future. Although he couldn’t give a specific timeline, he noted that they’re rolling out updates every week.
“There’s a lot of richness coming,” Beynon said.
It’s important to note that Doig and I focused on Wolfram|Alpha’s performance with spreadsheets. But xls, csv and txt files are just a few of the many file types it’s designed to analyze (albeit the most common for investigative reporters).
While it’s certainly more than a novelty, Wolfram|Alpha Pro is far from a must-have for journalists — at least at this point.
Doig said he’d like to see a bit more documentation and fewer generic error messages. But he’s not counting it out completely as his long-awaited statistical solution.
“Bottom line is that [Wolfram|Alpha Pro] isn’t the black box of my dreams. It’s also fluky in what it will accept. Text variables seem to confuse it, but even one of the all-numbers datasets also bombed,” Doig said. “I’ve got a year subscription to this, so I’ll try it periodically to see if it gets better.”
Until then, check back here at the Reporters’ Lab for updates as this product evolves. You can try out the full features of Wolfram|Alpha Pro free for 14 days. Find anything interesting during your test drive? Contact me with your results or share your experience in the comments.
Front-page thumbnail image courtesy of Flickr user Andrew Magill.