Research

Evolution of security, privacy needs journalism’s help

Accessibility problems often make encryption and authentication technology difficult for average users.

Guest post by Justin Troutman

Whether you’re reporting from the trenches of a war-ravaged nation under an oppressive regime or picking away at the multi-billion-dollar financial scandal of the decade, confidentiality and integrity are fundamental to the continued flow of trustworthy information.

Failing to secure that information has real consequences for journalists trying to protect their sources, guard their reporting and ensure the privacy and authenticity of their digital conversations. Although free, open-source solutions exist, they’re far from ideal for average users.

But ongoing research could change the way we interact with encryption and authentication technology — and journalists can help.

Practical problems

Without encryption and authentication, journalists interacting with sources face realistic threats of espionage and forgery if data is intercepted or manipulated. And as outsourcing data storage to cloud services like Carbonite and Mozy becomes the norm, the problem will only get bigger.

The unfortunate reality is that providers of these services lack the know-how to get the cryptography right; there’s just not enough security and privacy expertise behind the scenes. Even if there were, the ball can still be dropped – and hard.

Last June, Dropbox suffered a four-hour security failure that allowed access to any of its 25 million accounts just by typing in any password for any username.

Not to pick on Dropbox, though – many of the other major so-called secure cloud storage services are making design decisions that were state of the art more than a decade ago.

However, because security and privacy aren’t journalists’ forte, buzzwords and branding can lull them and other users into a false sense of guaranteed effectiveness.

A tangled Web

How encryption works

Consider a key that both locks and unlocks a deadbolt. If I want someone else to be able to unlock the deadbolt, they need the same key or an identical copy of it — and getting a copy to them could pose a problem.

The same is true for network communication. If Alice and Bob want to communicate securely and privately, they must find a safe way to agree on a key. But since network communication channels are insecure, the only way to ensure their key isn’t intercepted is to meet in person.

Cryptologists solved that problem by producing a related pair of keys: a public key and a private key. For example, if Alice wants to send Bob a message, he can simply e-mail her his public key, or even publish it on a social networking site. It’s public, so knowing it doesn’t help anyone eavesdrop on Alice and Bob’s communication. Once Alice has Bob’s public key, she combines it with the message she wants to send using a special algorithm, and out comes what looks like a garbled mess. This is called ciphertext. Alice sends the ciphertext to Bob, and he decrypts it back into plain text using his private key, which he never shares with anyone.
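The exchange above can be sketched with a toy RSA-style key pair in Python. This is my illustration, not something from the article; the tiny primes and per-character encryption are for demonstration only, and real systems use enormous keys, randomized padding and vetted libraries such as GPG.

```python
# Toy public-key encryption: Bob publishes (n, e); only he knows d.
p, q = 61, 53        # two deliberately tiny primes (never this small in practice)
n = p * q            # 3233, part of both keys
e = 17               # public exponent: Bob's public key is (n, e)
d = 2753             # private exponent: Bob's secret (e * d = 1 mod (p-1)*(q-1))

def encrypt(message: str) -> list:
    """Alice turns plaintext into ciphertext using Bob's PUBLIC key."""
    return [pow(ord(ch), e, n) for ch in message]

def decrypt(ciphertext: list) -> str:
    """Bob recovers the plaintext using his PRIVATE key."""
    return "".join(chr(pow(c, d, n)) for c in ciphertext)

cipher = encrypt("meet at dawn")
print(cipher)           # a garbled mess of numbers
print(decrypt(cipher))  # -> "meet at dawn"
```

Anyone can run `encrypt` with Bob’s public key, but without `d` the numbers are (in a real-sized system) infeasible to reverse.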

However, simply preventing eavesdropping isn’t enough, since clever attackers can manipulate that seemingly garbled ciphertext. Without some way to detect tampering, you’ll likely never realize it has happened.

This is where digital signatures come into play. They work in roughly the reverse order: if Alice wants Bob to have no doubt the message she’s sending is definitely from her, she signs it using her private key. Bob can then verify the signature using Alice’s public key.
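Continuing the toy RSA sketch above (again my illustration, with unrealistically small numbers), signing is the mirror image of encrypting: Alice applies her private key to a hash of the message, and anyone holding her public key can check the result.

```python
import hashlib

# Same toy key pair as before: (n, e) is public, d is Alice's secret.
n, e, d = 3233, 17, 2753

def sign(message: str) -> int:
    """Alice hashes the message, then transforms the hash with her PRIVATE key."""
    h = int(hashlib.sha256(message.encode()).hexdigest(), 16) % n
    return pow(h, d, n)

def verify(message: str, signature: int) -> bool:
    """Bob recomputes the hash and checks the signature with Alice's PUBLIC key."""
    h = int(hashlib.sha256(message.encode()).hexdigest(), 16) % n
    return pow(signature, e, n) == h

sig = sign("the documents are genuine")
print(verify("the documents are genuine", sig))  # True
print(verify("the documents are forged", sig))   # a tampered message should fail
```

Because only Alice knows `d`, a valid signature is evidence the message came from her and wasn’t altered in transit.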

It’s crucial that you both encrypt and authenticate. Encryption prevents someone from reading your messages, but if you want to ensure they can’t tamper with them, you need authentication too. In fact, in real-world systems, it’s safe to say that encryption alone doesn’t cut it.
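One common recipe for combining the two is “encrypt-then-MAC”: encrypt first, then compute an authentication tag over the ciphertext, and on receipt check the tag before decrypting anything. Here is a minimal sketch using only Python’s standard library; the XOR keystream “cipher” is a stand-in of my own for illustration, not something you should ever use in place of a real cipher.

```python
import hashlib, hmac, secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream built from SHA-256 blocks (illustration only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ciphertext = bytes(p ^ k for p, k in
                       zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # Authenticate AFTER encrypting: the tag covers the ciphertext.
    tag = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + ciphertext + tag

def open_sealed(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    nonce, ciphertext, tag = sealed[:16], sealed[16:-32], sealed[-32:]
    expected = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    # Check the tag BEFORE decrypting; reject anything that was altered.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was tampered with")
    return bytes(c ^ k for c, k in
                 zip(ciphertext, keystream(enc_key, nonce, len(ciphertext))))
```

Flipping even one bit of the sealed message makes `open_sealed` raise an error instead of silently handing back corrupted plaintext, which is exactly the tamper detection that encryption by itself doesn’t give you.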

For the stuff that works, there’s an entirely different problem – accessibility.

Tools that get the security and privacy part right have interfaces with steep learning curves. And because convenience trumps security and privacy every time, if it’s not usable, it’s not useful.

One example is the open-source GPG, or GNU Privacy Guard, which has been around for more than a decade. It operates as the back-end for applications like GPA, or GNU Privacy Assistant, to make its functions mostly point-and-click accessible. There are even plug-ins for most major e-mail clients to help automate the process.

While GPG is free and effective, configuring it can leave you scratching your head. A 1999 study of a similar tool by Carnegie Mellon and Berkeley researchers found the interface was largely unusable — even for computer science graduate students with instruction manuals in hand.

It’s not impossible to use, but an improperly used tool can create a false sense of security that’s even more dangerous than not using the tool at all.

It’s important to understand that GPG is just the poster child for a long history of disconnect between security and privacy on one hand and usability on the other. And for encryption software in general, it’s easier for the lay user to mess things up than to get them right.

Journalism’s threat model

Never has a solution been needed more than today.

The merger of security and privacy with usability is a fairly new undertaking, and it requires what we’ve yet to see: security and privacy design that’s as much about the interface as it is the implementation.

That’s where journalists can contribute.

Carnegie Mellon, one of the first institutions to look at this problem, is pioneering much of what needs to be done. Along with Professor Vincent Rijmen, co-designer of the Advanced Encryption Standard, I’m working on a project dubbed “Mackerel” that frames real-world security and privacy problems as deficiencies in the interfaces between humans and technology.

One of the ideal test beds for Mackerel is journalism, where information is the bread and butter of the profession, and failure to ensure its confidentiality and integrity can have grave consequences. The security and privacy community needs to reach out to you — the journalism community — to find out exactly what threats you face and how we can tailor technology to address them in a manner that makes sense to you.

So here’s an open call to journalists. What types of concerns do you have when communicating with one another and with sources? What are the implications of losing security and privacy? What do you need to protect, and how difficult is it to do so? What would you like to see that you currently don’t?

It’s our hope that by answering these questions, we can work with journalists to better secure their information, and solve a problem that will only grow more urgent in the future.

Justin Troutman is a security and privacy researcher focusing on the socio-technical interfaces between humans and technology within the context of security and privacy.  He’s rather evangelistic when it comes to preaching the wonders and woes of security and privacy, and has found many an interesting pulpit for doing so, in both print and speech, from analytical articles for Microsoft TechNet Magazine, to original research for IEEE Security & Privacy, to guest lecturing at Duke University. He lives in Asheville, N.C., along with his wife and daughter.

About Guest Posts

The Reporters' Lab welcomes guest posts on a range of topics involving the use of technology in the process of public affairs journalism. Send pitches to tyler@reporterslab.org.
