Re: [liberationtech] was: Forbes recommends tools for journalist; is now: depressing realities
Danny O'Brien:
On Wed, Dec 19, 2012 at 05:26:05AM +0000, Jacob Appelbaum wrote:
Hi,
frank@journalistsecurity.net:
But if you're getting information security advice from a Forbes blog, that will be the least of your worries.
Where would you suggest we get information security advice from?
This is an interesting question and I admit, I feel like it has a bad ring to it...
What kind of security advice? Who is following the advice? Does their context change while they follow it? Do they have the resources of someone with no more than a casual interest, or are they well funded and dedicated? What are their requirements? What are their temporal tolerances? Do they understand terms like "safety plan" or "threat model" without further explanation? What are the stakes of failure?
The answer to each of those questions would shift my answers to the subsequent ones, I guess.
Just to add some notes to Jake's excellent points to broaden the discussion. I hope I'm not thread-jacking, but Jake's comments unlocked a lot of points that I've been thinking about recently.
I'm glad to hear it. :)
Protecting Sources -- changing the relationship between reporter and source
One social act that journalists can adopt which has nothing to do with technology, but everything to do with how technology has changed both the threats and the opportunities of journalism, is to consider what *has* to be known about a source. Traditionally, the relationship between a source and a journalist has been that there's an inner sanctum of shared information, and then a set of carefully managed, publicly released data.
There is also the notion that the relationship itself is part of that inner sanctum. That is a rather unrealistic expectation without serious care.
For certain beats, there are all kinds of problems with this model at this point. One is that, technically and politically, it's getting harder to protect the data in the inner sanctum, even within supposedly stable open societies. Without exaggeration, we've accidentally built a data-collection system that the Stasi would have marvelled at, and then put all the pressure against its misuse on statutory protections that have little oversight, poor incentives and almost no track record of punitive action.
To make matters worse, we've created secret law and secret interpretation, and have essentially zero accountability, except for prosecuting the people who talk about it so that the public may learn what is being done. John Kiriakou, Bill Binney, Thomas Drake and Jesselyn Radack come to mind here.
Second, managing the released information in order to protect an identity is now practically a full-time security job in itself. Forget protecting data that source and journalist agree is confidential; even the information that has been agreed to be made public can be compromising in ways that neither party could anticipate. This isn't a question of ignorance; it is a question of how skillfully we can now collectively pool open source[1] information to deduce hidden data.
I think it is more than one thing. In some cases, it is a matter of simply ignoring the facts; try talking to people about the NSA's warrantless wiretapping program and the data it produced on US citizens on US soil (which, by the way, I'm confident has been used in the WikiLeaks investigation). Eyes will glaze over and people will simply refuse to discuss it. Quite depressing. In other cases, I agree that even when we know something is happening, has happened, or will happen, some people don't really understand the magnitude of the surveillance state.
It's a precept of the security professionals I know that you simply can't reliably anonymise mass databases of information; what's unknown is how little you need to add to the wealth of already public information before a single identity is uncovered.
I think you need to come to PETS and the CCC Congress more often Danny! :)
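(An editorial sketch of that precept, not from the original thread: a handful of innocuous quasi-identifiers is often enough to isolate one record in a nominally anonymised dataset. All names and fields below are hypothetical.)

    # Sketch: how few public attributes it takes to single out one record.
    # "records" stands in for a large, nominally anonymised dataset.
    records = [
        {"zip": "94110", "birth_year": 1975, "sex": "M", "note": "source-A"},
        {"zip": "94110", "birth_year": 1980, "sex": "F", "note": "source-B"},
        {"zip": "10027", "birth_year": 1975, "sex": "M", "note": "source-C"},
    ]

    def matches(rows, **known):
        # Every record consistent with the attributes an observer knows.
        return [r for r in rows
                if all(r.get(k) == v for k, v in known.items())]

    # Knowing only a ZIP code and a birth year narrows three rows to one:
    candidates = matches(records, zip="94110", birth_year=1975)
    if len(candidates) == 1:
        print("re-identified:", candidates[0]["note"])

The arithmetic scales in the attacker's favour: the larger the pool of already public side information, the fewer attributes are needed before exactly one candidate remains.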
In that sense, I'd welcome this Forbes piece, because it's the first time that I've seen wide public discussion of this problem -- that this journalist revealed information about their source through what both agreed should be made public. I'm pretty sure McAfee didn't even realise that this was a threat[2], let alone the editors and writers at Vice.
I'm not sure that I agree that this is the first time that this has happened. I think that it is also a stretch to say that Vice was totally clueless. WikiLeaks discussed these kinds of document issues long ago - see some of the early CCC talks. Perhaps that doesn't count as wide? I'd say that all of the hubbub about redacted PDFs in the last ten years is perhaps more important and has received wider attention.
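(Editorial note: the Vice incident turned on exactly this kind of embedded metadata -- the published photo of McAfee reportedly carried GPS coordinates in its EXIF tags. A minimal sketch of reading those tags, assuming a recent version of the third-party Pillow library; "photo.jpg" is hypothetical.)

    # Sketch: reading GPS coordinates out of a photo's EXIF metadata.
    # Requires the third-party Pillow library; "photo.jpg" is hypothetical.
    from PIL import Image

    GPS_IFD = 0x8825  # standard EXIF pointer to the GPS sub-directory

    exif = Image.open("photo.jpg").getexif()
    gps = exif.get_ifd(GPS_IFD)
    if gps:
        # EXIF GPS tags 1-4: latitude ref, latitude, longitude ref, longitude.
        print("latitude: ", gps.get(1), gps.get(2))
        print("longitude:", gps.get(3), gps.get(4))
    else:
        print("no GPS metadata found")

The corresponding defence is just as mechanical: strip or rewrite EXIF data before publication.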
My point here is that among all of these threats, there's also opportunity. Some of the Net-savvier journalists I know now take a minimal-knowledge approach to sources: you don't need to know who the source is in order to verify the information you've been provided. This is a situation that is, I think, historically unusual, but it is increasingly common. You work with the data itself to confirm its veracity. You don't need to know whether a quarter of a million diplomatic cables were leaked by a particular security analyst, because you can externally verify the accuracy of the data.
Indeed. Scientific Journalism.
There are a lot of challenges to this approach, but there are advantages too. It apparently increases the risk of being fed false-flag info, but it also prevents accepting false information through simply believing authorities. It decreases the value of personal contacts in journalism, but it increases the value of data analysis. Most importantly, it helps with both of the major problems in journalist-source protection: it eliminates the requirement to preserve the inner sanctum, and it aligns the incentives of the journalist with those of the source to test and validate the safety of revealing data to the public.
I think that it changes the relationship to the so-called authorities; now they're perhaps just an anonymous person, where previously they were a specific person. Or they're an unknown person, and a special person is quoted as interpreting what it means. The latter is quite common now, and historically was too, I think.
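(An editorial sketch of one way to work with the data itself, not a method anyone in this thread describes: leaked email can often be authenticated via the DKIM signatures mail providers attach in transit, without knowing anything about the person who handed it over. This assumes the third-party dkimpy package and a hypothetical file "leaked.eml", and it only works while the signing domain's public key is still published in DNS.)

    # Sketch: checking a leaked email's DKIM signature -- verifying the
    # data itself rather than trusting whoever provided it.
    # Requires the third-party "dkimpy" package; "leaked.eml" is hypothetical.
    import dkim

    with open("leaked.eml", "rb") as f:
        raw = f.read()

    # dkim.verify() fetches the signing domain's public key from DNS and
    # checks the signature over the protected headers and the body.
    if dkim.verify(raw):
        print("valid signature: headers and body unmodified since sending")
    else:
        print("no valid signature: altered, unsigned, or key since rotated")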
[1] In the old-fashioned sense of open source intelligence.
[2] Speaking as someone who was asked by McAfee how cellphones triangulate location (I didn't answer): having your name on a security product doesn't mean you're an expert in all security.
"Very carefully Mr. McAfee, very carefully."
Revealing our methods
I'm really, really happy that Jake has talked a little about his own procedures, because we're really bad at this as a community. There are a couple of reasons for this, I think. The first is that, despite all of our talk about the dangers of security through obscurity, we're all scared that publicly revealing information about our setups exposes us to increased risk. Second, we're scared of looking stupid, or of being exposed to condemnation.
I'm not sure that I agree strongly with the first part - when someone suggests that they use Tails, that is pretty specific! The second is certainly true - and often - reasonable! Lots of people make really bad choices and they hardly understand why they made those choices. Certainly from a technology standpoint but also from a social standpoint.
I think both of these concerns are valid. If I told you that I used FreeBSD 7.4 on my server, say, and that I'm a big fan of libpurple-driven OTR clients, that possibly makes it somewhat more convenient to find a way of attacking me, even though I'm emitting at least a couple of those facts almost constantly.
Sure, I generally agree on all counts: user-agent: Mutt/1.5.21 (2010-09-15)
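(To make "emitting those facts almost constantly" concrete, an editorial sketch: every plain-text mail in this thread advertises its author's software in headers like the one Jake quotes above. Standard library only; "message.eml" is a hypothetical saved message.)

    # Sketch: the software fingerprint an email leaks through its headers.
    # Standard library only; "message.eml" is a hypothetical saved mail.
    from email import policy
    from email.parser import BytesParser

    with open("message.eml", "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    # Headers that commonly reveal the sender's client and mail path:
    for header in ("User-Agent", "X-Mailer", "Received", "Message-ID"):
        for value in msg.get_all(header, []):
            print(header + ":", value)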
Second, if I *did* tell everyone I use libpurple, or that I have Skype on my machine, I'd be extremely vulnerable to people pointing out that libpurple is not exploit-free and that Skype is used as a vulnerability distribution vector.
That I think is exactly the right discussion to have - specifically because there is a reason to use libpurple (ahem, pidgin-otr) and a reason to use Skype (ahem, fuck, what was it again?).
I don't know what to do about this. As a community, we jump on people who publicly reveal less-than-perfect security practices. But we've all -- even if only in retrospect -- realised risky things we've done in the past. We can't learn from our mistakes, and worse, others can't learn from our mistakes, unless we admit to them. We can't berate coders for not exposing their programs to security audits unless we have a better way of sharing the practical knowledge we ourselves use every day, and we're not going to do that if we just spend our time pretending we anticipated the latest zeroday years before it actually came about.
I'm not entirely sure, but I think one answer is not to assume things are secure by default. Another is probably to understand that things written in C are likely to have lots of specific memory corruption issues - regardless of *which* codebase is in use. Lots of the zeroday in use today is really lame - not just buffer overflows that are known to exist, or that are patched but not shipping (ahem, Pidgin on Windows) - but rather simply not caring about metadata, location privacy, stuff said over phones, or stuff sent over HTTP...

All the best,
Jake