On Fri, 29 Dec 2000, Greg Broiles wrote:
> But - several, if not many times - the security we've achieved has been
> broken, because of implementation errors on the part of creators,
> installers, or users.
That's right - and it's partly because cryptographic engineering (as opposed to "cryptographic science") is still in its infancy. This is the downside of the current approach, which focuses on getting the protocol right first and only later considers the "real world." Bruce Schneier had another way of putting it - something along the lines of "The math is perfect, the hardware is so-so, the software is a mess, and the people are awful!" (not an exact quote, but I remember it from one of his DEF CON speeches).

That being said, there is some benefit to considering protocols in an idealized, polite model - because in the past we haven't even been able to get security in *that* model. So in some sense this is a case of "publishing what we can prove." It's only comparatively recently that we've had protocols which we can prove secure, even in weak models -- the first real definitions of security, from Yao, Goldwasser and Micali, and probably others, didn't appear until the early to mid 1980s. Truly practical cryptosystems which meet these definitions didn't arrive until the 1990s. (Some would argue that they still aren't here - Bellare and Rogaway's Optimal Asymmetric Encryption Padding (OAEP) satisfies a strong definition of security, but only if you buy the "random oracle assumption." A toy sketch of the OAEP padding transform appears below.)

Now on the "science" side we can and should extend the model to deal with more of the real world. You might find the recent paper I posted a link to by Canetti interesting - he sets out to deal with an asynchronous network with active adversaries. I didn't see torture included yet, but maybe next version. Birgit Pfitzmann and Michael Waidner are considering something called "reactive systems" which may also yield results.

http://citeseer.nj.nec.com/297161.html

On the engineering side -- well, there's a long way to go. Ross Anderson has a new book coming out which may help a little bit.

http://www.cl.cam.ac.uk/~rja14/book.html

The fact remains that I don't think we have enough experience implementing protocols beyond encryption and signatures - at least not on a wide scale. Take digital cash and voting protocols as examples. Digital cash has been implemented and re-implemented several times; it's even had a "live" test or two. But how many people have managed to buy something tangible with it? And how does that compare to the amount cleared by credit cards?

Electronic voting seems to be on the upswing - at least with votehere.com on the scene and the recent election debacle hanging over our heads. Still, who has implemented, tested, and deployed a truly large-scale voting system based on cryptographic protocols? The one which comes to mind is the MIT system built on the FOO protocol - and while that *works* (modulo operator error), that's only a few thousand undergrads. (Both digital cash and FOO-style voting rest on blind signatures; there's a rough sketch of those below as well.)

It's at times like this that I wish I knew more about formal verification of protocols...
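For the curious, here's a toy sketch of the OAEP padding transform mentioned above - for intuition only. Real parameter sizes, the mask-generation details, and the surrounding RSA step are all glossed over; G and H stand in for the paper's two "random oracles," and the concrete byte lengths are my own illustrative choices:

    # Toy sketch of the OAEP padding transform (Bellare-Rogaway), for
    # intuition only.  Parameter sizes and the surrounding trapdoor
    # permutation (e.g. RSA) are simplified; G and H stand in for the
    # paper's two "random oracles."
    import hashlib
    import os

    K0 = 32   # bytes of per-encryption randomness (illustrative choice)
    K1 = 32   # bytes of zero padding for the redundancy check (illustrative)

    def G(seed: bytes, length: int) -> bytes:
        """Expand seed to `length` bytes with a counter-mode hash."""
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def H(data: bytes) -> bytes:
        """Second oracle, truncated to K0 bytes."""
        return G(data, K0)

    def oaep_pad(message: bytes) -> bytes:
        r = os.urandom(K0)                    # fresh randomness every time
        data = message + b"\x00" * K1         # m || 0^k1
        x = bytes(a ^ b for a, b in zip(data, G(r, len(data))))
        y = bytes(a ^ b for a, b in zip(r, H(x)))
        return x + y                          # this is what gets fed to RSA

    def oaep_unpad(padded: bytes, msg_len: int) -> bytes:
        x, y = padded[:-K0], padded[-K0:]
        r = bytes(a ^ b for a, b in zip(y, H(x)))
        data = bytes(a ^ b for a, b in zip(x, G(r, len(x))))
        message, zeros = data[:msg_len], data[msg_len:]
        if zeros != b"\x00" * K1:
            raise ValueError("padding check failed")
        return message

The "random oracle assumption" is exactly the leap from the ideal G and H the proof assumes to the concrete hash functions (here SHA-256) any implementation has to use.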
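And since both digital cash and FOO-style voting lean on blind signatures, here's an equally rough sketch of Chaum-style RSA blinding - textbook RSA with no padding and no key generation shown, so it's a sketch of the idea rather than anything deployable, and the function names are mine:

    # Toy sketch of Chaum-style RSA blind signatures, the primitive under
    # both digital cash and FOO-style voting.  Textbook RSA, no padding,
    # intuition only.  Requires Python 3.8+ for pow(r, -1, n).
    import secrets
    from math import gcd

    def blind(message_hash: int, e: int, n: int):
        """Holder blinds m with a random factor before sending it to the signer."""
        while True:
            r = secrets.randbelow(n - 2) + 2
            if gcd(r, n) == 1:
                break
        blinded = (message_hash * pow(r, e, n)) % n
        return blinded, r

    def sign_blinded(blinded: int, d: int, n: int) -> int:
        """Bank/authority signs without ever seeing the underlying message."""
        return pow(blinded, d, n)

    def unblind(blinded_sig: int, r: int, n: int) -> int:
        """Holder removes the blinding factor, leaving an ordinary RSA signature on m."""
        return (blinded_sig * pow(r, -1, n)) % n

    def verify(message_hash: int, signature: int, e: int, n: int) -> bool:
        return pow(signature, e, n) == message_hash % n

The unlinkability this buys - the signer can't later match a finished signature to the blinded value it actually signed - is what makes the same trick usable for both untraceable cash and secret ballots.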
> Consider the computing power assembled for the DES or RC5 cracks,
> instead applied to dictionary attacks versus a PGP keyring, or SSH
> keyfile. How long until the average user's passphrase is recovered?
If the passphrase is in the dictionary, nearly no time at all. Some take this to mean that we should now write passphrases down and use the opportunity to pick long random ones unlikely to be in any dictionary...

-David
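P.S. To make "nearly no time at all" concrete, here's a minimal sketch of the attack in question. I'm assuming the attacker has the salt (usually stored in the clear) and some way to recognize the right key - here a reference value to compare against; against a real keyring you'd instead try to decrypt the secret-key material and check whether it parses. The KDF below is just a stand-in for whatever string-to-key function the key file actually uses:

    # Minimal dictionary attack sketch (my simplification, not the actual
    # PGP/SSH key file format).  derive_key() stands in for the real
    # string-to-key function; the salt and reference_key are assumed known.
    import hashlib

    def derive_key(passphrase: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), salt, 1000)

    def dictionary_attack(salt: bytes, reference_key: bytes, wordlist_path: str):
        with open(wordlist_path) as f:
            for line in f:
                candidate = line.strip()
                if derive_key(candidate, salt) == reference_key:
                    return candidate    # passphrase recovered
        return None                     # not in the wordlist

Even at modest guessing rates a wordlist-sized search finishes quickly - which is the argument for long random passphrases, written down if need be.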