Why I have a 512 bit PGP key
[Speaking of dumb things: when I added my PGP key to the bottom of this file a few minutes ago, I attached it to the pre-proofread version and sent that instead of this one. Sigh. Here's the correct version.]

A while back, I generated a PGP key pair for use on my machine at work, a Sun SparcStation sitting on the reasonably-well-protected-from-outside-attack AT&T internal research network. I selected a key length of 512 bits. My number theory friends tell me that this is weak by modern standards; cracking my key would probably require within an order of magnitude of the total computational effort expended in the recent attack on RSA-129. I even volunteered my key as a ``target'' for the next such attack.

Still, I'm happy with my choice, or rather, I've got so many other security things to worry about that compromise of my private mail based on cryptanalysis of my dinky little public key to obtain my private key is the last thing on my mind. In fact, I kind of like it that my key doesn't advertise pretensions of high theoretical security when, in fact, there is very little at all in practice.

The first problem, of course, is secret storage. Modern networked computers are awful at storing secrets. (This, after all, is one of the problems that crypto software like PGP aims to solve). I suspect my situation is reasonably typical, if not better than most.

My computer at work sits on my desk (in my locked office), has its own local disk, only I know the root password, I try to keep up with the latest security patches, and I keep most of my files in encrypted form under CFS. I'm the only regular user of my workstation, and while I'm at work I access it directly from the console. The network to which it is attached is AT&T's ``R&D Internet'', the same one that sits behind the firewall described in Cheswick and Bellovin's great new book. I probably have at least average system administration and general computer security skills, and I'm reasonably good about practicing what I preach. Sounds like a pretty secure machine from which to run PGP, right? I don't think so.

While my machine's operating system is pretty self-sufficient, my own home directory sits on a remote file server administered by people who are good at and have time to perform essential services for me like taking backups. This means that, no matter how hard I try, it's impossible for me to be sure that none of my files have been tampered with. We use off-the-shelf NFS, which means that for all practical purposes anyone with access inside the firewall (that's about 50k people in my case) can replace any of my files. Furthermore, even though my office has a lock, I'm not the only person in the world with the key (Bell Labs escrows office keys, after all), and I've managed to pick the lock once or twice on days when I left my key at home. I really have no idea where my machine has been or what software I'm typing at when I run PGP.

So where should I store my private key? Well, I could, as some have suggested, keep it on a floppy disk that I carry around with me everywhere I go, but first of all, that's too inconvenient. It also sounds dangerous in practice. A floppy disk is about the size of a US passport, and I've already lost two of those. That means I'd have to replicate the key somehow anyway, so I might as well rely on the reasonably well established backup procedure that protects me from loss of the rest of the files in my home directory. For all practical purposes, I have to assume that my secret key file is public.
That leaves the passphrase to protect the secret key. According to Shannon, English text contains just over 1 bit per letter of real information. Even if we assume twice that to account for the added twists and turns of phrase I'm inclined to add to a passphrase, I just can't remember (or type) a phrase with anywhere near enough entropy to approach the level needed to do justice to even a little 512 bit RSA key.

I think the simplest cryptanalytic attack against me would be to go after the passphrase-based encryption of the secret key file. (You'd need a way to enumerate the most likely keys based on a hashed passphrase, which is a problem not yet well studied in the unclassified literature. I suspect a solution is not out of reach of a determined adversary, however). An even simpler attack would be to break in to my machine and replace my copy of PGP (or my kernel, or my shell, or whatever) with one that records the passphrase as I type it. (No, I don't leave this as an exercise to the reader!)

The next problem is with PGP itself. While I haven't looked carefully, it seems to be a well-engineered program, and it has a number of design features that I admire. However, I think the basic model it implements sits at too high a level, making it inherently unreliable for really sensitive traffic. It's just too hard to use. (Most of the problems could be fixed by pushing things to a lower level, and I understand a number of people are working on this). In particular, I'm forced to have too much involvement in each PGP operation, and it's just too easy for me to do stupid things like:

- encrypting messages with the wrong public key
- sending the cleartext file instead of the ciphertext file
- leaving the cleartext file around in an unprotected file system
- including the passphrase in the message (especially when your fingers are so used to typing it all the time...)
- forgetting to use PGP at all
- typing a passphrase over a network connection (especially easy when you've got several windows open on several machines). Systems that use hardware keys (smartcards, etc) are less vulnerable to this.

I've done each of these dumb things at least once, and probably others as well.

Don't get me wrong - I advocate the use of strong encryption as much as the next nerd. I'm just concerned about focusing so narrowly that we lose sight of the larger security picture. Perry Metzger once made reference to cryptographic "size queens" who worry about key size and nothing else - it's a phrase that rings true.

There's something to be said for systems that offer security parameters that provide about the same strength across various attacks. DES is a good example - a 128 bit key DES could be designed that is at least as secure as the current 56 bit version - by at least a few bits. The engineering triumph is that the "advertised" DES security parameter - the key size - tells close to the truth about the overall security of the system. (Of course, in RSA-based systems, there's an added variable - advances in factoring - that may make it prudent to include a significant margin for error, especially for keys that must retain their strength over time).

I have a 1024 bit key at home.
-matt

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: 2.6.1

mQBNAitm4zgAAAECANYaL7K5Ca5B4Sq3udKKkFasQNrgfKGoWRUjwB/10lAFVrhN
aKz/b6iJXFxZ6g+YlCvdQTu/EUO6JkBihshIRgEABRG0D21hYkBiaWcuYXR0LmNv
bbQhbWF0dCBibGF6ZSA8bWFiQHJlc2VhcmNoLmF0dC5jb20+iQCVAgUQLIG7Ga1S
SlGFGX+1AQEt7QQAtwhDbN/799e763LrbhB5ItoR1r2ud+nWBZi0S64OVnVkLjnd
zVwMouPiaiEs/ORWQfoVPmME6fMYlUeh+uLonSDymjzosWyU6yJRs8lcAy2MMBR+
De00mHk8+nDAuY0j4udH2oqvd6V6IEgsN8bQeme8CkNj3uULmzNMrYtns7aJAJUC
BRAr3LtEs25wSZyVhQ8BAXQwA/wIxBaxEM/DER96N6o00qzzJog8nbKGH2S4achc
P0/96N9FXgnnoKybARfG+ZfliuuMRyt40MIkg1/Z5PzLg0m5dVzXgkYv7B98bI+8
dVuzENJRzBbbmSDemcTaF2KWdtW7U66xFSP6S86RDOuQHzg6uCi2tmoJhvdWroWz
VVxGj4kAlQIFECtzeqp9h9s63RlgUQEB8UoD/ilKx2sUFzQwkM3DSRQZun5FoR1N
ujmt710NHkn3BFcRcBAU1o6VEHg0MlQXYEDk16YnhUEZDy0QuMrxXWcLee1UP2jl
k0+ezNP5NMsSMt7HVjGJ+xi+exc6+Clyl/WjSEhpears1kBWAI12eVbO1uI/uGr5
vksZqkPoT8a1WaumiQCNAgUQK21uiULwpfyXKdSbAQFErAO/TsSmabCpT4Uzi/zQ
14yBiDqwatj8mhaE60nG8wiqQv4W4hmDXjrxGRr0LQNM3eBLCkoEpKIDmL1RuwtB
Z4AUsqoJTC2Yq46KnSznfqgY0F+C4kSptxo0p1KJ79FKFlW9dyTKVBB1WOBzbsw1
Kx/oog+DmUH0VIbYiQCVAgUQK21W3FTdX6I8ZiRnAQHErAP+P+WalKGRrgM/v8pp
o4YKYmXxjsLUx89WJXMkxkoSzB7/ny7ITHo9i42qR+aXlsa+gqxdwRDrpI6k9FFF
AhZ8s/bdZKpnXOJOjaj4P5LRbYem2VOZ8e9omXhHfz7a7NRUTimLA/q6lphy3Ulp
byrua5Q8BkzQzI3RgbKEPshuOjeJAJUCBRArZuU6hr7UV33/hTMBATvGA/42wy/x
BEVb5bOQiFTpEuB80Df53zt+b4TmfeueMMiDvvj7A5joLk7X/7x6HaBxHN/thbd6
S9NncWJfvy/PMnsQEmKarn45kwn/2xxDu2Po7pUN6Uj9DyA9uY+ilzqfk7ZA3RwH
cbZA0Qv6LDNbapJXgFANwOC1tRB6yLtSG3T0iw==
=V30c
-----END PGP PUBLIC KEY BLOCK-----
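To put rough numbers on the passphrase-entropy point in the post above, here is a back-of-the-envelope sketch in Python. The bits-per-character rate and the strength targets are illustrative assumptions, not figures from the original.

    # Back-of-the-envelope passphrase arithmetic; all figures illustrative.
    bits_per_char = 1.3              # roughly Shannon's estimate for English
    generous = 2 * bits_per_char     # "twice that", as the post allows

    for target in (56, 80, 128):     # assumed symmetric-strength targets, in bits
        print("a %3d-bit-equivalent passphrase needs about %.0f characters"
              % (target, target / generous))

Even at the generous rate, a phrase carrying 128 bits of entropy would run to roughly fifty characters of ordinary English, which is the point being made above.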
On Dec 23, 3:39pm, Matt Blaze wrote:
essential services for me like taking backups. This means that, no matter how hard I try, it's impossible for me to be sure that none of my files have been tampered with.
Some time ago I looked at tripwire, and wondered if a personal version could be produced which would allow a similar function to be performed on a system which had a hostile sysadmin (a position I was in a few years ago, and it's not pleasant).

The problem is that although you can protect the data file of hashes (by using a pass phrase to encrypt it), protecting the binary which does the checking is rather more difficult. Sure, you can checksum it and lock that checksum in the encoded file, but a hostile attacker could make the self-check a special case which always returns good, and then snaffle the pass phrase. Once they had the pass phrase, the protection is dead.

Over lunch (ie. warning, not a lot of thought given to this :), I wondered if you could do something like this:

Have a simple bootstrap loader, and the encrypted main program. The bootstrap loader asks for the pass phrase, and decrypts the main program and runs it. The main program checks the loader for modification, and if there is a problem, refuses to go further with an indication to the original account owner (eg. overwrite the main program with one which simply prints "Main prog hacked at <date/time>"). If all is well, the main program asks for a further pass phrase to the data file, and goes off to check all of the files in the listed areas (for the moment, the details of how it does that are not particularly relevant - all I am concerned about in this post is the protection of the binary).

Obvious attacks:

1. Attack the main program. You can't, because it is encrypted (presumably with some sort of hash in there too), and so you can only trash it.

2. Attack the loader. This is possible, because it is in plaintext form. Dangers:

   a. The attacker may get your pass phrase. However, when the main program then sums the original loader, it will notice that it has changed and won't go further. Your pass phrase for the main program has been compromised, but the data file remains ok.

   b. The attacker may get a copy of the main program. That's fine, because the main program won't run (see above), and the user will be warned.

   c. (The main danger). The loader program loads the main program, but before copying itself back it replaces the trojan version with the original image (possibly even resetting the timestamp on the file). This is a problem.

3. Attack the datafile. Same as (1), really.

4. Attack the running image.

Both 2(c) and 4 are the main problems. Using gcore or procfs the malicious system admin can grab a running copy of the binary, and do what they like. There are many tricks to avoiding the danger of 4 (which applies to all crypto code running on hostile systems), but all are just that: tricks. They can be overcome given enough time and motivation. But 2(c) is the hassle, as there is no obvious way around this, and it is quite easy to do.

Anyone got any good ideas? I have a lurking suspicion that there are no solutions to this problem, and we're down to the same issue of securing the transport system which delivers a binary (which is not possible in this case).

Just an interesting diversion over a very boring lunch....

Ian.
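A minimal sketch of the loader-plus-encrypted-main-program scheme just described, in Python. The file name and the toy keystream cipher are illustrative assumptions rather than Ian's actual design; a real tool would use a vetted cipher, and attacks 2(c) and 4 above remain open regardless.

    import getpass, hashlib

    def keystream(passphrase, n):
        # Toy counter-mode keystream built from SHA-256 -- illustration only;
        # a real tool would use a vetted cipher.
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(passphrase + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def run_encrypted_main(encrypted_path):
        # Step 1: the plaintext bootstrap loader asks for the pass phrase.
        passphrase = getpass.getpass("loader pass phrase: ").encode()
        blob = open(encrypted_path, "rb").read()
        # Step 2: decrypt the main program.
        code = bytes(a ^ b for a, b in zip(blob, keystream(passphrase, len(blob))))
        # Step 3: run it.  The decrypted main program is expected to begin by
        # hashing this loader file against a value carried inside the encrypted
        # image, and to refuse to go further -- leaving a "Main prog hacked at
        # <date/time>" note -- if the loader has been modified.
        exec(compile(code, "<tripwire-main>", "exec"))

    if __name__ == "__main__":
        run_encrypted_main("tripwire.enc")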
From: "Ian Farquhar" <ianf@sydney.sgi.com> re: personal account tripwire The problem is that although you can protect the data file of hashes (by using a pass phrase to encrypt it), protecting the binary which does the checking is rather more difficult. Why not recompile the binary? All it needs to be is something like md5.c. Eric
Eric wrote:

| From: "Ian Farquhar" <ianf@sydney.sgi.com>
|
| re: personal account tripwire
|
| The problem is that although you can protect the data file of
| hashes (by using a pass phrase to encrypt it), protecting the
| binary which does the checking is rather more difficult.
|
| Why not recompile the binary? All it needs to be is something like
| md5.c.

Or leave the binary on a floppy (assuming you can access floppies, or some other removable media.)

The problem reduces pretty quickly to a variant of trusting trust. root can hack the kernel, the math libraries, your shell, or several other points to make life difficult. Can you go through a set of steps so convoluted as to catch this? Probably. But in all likelihood, it's easier to get a personal machine on which to store private files.

Adam

--
"It is seldom that liberty of any kind is lost all at once." -Hume
Eric Hughes says:
From: "Ian Farquhar" <ianf@sydney.sgi.com>
re: personal account tripwire
The problem is that although you can protect the data file of hashes (by using a pass phrase to encrypt it), protecting the binary which does the checking is rather more difficult.
Why not recompile the binary? All it needs to be is something like md5.c.
Read Ken Thompson's Turing Award lecture for why that isn't sufficient. It's quite amusing.

Let's face it -- if you are truly paranoid, you have to carry your machine around with you at all times and chain it to you. It's all a question of threat model. For national security type attacks nothing less than "chain machine to wrist" will do. For stopping a casual attack, much less is needed. It's all in the threat model...

Perry
    Read Ken Thompson's Turing Award lecture for why that isn't
    sufficient. It's quite amusing.

I'm quite familiar with the work. [For those who aren't, it's about compilers that compile in self-perpetuating bugs from their own source code.]

The question, however, is not one of possibility but timeliness. Attacks against persistent information are easier than attacks against transient information. If the sysadmin is going to go modifying compilers, it's no longer mere annoyance.

Eric
On Dec 27, 6:40pm, Eric Hughes wrote:
The problem is that although you can protect the data file of hashes (by using a pass phrase to encrypt it), protecting the binary which does the checking is rather more difficult.
Why not recompile the binary? All it needs to be is something like md5.c.
I take it you mean recompile the binary every time? Because you'd need to have source around to recompile it from, and the attacker could modify that source even more easily than he or she could hack the binary. The idea is to make tampering with the binary detectable.

Ultimately, the aim is to make it too difficult to break and thus cause most people to give up. I am pretty much certain that to make such a system perfectly secure under these conditions is impossible. What I am aiming for, I suppose, is to make sure that there are no trivial attacks which could compromise security. If you've got a system admin who is willing and capable of hacking exec in the kernel, then it's time to move systems. :)

Ian.
From: "Ian Farquhar" <ianf@sydney.sgi.com> I take it you mean recompile the binary every time? Because you'd need to have source around to recompile it from, and the attacker could modify that source even more easily than he or she could hack the binary. The idea is to make tampering with the binary detectable. Recompile the binary from newly uploaded source each time. MD5 source isn't more than about 10K long. That's all of a few seconds of upload time. I am pretty much certain that to make such a system perfectly secure under these conditions is impossible. That's right. Eric
On Dec 27, 8:54pm, Eric Hughes wrote:
I take it you mean recompile the binary every time? Because you'd need to have source around to recompile it from, and the attacker could modify that source even more easily than he or she could hack the binary. The idea is to make tampering with the binary detectable.
Recompile the binary from newly uploaded source each time. MD5 source isn't more than about 10K long. That's all of a few seconds of upload time.
Irritating, and also insecure (system admin intercepts the upload and replaces it with source of his or her own).

As has been stated, it's a matter of defining a threat model. IMO, the most likely threat is from pass phrase grabbing (from a sniffer, annex box or whatever), which destroys the security of almost all of these schemes. Modification attacks are possible, although I doubt that the lengths I have described would be useful.

As a serious project, though, a personal version of tripwire would not be a bad cypherpunk project, and possibly a nice testbed for working out some anti-tampering techniques.
I am pretty much certain that to make such a system perfectly secure under these conditions is impossible.
That's right.
Is there a standard proof for this, though? I suspect that there is, but have not discovered it.

Ian.
From: "Ian Farquhar" <ianf@sydney.sgi.com>
Recompile the binary from newly uploaded source each time. MD5 source isn't more than about 10K long. That's all of a few seconds of upload time.
    Irritating [...]

??? An upload can be automated, just like any other solution.

    [...] and also insecure (system admin intercepts the upload and
    replaces it with source of his or her own).

_Every_ solution to this problem is insecure, when it comes down to it. What you asked for is something that makes things more difficult.

Interception can be made quite difficult. Make the "upload" consist of simulating a keyboard typing the source code into emacs. Change the file name each time. Obfuscate the source by redefining variables each time. Pipe the output directly into the compiler; hell, compile straight from stdin!

You can't go about protecting against the modification of binaries by relying upon one of your binaries being better protected than the rest. There's an infinite regress involved here. The solution is to go outside the regress. Recreating the binary from scratch is one way. I'm sure there are others.
I am pretty much certain that to make such a system perfectly secure under these conditions is impossible.
    Is there a standard proof for this, though? I suspect that there
    is, but have not discovered it.

Get the essay that Perry mentioned and start there. Keep in mind that object code can be interpreted in many different ways, only one of them typically expected.

Eric
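A rough sketch of the "redefine variables and compile straight from stdin" idea in Eric's post above, assuming a gcc-style cc that accepts "-x c -" to read source from standard input. The identifier list and the naive regular-expression renaming are illustrative assumptions; the aim is only to raise the attacker's cost, not to defeat a determined one.

    import random, re, string, subprocess

    def mutate(source, names):
        # Give each listed identifier a fresh random name, so the text the
        # compiler sees is different every session (weak obfuscation only).
        for name in names:
            fresh = "v" + "".join(random.choices(string.ascii_lowercase, k=12))
            source = re.sub(r"\b%s\b" % name, fresh, source)
        return source

    md5_source = open("md5.c").read()            # freshly uploaded each session
    mutated = mutate(md5_source, ["buf", "ctx", "len", "digest"])

    # Compile straight from stdin; no mutated source file ever touches the disk.
    subprocess.run(["cc", "-x", "c", "-", "-o", "md5check"],
                   input=mutated.encode(), check=True)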
Eric Hughes writes:
From: "Ian Farquhar" <ianf@sydney.sgi.com>
Recompile the binary from newly uploaded source each time. MD5 source isn't more than about 10K long. That's all of a few seconds of upload time.
Irritating [...]
??? An upload can be automated, just like any other solution.
Then the automated part (script or whatever) simply becomes another piece that needs to be protected.
You can't go about protecting against the modification of binaries by relying upon one of your binaries being better protected than the rest. There's an infinite regress involved here. The solution is to go outside the regress. Recreating the binary from scratch is one way. I'm sure there are others.
No -- in the absence of other measures, recreating the binary from scratch is not such a way. You've merely added the compiler and its associated utilities to your regression list. Nothing is gained -- other than additional irritation and delay.

-- Jeff
From: Jeff Barber <jeffb@sware.com>
??? An upload can be automated, just like any other solution.
    Then the automated part (script or whatever) simply becomes another
    piece that needs to be protected.

There need be no part of the script/etc. that relies upon persistent information on the target machine. You can simulate the whole thing as typing, if need be.

    You've merely added the compiler and its associated utilities to
    your regression list.

It occurs to me that there's no need even to use the compiler, if you're willing to upload binary images directly. And if you want to use the compiler, the effort involved in making a recognizer for an ever mutating source is not trivial. Variable names can change, parse trees can change, control structures can change.

    Nothing is gained -- other than additional irritation and delay.

Additional cost of subversion is _exactly_ the issue here. We're not talking about perfect security; that's impossible in this case, and has been acknowledged as impossible. What is at issue is making it difficult for a not-completely-dedicated-to-your-destruction sysadmin to subvert personal files.

Furthermore, the pragmatics of a personal tripwire are that it only needs to indicate failure once. As soon as I found out that my files weren't safe in their place of residence, I'd leave. The practical question should not be one of fighting a running battle with a hostile root; root always wins, period. A useful outcome of this discussion would be a feasible way of detecting the first modification. Almost always this will not be a full-scale effort.

Eric
Eric Hughes writes:
From: Jeff Barber <jeffb@sware.com>
Nothing is gained -- other than additional irritation and delay.
What is at issue is making it difficult for a not-completely-dedicated-to-your-destruction sysadmin to subvert personal files.
But you're advocating what are non-trivial measures in an attempt to solve a problem which is not the easiest attack anyway. You have been arguing that it might be possible to download a new MD5, then modify it in unusual ways to prevent hacking of the local compiler to recognize it. Then, when folks point out other ways to subvert your integrity check, you complain that you're not trying to solve ALL the problems, only a certain subset. I think the subset you've selected is arbitrary and not particularly realistic.

Let's face it, creating the compiler-to-recognize-MD5 is quite a difficult problem, and if I were your system administrator and wanted to obtain access to your files, creating a special compiler version or otherwise attempting to cause your integrity check to fail would be one of the last forms of attack I'd try.
Furthermore, the pragmatics of a personal tripwire are that it only needs to indicate failure once. As soon as I found out that my files weren't safe in their place of residence, I'd leave. The practical question should not be one of fighting a running battle with a hostile root; root always wins, period. A useful outcome of this discussion would be a feasible way of detecting the first modification. Almost always this will not be a full-scale effort.
I agree that would be useful. But the problem with this whole argument is that the number of things whose modification you need to detect is large and their detection is non-trivial.

One of the easiest ways to subvert your security is simply to record your keystrokes. It doesn't take a rocket scientist to hack your kernel (or whatever it's called on your OS) to do this. And how do you detect it? The original kernel can be restored after booting with a hacked kernel so you can't use modification times. Perhaps you can then detect that the system was rebooted? Well, maybe, but hiding that is not so difficult either, and a reboot may not necessarily seem suspicious in any case.

The bottom line is that, as an ordinary user, you are relying completely on your trust in the system administrator.

-- Jeff
On Wed, 28 Dec 1994, Jeff Barber wrote:
Let's face it, creating the compiler-to-recognize-MD5 is quite a difficult problem, and if I were your system administrator and wanted to obtain access to your files, creating a special compiler version or otherwise attempting to cause your integrity check to fail would be one of the last forms of attack I'd try.
In fact, you'd need a totally secure OS to try to achieve this goal. You can have the loader recognize the MD5 or other integrity measures. The loader could even contact an authorization server to see if you have paid the license fee to use the program...

-Thomas
participants (7)
- Adam Shostack
- eric@remailer.net
- Ian Farquhar
- Jeff Barber
- Matt Blaze
- Perry E. Metzger
- Thomas Grant Edwards