Key length security (calculations!)
Tim Mays writes:
I refer readers to the sci.crypt FAQ, the RSA FAQ, or books such as "Applied Cryptography." (Hint for those who don't want to: one time pads (Vernam ciphers) and things like RSA with 1000-digit moduli.)
("Enough effort" can be interpreted in a circular way to ensure the answer is "Yes," as a truism. This is meaningless if "enough effort" is impossible to achieve, as with OTPs, or is beyond the energy in the universe. If "enough effort" is interpreted to mean theft or rubber-hose cryptanalysis, all bets are off. But most people who ask the question I cited don't mean these loopholes.)
I have seen Tim posting statements to this effect many times, and because he is one of the more well-respected and listened-to voices on the list, I feel it important to examine this in some detail. While I agree that a 1000-bit modulus in RSA is adequate protection *in all probability*, even for national security secrets, I think it is far from clear that this will definitely be true ten, or even five, years from now.

Instead of just waving vague generalities around, though, let's do some nitty-gritty calculations. The people who cracked RSA-129 have themselves stated that they believe a 1024-bit modulus is at most 20,000 to 2,000,000 times more difficult to crack than RSA-129. For example, I recall Derek Atkins posting that he estimated a 1024-bit key to be 40,000 times harder than a 512-bit key, although I didn't save the posting. And Paul Leyland of Oxford posted:
RSA-129 is 425 bits; rather harder than 384-bit numbers. We estimate that 512-bit keys are about 20 times harder than RSA-129, if a more efficient but available algorithm is used. No one knows how much harder 1024-bit numbers are, but they will be nowhere near a trillion times harder than 384-bit keys. Best estimates suggest that 1024-bit numbers are about 10^4 to 10^5 times harder than 512-bit numbers.
OK, so the people in the civilian world working on this today say it is possible that a 1024-bit key is only 20,000 times harder than RSA-129 *using known algorithms*. Now let's really get our hands dirty: cracking RSA-129 was estimated to take 5000 mips-years. The NAL NWT 2/140 computer installed at the National Aerospace Laboratory in Tokyo is estimated at 357 Cray Y-MP equivalents. I estimate this to be equivalent to 200 Gips for the purposes of this computation (this is possibly where I am most off). 5000 mips-years = 1.58 x 10^17 instructions. This comes out to 9.13 days on the NAL NWT 2/140. If my estimates above are correct, scaling up from the 357 Cray equivalents above to the 7400 Cray equivalent computer due to be installed 4Q95, we go down to 10.5 hours. This is all for RSA-129, of course.

Still sounds pretty safe so far... if it really takes at least 20,000 times as long to crack a 1024-bit modulus, then it would still take the 7400 C.E. (Cray Equivalent) computer 24 years to crack a 1024-bit number.

BUT, the biggest worry is that no one knows how good the NSA's factoring algorithms are. I read recently that the NSA is the world's largest employer of mathematicians. The relative improvement in factoring algorithms from the introduction of the RSA-129 problem to its factoring almost 20 years later far exceeds even the exponential increase in computer speed over that same period (5 orders of magnitude? more?). We have no way of knowing how many orders of magnitude leeway we have, because as the moduli get larger, the factoring algorithm gets more and more important. Suppose the NSA has four orders of magnitude on us in the efficiency of their factoring algorithms. In that case, they might be able to crack a 1024-bit key as early as the end of 1995 ((20,000 x 10.5)/10^4 = 21 hours required). Granted, this may not be likely, but I think we have to take the possibility seriously.
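For anyone who wants to check the arithmetic, the chain of estimates reduces to a few multiplications. Every input here (the 5000 mips-year cost, the 200 Gips rating, the 357 and 7400 Cray-equivalent figures, the 20,000x scaling factor) is the post's own estimate, not measured data:

```python
# Back-of-envelope check of the figures in the text.
SECONDS_PER_YEAR = 3.156e7

mips_years = 5_000                                    # estimated cost of RSA-129
instructions = mips_years * 1e6 * SECONDS_PER_YEAR    # ~1.58e17 instructions

gips = 200e9                                          # assumed NAL NWT 2/140 speed
days_nwt = instructions / gips / 86_400               # ~9.1 days at 357 Cray equiv.

hours_7400 = days_nwt * 24 * 357 / 7_400              # ~10.6 hours on the 4Q95 machine

years_1024 = hours_7400 * 20_000 / 8_760              # ~24 years for a 1024-bit key

hours_leeway = hours_7400 * 20_000 / 1e4              # ~21 hours given a 10^4 edge

print(f"{instructions:.2e} instructions, {days_nwt:.1f} days, "
      f"{hours_7400:.1f} h, {years_1024:.0f} y, {hours_leeway:.0f} h")
```

The numbers reproduce the post's 9.13 days, 10.5 hours, 24 years, and 21 hours to within rounding.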
At this point, 1024-bit keys cease to be secure for matters of critical national security (but are still good for everything else). Now let's continue with our worst-case scenario... suppose that computer speed doubles every 3.3 years over the next decade, and that further algorithmic breakthroughs continue to at least match this rate of doubling (not likely, perhaps, but *possible*). Then just one decade later, in 2005, the computer power of the NSA is 8 times greater, and the algorithms are 8 times faster, for a total speed increase of 64. At this point, they could crack a 1024-bit key in just 20 minutes (using all their resources), or 72 keys per day. At this point, I start to be uncomfortable trusting my security to a 1024-bit key length.

So, it seems *possible*, even if by no means probable, that a 1024-bit key length is only good for the next decade or so. My intent is not to foster paranoia, but cypherpunks, of all people, should take as critical a view of key length security as possible. I suggest that people who state that they want 1200-bit or even 2000-bit key sizes in PGP no longer be ridiculed... the issue is subjective, as we have no way of knowing what the NSA's factoring algorithms are like.

Doug
___________________________________________________________________
Doug Cutrell                                General Partner
doug@OpenMind.com                           Open Mind, Santa Cruz
===================================================================
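The decade projection in Doug's worst-case scenario can be sketched the same way; the 3.3-year doubling period, the assumption that algorithms keep pace, and the 21-hour starting point are all the scenario's own premises (with exact exponents the result comes out slightly faster than the rounded 64x / 20-minute / 72-per-day figures in the text):

```python
# Worst-case decade projection from the text, using its stated assumptions.
hw_speedup = 2 ** (10 / 3.3)          # ~8x hardware over one decade
alg_speedup = hw_speedup              # assumed matching algorithmic gains
total = hw_speedup * alg_speedup      # ~64x combined

minutes_per_key = 21 * 60 / total     # starting from the ~21-hour figure above
keys_per_day = 24 * 60 / minutes_per_key

print(f"{total:.0f}x -> {minutes_per_key:.0f} min per 1024-bit key, "
      f"{keys_per_day:.0f} keys/day")
```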
Doug Cutrell writes:
Tim Mays writes:
Singular, but no matter.
I refer readers to the sci.crypt FAQ, the RSA FAQ, or books such as "Applied Cryptography." (Hint for those who don't want to: one time pads (Vernam ciphers) and things like RSA with 1000-digit moduli.)
("Enough effort" can be interpreted in a circular way to ensure the answer is "Yes," as a truism. This is meaningless if "enough effort" is impossible to achieve, as with OTPs, or is beyond the energy in the universe. If "enough effort" is interpreted to mean theft or rubber-hose cryptanalysis, all bets are off. But most people who ask the question I cited don't mean these loopholes.)
I have seen Tim posting statements to this effect many times, and because he is one of the more well respected and listened to voices on the list, I feel it important to examine this in some detail. While I agree that 1000
Before going further, let me emphasize my mention in my section above of one-time pads, or Vernam ciphers. These are *information-theoretically secure*, which means that no amount of computer power can *ever* break them. Period. (In my characteristic way, I included a sidebar mention of stealing the key and/or using rubber-hose cryptanalysis, which some may think finessed my point about not being able to break OTPs. It does not, insofar as "breaking" the cipher has cryptographic meaning.)

As for RSA, that is only computationally secure, and depends on advances in factoring, as we all know. Many of us think there will not be "dramatic" advances in factoring, for various reasons, but this of course cannot be proved (one can't prove the nonexistence of some clever approach, logically). Factoring is suspected to be in the class NP (or even harder, some suspect), but it has not yet been proved to be so. If factoring is NP-complete, and if P = NP, then fast factoring methods may be found (fast = polynomial in length). Crypto books deal with this issue better than I can here.
Still sounds pretty safe so far... if it really takes at least 20,000 times as long to crack a 1024 bit modulus, then it would still take the 7400 C.E. (Cray Equivalent) computer 24 years to crack a 1024 bit number. BUT, the biggest worry is that no one knows how good the NSA's factoring algorithms are. I read recently that the NSA is the world's largest employer of mathematicians. The relative improvement in factoring algorithms since the
Not to attack Doug's point, which has validity here (that we don't know what factoring advances NSA may have made), but I personally think the combined capabilities of "public domain mathematicians" are now far greater than what NSA has. Shamir, Odlyzko, Blum, Micali, Rackoff, Goldwasser, Solovay, Berlekamp, etc., are top-flight researchers, publishing many papers a year on these topics. It is unlikely that some GS-14 mathematicians at the Fort, not able to publish openly, have made much more progress. I think the resurgence of crypto in the 70s, triggered by public key methods and fueled by complexity theory breakthroughs, caused a "sea change" in inside-NSA vs. outside-NSA algorithm expertise.
So, it seems *possible*, even if by no means probable, that a 1024 bit key length is only good for the next decade or so. My intent is not to foster paranoia, but cypherpunks, of all people, should take as critical a view of key length security as possible.
I suggest that people who state that they want 1200 bit or even 2000 bit key sizes in PGP no longer be ridiculed... the issue is subjective, as we have no way of knowing what the NSA's factoring algorithms are like.
I have never ridiculed them (in fact, I use 1280 bits or somesuch), and I think the whole recent matter of Phil Zimmermann charging that "amateur cryptologists" are tainting his reputation and that of PGP has some supreme ironies. Seems to me I heard a guy named Bidzos making the same points..... (I'm not attacking Phil, just noting the ironies of Phil now attempting to control the evolution of "his" intellectual property. The "naming" issue is minor--and that's what digital signatures are for, anyway.)

A 3000-bit key may very well require more total energy to break than is available in the universe. Barring P = NP sorts of breakthroughs, of course. (I did a post on this last week.)

The bottom line is sometimes lost in the debate:

* It is just not true that "any cipher can be broken if the NSA really wants to." (This was the original point I was responding to.)

* Some ciphers are absolutely unbreakable, and others are effectively unbreakable, or soon will be. Increased key length is computationally "cheap" to use, but "expensive" to break.

(The current imbroglio about key lengths of PGP 2.6 is a passing implementation detail, having to do with how PGP does math. By Version 3.0, speculatively, it will likely be increased dramatically. No big deal. People should generate new keys and flush the old ones, anyway.)

--Tim May

--
..........................................................................
Timothy C. May         | Crypto Anarchy: encryption, digital money,
tcmay@netcom.com       | anonymous networks, digital pseudonyms, zero
408-688-5409           | knowledge, reputations, information markets,
W.A.S.T.E.: Aptos, CA  | black markets, collapse of governments.
Higher Power: 2^859433 | Public Key: PGP and MailSafe available.
"National borders are just speed bumps on the information superhighway."
To quote you: <<Not to attack Doug's point, which has validity here (that we don't know what factoring advances NSA may have made), but I personally think the combined capabilities of "public domain mathematicians" are now far greater than what NSA has. Shamir, Odlyzko, Blum, Micali, Rackoff, Goldwasser, Solovay, Berlekamp, etc., are top-flight researchers, publishing many papers a year on these topics. It is unlikely that some GS-14 mathematicians at the Fort, not able to publish openly, have made much more progress. I think the resurgence of crypto in the 70s, triggered by public key methods and fueled by complexity theory breakthroughs, caused a "sea change" in inside-NSA vs. outside-NSA algorithm expertise.>>
You mention Shamir, etc. However, I would point out that even if any of the original RSA mathematicians found a better factoring algorithm, they'd be more than likely to keep it under lock and key. The obvious reason is that their money supply depends on such an algorithm being suppressed. Now, someone outside of their circle, with a little less to worry about regarding the impact of such a factoring algorithm, would be likely to publish it, but I doubt that PKP's founders would.
Arsen Ray A. writes:
To quote you: <<Not to attack Doug's point, which has validity here (that we don't know what factoring advances NSA may have made), but I personally think the combined capabilities of "public domain mathematicians" are now far greater than what NSA has. Shamir, Odlyzko, Blum, Micali, Rackoff, Goldwasser, Solovay, Berlekamp, etc., are top-flight researchers, publishing many papers a year on these topics. It is unlikely that some GS-14 mathematicians at the Fort, not able to publish openly, have made much more progress. I think the resurgence of crypto in the 70s, triggered by public key methods and fueled by complexity theory breakthroughs, caused a "sea change" in inside-NSA vs. outside-NSA algorithm expertise.>>
You mention Shamir, etc. However, I would point out that even if any of the original RSA mathematicians found a better factoring algorithm, they'd be more than likely to keep it under lock and key. The obvious reason is that their money supply depends on such an algorithm being suppressed.
Now, someone outside of their circle, with a little less to worry about regarding the impact of such a factoring algorithm, would be likely to publish it, but I doubt that PKP's founders would.
Several points:

1. Adi Shamir sold out what little share he had some years back. He has no financial links to PKP or RSADSI.

2. Shamir is Israeli. (This has led to more than one humorous situation in which Shamir has received notification from the U.S. government that he cannot "export" something he's working on--as an Israeli, living in Israel.)

3. Shamir was the coinventor (with Biham), or at least the recent rediscoverer, of differential cryptanalysis. He apparently felt no constraint to not publish.

4. Some of the others I listed, such as Odlyzko, are in fact the known leaders in making improvements in factoring. (Not that various linear factors matter much, in the long run, of course.)

It's only speculation as to the relative competence of mathematicians inside vs. outside the NSA; my main point remains that the outside community is very dynamic and robust and shows no signs that I can see of holding back on reporting breakthroughs. Nor could a major breakthrough be contained, I think.

--Tim May
To quote you: <<Not to attack Doug's point, which has validity here (that we don't know what factoring advances NSA may have made), but I personally think the combined capabilities of "public domain mathematicians" are now far greater than what NSA has. Shamir, Odlyzko, Blum, Micali, Rackoff, Goldwasser, Solovay, Berlekamp, etc., are top-flight researchers, publishing many papers a year on these topics. It is unlikely that some GS-14 mathematicians at the Fort, not able to publish openly, have made much more progress. I think the resurgence of crypto in the 70s, triggered by public key methods and fueled by complexity theory breakthroughs, caused a "sea change" in inside-NSA vs. outside-NSA algorithm expertise.>>
You mention Shamir, etc. However I would point out that even if any of the original RSA mathematicians found a better factoring algorithm, they'd be more than likely to keep it under lock and key. The obvious reason is that their money supply depends on such an algorithm being suppressed.
What about Shamir's three-pass key exchange protocol (explained briefly below)? It's the perfect key exchange algorithm. It obsoletes public key systems entirely, as long as you only need to exchange keys and not authenticate. I'd say that is pretty decent evidence that he does still do things to help the field when it might hurt RSADSI. (Although I wouldn't say the same thing about all of them.)

Three-pass key exchange: choose a commutative symmetric encryption algorithm.

Step 1: A encrypts the session key in his personal symmetric key (he doesn't share it with anybody) and sends the message to B: Ea(K)

Step 2: B encrypts this in her personal symmetric key and sends it back to A: Eb(Ea(K))

Step 3: A decrypts the message and sends it back to B: Da(Eb(Ea(K))). Since we chose a commutative algorithm, this is Eb(K).

Step 4: B decrypts with her key, and Eve (a la Schneier) has no clue.

Mallet can't intercept your communication, but he can talk to you and, unless you have some sort of authentication, impersonate the other party.

Example commutative algorithm, out of Schneier, by Shamir, based on the hardness of factoring: choose a large prime p. Choose an encryption key e that is a large prime less than p. Choose d so that d*e mod (p-1) = 1 (i.e., the multiplicative inverse of e mod (p-1)).

C = P^e mod p
P = C^d mod p

Cheers,
Jason W. Solinsky
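The steps above can be sketched in a few lines using the commutative exponentiation cipher from the text. This is a toy illustration, not a secure implementation: the Mersenne prime and the session-key value are arbitrary demo choices, and a real run would use a much larger prime.

```python
from math import gcd
from random import randrange

p = 2**61 - 1                     # a Mersenne prime; far too small for real use

def keypair():
    """Pick e with gcd(e, p-1) = 1 and its inverse d mod (p-1)."""
    while True:
        e = randrange(3, p - 1)
        if gcd(e, p - 1) == 1:
            return e, pow(e, -1, p - 1)

def crypt(m, k):
    """m^k mod p; commutative: crypt(crypt(m, a), b) == crypt(crypt(m, b), a)."""
    return pow(m, k, p)

ea, da = keypair()                # A's private exponents (never shared)
eb, db = keypair()                # B's private exponents (never shared)

K = 123456789                     # session key A wants to deliver

msg1 = crypt(K, ea)               # pass 1, A -> B: Ea(K)
msg2 = crypt(msg1, eb)            # pass 2, B -> A: Eb(Ea(K))
msg3 = crypt(msg2, da)            # pass 3, A -> B: Da(Eb(Ea(K))) = Eb(K)
recovered = crypt(msg3, db)       # B removes the last layer

assert recovered == K
```

Decryption works because e*d = 1 mod (p-1), so stripping a layer is just another exponentiation; as the text notes, nothing here authenticates the parties.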
Timothy C. May writes:
Factoring is suspected to be in the class NP (or even harder, some suspect), but it has not yet been proved to be so.
Those who have studied the matter generally believe that factoring is NP, but is not NP-complete. Factoring cannot be "even harder than NP" since a simple-minded brute force attack is 2^(n/2), which is only NP.

As Timothy May points out, if factoring is NP, then modest increases in key length can easily defeat enormous improvements in factoring.
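The 2^(n/2) brute-force bound mentioned above is just trial division: a composite n-bit number has a factor at or below its square root, so at most about 2^(n/2) candidates need checking. A minimal sketch:

```python
def trial_factor(n):
    """Smallest factor of n by brute force; the loop runs at most ~sqrt(n)
    times, i.e. about 2^(bits/2) iterations for an n-bit input."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n                      # no divisor found: n is prime

print(trial_factor(3 * 1_000_003))   # a small factor is found quickly
print(trial_factor(65537))           # a prime survives the whole search
```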
... if P = NP, then fast factoring methods may be found (fast = polynomial in length).
In the highly unlikely event that P = NP, then we have also solved, as an almost trivial special case, the problems of true artificial intelligence, artificial consciousness, and artificial perception, and the failure of one particular form of crypto will not be noticed in the midst of such radical changes.

--
---------------------------------------------------------------------
We have the right to defend ourselves and our
property, because of the kind of animals that we     James A. Donald
are. True law derives from this right, not from
the arbitrary power of the omnipotent state.         jamesd@netcom.com
First Tim wrote:
Factoring is suspected to be in the class NP (or even harder, some suspect), but it has not yet been proved to be so.
NP is nondeterministic polynomial time, meaning that you can verify the answer in polynomial time. You need not be able to derive the answer in P time. The "nondeterministic" part means that the machine guesses the reason for the correct answer and then verifies that it has the right answer. The reasoning is encoded in a piece of data called a witness.

Since one can multiply two numbers together quickly, factoring is NP-hard. (X-hard means that the answer comes from a "short" sequence of decision questions in complexity class X.) The verification, multiplication, is in P, so factoring, the inverse of multiplication, is NP-hard. Since every P problem can be verified in P time (by running the P-time algorithm, without the need for a witness), P is a subset of NP. The unknown question is whether it is a proper subset.

Then James wrote:

Those who have studied the matter generally believe that factoring is NP, but is not NP complete.

Factoring isn't in NP. Factoring is NP-hard. Problems in P and NP are decision problems, i.e. problems which have true or false answers. NP-hard means that the problem can be reduced to answering a short list of NP problems. In this case, those questions might be "Is the second-lowest bit of the smallest factor a 1?" and so on--questions about specific properties of the factorization. Note that a factorization makes a suitable witness for every such NP question.

Factoring cannot be "even harder than NP" since a simple minded brute force attack is 2^(n/2), which is only NP

2^n problems give you E, exponential time. There's also NE, nondeterministic exponential time: problems which have witnesses verifiable in E time. Merely having an exponential time algorithm does not mean that the problem is in NP. NP is a subset of E, however. The easy algorithm is exhaustive search of the space of possible witnesses, which is exponential in the length of the P-time verification method, and therefore exponential in the length of the input.
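The witness idea above can be made concrete: a factorization is data that lets the claim "n factors as shown" be checked in polynomial time (essentially one multiplication), no matter how long the factorization took to find. A small sketch:

```python
def check_witness(n, factors):
    """Polynomial-time check that `factors` is a valid factorization of n."""
    product = 1
    for f in factors:
        product *= f              # verification is just multiplication
    return product == n and all(f > 1 for f in factors)

print(check_witness(3 * 5 * 7, [3, 5, 7]))   # a correct witness verifies
print(check_witness(3 * 5 * 7, [3, 5, 11]))  # a wrong one is rejected
```

Finding the factors of RSA-129 took an estimated 5000 mips-years; verifying the result is this one multiplication.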
As Timothy May points out, if factoring is NP, then modest increases in key length can easily defeat enormous improvements in factoring.

Also not quite true. Consider a putative problem whose provably best algorithm is O(n^(log log n)). This algorithm dominates every polynomial (and hence is _not_ in P), but grows extremely slowly. How extremely? Take logs base 10 and n = 1 googol. The calculation yields O(n^2). No such algorithms or problems are known, I might add; neither is their existence firmly denied.

Eric
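Eric's googol figure is a one-liner to check: with logs base 10, the exponent log log n at n = 10^100 is exactly 2, so n^(log log n) behaves like n^2 at that scale.

```python
from math import log10

n = 10 ** 100                    # one googol
exponent = log10(log10(n))       # log10(100) = 2, so n^(log log n) = n^2 here
print(exponent)
```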
Still sounds pretty safe so far... if it really takes at least 20,000 times as long to crack a 1024 bit modulus, then it would still take the 7400 C.E. (Cray Equivalent) computer 24 years to crack a 1024 bit number. BUT, the biggest worry is that no one knows how good the NSA's factoring algorithms are. I read recently that the NSA is the world's largest employer of mathematicians. The relative improvement in factoring algorithms since the
Not to attack Doug's point, which has validity here (that we don't know what factoring advances NSA may have made), but I personally think the combined capabilities of "public domain mathematicians" are now far greater than what NSA has. Shamir, Odlyzko, Blum, Micali, Rackoff, Goldwasser, Solovay, Berlekamp, etc., are top-flight researchers, publishing many papers a year on these topics. It is unlikely that some GS-14 mathematicians at the Fort, not able to publish openly, have made much more progress. I think the resurgence of crypto in the 70s, triggered by public key methods and fueled by complexity theory breakthroughs, caused a "sea change" in inside-NSA vs. outside-NSA algorithm expertise.
I disagree with this, and I would cite as a case in point the fact that differential cryptanalytic attacks were not "discovered" until 1990, while a relatively small team of IBM cryptologists had them back in 1974 when they made DES. NSA apparently had them before then. This is why I would rather find a fast, secure multiple-DES method based on splitting, and not have to use IDEA, which is so new. Before I was born, NSA knew all of these things, which were not figured out by the academic community until this decade. (Of course they could also know of some sort of back door, but I think that the fact that NSA knew of differential cryptanalysis, and let an algorithm immune to it pass while they lowered the key size, says something about DES's security against attacks the academic community hasn't figured out yet.)

The bottom line is that NSA has demonstrated that they can outperform academia without public reviews of their methods (LEAFs aside for the moment [government agencies are, after all, required to do several stupid things each year]).

Cheers,
JWS
participants (6)
- doug@OpenMind.com
- hughes@ah.com
- jamesd@netcom.com
- rarachel@prism.poly.edu
- solman@MIT.EDU
- tcmay@netcom.com