Re: Photuris Primality verification needed
From: "Brian A. LaMacchia" <bal@martigny.ai.mit.edu>
Recently, someone asked for a smaller prime of only 512 bits for speed. This is more than enough strength for the keys needed for DES, 3DES, MD5 and SHA. Perhaps a smaller prime would also be easier to verify completely and robustly.
Our practical experience with discrete logs suggests that the effort required to perform the discrete log precomputations in (a) is slightly more difficult than factoring a composite of the same size in bits. In 1990-91 we estimated that performing (a) for a k-bit prime modulus was about as hard as factoring a (k+32)-bit composite. [Recent factoring work has probably changed this a bit, but it's still a good estimate.]
Thanks. I have added the [from Schneier] estimate e^((ln p)^(1/2) * (ln ln p)^(1/2)) and the number field sieve estimate e^((ln p)^(1/3) * (ln ln p)^(2/3)) to the Photuris draft, with a small amount of explanation.

Hilarie Orman posted that 512 bits gives only on the order of 56 bits of strength, 1024 bits yields 80 bits of strength, and 2048 bits yields 112 bits of strength. I do not have the facilities to verify her numbers.

As most of us agree that 56 bits is not enough (DES), the 512-bit prime seems a waste of time and a tempting target. I'd like to drop it, but Phil is inclined to keep it with a disclaimer.

Bill.Simpson@um.cc.umich.edu
Key fingerprint = 2E 07 23 03 C5 62 70 D3 59 B1 4F 5E 1D C2 C1 A2
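For reference, the number field sieve estimate above can be checked numerically. A minimal sketch (the function name is illustrative; it assumes the standard general-NFS constant (64/9)^(1/3), which the draft's bare formula omits):

```python
import math

def nfs_bits(modulus_bits, c=(64 / 9) ** (1 / 3)):
    """Rough NFS work estimate, in bits, for a modulus_bits-bit prime p:
    L(p) = exp(c * (ln p)^(1/3) * (ln ln p)^(2/3)).
    c = (64/9)^(1/3) is the usual general number field sieve constant."""
    ln_p = modulus_bits * math.log(2)
    # natural log of the estimated operation count
    work = c * ln_p ** (1 / 3) * math.log(ln_p) ** (2 / 3)
    return work / math.log(2)  # convert from natural log to bits

for k in (512, 1024, 2048):
    print(k, round(nfs_bits(k)))
```

This crude estimate gives roughly 64, 87, and 117 bits for 512-, 1024-, and 2048-bit moduli: the same ballpark as, though somewhat above, the 56/80/112 figures posted, which presumably come from a more careful analysis.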
Hilarie Orman posted that 512 bits gives only on the order of 56 bits of strength, 1024 bits yields 80 bits of strength, and 2048 bits yields 112 bits of strength. I do not have the facilities to verify her numbers.
As most of us agree that 56-bits is not enough (DES), the 512-bit prime seems a waste of time and a tempting target. I'd like to drop it, but Phil is inclined to keep it with a disclaimer.
Well, since we already require 56-bit DES in ESP in the interests of promoting basic interoperability, wouldn't a 512-bit prime be similarly sufficient? Again, I'm *not* going to recommend that people use it, only provide it for those who simply cannot use larger moduli for whatever reason (export controls or CPU limits). Phil
Well, since we already require 56-bit DES in ESP in the interests of promoting basic interoperability, wouldn't a 512-bit prime be similarly sufficient?
If you are willing to accept that in all likelihood, one year from now, some group will announce that it can "crack" all key exchanges that use the published modulus, then sure, call it sufficient. There is certainly precedent; it was my understanding that Sun did not change their SecureRPC modulus when informed of LaMacchia and Odlyzko's work.
Well, since we already require 56-bit DES in ESP in the interests of promoting basic interoperability, wouldn't a 512-bit prime be similarly sufficient?
*NO*, because you have to break the 56-bit DES separately every time, whereas doing the precomputation for the 512-bit prime is a one-time job. Once anyone has done the precomputation, *all* communications will be open to whoever is in possession of the database. I think there is good reason to believe that if the 512-bit prime is allowed, it will be widely used, and even if it is found breakable, it will not be easily changed (just think about the experience with Sun's "secure" RPC, and how quickly their primes have been changed - and it still has much narrower deployment than what is hoped for IPsec). Let me include below a message I sent to Bill Simpson.
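The asymmetry described above, per-exchange work for DES versus one-time precomputation for a fixed modulus, can be put in rough numbers. All three cost constants below are assumed placeholders for illustration, not measured figures:

```python
DES_WORK = 2 ** 56        # brute-force trials to break one DES key
DL_PRECOMP = 2 ** 64      # assumed: one-time index-calculus table for a fixed modulus
DL_PER_EXCHANGE = 2 ** 20 # assumed: cheap individual-log stage once the table exists

def total_work_des(n_exchanges):
    # every DES-protected exchange must be attacked from scratch
    return n_exchanges * DES_WORK

def total_work_dl(n_exchanges):
    # one precomputation, then each exchange against the shared
    # modulus is nearly free
    return DL_PRECOMP + n_exchanges * DL_PER_EXCHANGE
```

For a single exchange the discrete-log attack is more expensive, but over many exchanges the one-time cost amortizes away, which is what makes a single widely shared modulus attractive for mass surveillance.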
If it is kept, the commercial vendors will probably start using it as the default because it is faster than the others, and the State Department will pressure them to do so. Then we are again left with too weak a protection (in other words, pseudo-security which makes people believe they have protection, when they actually don't). After the precomputation, it is apparently cheap enough to crack the exchange that it can be done on a mass scale to all exchanges between a very large number of hosts. I find this very harmful, as it again provides no protection against mass surveillance. We are already too close to an Orwellian society.
The remarks there apply equally well to organized criminals, large corporations, and hostile governments. Or, suppose some group manages to get access to enough idle time, computes the database, and posts it on the Internet. I for one would be willing to contribute CPU time on machines where I have access to help such a group, because I think it is better that it is widely known and publicized when there is little security and privacy. Including the provision for the 512 bit prime is *HARMFUL* and *DANGEROUS*. Export control is not really an issue here, because if companies in the United States cannot provide secure networking, there are other companies in the world that can. Tatu Ylonen <ylo@cs.hut.fi>
Including the provision for the 512 bit prime is *HARMFUL* and *DANGEROUS*. Export control is not really an issue here, because if companies in the United States cannot provide secure networking, there are other companies in the world that can.
You've convinced me. I remove my proposal to include a recommended 512-bit modulus. The smallest standard modulus will remain 1024-bits. Phil
"William Allen Simpson" writes:
As most of us agree that 56-bits is not enough (DES), the 512-bit prime seems a waste of time and a tempting target. I'd like to drop it, but Phil is inclined to keep it with a disclaimer.
I agree with your approach, Bill -- it seems worth dropping. Something this dangerous isn't worth leaving around for people to accidentally use. Perry
Our practical experience with discrete logs suggests that the effort required to perform the discrete log precomputations in (a) is slightly more difficult than factoring a composite of the same size in bits. In 1990-91 we estimated that performing (a) for a k-bit prime modulus was about as hard as factoring a (k+32)-bit composite. [Recent factoring work has probably changed this a bit, but it's still a good estimate.]
This is also my understanding, which I got from you in the first place. I take it there have been no dramatic breakthroughs in the last few years in the discrete log problem? How heavily has it been studied in comparison with factoring?

Yes, in theory once an attacker spends enough time precomputing a table for a particular modulus he can then attack individual DH key exchanges with ease. This seems entirely analogous to attacking RSA. If you spend the time up front to factor my public RSA key, then you can also easily attack individual messages to me. So if I am willing to rely on a PGP key of, say, 1024 bits then I should be equally willing to rely on a 1024-bit DH modulus.

Now there is admittedly a practical difference here -- people *can* change their PGP RSA keys occasionally, though this is hard to do when you have a lot of signatures. And each user has his/her own PGP RSA key, and cracking that gives you only the traffic to that user. A public DH modulus will be shared by many more people -- making it a much more tempting target.

Still, requiring support of a fixed modulus for shared public use is important to promote a basic level of interoperability. This has its risks, but it should be okay *provided* it's a strong prime of sufficient strength to preclude the precomputation of the discrete log tables by even a highly motivated and resourceful attacker. And as a backup the protocol should provide for the optional use of private moduli between consenting parties. Sound reasonable?

Phil
Date: Tue, 7 Nov 1995 17:43:49 -0800 (PST) From: Phil Karn <karn@qualcomm.com> Cc: cypherpunks@toad.com, ipsec-dev@eit.COM
Our practical experience with discrete logs suggests that the effort required to perform the discrete log precomputations in (a) is slightly more difficult than factoring a composite of the same size in bits. In 1990-91 we estimated that performing (a) for a k-bit prime modulus was about as hard as factoring a (k+32)-bit composite. [Recent factoring work has probably changed this a bit, but it's still a good estimate.]
This is also my understanding, which I got from you in the first place. I take it there have been no dramatic breakthroughs in the last few years in the discrete log problem? How heavily has it been studied in comparison with factoring?

Factoring has received more attention than discrete log; certainly when it comes to net-wide computations it's all factoring. But that's partly due, I think, to a lack of targets to attack.

Still, requiring support of a fixed modulus for shared public use is important to promote a basic level of interoperability. This has its risks, but it should be okay *provided* it's a strong prime of sufficient strength to preclude the precomputation of the discrete log tables by even a highly motivated and resourceful attacker. And as a backup the protocol should provide for the optional use of private moduli between consenting parties. Sound reasonable?

You definitely should allow any modulus between consenting parties. As for what moduli the standard says "must be" (vs. "should be") supported, I don't know. Maybe the right thing to do is require conforming implementations to support a large modulus but include recommended smaller moduli. Then Alice can always force Bob to use the large modulus but, if both agree, they can use something smaller from the standard or even their own home-grown modulus.

--bal
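The negotiation rule sketched above (a mandatory large modulus plus optional smaller or home-grown ones) can be illustrated as follows. The function name, the set-based interface, and the 1024-bit mandatory size are illustrative assumptions, not Photuris protocol details:

```python
MANDATORY_BITS = 1024  # assumed: the modulus size every conforming implementation must support

def negotiate_modulus(alice_accepts, bob_accepts):
    """Pick a modulus size both parties accept.  The mandatory modulus is
    always implicitly acceptable to both sides, so negotiation cannot fail;
    a smaller modulus is chosen only when both sides explicitly list it."""
    common = (set(alice_accepts) | {MANDATORY_BITS}) & \
             (set(bob_accepts) | {MANDATORY_BITS})
    # listing a smaller size signals willingness to use it; otherwise
    # the negotiation falls back to the mandatory large modulus
    return min(common)
```

So Alice forces the large modulus simply by offering nothing smaller, while two consenting parties who both list 512 (or a private size) get it.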
participants (6)
- Brian A. LaMacchia
- Hilarie Orman
- Perry E. Metzger
- Phil Karn
- Tatu Ylonen
- William Allen Simpson