At 01:18 AM 10/17/00 -0700, Tim May wrote:
For a message encrypted (or signed, a related problem) with a PKS cipher, recovering the plaintext involves factoring the modulus...so far as we know. This factoring may be done with conventional computers, special-purpose computers, or even exotic computers (tanks of DNA computers, billions of Net-connected computers, superconducting geodes orbiting around Neptune, quantum computers, whatever.)
[snip]
A look at the work factors (cf. Rivest's paper of circa 1993-4, or Schneier's book, or any of several other books, or one's own calculations) will show the pointlessness of throwing more computer power at sufficiently large moduli.
Absent a breakthrough in factoring (and I mean a _major_ breakthrough, not a polynomial factor speedup), a modulus of thousands of decimal digits will never be factored. The "RSA-129" challenge becomes the "RSA-1000" challenge. Moore's Law won't do any good, nor will using ASICs or gate arrays or even nanotechnology.
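To put rough numbers on that claim: the standard heuristic running-time estimate for the number field sieve is L_n[1/3, (64/9)^(1/3)] = exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)). A quick Python sketch (order-of-magnitude only; the o(1) terms and constant factors are ignored, and the function name is mine, not from any of the cited papers) shows how the work factor explodes between a 129-digit and a 1000-digit modulus:

```python
import math

def gnfs_work_factor(decimal_digits):
    """Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)] for a modulus n
    with the given number of decimal digits. Order of magnitude only."""
    ln_n = decimal_digits * math.log(10)   # ln(n) for a d-digit number
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# Ratio of estimated sieving work: RSA-1000 vs. RSA-129.
ratio = gnfs_work_factor(1000) / gnfs_work_factor(129)
print(f"{ratio:.2e}")   # on the order of 10^25
```

Twenty-five orders of magnitude is the gap Moore's Law and custom silicon would have to close; doubling speed every 18 months buys you one order of magnitude per five years.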
A quantum computer _might_ make a difference (though this is unproven).
Rivest and Schneier's work factor discussions assume brute force or streamlined brute force such as GNFS. These remain super-polynomial in time (GNFS is sub-exponential, but nowhere near polynomial). Now hypothesize the effect a new factoring or Feistel cipher attack would have on those tables. Too many crypto pundits spout extrapolations of exponential work factor as proof that these ciphers are unbreakable. These are merely postulates, resting on the sort of assumption that has repeatedly proven wrong throughout the history of science: "X requires Y, but Y is impossible, so X is impossible." Until Z comes along, and 20 years later it's demonstrated in science or math classrooms as yet another example of bad logic.
Talking about SOS and ECL and 1 GHz and all is nonsensical. All of those technologies are as nothing when it comes to problems with work factors exponential in key length!
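The scaling is easy to demonstrate with a back-of-envelope Python sketch. The attacker below is an absurdly generous hypothetical of my own construction (10^12 trials per second on each of a billion machines); even so, a gigahertz here or there changes nothing against a 256-bit keyspace:

```python
# Expected brute-force search time for a k-bit symmetric key, assuming
# a hypothetical attacker doing 10^12 trials/sec on each of 10^9 machines.
SECONDS_PER_YEAR = 3.15e7

def brute_force_years(key_bits, trials_per_sec=1e12, machines=1e9):
    # On average half the keyspace must be searched: 2^(k-1) trials.
    return 2 ** (key_bits - 1) / (trials_per_sec * machines) / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    print(bits, brute_force_years(bits))
# 56-bit (DES-sized): a tiny fraction of a second.
# 128-bit: roughly 5 billion years.
# 256-bit: roughly 10^48 years.
```

Each added key bit doubles the work; faster clocks and more machines only subtract bits linearly, which is why brute force loses this race no matter the fabrication process.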
The exact point at which brute force becomes economically infeasible depends on technologies, improvements in algorithms, etc., but the broad outlines remain as described.
One of the points I believe is sorely missing in these discussions is how important "improvements in algorithms" can be. In the narrowest sense, I agree with your statements - but I have also seen what elegant alternative approaches can do to systems that were presumed to be vulnerable only to brute force, and I've also seen how nicely they may be placed into custom hardware.
Then I'd have to say your analytical abilities are shallow. If you think one of these ciphers with work factors exponential in modulus size (or "key length," approximately) will fall to custom chips, you don't understand exponential time/space.
I'm not stating that brute force silicon can be scaled to the point it can attack a 256 bit key in reasonable time today. What I do know is that alternative attacks, implemented in silicon or sapphire, are another matter. Your position is predicated on the assumption that because no such attacks are in the public domain, none must exist. I believe this is faulty logic, and advances a common, yet dangerous position.
I don't think it's unreasonable to extrapolate that a sufficiently high-priority message can be cracked by the NSA in near realtime, regardless of the cipher strength used, without significant knowledge of the nature of the plaintext.
And you believe this?
Most people who have worked with military crypto systems do, off the record. The difference between what is public and what has been developed with decades of unlimited resources is staggering. How many cryptographers or discrete math experts work in the public domain? Now how many work for the NSA? That's how many orders of magnitude? And how many orders of magnitude difference in budgets, etc., even with bureaucratic and civil service overhead? Call it threat analysis - I think it is reasonable to assume they know a few tricks that aren't public yet. And any trick related to factoring or Feistel networks is sufficient to obsolete those "age of universe" extrapolations.