Reply to Tim May's comments
Tim May wrote (reply comments offset by leading '***'):

Subject: Re: 'Black' budget purchases

Michael Wilson writes:
The data from the Maryland Procurement Office that is stored in certain databases (and removed from others, as I have just discovered when I checked) provides the complete 'black' budget purchases of the intelligence community, not just their purchases of supercomputers. Such raw data goes a long way towards confirming other bits of intelligence, such as the establishment by NSA of its own chip manufacturing facility owing to a lack of trust in undocumented sections of commercial silicon. This data is useful beyond knowing the numbers
That the NSA contracted National Semiconductor to build a facility on-site has been common knowledge since 1989-90. The fab is not state of the art (i.e., is not 1.8 micron or better) and is believed to be used for the very reasonable purpose of producing keying material in a secure environment (ROMs, PROMs, fuse-linked micros, PLAs, etc.). It is unlikely--but possible--that high-performance micros are being manufactured there.

*** We were tracking NSA purchases of material over a decade ago; as for their use of the technology, my statement was simply that they felt, after serious analysis, that they could not trust commercial silicon. The issue was trust, not computational power.
of supercomputers available (although it does help provide an upper boundary on raw processing power, useful for quantifying tolerances).
What we find interesting regarding the number of supercomputers at NSA is what they do to the keyspace; a supposition of ours from the early period of commercial public key was an attack on the domain of potential keys. Given a known keylength, a powerful systematic search for primes that fit that range can, over time, begin to damage the strength of the system. Careful analysis of
This is nonsense. A typical 1024-bit RSA system uses p and q close to 512 bits each, e.g., 511 and 513. Whatever. Now a 512-bit number is a 150-plus decimal digit number. About 0.5-1% of all of these numbers are prime (by the Prime Number Theorem, or somesuch... about 1/N of all N-digit numbers are prime, as I recall). How big a keyspace is this to start searching "systematically"?

Considering that there are "only" about 10^73 particles of all kinds in the entire universe (based on our best estimate of the size of the universe, the density of galaxies, gas clouds, etc.), this means that if every particle in the universe were searching for and recording the primes they discovered, each particle would have to store 10^77 primes! So much for "a powerful systematic search for primes that fit that range."

*** You assume that your selection of primes is random; it is the case, particularly in the initial usages of public-key systems, that attacks could be made on keyspaces based on the prime generation method. A point that number-crunch jockeys tend to forget is that psychology and systems analysis provide greater in-roads against secure systems than brute force.
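May's back-of-the-envelope figures are easy to check against the Prime Number Theorem. A short Python sketch (not from the original thread; the exact percentages in the post are rough recollections, and the 10^73 particle count is May's figure):

```python
import math

# Prime Number Theorem: the density of primes near x is about 1/ln(x).
x = 2 ** 512                       # magnitude of a typical RSA prime
density = 1 / math.log(x)          # fraction of numbers near 2^512 that are prime
                                   # roughly 0.0028, i.e. about 0.3%

# Rough count of primes with exactly 512 bits, i.e. in [2^511, 2^512):
count_512bit_primes = (2 ** 512 - 2 ** 511) / math.log(x)

particles = 10 ** 73               # May's estimate of particles in the universe
per_particle = count_512bit_primes / particles  # on the order of 10^78

print(f"prime density near 2^512: {density:.5f}")
print(f"primes per particle:      {per_particle:.3e}")
```

The computed density (about 0.3%) is a bit below the "0.5-1%" quoted in the post, but the conclusion only sharpens: there are on the order of 10^151 primes of this size, vastly more than could ever be enumerated.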
technical resource also allows one to speculate--are CM platforms (pardon the pun) used for exhaustive systematic search for keys, while Cray systems are used for attacks on the keyspace? Differentiation of parallel versus scalar processing towards attack domains is interesting.
"Parallel versus scalar processing"? Parallelism means nothing at these scales... see the above point.

*** Your point is orthogonal to our point. The two systems are used for different attacks--parallelism can be used for exhaustive search, such as for DES keys, while scalar processing can be used for testing primality.
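Wilson's claim that exhaustive key search suits parallel hardware is at least structurally true: brute-force search is embarrassingly parallel, since the keyspace can be carved into independent slices. A toy sketch of that partitioning (an illustration only, not any actual DES-cracking design):

```python
def search_partition(worker, nodes, keybits):
    """Return the half-open range of keys assigned to one worker
    when an exhaustive search is split evenly across `nodes` machines."""
    total = 1 << keybits            # size of the whole keyspace
    step = total // nodes           # keys per worker
    lo = worker * step
    hi = total if worker == nodes - 1 else lo + step  # last worker takes the remainder
    return lo, hi

# Toy example: 4 workers splitting a 16-bit keyspace.
print(search_partition(0, 4, 16))   # (0, 16384)
print(search_partition(3, 4, 16))   # (49152, 65536)
```

Note that this only divides the work; as May argues below, it never reduces the total amount of it.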
Michael Wilson
Managing Director, The Nemesis Group
The Adversary
--Tim May

*** TNG
This'll have to be my last reply to Michael Wilson. No offense meant, but we are not even close to speaking the same language.
*** You assume that your selection of primes is random; it is the case, particularly in the initial usages of public-key systems, that attacks could be made on keyspaces based on the prime generation method. A point that number-crunch jockeys tend to forget is that psychology and systems analysis provide greater in-roads against secure systems than brute force.
Your phrasing is Greek to me. The primes are generated by picking a very large random number, of 150 digits or so (depends on key length chosen), and then iterating-and-testing until a prime is found. (I wrote a version of this for my own crude version of RSA, in Mathematica... not very fast, but immensely educational for me.)

So I run this and start with a random number of:

3865018936355867.....38587493661988826448627 (152 digits)

I run this process a second time and get:

193648376263874....8747487458364253 (152 digits)

And I could keep running this as many times as I like, with the numbers being different every time. (These are just examples, not real numbers.)

Now tell me, even granted that my RNG is not "perfect" (in the sense we talk about so often here), how could an attacker--even one using the "psychology and systems analysis" Wilson cites--know where to start? Which number I generated? The search space is just too large. Just too much entropy. PGP, for example, asks for keyboard input to get enough entropy. (I assume some of the collected entropy goes directly into the prime generation process, of course.)

Even all the world's supercomputers are not going to be able to guess (in any number of trials in a million years) the specific 140- or 150- or 160-digit number I generated. (Caveat: Unless the RNG is a brain-dead seeded generator. But that's why MailSafe, PGP, and other programs ask for keyboard input as a source of entropy. Even if the distillation of entropy results in "only" 250 bits of entropy, it's still hopeless to try to enumerate the primes.)

I agree with Graham Toal: it's time Michael Wilson either _tells us_ what his magical schemes are, or shuts up. Pompous language is no substitute for meaningful information.
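The pick-a-random-number-then-iterate procedure May describes can be sketched in a few lines of Python. Miller-Rabin is used here as the primality test; that choice is an assumption for illustration, since the post names no particular test, and `random` stands in for the keyboard-entropy-seeded RNG the real programs use:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a is a witness: n is composite
    return True

def random_prime(bits=512):
    """Pick a random odd starting point of full length, then step until prime."""
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
    while not is_probable_prime(n):
        n += 2
    return n

p = random_prime(512)
print(p)
```

Each run starts from a fresh ~10^154-sized random point, which is exactly why the search space May describes is hopeless to enumerate.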
"Parallel versus scalar processing"? Parallelism means nothing at these scales...see the above point.
*** Your point is orthogonal to our point. The two systems are used for different attacks--parallelism can be used for exhaustive search, such as for DES keys, while scalar processing can be used for testing primality.
Gobbledegook! A "parallel" machine with 1024 nodes is at most 1024 times faster than a single node... no magical gains. The RSA-129 challenge did use lots (hundreds, maybe thousands) of nodes, but this was--as expected--a proportionate gain. Saying an intractable problem becomes tractable with "parallel processing" is simply wrong.

I suppose one could magically hypothesize a machine with "10^100 nodes" and say "See, parallel processing allowed us to factor this and such number," but this is pure fantasy. Exponential blowup (non-polynomial time) means just that... a few factors of 16 or 4096 or whatever just don't make a difference.

Please provide us with specifics of your methods. If you say they are "proprietary" or that you are seeking a patent on them, I won't be surprised.

--Tim May

--
..........................................................................
Timothy C. May         | Crypto Anarchy: encryption, digital money,
tcmay@netcom.com       | anonymous networks, digital pseudonyms, zero
408-688-5409           | knowledge, reputations, information markets,
W.A.S.T.E.: Aptos, CA  | black markets, collapse of governments.
Higher Power: 2^859433 | Public Key: PGP and MailSafe available.
"National borders are just speed bumps on the information superhighway."
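May's linear-speedup point reduces to simple arithmetic: N nodes remove only log2(N) bits from an exponential search, so even absurd node counts barely dent a large keyspace. A sketch of that arithmetic (not from the original thread):

```python
import math

def effective_keybits(keybits, nodes):
    """Work remaining per node, in bits: linear parallel speedup
    subtracts only log2(nodes) from an exponential search."""
    return keybits - math.log2(nodes)

# May's 1024-node machine against a 512-bit search space:
print(effective_keybits(512, 1024))     # 502.0 -- still hopeless
# Even a fantasy machine with 2^80 nodes barely dents it:
print(effective_keybits(512, 2 ** 80))  # 432.0
```

By contrast, a 56-bit DES keyspace really does fall to this kind of scaling, which is the one narrow sense in which the "parallelism for exhaustive search" claim holds.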