Perhaps a simpler example. Let's look at a 'fair' coin and what that means in the real world. A normal coin (or any nDx, for that matter [1]) is random over short sequences. In other words, if you record a game sequence and then replay the game, the die sequence won't have any statistical correlation with the first. Knowing what happened last time won't help you this time; the 'window of opportunity' with respect to statistical bias isn't large enough, so the game is 'fair'.

But if you throw that coin once a second for a billion years, you will find that -no- coin is really -fair-. This goes back to k-sequences and Knuth. Go back and start throwing it again, and if your sequence is long enough you can use the known bias from the first experiment to increase your percentage of 'hits' in the second sequence. In other words, you can now prove experimentally that the coin isn't fair, and measure what that bias is. This is related to 'Hypothesis Testing'.

It's rather strange, but I happen to be rereading a book, "The Mathematical Sciences: A Collection of Essays" (LoC# 69-12750), put out by a group called COSRIMS in about 1969. I remember the book because somebody gave it to me (I was about 9 or 10 at the time) to read, and it has an insanely bright yellow cover. I recently came across it again in a used bookstore for $10, so I bought it. It's basically a collection of chapters on various areas of math research, intended to encourage high school and undergraduate students to pursue mathematical careers by giving examples of what they might work on. The chapter "Statistical Inference" (by J. Kiefer) uses the example of a coin and a 3-run sequence to determine the actual bias of the coin (the example is very simple; the coin is very biased). You should still be able to find the book in public and college libraries. I'm sure more modern texts on hypothesis testing will be just as relevant.
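The coin test above can be sketched in a few lines of Python. This is a minimal illustration of the standard binomial hypothesis test, not Kiefer's specific example; the bias value and function name are my own inventions:

```python
import math
import random

def bias_z_score(flips):
    """Z statistic for H0: p(heads) = 0.5, using the normal
    approximation to the binomial (fine for large n)."""
    n = len(flips)
    heads = sum(flips)
    # Under H0 the mean is n/2 and the std dev is sqrt(n * 1/4).
    return (heads - n / 2) / math.sqrt(n / 4)

rng = random.Random(42)
p_true = 0.52                      # hypothetical hidden bias
n = 100_000
flips = [1 if rng.random() < p_true else 0 for _ in range(n)]

z = bias_z_score(flips)
print(f"z = {z:.1f}")              # |z| well above 3: reject 'fair'
print(f"estimated bias = {sum(flips) / n:.3f}")
```

With a 2% bias and 100,000 throws the z-score lands around 12, far outside anything a fair coin would produce; with only a few hundred throws the same bias disappears into the noise, which is exactly the 'window of opportunity' point above.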
The vast majority of RNGs that we use are really PRNGs; we just don't collect enough data on them to be able to demonstrate that. Or the sequence of interest is so short we don't care.

[1] A coin is a 1D2; two coins would be 2D2, for example. Who said wargaming was worthless ;)

On Sat, 13 Jul 2002, Mike Rosing wrote:
On Sat, 13 Jul 2002, gfgs pedo wrote:
Can you please explain how they have statistical signatures?
Maybe using SPNs? I have tried ANSI X9.17 key generation with GOST; it did have a negligibly small skew, which makes me wonder what statistical signature they have. The negligible skew is a weakness, but not large enough to compromise the security of a key produced by the ANSI X9.17 key generation method. Please explain. Thank you very much.
You're on the right track. Take several encryption algorithms of your choice, use a fixed IV and the same set of keys, and encrypt blocks of 0's. For each algorithm, compute several sets of statistics (a la NIST or DIEHARD). With 100 blocks of 10 megabytes (100 different keys) you should see some interesting differences.
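One of the simplest NIST-style statistics you could start with is the monobit frequency test. Here's a minimal sketch; the `keystream` function is just a stand-in (SHA-256 in counter mode over zero blocks) so the example runs self-contained — in the real experiment you'd substitute ciphertext from each algorithm under test:

```python
import hashlib
import math

def keystream(key: bytes, nbytes: int) -> bytes:
    """Stand-in 'cipher': SHA-256 in counter mode, i.e. what you get
    by 'encrypting' zero blocks.  Replace this with real ciphertext
    from each algorithm you want to fingerprint."""
    out = bytearray()
    ctr = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:nbytes])

def monobit_z(data: bytes) -> float:
    """Monobit frequency test: count the 1 bits and report how many
    standard deviations the count sits from the n/2 expected of
    unbiased output."""
    n = 8 * len(data)
    ones = sum(bin(b).count("1") for b in data)
    return (ones - n / 2) / math.sqrt(n / 4)

# One statistic per key; with 100 keys x 10 MB per algorithm you'd
# compare the *distributions* of these scores across algorithms.
for key in (b"key-1", b"key-2", b"key-3"):
    print(key.decode(), f"{monobit_z(keystream(key, 1_000_000)):+.2f}")
```

A single monobit score won't separate decent ciphers; the point is to run many statistics over many keys and look for a distribution that's consistently off for one algorithm and not another.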
Remember, your question originally was "how can you tell which algorithm", not "how do you find the key". Let us know what you find out :-)
Patience, persistence, truth, Dr. mike
--
____________________________________________________________________

When I die, I would like to be born again as me.

    Hugh Hefner

ravage@ssz.com          www.ssz.com
jchoate@open-forge.org  www.open-forge.org

--------------------------------------------------------------------