Tim writes:
But can he ever say "I can prove the number is random"? No. There's always some chance an even-cleverer puzzle solver will find the pattern, the key that unlocks the randomness. For example, most ciphertexts pass nearly all statistical tests for randomness, "look" random, and even _act_ like random numbers (recall the Blum-Blum-Shub pseudorandom number generator and how good it is). But simple application of the key turns the seemingly random "100010001010110010101" into "ATTACK."
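(For reference, a minimal sketch of the Blum-Blum-Shub generator Tim mentions, in Python. The primes and seed below are toy values chosen only for illustration; a real generator would use large, secret primes congruent to 3 mod 4 and a random seed.)

# Blum-Blum-Shub sketch: x_{i+1} = x_i^2 mod M, with M = p*q and
# p, q primes congruent to 3 mod 4.  The low-order bit of each state
# is emitted as one pseudorandom bit.
# NOTE: p, q, and the seed are toy values for illustration only.

def bbs_bits(seed, n_bits, p=10007, q=10039):
    assert p % 4 == 3 and q % 4 == 3, "p and q must be congruent to 3 mod 4"
    m = p * q
    x = (seed * seed) % m          # square the seed into the quadratic-residue group
    bits = []
    for _ in range(n_bits):
        x = (x * x) % m            # next state: square modulo M
        bits.append(x & 1)         # output the least-significant bit
    return bits

if __name__ == "__main__":
    print("".join(str(b) for b in bbs_bits(seed=123456789, n_bits=32)))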
But can we say that "100010001010110010101" has been "compressed" into "ATTACK"? How do we know? Let IC(x) stand for the amount of information storage used by x. Is

    IC(100010001010110010101) > IC(ATTACK) + IC(key) + IC(algorithm)?

It is not at all clear that this relationship holds. (In fact, I don't think it will even begin to work out unless the ciphertext is much longer than the plaintext -- see the rough experiment sketched at the end of this post.) So cryptorandom numbers can be considered incompressible once you take into account the algorithm required to perform the operation -- just as if I had used a 100-terabyte dictionary to compress via lookup, or, better yet, a one-time pad.

-Ray

All of this is meaningless anyway. Information theory was proven wrong by WEB Technologies when they invented a compression program that can recursively compress any input data down to 64k. Hard drives are now obsolete.

-- Ray Cromwell       | Engineering is the implementation of science;  --
-- rjc@gnu.ai.mit.edu | politics is the implementation of faith.       --
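(The rough experiment referred to above: a sketch that uses zlib-compressed length as a very crude stand-in for IC(x). The XOR "cipher", the key, and the strings are hypothetical stand-ins, not anything from the thread, and compressed length is only a loose upper bound on information content; the point is simply that for short ciphertexts the proposed inequality does not hold.)

# Crude illustration: treat zlib-compressed length as a rough proxy for
# IC(x), the storage needed for x.  The "cipher" is a toy XOR with a
# short repeating key -- a hypothetical stand-in for illustration only.
import zlib

def ic(data: bytes) -> int:
    # Very rough proxy for IC(x): length of the zlib-compressed form.
    return len(zlib.compress(data, 9))

plaintext = b"ATTACK"
key = b"\x5a\xc3\x19"                      # toy repeating key
ciphertext = bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

print("IC(ciphertext)          ~", ic(ciphertext))
print("IC(plaintext) + IC(key) ~", ic(plaintext) + ic(key))
# For strings this short the compressed ciphertext comes out smaller than
# plaintext + key: the compressor's fixed overhead swamps everything, so
# the inequality only has a chance when the ciphertext is much longer
# than the plaintext.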