Another way to describe a successful steganography system... I am the opponent. I possess a collection of files that might contain hidden encrypted messages. My task is to determine whether they do contain hidden encrypted messages. A casual inspection of the files does not reveal any bit patterns that deviate significantly from patterns found in most examples of these kinds of files. However, I suspect these files contain hidden messages that were deposited using a steganography algorithm initialized from a public-key-generated initialization vector. To test my hypothesis, I will reverse the steganography process using a large collection of public keys and then examine the resulting bit sequences.

--------

If the steganography algorithm is a good one, reversing the steg process will produce a sequence of bits that appears relatively random, even if there is *no* hidden message. What does "appears relatively random" really mean? How do you measure the randomness of a sequence of bits? I'm not an expert in this field, but I would guess you could measure the randomness by attempting to compress the bit sequence: if the bit sequence does not compress much, it is relatively random. How much is "not much"? In other words, what threshold compression percentage should you use to declare one bit sequence random and another not random? I don't know.

To generalize: an opponent will perform some kind of test to determine whether reversing the steg process produces a random bit sequence or a non-random one. The test will have some threshold value below which a sequence is considered random. If the output of the reverse steganography step always falls below the threshold, even when there is no hidden message, then the opponent will not be able to determine whether a file contains a hidden message.

Jim_Miller@suite.com
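The compression test described above can be sketched in a few lines of Python using the standard zlib library. This is only a rough illustration, not a real steganalysis tool: the 0.95 threshold is an arbitrary placeholder (the post itself asks what value to use), and the ratio here is compressed size over original size, so a ratio near 1.0 corresponds to "does not compress much" (i.e. looks random).

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; near 1.0 suggests randomness."""
    return len(zlib.compress(data, 9)) / len(data)

def looks_random(data: bytes, threshold: float = 0.95) -> bool:
    # The threshold is an assumption for illustration only -- choosing it
    # is exactly the open question raised in the text.
    return compression_ratio(data) >= threshold

# Stand-in for the output of a reversed-steg step: truly random bytes
# barely compress, while patterned data compresses heavily.
random_bits = os.urandom(64 * 1024)
patterned = b"ASCII text repeats. " * 3200

print(looks_random(random_bits))  # random data: ratio near 1.0 -> True
print(looks_random(patterned))    # repetitive data: tiny ratio -> False
```

In practice a single compression ratio is a weak randomness test; real statistical test suites run many independent checks, but the basic shape of the opponent's decision procedure is the same: compute a statistic, compare it to a threshold.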