Sarad AV wrote:
> Will the information obtained from the 2^32 tests have a zero
> compression rate? If one of the occurrences should yield all heads and
> one occurrence yields all tails, there appears to be scope for
> compression.

In that one case, yes. The compression will vary based on the deviation from an ideally compressible sample - for *any* bit pattern 0x0000 to 0xFFFF you would expect to see it once in 2^32 trials (by definition), therefore you will get a mix of compressible and incompressible patterns, with incompressible patterns being more likely (simply because there are more of them in the full range of available patterns 0x0000 to 0xFFFF).
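To put a rough number on that mix, here's a quick Python sketch (my own toy example - the run-length scheme and its 5-bits-per-run cost are assumptions, not anything from the thread) that encodes every 16-bit pattern from 0x0000 to 0xFFFF and counts how many actually come out shorter than the raw 16 bits:

    # Toy run-length encoding over 16-bit patterns: each run is stored as
    # (bit value, run length - 1 in 4 bits), i.e. 5 bits per run.
    # A pattern "compresses" if its encoded size is under 16 bits.
    def rle_bits(bits: str) -> int:
        """Encoded size in bits of a pattern written as a 0/1 string."""
        runs = 1 + sum(a != b for a, b in zip(bits, bits[1:]))
        return runs * 5

    compressible = sum(
        1
        for value in range(0x10000)              # every pattern 0x0000..0xFFFF
        if rle_bits(f"{value:016b}") < 16
    )
    print(f"{compressible} of 65536 patterns compress")   # 242 of 65536

Under that (arbitrary) cost model only patterns with three runs or fewer come out ahead - a couple of hundred out of 65536 - which is the sense in which the incompressible patterns dominate.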
> If the output is random, then it will have no mathematical structure,
> so I shouldn't be able to compress it at all.

Randomness is a funny thing. A truly random file can be *anything* that is the right length - including an excerpt from William Shakespeare (provided you have enough monkeys). It is most likely to be random garbage for a large sample - but for a very small sample, the odds of it being an ordered sample are surprisingly good.
The obvious example here is a double coin test - two bits. An ordered sample would be both heads or both tails; a disordered sample would be one head and one tail. In practice, you would expect half the results from a larger trial (say 32 double-throws) to be ordered samples, and half disordered. As you reach three coins, the odds of a completely ordered result decrease (from 2 in 2^2 to 2 in 2^3). For four coins, you still have the same two compressible results. Consider:

HHHH HHHT HHTH HHTT
HTHH HTHT HTTH HTTT
THHH THHT THTH THTT
TTHH TTHT TTTH TTTT

In a large trial, you would expect to see each of these once every 2^4 tests, on the average. Obviously HHHH and TTTT are very compressible. Assuming a run-length encoding, I don't think any of the others are compressible at all.
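For what it's worth, the same idea can be spelled out in code for the four-coin case. This is just a sketch with an encoding I picked for illustration (one bit for the face plus a two-bit run length per run, so 3 bits per run, against 4 bits for the raw outcome):

    from itertools import product

    # Count the runs in an outcome like "HHTT"; under the toy scheme each
    # run costs 3 bits, while the raw outcome costs 4 bits.
    def runs(outcome: str) -> int:
        return 1 + sum(a != b for a, b in zip(outcome, outcome[1:]))

    for outcome in ("".join(p) for p in product("HT", repeat=4)):
        encoded = runs(outcome) * 3
        verdict = "compresses" if encoded < 4 else "does not"
        print(f"{outcome}: {runs(outcome)} run(s), {encoded} bits -> {verdict}")

Only HHHH and TTTT come out shorter than the raw four bits; everything else has at least two runs, so the run lengths just add overhead - which matches the intuition above.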