Cryptography of a sort - redux
If anyone remembers my original postings from a couple of weeks ago (my first ever on The Net), I described a method of "shuffling" the bits in a text stream, using simple random-number generators, to ensure that the text cannot be descrambled by brute-force methods. It has since occurred to me that there was significant misinterpretation of what I proposed. I do not change any bits of the text; I merely reposition them, so standard analysis techniques (XOR masking and the like) have no applicability to the decoding process. The result file contains the same number of zero bits and one bits as it started with, through any number of encryption layers.

The only way to recover the original text is to reposition the shuffled bits correctly, which requires brute-force guessing of the pseudo-random-number output. This guess is very simple for the first encoding layer, but compounds exponentially in subsequent encodings, so that after half a dozen or a dozen passes, where the executable program(s) is called from scratch for each pass, the shuffling rapidly approaches true randomness, and cannot be decrypted in practice except through the exact mirror-image reversal of the encryption passes.

An example: how long would it take you to guess the number (between 0 and 32000) I'm thinking of, if you could guess 16 billion numbers per second? About .000001 second on average (roughly 16,000 guesses out of 16 billion per second). But if you had to guess the ten different numbers I'm thinking of, and get all ten correct in sequence, the space grows to roughly 32000^10, on the order of 10^45 combinations, which should take an essentially infinite amount of time. And remember, since computer bits have such low differentiation (ones and zeros), looking for "patterns" and so forth just doesn't apply to this type of encryption.

As to the Public Key part of the argument: once there is general understanding of the above points, it might then be worthwhile to discuss the advantages and disadvantages of how to make and distribute private keys, etc.

One gentleman on this forum recently argued, in effect, that it wouldn't be worthwhile for Hacker X to try to break into datastream Y: assuming datastream Y is encoded with such-and-such a key and is sufficiently unimportant, the motive for such a break-in would not be great enough to justify the expected effort. To such arguments, all I can say is: this is the computer age, and enough mundane transactions can add up to something significant, or one could lower the expected-effort ratio. You get the picture.
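For concreteness, here is a minimal sketch of what one such bit-transposition pass might look like. This is an illustration under assumptions of my own choosing (Python's random module as the PRNG, a Fisher-Yates shuffle of bit positions, and made-up integer seeds as the per-pass keys), not Dale's actual program: encoding moves each bit to a position given by a seeded permutation, decoding regenerates the same permutation and inverts it, and multiple passes are chained with different seeds.

    import random

    def bits_of(data: bytes) -> list:
        # Flatten a byte string into a list of bits, most significant bit first.
        return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

    def bytes_of(bits: list) -> bytes:
        # Pack a list of bits (MSB first) back into a byte string.
        return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
                     for j in range(0, len(bits), 8))

    def permutation(n: int, seed: int) -> list:
        # Seeded Fisher-Yates permutation of the n bit positions.
        perm = list(range(n))
        random.Random(seed).shuffle(perm)
        return perm

    def shuffle_pass(data: bytes, seed: int) -> bytes:
        # One encoding pass: move the bit at position i to position perm[i].
        bits = bits_of(data)
        perm = permutation(len(bits), seed)
        out = [0] * len(bits)
        for i, p in enumerate(perm):
            out[p] = bits[i]
        return bytes_of(out)

    def unshuffle_pass(data: bytes, seed: int) -> bytes:
        # One decoding pass: regenerate the permutation and invert it.
        bits = bits_of(data)
        perm = permutation(len(bits), seed)
        return bytes_of([bits[p] for p in perm])

    seeds = [11, 2917, 31337]    # hypothetical per-pass keys
    msg = b"attack at dawn"
    ct = msg
    for s in seeds:              # encrypt: several passes, a fresh seed each time
        ct = shuffle_pass(ct, s)
    pt = ct
    for s in reversed(seeds):    # decrypt: exact mirror-image reversal of the passes
        pt = unshuffle_pass(pt, s)
    assert pt == msg

Note that each pass only permutes bit positions, so the counts of zero and one bits are preserved exactly as claimed above, and decryption must apply the inverse permutations in reverse order.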
Dale Thorn wrote:
The only way to recover the original text is to reposition the shuffled bits correctly, which requires brute-force guessing of the pseudo-random-number output.
Even if I know the PRNG algorithm? And just what is it that you propose to use for the PRNG?
This guess is very simple for the first encoding layer, but compounds exponentially in subsequent encodings
Exponentially? Could you provide the math to explain how your composition of PRNGs gives this exponential increase in difficulty?
...so that after half a dozen or a dozen passes, where the executable program(s) is called from scratch for each pass, the shuffling rapidly approaches true randomness, and cannot be decrypted in practice except through the exact mirror-image reversal of the encryption passes.
So what do the encryption keys look like? And what's this "true randomness" stuff?

________________________________________________________________________
Mike M Nally * Tiv^H^H^H IBM * Austin TX * For the time being,
m5@tivoli.com * m101@io.com * <URL:http://www.io.com/~m101> * three heads and eight arms.