Re: On Digital Cash-like Payment Systems
From: cyphrpunk <cyphrpunk@gmail.com>
Sent: Oct 27, 2005 9:15 PM
To: "James A. Donald" <jamesd@echeque.com>
Cc: cryptography@metzdowd.com, cypherpunks@jfet.org
Subject: Re: On Digital Cash-like Payment Systems
On 10/26/05, James A. Donald <jamesd@echeque.com> wrote:
How does one inflate a key?
Just make it bigger by adding redundancy and padding, before you encrypt it and store it on your disk. That way the attacker who wants to steal your keyring sees a 4 GB encrypted file which actually holds about a kilobyte of meaningful data. Current trojans can steal files and log passwords, but they're not smart enough to decrypt and decompress before uploading. They'll take hours to snatch the keyfile through the net, and maybe they'll get caught in the act.
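A minimal sketch of that inflation step (the 4-byte length header and the function names here are my own, purely illustrative; the inflated blob is what you would then encrypt and store on disk):

    import os, struct

    def inflate_key(key: bytes, target_size: int) -> bytes:
        # Prefix the real key with its length, then pad with random bytes
        # so the at-rest blob is target_size bytes of indistinguishable noise.
        header = struct.pack(">I", len(key))
        pad = os.urandom(target_size - len(header) - len(key))
        return header + key + pad

    def deflate_key(blob: bytes) -> bytes:
        # Recover the real key after the blob has been decrypted.
        (length,) = struct.unpack(">I", blob[:4])
        return blob[4:4 + length]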
CP

Note that there are crypto schemes that use huge keys, and it's possible to produce simple variants of existing schemes that use multiple keys. That would mean that the whole 8 GB string was necessary to do whatever crypto thing you wanted to do. A simple example is to redefine CBC-mode encryption as C[i] = E_K(C[i-1] xor P[i] xor S[C[i-1] mod 2^29]), where S is the huge shared string and we're using AES. Without access to the shared string, you could neither encrypt nor decrypt.

--John
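A literal reading of that construction as a sketch, assuming S[j] means the 16-byte slice of the shared string starting at offset j (so S must be at least 2^29 + 16 bytes), a plaintext already padded to the block size, and the pyca/cryptography package for the raw AES block operation:

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 16  # AES block size in bytes

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def aes_block(key: bytes, block: bytes) -> bytes:
        # Raw single-block AES (ECB on one block) as E_K.
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def huge_key_cbc_encrypt(key: bytes, iv: bytes, s: bytes, pt: bytes) -> bytes:
        # C[i] = E_K(C[i-1] xor P[i] xor S[C[i-1] mod 2^29])
        prev, out = iv, bytearray()
        for i in range(0, len(pt), BLOCK):  # pt padded to a multiple of 16
            idx = int.from_bytes(prev, "big") % 2**29
            prev = aes_block(key, xor(xor(prev, pt[i:i + BLOCK]), s[idx:idx + BLOCK]))
            out += prev
        return bytes(out)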
By my calculations, it looks like you could take a keypair n,e,d and some integer x and let e'=e^x and d'=d^x, and RSA would still work, albeit slowly. Reminds me of blinding, to some extent, except we're working with key material and not plaintext/ciphertext.

Since I'm on the topic, does doing exponentiation in a finite field make taking discrete logarithms more difficult (I suspect so), and if so, by how much? Is there any similar property that could be used on e' and d' to make computing e and d more difficult? Of course whatever algorithm is used, one would need to feed e' and d' to it in toto, but a really clever attacker might be able to take the xth root prior to exfiltrating them.

Also, application of a random pad using something like XOR would be useful; it could be done as a postprocessing stage independently of the main algorithm used to encrypt the data, or as a preprocessing stage to the plaintext. I prefer the latter as it makes breaking the superencryption much more difficult, and fixed headers in the ciphertext could give away some OTP material. However, the preliminary encryption in something like gpg would suffer, so it would have the effect of making the ciphertext bigger. Perhaps this is an advantage in your world.

An alternate technique relies on specifying, say, 256 bits of key, then using a cryptographically strong PRNG to expand it to an arbitrary length, and storing that for use. Pilfering it then takes more bandwidth, but it could be reconstructed based on the 256-bit seed alone, if one knew the details of the PRNG. So the key could be "compressed" for transfer, if you know the secret seed. A search for the seed would still be expensive, even if the PRNG details are known.

Alternately, in a message encrypted with gpg-like hybrid ciphering, one could apply a secret, implicit PRNG to the message key seed before using it as a symmetric key. For example, you could take a 256-bit message key, run it through the PRNG, create 3x256 bits, then use triple-AES to encrypt the message. In this case, the PRNG buys forgery resistance without the use of PK techniques. The PRNG expander could not be attacked without breaking the PK encryption (which supports arbitrarily large keys) of the seed or the triple-AES symmetric encryption of the message.

You know, they specify maximum bandwidth of covert channels in bits per second, I wonder if you could use techniques like this to prove some interesting property vis-a-vis covert channel leakage. It's remarkably difficult to get rid of covert channels, but if you inflate whatever you're trying to protect, and monitor flows over a certain size, then perhaps you can claim some kind of resilience against them. *shrug*

--
http://www.lightconsulting.com/~travis/ -><-
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B
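A toy check of the exponent-powering claim above (textbook-sized primes, purely illustrative, not a real keypair):

    # Encryption followed by decryption still recovers m, because
    # e'*d' = (e*d)^x, and e*d = 1 (mod lcm(p-1, q-1)) implies (e*d)^x = 1 too.
    p, q = 61, 53
    n = p * q                      # 3233; lcm(p-1, q-1) = 780
    e, d = 17, 413                 # 17 * 413 = 7021 = 9*780 + 1
    x = 3
    e_prime, d_prime = e**x, d**x  # inflated exponents

    m = 42
    c = pow(m, e_prime, n)
    assert pow(c, d_prime, n) == m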
In message <d4f1333a0511041709j48703e7aqe2d694b0366ccccc@mail.gmail.com>, "Travis H." writes:
By my calculations, it looks like you could take a keypair n,e,d and some integer x and let e'=e^x and d'=d^x, and RSA would still work, albeit slowly. Reminds me of blinding, to some extent, except we're working with key material and not plaintext/ciphertext.
Don't ever encrypt the same message twice that way, or you're likely to fall to a common modulus attack, I believe.

--Steven M. Bellovin, http://www.cs.columbia.edu/~smb
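A toy demonstration of that attack (illustrative numbers, reusing the toy modulus above; Python 3.8+ for the modular inverse):

    # The same m encrypted under (n, e1) and (n, e2) with gcd(e1, e2) = 1
    # is recoverable with no private key, via a*e1 + b*e2 = 1 from the
    # extended Euclidean algorithm.
    n, e1, e2, m = 3233, 17, 5, 42
    c1, c2 = pow(m, e1, n), pow(m, e2, n)

    def egcd(a, b):
        if b == 0:
            return (a, 1, 0)
        g, x, y = egcd(b, a % b)
        return (g, y, x - (a // b) * y)

    g, a, b = egcd(e1, e2)          # here a = -2, b = 7

    def modpow(c, k, n):
        # Negative exponents go through the modular inverse.
        return pow(pow(c, -1, n), -k, n) if k < 0 else pow(c, k, n)

    assert (modpow(c1, a, n) * modpow(c2, b, n)) % n == m   # m recovered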
On 11/4/05, Travis H. <solinym@gmail.com> wrote:
By my calculations, it looks like you could take a keypair n,e,d and some integer x and let e'=e^x and d'=d^x, and RSA would still work, albeit slowly. Reminds me of blinding, to some extent, except we're working with key material and not plaintext/ciphertext.
Your point would be to make the encryption key very large? Unfortunately, making it large enough to present any kind of challenge to an attacker who is plucking files off a trojaned computer would make it far too large to be usable with this system.
Since I'm on the topic, does doing exponentiation in a finite field make taking discrete logarithms more difficult (I suspect so), and if so, by how much?
This doesn't make sense. The discrete log operation is the inverse of exponentiation. Doing exponentiation is a prerequisite for even considering discrete log operations. Hence it cannot make them "more difficult".
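A toy illustration of that inverse relationship (tiny prime, so the brute-force direction actually finishes):

    # The easy direction: y = g^x mod p. The hard direction: recover x from y.
    # Brute force works here only because p is tiny.
    p, g = 101, 2
    x = 57
    y = pow(g, x, p)                                             # easy
    recovered = next(k for k in range(p) if pow(g, k, p) == y)   # hard
    assert recovered == x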
Is there any similar property that could be used on e' and d' to make computing e and d more difficult? Of course whatever algorithm is used, one would need to feed e' and d' to it in toto, but a really clever attacker might be able to take the xth root prior to exfiltrating them.
That's a new word to me. What is your goal here, to make something that is "even stronger" than RSA? Or is it, as in the context of this thread, to inflate keys, making them bigger so that an attacker can't download them easily?
Also, application of a random pad using something like XOR would be useful; it could be done as a postprocessing stage independently of the main algorithm used to encrypt the data, or as a preprocessing stage to the plaintext. I prefer the latter as it makes breaking the superencryption much more difficult, and fixed headers in the ciphertext could give away some OTP material. However, the preliminary encryption in something like gpg would suffer, so it would have the effect of making the ciphertext bigger. Perhaps this is an advantage in your world.
That's not feasible in most cases. If you really have a OTP handy, why are you bothering with RSA? Or are you planning to use it as a two-time-pad? That generally doesn't work well. (The fact that you are worried about "giving away" OTP material is not a good sign!)
An alternate technique relies on specifying, say, 256 bits of key, then using a cryptographically strong PRNG to expand it to an arbitrary length, and storing that for use. Pilfering it then takes more bandwidth, but it could be reconstructed based on the 256-bit seed alone, if one knew the details of the PRNG. So the key could be "compressed" for transfer, if you know the secret seed. A search for the seed would still be expensive, even if the PRNG details are known.
So where do you store this 256 bit seed? You want to distract the attacker with the smoke and mirrors of the big file for him to download, hoping he will ignore this little file which is all he really needs? I think we are assuming the attacker is smarter than this, otherwise you could just use regular key files but give them obscure names.
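For concreteness, a minimal sketch of the expansion Travis describes, assuming SHA-256 in counter mode as the "cryptographically strong PRNG" (my choice, not specified in the thread):

    import hashlib

    def prng_expand(seed: bytes, nbytes: int) -> bytes:
        # Anyone holding the 32-byte seed and knowing the construction can
        # regenerate the arbitrarily large key file on demand.
        out = bytearray()
        counter = 0
        while len(out) < nbytes:
            out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:nbytes])

    # e.g. a 1 GiB key file from a 256-bit seed:
    #   big_key = prng_expand(seed, 1 << 30)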
Alternately, in a message encrypted with gpg-like hybrid ciphering, one could apply a secret, implicit PRNG to the message key seed before using it as a symmetric key. For example, you could take a 256-bit message key, run it through the PRNG, create 3x256 bits, then use triple-AES to encrypt the message. In this case, the PRNG buys forgery resistance without the use of PK techniques. The PRNG expander could not be attacked without breaking the PK encryption (which supports arbitrarily large keys) of the seed or the triple-AES symmetric encryption of the message.
What is forgery resistance in this context? A public key encryption system, by definition, allows anyone to create new encrypted messages. Your technique is complicated but it is not clear how much security it adds. Fundamentally it is not too different from RSA + counter mode, where CTR can be thought of as a PRNG expanding a seed. This doesn't seem to have anything to do with the thread topic. Are you just tossing off random ideas because you don't think ordinary hybrid RSA encryption is good enough?
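As a sketch of what Travis may have in mind, reusing prng_expand from the previous sketch (three passes of AES-CTR are my stand-in for "triple-AES", and the pyca/cryptography package is assumed; none of this is specified in the thread):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def triple_aes_encrypt(message_key: bytes, nonce: bytes, msg: bytes) -> bytes:
        # Stretch the 256-bit message key into 3 x 256-bit AES keys, then
        # encrypt with three passes; the expander stands between the PK-wrapped
        # seed and the symmetric keys actually used.
        keys = prng_expand(message_key, 96)
        for i in range(3):
            enc = Cipher(algorithms.AES(keys[32*i:32*i + 32]),
                         modes.CTR(nonce)).encryptor()
            msg = enc.update(msg) + enc.finalize()
        return msg

    # usage: ct = triple_aes_encrypt(os.urandom(32), os.urandom(16), b"hello")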
You know, they specify maximum bandwidth of covert channels in bits per second, I wonder if you could use techniques like this to prove some interesting property vis-a-vis covert channel leakage. It's remarkably difficult to get rid of covert channels, but if you inflate whatever you're trying to protect, and monitor flows over a certain size, then perhaps you can claim some kind of resilience against them. *shrug*
I'm not sure conventional covert-channel analysis is going to be that useful here, because the bandwidths we are looking at in this attack model are so much greater (kilobytes to megabytes per second). But broadly speaking, yes, this was Daniel Nagy's idea which started this thread, that making the key files big enough would make it more likely to catch someone stealing them because it would take so long.

CP
Don't ever encrypt the same message twice that way, or you're likely to fall to a common modulus attack, I believe.
Looks like it (a common modulus attack involves the same n with different (e,d) pairs). However, you're likely to be picking a random symmetric key as the "message", and Schneier even suggests picking a random r in Z_n and encrypting hash(r) as the symmetric key. More generally, I wonder about salting all operations to prevent using the same value more than once. As a heuristic, it seems like it's generally a bad idea to reuse values, and a good idea to apply some kind of uniquification operation to everything, just as it's a good idea to pad/frame values in such a way that the output of one stage cannot be used in another stage of the same protocol.
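One reading of that suggestion, as a toy sketch (RSA-encrypt r itself and derive the symmetric key as hash(r); the toy modulus is reused from the earlier examples, and the 2-byte width is dictated only by its size):

    import hashlib, secrets

    n, e = 3233, 17                  # toy public key
    r = secrets.randbelow(n)         # random r in Z_n, fresh every message
    c = pow(r, e, n)                 # send c to the recipient
    sym_key = hashlib.sha256(r.to_bytes(2, "big")).digest()
    # The RSA input is never a structured or repeated value, so the
    # same-message concern above doesn't arise.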
Since I'm on the topic, does doing exponentiation in a finite field make taking discrete logarithms more difficult (I suspect so), and if so, by how much?
This doesn't make sense. The discrete log operation is the inverse of exponentiation. Doing exponentiation is a prerequisite for even considering discrete log operations. Hence it cannot make them "more difficult".
What I really meant was: if it wasn't computed in a finite field, how difficult would it be to compute the logarithm? I'm just curious how much of the work factor comes from the reduction modulo n. I also wonder about some of the implications of choosing a message or exponent such that not enough reductions take place during exponentiation.
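For the non-finite-field half of that question, a quick illustration of why the reduction is where all the difficulty lives:

    # Without the reduction modulo n, the "logarithm" is trivial: over the
    # plain integers, repeated division by g recovers the exponent exactly.
    g, x = 3, 57
    y = g ** x           # no reduction anywhere
    k = 0
    while y > 1:
        y //= g
        k += 1
    assert k == x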
I'm not sure conventional covert-channel analysis is going to be that useful here, because the bandwidths we are looking at in this attack model are so much greater (kilobytes to megabytes per second).
Well, it depends on how you define the attack, which wasn't defined. If the attack is to smuggle out a key using a covert channel, it may apply. If the attack is to download the key on a conventional network, it wouldn't make much difference. Unless, of course, you're auditing network flows over a certain size or lasting a certain amount of time.

--
http://www.lightconsulting.com/~travis/ -><-
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B
participants (4)
- cyphrpunk
- John Kelsey
- Steven M. Bellovin
- Travis H.