One-shot remailer replies
The remailers need a one-time reply mechanism.
This would enable many other things, including "persistent" anonymous entities, without using broadcast techniques. The current remailers encourage hit-and-run anonymity, like the recent burst of anonymous nastiness, and discourage conversational anonymity and persistent anonymous entities. Sending a one-way message is easy and fairly secure.
Bill Stewart pointed out some of the problems with one-shot reply addresses, although he seemed to be analyzing them as features which the remailers provided against the users' will. I think Mike's idea was that this is something which remailer users would like. Still, Bill's comments seem valid. How useful is a single-use reply address? If you posted a message to a mailing list or newsgroup, only the first person to reply would get through to you. You could post a message with a list of reply addresses, but that would open up some traffic analysis problems.
Yes, they are supposed to be voluntary and created by the user in advance. I don't want mandatory replyability; I just want to make conversation easier. As for replies from a list or newsgroup, use the pigeonholes. Anonymous reply is an enabling primitive for all kinds of servers and anonymous mechanisms.
One way to do this: each remailer has a list of secret (symmetric) keys. Each secret may have an expiration date. By some method (problem discussed later) the user and the remailer establish a shared secret, adding it to the list, while the remailer does not find out who the user is. The reply ticket contains a series of nested hops, each encrypted with that remailer's secret plus all the others after it.
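Here is a rough sketch of what I mean, with made-up hop names and a toy XOR-of-SHA-256 keystream standing in for whatever real symmetric cipher the remailers would use. The point is only the layering: the user builds the ticket from the last hop outward, each layer under the shared secret established with that remailer, so each remailer strips exactly one layer and learns only the next hop.

import hashlib

def sym(key: bytes, data: bytes) -> bytes:
    # Toy self-inverse cipher (SHA-256 counter keystream). Placeholder only;
    # a real remailer would use a proper symmetric cipher.
    stream = b""
    ctr = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def make_ticket(hops, destination):
    # hops: [(remailer_address, shared_secret), ...], first hop first.
    # Build from the last hop outward so the first remailer's layer is outermost.
    blob = sym(hops[-1][1], b"DELIVER|" + destination)
    for (_, secret), (next_addr, _) in zip(reversed(hops[:-1]), reversed(hops[1:])):
        blob = sym(secret, b"SEND|" + next_addr + b"|" + blob)
    return hops[0][0], blob        # the replier mails blob to the first address

def peel(secret, blob):
    # What one remailer does: strip its own layer, learn only the next hop.
    # The b"|" framing is toy; addresses must not contain "|".
    action, rest = sym(secret, blob).split(b"|", 1)
    if action == b"DELIVER":
        return action, rest, b""
    next_addr, inner = rest.split(b"|", 1)
    return action, next_addr, inner

hops = [(b"remailer-a", b"secret-a"), (b"remailer-b", b"secret-b")]
first, blob = make_ticket(hops, b"alice@nowhere.invalid")
action, next_addr, inner = peel(b"secret-a", blob)      # A learns only B
assert (action, next_addr) == (b"SEND", b"remailer-b")
assert peel(b"secret-b", inner)[:2] == (b"DELIVER", b"alice@nowhere.invalid")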
As you have seen, this model is very similar to Chaum's 1981 paper except for where the secret keys come from. This is not to disparage your ideas; it's just that as long as we have giants around, we might as well stand on their shoulders. Chaum's system was considerably simpler as it used ordinary PK decryption of the address at each stage, with the header including a secret key that would encrypt the body to maintain unlinkability. As you point out, this has a certain kind of vulnerability to coercion that your scheme is less sensitive to.
Chaum's system isn't too different if the remailers generate new keys on a regular basis. That would forcibly expire reply tickets when the keys were changed, whether they had been used or not.
The catch: how do we establish a shared secret with the remailer without identifying ourselves to it? If the first remailer (the one the replier sends the ticket to) is corrupt, and it knows who established the secret contained in the ticket, it knows the end-to-end path of the message.
Problem: remailers are coerced or hacked into decrypting a captured secret-establishing message before the secret key has expired. The trail of a reply ticket can then be followed. Solution: no good one that I can think of.
We tend not to worry so much about this forward vulnerability as we do about the reverse one. Partly this is because our current remailers don't implement Chaum's scheme, but partly too we sense that an interesting public pseudonym is a more inviting target than the hopefully anonymous true name behind it. I'm not really sure how good an assumption this is, though. So I am less inclined to view Chaum's scheme as broken, since the remailer network inherently suffers the same vulnerabilities. We hope to develop enough independent remailers that the coercion issue will not be a major problem.
True, outside traffic analysis is the major problem, as long as there are enough hops to withstand a few bad remailers. Forward (source capture) vulnerability is harder to stop.
Tim May has advocated tamper-proof hardware circuits to hold the keys so that coercion is impossible.
Yes, but I actually want to build this thing. Fairly soon even.
Plus, I think an important part of the picture which is not currently being implemented is remailer key changes. This can provide forward secrecy similar to your scheme. Once last week's key is gone, there is no longer any danger of your message ever being traced (as long as you trust the remailer to truly erase it, just as in your scheme). This would be useful both for ordinary remailing and for Chaum-style reply blocks, which as I say are both vulnerable to the reply-with-coercion attack.
Better is perhaps a three-day key with one overlap, that is, a current key and one "last key" kept around at all times.
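Something like this is all the bookkeeping I mean; a small sketch, with the three-day period and the in-memory keys just assumptions for illustration. The remailer holds only a current key and one previous key, and anything encrypted under an older key becomes undecryptable forever once that key is overwritten.

import os, time

ROTATE_SECONDS = 3 * 24 * 3600           # "three-day key"

class RemailerKeys:
    def __init__(self):
        self.current = os.urandom(16)
        self.last = None                 # the single overlap key
        self.rotated_at = time.time()

    def maybe_rotate(self):
        if time.time() - self.rotated_at >= ROTATE_SECONDS:
            self.last = self.current     # keep exactly one old key around
            self.current = os.urandom(16)
            self.rotated_at = time.time()
            # keys older than `last` are simply gone; tickets made under
            # them can never be used or traced again

    def candidates(self):
        # try both keys when a ticket arrives near a rotation boundary
        return [k for k in (self.current, self.last) if k is not None]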
There is one attack on all these schemes which you didn't mention, which is that the bad guys are the first ones to try the return address and coerce each remailer along the way. This might be especially dangerous in the case of your "pigeonhole" described below, where the pigeonhole account makes a tempting target for the snoopers, giving them a chance to intercept the reply message back to you and be the first ones to use it.
True, the path has to be there, or the message can't go. I can't think of a fix for that one, can you? Mostly I just don't want an endlessly growing amount of information out there. I want old information to die after a while, as keys are erased or expired.

[ DH exchange / Key broadcast approach ]

Broadcasting a list of keys is one possibility; what if someone else uses the same key? Birthday theorem makes this hard to prevent.
Given a secure two-way messaging mechanism, persistent anonymous identities are established using a "pigeonhole service". This is a service, with a publicized address, that will accept public-key encrypted mail and store it in a "pigeonhole". The owner of the pigeonhole anonymously sends a request (with authentication) and a reply ticket. The pigeonhole service sends the owner his mail using the ticket.
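A rough sketch of what I have in mind, just to pin down the moving parts; the HMAC over a per-pigeonhole secret is only a stand-in for whatever real signature scheme would be used, and send_via_ticket() is a stub for handing mail plus the reply ticket back into the remailer network.

import hmac, hashlib
from collections import defaultdict

def send_via_ticket(reply_ticket: bytes, mail: bytes):
    # stub: in reality this injects the mail into the remailer network
    # using the one-shot reply ticket
    pass

class PigeonholeService:
    def __init__(self):
        self.boxes = defaultdict(list)   # pigeonhole id -> stored mail
        self.auth = {}                   # pigeonhole id -> auth secret

    def create(self, box_id: str, auth_secret: bytes):
        self.auth[box_id] = auth_secret

    def deposit(self, box_id: str, encrypted_mail: bytes):
        # anyone may deposit; the mail is already encrypted to the owner's key
        self.boxes[box_id].append(encrypted_mail)

    def collect(self, box_id: str, reply_ticket: bytes, tag: bytes):
        # the anonymous owner authenticates by MACing the ticket itself
        expected = hmac.new(self.auth[box_id], reply_ticket, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise PermissionError("bad authentication")
        mail, self.boxes[box_id] = self.boxes[box_id], []
        for m in mail:
            send_via_ticket(reply_ticket, m)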
This is a good idea, although there is a tradeoff between frequent polls of the pigeonhole, which might allow some traffic analysis, particularly if there is a suspected link between persona and true name, and less frequent checks, which may cause high-priority messages to be delayed.
The pigeonhole holds a one-time reply address. Every week or two it expires and you send a new one. If mail comes in, the pigeonhole uses the address to deliver it, and you send a new one.
The non-anonymous user could reply with any mail reader, send the message back to the remailer that sent it to him, and the message would be transported securely back to the anonymous user that sent it.
Yes, well, we do this already with our current remailers. Many people have written clients to create these reply blocks, along with little instructions to the baffled recipient to cut and paste the reply block at the front of the reply message. Once in a while these even work, I think.
With your pigeonhole idea you don't need this; you can just have a Reply-To that points at the pigeonhole, which is one of its biggest advantages.
Methinks I'd make it a little more robust than the existing systems (easy with perl) like being able to grep out a reply header anywhere in the message, ignore > indentation, and similar safety precautions.
In a large remailer network, end-to-end reliability is better than point-to-point reliability. Messages should be m-of-n secret shared before transmission and reassembled at the terminal end. For clientless reception, the terminal-node remailer could do the reassembly and the splitting of replies.
I agree with this. This also relates to the issue of message size quantization with cryptographically strong padding. I don't suppose the RSAREF library could do that...
Yes, this is a good idea. I first read about this in the 1993 Crypto conference proceedings, in a paper called "Secret Sharing Made Short" by Hugo Krawczyk. You might find the paper useful although it sounds very similar to what you have in mind already.
RSAREF is useful for public key and DH. Secret sharing we have to get for ourselves. I looked at Shade v1.0, and it seems to be broken on little-endian machines. It works on an HP-UX machine, but fails on a PC running linux with small-endian enabled in shade.h. The half-hour setup delay is not encouraging, either. Your SECSPLIT is nice and simple, but each shade is the size of the message. What I need is an error-correcting protocol to build a no-growth secret splitter.
Considering all the pros and cons, I am afraid that even the security of the one-shot return address is probably insufficient, especially when the simple "post replies to usenet encrypted with this key" is so easy and safe. Granted it will be a problem once everybody starts doing that, but flooding is going to be hard to beat for safety.
Yes, broadcast is the most secure, but it has a fundamental problem: security scales linearly with bandwidth. If you have a pool of 100 users and one of them gets a message, your uncertainty is 1 in 100. I've tried without success to figure out a broadcast mechanism where security scales faster than linearly with bandwidth.

Any system with a unique path is subject to an attack where each element of the path is examined in turn. If the path forks and sends to several people, the security is enhanced only to the extent that more people are annoyed.

We need a mechanism where there is either a circulating data stream or a large file on a server. An incoming message alters the data somehow, diffusing the changes over a large area. A request for information selects out some transformation of the selected data in such a way that the server cannot correlate the incoming message with the outgoing message. I don't see any way to do this.

Elimination of the replay traffic-analysis problem is major progress. As for step-by-step coercion back to the source, I don't see a fix, and we will probably have to live with that unless there is a major breakthrough.

Mike
Mike Ingle <MIKEINGLE@delphi.com> writes:
True, the path has to be there, or the message can't go. I can't think of a fix for that one, can you? Mostly I just don't want an endlessly growing amount of information out there. I want old information to die after a while, as keys are erased or expired.
No, I can't think of a fix, although your idea at the bottom might be workable in some form.
[ DH exchange / Key broadcast approach ]
Broadcasting a list of keys is one possibility; what if someone else uses the same key? Birthday theorem makes this hard to prevent.
You would want some confirmation that you got the key you requested. The broadcasted key list could be updated to show which ones have been reserved already, marked with a "nonce" (a one-time use secret random number you sent with your request) to show who reserved them. In this case you might not even need to request a specific one, just ask for one to be assigned to you and then look and see which one you got. Of course this assumes a broadcast mechanism but perhaps this is tolerable if there aren't too many remailers.
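Something like this toy sketch is what I mean by reserve-and-look; the structure is invented for illustration, and how the key material itself stays secret (DH, encryption to the requester, or whatever) is deliberately left out.

import os

class KeyBroadcast:
    def __init__(self, n_keys=8):
        self.keys = {i: os.urandom(16) for i in range(n_keys)}
        self.reserved = {}               # key id -> nonce that claimed it

    def reserve(self, nonce: bytes):
        # assign the lowest-numbered free key to whoever sent this nonce;
        # the request arrives through the remailer net, so it is unlinkable
        for kid in sorted(self.keys):
            if kid not in self.reserved:
                self.reserved[kid] = nonce
                return
        raise RuntimeError("no free keys")

    def broadcast(self):
        # published list of which slots are taken and the claiming nonces
        return {kid: self.reserved.get(kid) for kid in self.keys}

def find_my_key(broadcast: dict, my_nonce: bytes):
    # the user scans the public list for their own nonce
    return next((kid for kid, n in broadcast.items() if n == my_nonce), None)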
The pigeonhole holds a one-time reply address. Every week or two it expires and you send a new one. If mail comes in, the pigeonhole uses the address to deliver it, and you send a new one.
You'd have to watch out for attackers who constantly ping the pigeonhole address and try to see which messages leave the remailer network in a correlated way.
Methinks I'd make it a little more robust than the existing systems (easy with perl) like being able to grep out a reply header anywhere in the message, ignore > indentation, and similar safety precautions.
Yes, that is a good idea. Many of the existing remailers are also written in perl (calling PGP for decryption), but not much work has been done to improve them in this way. I think there is recognition that the biggest security improvement would come with message quantizing (and not passing subject lines through!), and until we have that the rest is pretty pointless.
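For what it's worth, the quantizing itself is only a few lines. This is just a sketch with an arbitrary 8K quantum, a length header, and random fill (which in practice would be encrypted along with everything else):

import os

QUANTUM = 8 * 1024        # arbitrary quantum size for illustration

def quantize(msg: bytes) -> bytes:
    framed = len(msg).to_bytes(4, "big") + msg
    target = -(-len(framed) // QUANTUM) * QUANTUM    # round up to a multiple
    return framed + os.urandom(target - len(framed))

def dequantize(blob: bytes) -> bytes:
    n = int.from_bytes(blob[:4], "big")
    return blob[4:4 + n]

assert dequantize(quantize(b"hello")) == b"hello"
assert len(quantize(b"hello")) == QUANTUM        # every small message looks alike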
RSAREF is useful for public key and DH. Secret sharing we have to get for ourselves. I looked at Shade v1.0, and it seems to be broken on little-endian machines. It works on an HP-UX machine, but fails on a PC running linux with small-endian enabled in shade.h. The half-hour setup delay is not encouraging, either. Your SECSPLIT is nice and simple, but each shade is the size of the message. What I need is an error-correcting protocol to build a no-growth secret splitter.
I have not looked at the Shade source. Here is the posting I made to cypherpunks on Krawczyk's method. I wasn't very well organized but if you read through to the end you may be able to get the gist of it:
From inbox/cpz Sat Aug 13 19:00:00 1994
From owner-cypherpunks@toad.com Sat Aug 13 14:10:33 1994
Date: Sat, 13 Aug 1994 14:06:25 -0700
From: Hal <hfinney@shell.portal.com>
Message-Id: <199408132106.OAA13869@jobe.shell.portal.com>
To: cypherpunks@toad.com
Subject: Secret sharing made short
Sender: owner-cypherpunks@toad.com
Precedence: bulk
I came upon a paper with this title in the 1993 Crypto conference proceedings, by Hugo Krawczyk. He pointed out that with the Shamir-type secret splitting which we discuss here periodically you have considerable space expansion. Splitting a message of M bits into N shares causes each share to itself be M bits. Krawczyk shows a simple system which basically has each share be only M/N bits. (I will ignore for simplicity the issue of providing a threshold K<N such that any K of the N shares are sufficient to restore the message.)
He achieves this by forgoing "pure" information-theoretic secrecy in favor of "mere" computational secrecy. This is a reasonable tradeoff since most implementations of Shamir sharing end up relying on computational secrecy for their random numbers anyway.
Krawczyk's idea, in the simple subset I am describing, is almost embarrassingly easy. Take your message M and encrypt it using a random IDEA or DES key. Split the resulting cyphertext into N pieces (just carve it up) and give each piece to a shareholder. Take the IDEA/DES key and Shamir-split it into N pieces and give those out as well. (Shamir splitting for this case can be done simply by having N-1 of the pieces be totally random, and having the last piece be the xor of the IDEA/DES key and the N-1 random pieces. Only by xor'ing all N pieces can the original key be recovered.)
Everyone ends up with slightly over M/N bits; they have M/N plus the size of a DES or IDEA key. But that is pretty close. And unless IDEA or DES can be broken they will have to recover all of the shares in order to reconstruct the key and read the message.
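A minimal sketch of this simple N-of-N variant, with a SHA-256 counter keystream standing in for IDEA/DES purely so the example is self-contained: encrypt under a random key, carve the ciphertext into N pieces, and xor-split the key into N shares.

import os, hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # stand-in for IDEA/DES: SHA-256 in counter mode used as a stream cipher
    out = b""
    ctr = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(b ^ k for b, k in zip(data, out))

def split(message: bytes, n: int):
    key = os.urandom(16)
    ct = keystream_xor(key, message)
    step = -(-len(ct) // n)                           # ceiling division
    ct_pieces = [ct[i * step:(i + 1) * step] for i in range(n)]
    key_shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in key_shares:                              # last share = key xor the others
        last = bytes(a ^ b for a, b in zip(last, s))
    key_shares.append(last)
    return list(zip(ct_pieces, key_shares))           # share i: (ct piece, key share)

def join(shares):
    ct = b"".join(piece for piece, _ in shares)       # shares must stay in order
    key = bytes(len(shares[0][1]))
    for _, ks in shares:
        key = bytes(a ^ b for a, b in zip(key, ks))
    return keystream_xor(key, ct)

msg = b"the medium is the message"
assert join(split(msg, 5)) == msg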
For generalization to the K<N case you still use Shamir splitting on the IDEA or DES key, but the message itself gets split up using an error-correcting code concept so that K pieces are enough to reconstruct the message. This requires M/K bits per share, plus the overhead for the DES/IDEA key.
This sounds like it would be a good enhancement to the Shamir splitting code that was posted here. The IDEA or DES module could be a source of random bits for the Shamir splitting. PGP's IDEA module is pretty self-contained and has a random-number entry point.
(Oh, well, I've come this far, I might as well finish it. The message distribution scheme Krawczyk gives is this: split the message into K pieces. Treat each piece as the coefficient of a K-1 degree polynomial. Evaluate the polynomial at X=0,...,N-1 and let the results be the shares. Now any K of the shares will allow the polynomial to be reconstructed, and by concatenating the coefficients we recover M. This is similar to Shamir's scheme but is not informationally secure and has shares of size M/K.)
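And a toy version of that M/K dispersal step (my own illustration, not code from the paper), working over the small prime field GF(257) so byte-sized pieces fit as coefficients; any K shares interpolate the polynomial, and the coefficients are the message pieces.

P = 257   # prime field, big enough for byte-valued pieces in a toy example

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def make_shares(pieces, n):
    # pieces are the coefficients of a degree K-1 polynomial; evaluate at n points
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(pieces):        # Horner evaluation
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def recover(shares, k):
    # Lagrange interpolation of any k shares gives back the k coefficients
    pts = shares[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(pts):
        num, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = poly_mul(num, [-xj % P, 1])
                denom = (denom * (xi - xj)) % P
        inv = pow(denom, P - 2, P)
        for d, c in enumerate(num):
            coeffs[d] = (coeffs[d] + yi * c * inv) % P
    return coeffs

pieces = [72, 101, 108, 112]             # K = 4 byte-sized message pieces
shares = make_shares(pieces, 6)          # N = 6 shares, each a single field element
assert recover(shares[2:], 4) == pieces  # any 4 of the 6 suffice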
Hal
Considering all the pros and cons, I am afraid that even the security of the one-shot return address is probably insufficient, especially when the simple "post replies to usenet encrypted with this key" is so easy and safe. Granted it will be a problem once everybody starts doing that, but flooding is going to be hard to beat for safety.
Yes, broadcast is the most secure, but it has a fundamental problem: security scales linearly with bandwidth. If you have a pool of 100 users and one of them gets a message, your uncertainty is 1 in 100. I've tried without success to figure out a broadcast mechanism where security scales faster than linearly with bandwidth.
This is true, but you said you are talking about things that can be done today, and today Usenet already has a pool of probably a million users. That is plenty of security. The problem is if everyone starts using it for their replies, but that won't be more than a drop in the bucket for a long time.
We need a mechanism where there is either a circulating data stream or a large file on a server. An incoming message alters the data somehow, diffusing the changes over a large area. A request for information selects out some transformation of the selected data in such a way that the server cannot correlate the incoming message with the outgoing message. I don't see any way to do this.
This is an interesting idea. It is sort of like broadcast, except you would be reducing the bandwidth requirements by only sending certain information to each user. One way to formalize it would be to say that you have two datasets, D1 and D2. These get combined into D12 = f(D1,D2) for some combining function f. Then we ask whether there is a g(D12) which allows reconstruction of just D1 or D2, in such a way that we can't tell which one it will extract just from knowing f and g. Plus, g must output data which is no larger than D1 or D2. In this strict form I don't think it can be done, because you could change D1 and see if g(D12) changed. If it did, then it was getting D1, and if it didn't, it was getting D2. However, if we let g's output be a little bigger, then perhaps it wouldn't be so clear. I don't know...
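To make that distinguishing argument concrete, here is a trivial toy with made-up f and g, showing how varying D1 while holding D2 fixed reveals which dataset g is extracting:

def f(d1: bytes, d2: bytes) -> bytes:
    return d1 + d2                       # toy combining function

def g_first(d12: bytes) -> bytes:        # secretly reconstructs D1
    return d12[:4]

def g_second(d12: bytes) -> bytes:       # secretly reconstructs D2
    return d12[4:]

d2 = b"BBBB"
for g in (g_first, g_second):
    changed = g(f(b"AAAA", d2)) != g(f(b"CCCC", d2))
    print(g.__name__, "output changed when D1 changed:", changed)
# g_first: True (so it was getting D1); g_second: False (so it was getting D2)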
Elimination of the replay traffic-analysis problem is major progress. As for step-by-step coercion back to the source, I don't see a fix, and we will probably have to live with that unless there is a major breakthrough.
Again, users may not be willing to live with it since they have an alternative right now in usenet. Hal