The problem with the idea of posting anonymous mail to a newsgroup is sheer volume. Remember, we are aiming at a system where a large fraction of mail is potentially sent this way. Imagine if almost all email today were delivered by posting to newsgroups! There is probably thousands of times as much email traffic as news traffic now; it would totally swamp the system. In effect, you'd have to send every email message written by any user in the world to _every_ user in the world. As Yanek says:
You are guaranteed anonymity because no-one can find out who decrypted the alt.w.a.s.t.e message, since everyone received it.
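To put rough numbers on that trade-off, here is a back-of-envelope sketch in Python; the user count and per-user message volume are illustrative guesses, not measurements from the original discussion:

```python
# Rough arithmetic (all figures are assumed) for why "everyone receives
# every message" cannot scale: broadcast multiplies delivery work by the
# number of users.
users = 10_000_000          # assumed number of mail users
msgs_per_user_per_day = 10  # assumed personal mail volume

point_to_point = users * msgs_per_user_per_day          # normal delivery
broadcast = users * msgs_per_user_per_day * users       # everyone gets everything

print(f"direct delivery:    {point_to_point:.1e} deliveries/day")
print(f"broadcast delivery: {broadcast:.1e} deliveries/day")
print(f"blow-up factor:     {broadcast // point_to_point:,}x")
```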
This really won't scale to large numbers of users. Yanek also writes:
Here is an example of how to use the cryptographic remailer at <hal@alumni.caltech.edu> to implement an anonymous return address.
But then again, do you trust hal@alumni.caltech.edu...
With conventional remailer schemes such as this one, you are announcing your True Name (or at least your True Internet Mailbox) to someone you must trust. With my scheme (posted earlier today) you don't need to trust anybody except yourself (to not make a dumb mistake like including a signature).
This is why you would want to use a chain of remailers as your return address, what Chaum calls a "cascade". No single remailer sees the correspondence between your anonymous address and your real address. Only by breaking all of them can the bad guys find out who you are. Ideally, remailers would operate in a variety of countries with different laws, making it difficult to crack them all.

Remailers could be designed to periodically flush themselves, deleting old keys and/or pseudonym maps. This way anonymous addresses would have a limited lifetime if desired, and the attackers would have only a finite time window to break all the remailers involved. (Different keys/pseudonyms could have different lifetimes as needed.)

We could also imagine that there are lots of remailers - not just dozens, or hundreds, but millions of them. Maybe almost everyone runs a "cheap" remailer on the side, collecting a few cents in digital cash for each message they pass, enough to pay for their own messages.

Putting all this together, you could have an anonymous address which passes through, say, 10 remailers which might be any of the millions of remailers in the world. It could have a limited lifetime of only a few hours for some ultra-sensitive applications, with the remailers involved flushing their databases after that time. To break this, the enemy would have to sequentially break into machines all over the world, one after another, defeat any physical barriers (locks, men with guns), overcome tamper-resistance in the computers, break the encrypted files, and find out what the next step is in the address cascade, all in a couple of hours. This doesn't seem possible.

Hal 74076.1041@compuserve.com
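As one way to picture the cascade, here is a toy Python sketch of a layered reply block. It uses the third-party `cryptography` package's Fernet symmetric cipher as a stand-in for the public-key encryption a real remailer would use, and the remailer names, the three-hop chain, and the address are all hypothetical:

```python
# Toy sketch of a "cascade" reply block: the sender wraps the return address
# in one layer of encryption per remailer, so each hop learns only the next
# hop, never both ends of the path.
from cryptography.fernet import Fernet


class Remailer:
    def __init__(self, name: str):
        self.name = name
        self.key = Fernet.generate_key()   # in reality, a public/private keypair
        self._cipher = Fernet(self.key)

    def wrap(self, inner: bytes) -> bytes:
        """Sender-side: add this remailer's layer around the inner block."""
        return self._cipher.encrypt(inner)

    def unwrap(self, block: bytes) -> bytes:
        """Remailer-side: peel exactly one layer."""
        return self._cipher.decrypt(block)


# A short chain of remailers (Hal imagines millions; three here).
hops = [Remailer(n) for n in ("remailer-A", "remailer-B", "remailer-C")]

# Innermost secret: the user's true mailbox (hypothetical address).
payload = b"TO: alice@real.example"

# Build the block from the inside out: the last hop learns only the final
# destination; each earlier layer names only the remailer that comes next.
payload = hops[-1].wrap(payload)
for earlier, later in zip(reversed(hops[:-1]), reversed(hops[1:])):
    payload = earlier.wrap(b"NEXT: " + later.name.encode() + b"\n" + payload)

# Delivery: each remailer peels one layer and sees only its own instruction.
block = payload
for hop in hops:
    plaintext = hop.unwrap(block)
    instruction, _, block = plaintext.partition(b"\n")
    print(f"{hop.name} sees only: {instruction.decode()}")
```

Running this prints one line per hop, showing that no single remailer ever sees both the anonymous address and the real mailbox; only compromising every hop in the chain reveals the correspondence.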
The only solution (and I think I mean ONLY) is positive filtering. When pseudonyms proliferate, the only way to cut down on trash is by filtering based on reputation. Since negative reputation can be avoided simply by creating another pseudonym, the only reputation that will make a difference is positive reputation: credibility.

An example system would be one in which I give credibility and transitive credibility ratings to all the names whose posts I want to see. The transitive part lets me discover new people (who know people I respect, who know people they respect...). Then anyone credible can introduce someone else around simply by deciding to read their mail (assuming their taste is good enough that people want to read what they're reading). This grows in several directions: AI, reputation services (magazines), etc.

A public system with pseudonyms will require this very quickly. Reputation systems only work if things are digitally signed, of course (so readers and filters can't be spoofed). I will be talking about this more at the next cypherpunks meeting.

dean
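One way to read the transitive-credibility idea concretely is the following minimal sketch, which assumes a simple "best discounted path" rule; the names, ratings, decay factor, and search depth are all made-up illustrations, not part of the original proposal:

```python
# Direct ratings I (and the people I rate) assign to names whose posts we
# want to see. Anyone reachable through people I trust inherits a discounted
# share of that credibility.
ratings = {
    "me":    {"alice": 0.9, "bob": 0.6},
    "alice": {"carol": 0.8},
    "bob":   {"carol": 0.4, "dave": 0.7},
}


def credibility(reader: str, author: str, decay: float = 0.5,
                depth: int = 3, seen: frozenset = frozenset()) -> float:
    """Best discounted rating of `author` reachable from `reader`."""
    if author in ratings.get(reader, {}):
        return ratings[reader][author]
    if depth == 0:
        return 0.0
    best = 0.0
    for middle, trust in ratings.get(reader, {}).items():
        if middle in seen:
            continue
        best = max(best,
                   trust * decay * credibility(middle, author, decay,
                                               depth - 1, seen | {reader}))
    return best


# A mail filter would sort or drop messages by these scores; an unknown
# pseudonym ("mallory") simply scores zero until someone credible rates it.
for name in ("alice", "carol", "dave", "mallory"):
    print(name, round(credibility("me", name), 3))
```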
filtering based on reputation. Since negative reputation can be avoided simply by creating another pseudonym, the only reputation that will make a difference is positive reputation: credibility.
What's to stop you (once you have some "reputation") from creating 250 other pseudonyms or "identities", giving them all a "reputation", and then creating another identity and having these 250 all give this one as much as possible, in effect creating an identity with a lot of "credibility" out of thin air?

--
Yanek Martinson   mthvax.cs.miami.edu!safe0!yanek   uunet!medexam!yanek
this address preferred -->> yanek@novavax.nova.edu <<-- this address preferred
Phone (305) 765-6300 daytime   FAX: (305) 765-6708
(305) 963-1931 evenings   (305) 981-9812
1321 N 65 Way / Hollywood, Florida 33024-5819
What's to stop you (once you have some "reputation") from creating 250 other pseudonyms or "identities", giving them all a "reputation", and then creating another identity and having these 250 all give this one as much as possible, in effect creating an identity with a lot of "credibility" out of thin air?

Even in the simple system I described, there's probably sufficient feedback to discourage that. If the identity you went to so much trouble to promote turns out to be a bozo, then the original identity loses credibility as a source of recommendation.

Further, the positive recommendations aren't just for filtering, they're also for sorting. By spreading the credibility of the first identity out over 250 others, those 250 identities just don't carry much weight when my mailer is ordering messages for me to read. If reputation is a conserved capital, for instance, they together carry only as much weight as the first identity that recommended them.

dean
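A toy illustration of the "conserved capital" point, with assumed numbers:

```python
# If an endorser's weight is split across everyone it vouches for, 250 sock
# puppets each carry only a sliver, and the identity they all endorse gains
# no more than the original endorser could have given it directly.
endorser_weight = 1.0      # credibility the original identity can spend
sock_puppets = 250

weight_per_puppet = endorser_weight / sock_puppets        # 0.004 each
new_identity_weight = sock_puppets * weight_per_puppet    # back to 1.0 total

print(weight_per_puppet, new_identity_weight)
```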