Re: Future of anonymity (short-term vs. long-term)
Sorry, I sent my last message before it was ready (and before it got divided into two separate messages). It mostly says what I wanted it to, so I won't bother you with another version.

On the SHORT-TERM end of things, I have two more thoughts on how to make truly anonymous remailers good net citizens:

1. Agree on a header line which identifies all messages coming out of our remailers. If someone wants to filter out all anonymous messages, I think we should help them to do so.

2. Here's my proposal for what kind of remailer logging to do:

   - Logging of source-to-destination mapping: NONE.
   - Destination logging: NONE.
   - Source logging: on a machine-by-machine basis, log the total input volume over a fairly long period, with some random noise added.

When a source is providing too much volume, and it's not on your local list of "friendly" remailers, then take action to reduce the volume. I suggest that the first action should be to INCREASE THE DELAY, to reduce the volume-per-unit-time of messages from that site. Only if the volume of spooled traffic from a site reaches a threshold should you start throwing away messages.

-- Marc Ringuette (mnr@cs.cmu.edu)
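Marc's point 1 could be sketched as follows. This is a minimal illustration, not anything the thread agreed on; in particular the header name "X-Anonymous" is purely an assumption, since no actual header line had been settled at this point.

```python
# Sketch of Marc's point 1: stamp every outgoing remailer message with
# an agreed-upon header so that recipients can filter anonymous mail.
# The header name "X-Anonymous" is an assumption, not an agreed standard.
from email.message import EmailMessage

REMAILER_HEADER = "X-Anonymous"

def stamp_outgoing(msg: EmailMessage) -> EmailMessage:
    """Mark a message as having passed through an anonymous remailer."""
    if REMAILER_HEADER not in msg:
        msg[REMAILER_HEADER] = "yes"
    return msg

def is_anonymous(msg: EmailMessage) -> bool:
    """Recipient-side filter: check for the agreed header."""
    return msg.get(REMAILER_HEADER, "").lower() == "yes"
```

A list maintainer who wanted no anonymous postings could then drop any message for which `is_anonymous` returns true, instead of having to guess from the From: address.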
Date: Sun, 28 Feb 1993 19:59-EST
From: Marc.Ringuette@GS80.SP.CS.CMU.EDU

> 1. Agree on a header line which identifies all messages coming out of
> our remailers. If someone wants to filter out all anonymous messages,
> I think we should help them to do so.

This would indeed be a considerate thing to do. In the short run, the only way a mailing list maintainer can avoid being abused by some twit determined to hide behind your network of remailers is to disallow anonymous postings altogether.

Since John Gilmore, the maintainer of the Cypherpunks mailing list, is one of the absolute free speech advocates, let me ask a question of you directly: What would you do if, sometime next week, someone decided to flood the Cypherpunks mailing list with a large amount of trash postings, routed through different combinations of remailers? Let us assume that the trash is generated by grabbing varying snippets from USENET articles, so that current AI technology is not able to distinguish a true Cypherpunks submission from the flooded trash postings. What would you do? Now let's also suppose someone does the same thing to all of the GNU newsgroups. What would you do then?

I ask these questions well aware that somewhere out there, some immature twit might get an idea from this scenario and make the above questions less hypothetical. :-( (Sorry for sounding so cynical, but after being a News admin at MIT for a long time, and dealing with a lot of people suffering from severe cases of freshmanitis, I have a less than optimistic view of human nature.)

> source logging: on a machine-by-machine basis, log the total input
> volume over a fairly long period, with some random noise added. When a
> source is providing too much volume, and it's not on your local list
> of "friendly" remailers, then take action to reduce the volume. I
> suggest that the first action should be to INCREASE THE DELAY to
> reduce the volume-per-unit-time of messages from that site.
> If the volume of spooled traffic from a site reaches a threshold, only
> then start throwing away messages.

This doesn't work. Someone clever could easily redirect the message through different (non-anonymous) SMTP servers before it entered the remailer network; this would completely defeat the volume logging. And while the first hop would still be logged somewhere, unless the remailer administrator reveals the input/output address mapping, you'd still have no way to trace the message from the destination back to the source.

- Ted
Marc's short-term suggestion of limiting the bandwidth from a particular source seems like a reasonable exigency. Let me suggest a way of doing it which does not require keeping long-term logs.

Suppose your bandwidth limiter kept totals of all bytes sent in the last week. In order to keep that data current, it needs to know when to remove byte counts that are a week old; thus it needs to keep logs of the last week's worth of messages, at least in byte-count form. Instead of that, you can just make the byte count decay. Once a day, a process goes through the byte counts and reduces them, removing any entries that are <= 0. If this decaying byte count is bigger than some threshold, bounce the message.

I would suggest that the reduction equation be linear: multiply by some constant between zero and one, subtract off a fixed amount, and drop the fractional part. The multiplicative factor, which I would set between .9 and 1.0, means that an occasional large file could be sent through without completely eliminating email delivery for a while. The subtractive amount cleans out the database more quickly.

Comments?

Eric
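Eric's decaying byte count can be sketched like this. It is a minimal illustration under assumed constants (the decay factor, daily subtraction, and threshold values below are placeholders, not values from the thread):

```python
# Sketch of Eric's log-free bandwidth limiter: a per-source byte count
# that decays linearly once a day instead of keeping per-message logs.
# All constants are illustrative assumptions.

DECAY_FACTOR = 0.95      # multiplicative constant between 0 and 1
DECAY_SUBTRACT = 10_000  # fixed amount subtracted each day (bytes)
THRESHOLD = 1_000_000    # bounce messages from sources above this

byte_counts = {}         # source address -> decayed byte count

def accept_message(source, size):
    """Return True to deliver the message, False to bounce it."""
    if byte_counts.get(source, 0) > THRESHOLD:
        return False
    byte_counts[source] = byte_counts.get(source, 0) + size
    return True

def daily_decay():
    """Run once a day: decay every count, dropping entries <= 0."""
    for source in list(byte_counts):
        # Multiply, subtract a fixed amount, drop the fractional part.
        decayed = int(byte_counts[source] * DECAY_FACTOR) - DECAY_SUBTRACT
        if decayed <= 0:
            del byte_counts[source]   # the database cleans itself out
        else:
            byte_counts[source] = decayed
```

A source that sends an occasional large file climbs above the threshold briefly and decays back under it within days, while an idle source's entry shrinks to zero and disappears, so no week-long log is ever kept.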
participants (3)
- Eric Hughes
- Marc.Ringuette@GS80.SP.CS.CMU.EDU
- Theodore Ts'o