Re: You Aren't [I'm Not]
From: Theodore Ts'o <tytso@athena.mit.edu>
Sorry; typo on my part. What I meant to say was "No, I am not arguing that free speech is bad." Mr. Metzger was putting words in my mouth when he claimed that I was saying that.
Anonymity and free speech are *NOT* the same thing. As I posited in an earlier message, which no one has yet commented on, those two concepts are distinct.
Yes they are, Ted. They are mathematically equivalent. If I can say anything, I can say it in code. If I can say anything, I can repeat what someone else said in code, possibly transforming it. Ta da, remailers.

To stop remailers, you will need to stop free speech. Please at least admit this much. It might be unpleasant, but in a society with no prior restraints on speech, it is likely not possible to stop cryptographic systems that assure anonymity. Perry
Perry: You are right that because of the right of free speech, it is impossible to prohibit remailers. However, while I don't believe in prior restraint, I do believe in personal responsibility.

It is certainly true that it is possible to construct a remailer service, using cryptography, such that it would be impossible to trace a message back to the original sender. This class of remailer would generally not provide a return-address mapping feature, since if the remailer can generate a return path, that path can be revealed. There are ways to make it more difficult to reveal, but they still don't make it impossible. So Julf's remailer doesn't fall into this category, but ones where the input and output mappings are destroyed immediately do.

So in this model, how can you provide personal responsibility? Well, I would argue that the buck should stop at the remailer site. They are the closest link in the chain of liability, and they have intentionally taken measures which make it impossible to find the next link in that chain. So, let the liability rest with the remailer site! Now, I'm not a lawyer, and as far as I know, this legal theory hasn't been tested in a court. So only time will tell what happens when these remailers hit the real world.

As far as remailers like Julf's are concerned, I very much like the idea which Tim Moors suggested --- which is to have some method by which the mapping between the input and output addresses could be revealed. This provides general anonymity, but one that can be breached when someone has abused that anonymity, as determined by a jury of their peers. Perhaps the way this could be reflected into the "real world" legal system is that remailers which do keep a mapping between input and output addresses, and which are willing to reveal it under appropriate circumstances, would be exempt from liability for what comes out of their remailer.
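The two classes of remailer described above can be made concrete with a small sketch. This is purely illustrative (it is not the design of Julf's remailer or any other deployed system, and all names and the alias format are invented for the example): a "strong" remailer strips identifying headers and keeps no record, so no reply path and no revealment are possible; a "weak" one keeps an alias-to-sender table that could later be disclosed.

```python
# Illustrative sketch only: contrasts a remailer that destroys the
# input/output mapping immediately (strong anonymity, no reply path)
# with one that keeps a revealable mapping (weak anonymity).
# Header names, alias format, and the domain are hypothetical.

import secrets

IDENTIFYING_HEADERS = {"From", "Sender", "Reply-To", "Received", "Message-ID"}

def strip_identity(message: dict) -> dict:
    """Drop headers that could trace the message back to its sender."""
    return {k: v for k, v in message.items() if k not in IDENTIFYING_HEADERS}

def strong_remail(message: dict) -> dict:
    # Nothing is recorded; once this returns, the input/output
    # mapping is gone.  Replies to the sender are impossible.
    return strip_identity(message)

class WeakRemailer:
    """Keeps an alias -> sender table, so the anonymity can be
    breached under whatever circumstances the operator allows."""

    def __init__(self):
        self._alias_to_sender = {}

    def remail(self, message: dict) -> dict:
        alias = "an" + secrets.token_hex(4)
        self._alias_to_sender[alias] = message.get("From", "unknown")
        out = strip_identity(message)
        out["Reply-To"] = alias + "@remailer.example"  # reply path exists
        return out

    def reveal(self, alias: str) -> str:
        """The step that makes this 'weak': the mapping can be produced."""
        return self._alias_to_sender[alias]
```

The design difference is exactly the liability difference discussed above: the weak remailer retains something that a court could compel, the strong one has nothing to produce.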
Perhaps these are not the right tools for controlling the negative effects of remailers. But it is our responsibility to consider them, and not just pretend they don't exist. I hope we don't have the attitude of "Vonce the rockets go up, who cares vere they come down? That's not my department....." - Ted
To stop remailers, you will need to stop free speech. Please at least admit this much.

let me say that some of this discussion has certainly been mind-bending, and i appreciate having taken part. i would like to add my 2 pfennigs' worth.

anonymity and free speech are different in precisely this way: that we are free to say what we want doesn't mean we aren't also accountable for what we may say. when we can speak freely *and* anonymously, then we are no longer accountable for what we say. anonymous free speech is a *stronger* form of free speech; this is what i think perry is arguing. however, this stronger form of freedom means individuals are no longer accountable for their words or behavior; this, i believe, is ted's concern.

i can see that some members of this list are interested in providing an environment where these fundamentally social issues are solved technically. however, this seems to be an issue which cries out for a social solution, with perhaps a technical implementation. they may be looking to (over)simplify these social issues so that they are *easily* solved technically, and this is where they might be going astray.
in our society, for example, there are strong cultural restrictions on what we can say. these are not mandated by law. these are the rules of the game when it comes to existing in a particular culture. an instance of such rules might be "politeness vs. rudeness." accountability can have positive or negative effects.

it seems to me that the usefulness of anonymous free speech hinges on whether the speaker should or should not be held accountable for her/his words. i can't find an easy technical way of making possible free speech which is beneficial, while limiting non-beneficial free speech. there may be, however, ways of structuring or socially incorporating anonymous free speech such that the beneficial uses are encouraged, and the maleficent uses are reduced. but i feel strongly that the approach will have to be socially, not technically, based.

i don't think digital cash is a really equitable way of accomplishing this. as soon as economics are involved, individuals will be sucked into classes of "haves" and "have-nots". while markets are good, the effects on individuals can be horrendous, as serious as censorship. are we trying for meritocracy, or for rule based on who has the most dough? accountability is critical to those who can't protect themselves from the government or from other members of society. these are precisely the people who would be burned by such an economic system. this *is* what the media is for, right?
Ted writes:
[...] but I do believe in personal responsibility.
I do not think this is an entirely forthright self-assessment.
It is certainly true that it is possible to construct a remailer service, using cryptography, such that it would be impossible to trace it back to the original sender.
Let me call that strong anonymity. Let me also call anonymity with the possibility of revealment weak anonymity.
So in this model, how can you provide personal responsibility? Well, I would argue that the buck should stop at the remailer site. They are the closest link in the chain of liability, and they have intentionally taken measures which make it impossible to find the next link in that chain. So, let the liability rest with the remailer site!
I interpret you to mean that it is not personal responsibility for speech that you want, but the existence of someone to sue. The placement of liability on the remailer does not directly affect what the anonymous sender is going to say. The assignment of liability has, foremost, legal consequences. The only way I see that it will increase personal responsibility for speech is by making the legal climate (in the U.S., at least) impossible for strong anonymity. By eliminating strong anonymity, you can ensure that anonymity is only ever conditional. Now, you haven't directly stated that you think that strong anonymity shouldn't exist. If this is what you think, please say so directly. You can then make whatever argument you wish to support this position, but I, for one, would like to argue against clearly stated positions.
Now, I'm not a lawyer, and as far as I know, this legal theory hasn't been tested in a court. So only time will tell what happens when these remailers hit the real world.
No, not only time will tell. This seems like an important enough point to legislate into existence before a court test. And for those with objections to making legislation, remember that otherwise the issue will be resolved not publicly by law, but by lawyers in the courts. How about something like the following: "Speech made anonymously will carry a presumption of falsity in all consideration of tort resulting from said speech."
Perhaps these are not the right tools for controlling the negative effects of remailers.
One can eliminate the negative effects by eliminating the positive ones as well. I do believe strong anonymity to be one of these benefits. Eric
Date: Wed, 3 Mar 93 11:39:32 -0800 From: Eric Hughes <hughes@soda.berkeley.edu>
So in this model, how can you provide personal responsibility? Well, I would argue that the buck should stop at the remailer site. They are the closest link in the chain of liability, and they have intentionally taken measures which make it impossible to find the next link in that chain. So, let the liability rest with the remailer site!
I interpret you to mean that it is not personal responsibility for speech that you want, but the existence of someone to sue.

Sorry for not being clear; I was merely speculating on how the Real World might react to the presence of remailers. I actually think this might be a reasonable response, and perhaps even a likely one.

Let's cast this into a physical-world example. Suppose someone has developed a system which will allow someone to broadcast, over a bullhorn, at 150 dB, in your neighborhood. Suppose further that said system will allow anybody to broadcast over that source, either for free or at 10 cents a minute, in such a way that it is impossible to track down the source. Now suppose that this bullhorn (which is located on private property) starts spewing announcements and other people exercising their right of free speech, at all hours of the day and night.

Now, then, let us explore this example. Is it reasonable to presume that it is each individual homeowner's responsibility to put up soundproofing, to protect themselves from unwanted noise? If so, why? Why not? And if the people of the neighborhood decided to get together and sue someone, who would be the likeliest target?

Does this example apply to the remailer issue? Well, there are certainly examples that go both ways. For example, if you receive junk mail, you just throw it out. On the other hand, if you receive crank calls, you are entitled to call your phone company, and they will make an attempt to track down the crank caller and turn over his identity to the police, with the charge of harassment.

Now, you haven't directly stated that you think that strong anonymity shouldn't exist. If this is what you think, please say so directly. You can then make whatever argument you wish to support this position, but I, for one, would like to argue against clearly stated positions.

Whether or not it "shouldn't exist" is somewhat irrelevant, don't you think?
If people really want to put them up, they're going to exist. In retrospect, it was a mistake for me to point out that it might be a bad idea to make that sort of service available, since I doubt any of the anonymity stalwarts have been listening to me anyway. (It has certainly seemed at times as if no one has really been listening to me, as some of the accusations of my being a censorship lover and being associated with some evil cabal (tm) seem to attest.) Some of my less than thoughtful outbursts were caused by my exasperation at how people were obviously not listening, and were responding with name-calling and arguments that were completely beside the point. I apologize for those outbursts.

In any case, I don't believe the benefits of strong anonymity are worth the negative consequences, and most of the benefits of strong anonymity are also provided by weak anonymity. Hopefully, if strong anonymity does have the bad effects I fear, there will be ways for our society to correct for them --- for example, holding the administrators of the remailers liable for the damage caused by the remailers. This may not be the case, given things like international boundaries. But it is probably unproductive to argue about whether or not this will happen. Time alone will tell.

"Speech made anonymously will carry a presumption of falsity in all consideration of tort resulting from said speech."

One can pass legislation proclaiming this to be the case; legislation has been passed declaring PI to be 3. The question is whether or not this is really a true statement about the way the human mind works in general. While tort law often seems to bear little or no resemblance to the outside world, it is supposed to be based on the real world.
This is why, when someone is suing someone else for libel, English Common Law states that you must meet three tests: (a) the statements must be false, (b) the speaker must have known the statements were false, and spoken them with malicious intent, and (c) real damages were incurred. (And that is what the plaintiff is suing to recover.)

If what you say is true, that human beings have a presumption against believing statements made anonymously, then test (c) will fail automatically; no real damage would have occurred. In this case, the legislation is simply not needed. On the other hand, if it is true that people will believe statements made anonymously, and so real damage can be done as a result, then the person who has been wronged should have every right to obtain compensation for those damages. That's what the tort system is all about. - Ted
I thank Ted for such a clear reply. He writes:
Sorry for not being clear; I was merely speculating on how the Real World might react to the presense of remailers. I actually think this might be a reasonable response, and perhaps even a likely one.
This was the other interpretation I came up with, yet it did not seem as likely to me as the one I assumed. Excuse me if I ever implied you were a freedom-hating, Dorothy-Denning-loving crypto-fascist. ;-) Yes, there are plenty of large organizations who sue at the drop of a hat. Yes, it is likely that remailer operators would get sued. I do think, however, there are legislative and judicial defenses.
Let's cast this into a physical world example. [anonymous bullhorn example deleted]
The place where this example breaks down is that silence is a commons, and a communications network is not. Society finds it profitable to break up control of land into ownerships. It is not, on the other hand, profitable to do so with airspace as a sound-carrying medium, because shielding, in addition to being expensive, looks awful. Thus sound has remained a commons wherein all maintain an interest equal to their proximity.

A communications network, however, is an artifact, _i.e._ an object created by design and technology. As such it has no status as a commons unless the owners agree to grant it such. One might argue that the aggregate actions of backbone sites create such a commons. Granted, but the fact remains that the transmission of data in a particular way or in a particular form or structure is not fundamental to the medium. Like any other artifact, it can be changed.

Furthermore, the analogy of shouting at the neighbors does not accurately reflect the facts of reception. The sound from a loudspeaker cannot be silenced except with great expenditure and loss of sightline. The speech of an anonymous posting source can be easily silenced with a filter. There is a salient difference in effort here. The loudspeaker example is that of an additive medium; all sounds come over the same channel. A telecommunications network, however, is at the other end of the spectrum; every message comes in separately. The electronic medium is the most separable there is. Filtering is not possible for the loudspeaker; it is easy for the messages.

And again, no one requires a carrier to carry anonymous messages. Practically speaking, you might easily end up with a situation like the alt.* hierarchy, where only certain subnets agree to exchange anonymous traffic. I suspect this is inevitable in the short term.
On the other hand, if you receive crank calls, you are entitled to call your phone company, and they will make an attempt track down the crank caller and turn over his identity to the police, with the charge of harassment.
But the phone company is not held liable when the call was made from a pay phone.
Whether or not it "shouldn't exist" is somewhat irrelevant, don't you think? If people really want to put them up, they're going to exist.
I don't think it is irrelevant. If we allow each person unlimited personal freedom, that freedom includes the freedom not to cooperate with those one disagrees with. Since the power of groups is larger than the power of individuals, there is no such thing as unlimited personal action. To wit: "You may do what you like, but I don't have to help, and I may actively hinder you."
In any case, I don't believe the benefits of strong anonymity are worth the negative consequences, and most of the benefits of strong anonymity are also provided by weak anonymity.
Here is where we differ. I do believe that strong anonymity is desirable. I believe that weak anonymity is undesirable for the same reason that I believe key registration is undesirable. (That said, I think weak anonymity is not nearly as dangerous as key registration.) The similarity is this: that an action performed in expectation of one setting (privacy or anonymity) is later found to have been performed in another. [re: legislative protections of anonymous speech.]
One can pass legislation proclaiming this to be the case; legislation has been passed declaring PI to be 3. The question is whether or not this is really a true statement about the way the human mind works in general.
A law which states that from now on pi will be three does not change the actual ratio of the circumference to the diameter. A law which says that certain facts of a situation are to be considered in a certain way in a court of law does, in fact, change the way those facts are considered. If someone makes a claim and it is rejected because of protecting legislation, then even if the person was offended, the law still says there is no redress. If you declare that claims of offense are to be disallowed, then they are disallowed, regardless of whatever perceived or even actual harm there is. Can such legislation be passed? There's the rub. We can certainly work for it.
While tort law often seems to bear little or no resemblance to the outside world, it is supposed to be based on the real world.
It is meant to describe society's reaction to the facts of the real world, not to describe the facts themselves.
On the other hand, if it is true that people will believe statements made anonymously, and so real damage can be done as a result, then the person who has been wronged should have every right to obtain compensation for those damages.
Any such legislation would not claim that people did or did not believe them. It would state that regardless of whether they did or not, that as a matter of public policy it would not matter. Your statement begs the question of whether anonymous speech can cause "real damage." I will leave this to another discussion. Eric
Date: Wed, 3 Mar 93 16:02:00 -0800 From: Eric Hughes <hughes@soda.berkeley.edu>

A communications network, however, is an artifact, _i.e._ an object created by design and technology. As such it has no status as a commons unless the owners agree to grant it such. One might argue that the aggregate actions of backbone sites create such a commons. Granted, but the fact remains that the transmission of data in a particular way or in a particular form or structure is not fundamental to the medium. Like any other artifact, it can be changed.

True, like any other artifact, it can be changed. But then again, someone could try to change the status of sound as a "commons" as well. Perhaps the real problem is that there are a large number of people who are currently using mailing lists and Usenet newsgroups with the expectation that there are existing controls on the signal-to-noise levels and protection against mail bombs, enforced by simple standards of personal (or at worst, site) accountability. So in effect, the common usage of these collections of sites has created a "commons" which you are proposing to take away. As an artifact, certainly that can be changed; and you are proposing that we change it. But then, who should bear the cost of this change?

To bring this back to the house/anonymous-bullhorn analogy, that would be like deciding to cease considering sound (or rather, lack of sound) a commons, and expecting each homeowner, who up until now enjoyed the relative peace and quiet of their neighborhood, to pay the cost of losing their sightlines and needing to put up expensive shielding. Maybe there are good, sound policy reasons for making this change. But out of fairness, one would think that the agents of change should be prepared to bear some of the cost of that change. Without that, the homeowners will not buy into such a change, and you can hardly blame them for resisting. Wouldn't you, in similar situations?
And again, no one requires a carrier to carry anonymous messages. Practically speaking, you might easily end up with a situation like the alt.* hierarchy, where only certain subnets agree to exchange anonymous traffic. I suspect this is inevitable in the short term.

Well, this really can only happen if a carrier can easily distinguish anonymous messages from non-anonymous messages. Out of fairness, I would argue for putting in a standard header which clearly labels a message as being anonymous, so that carriers can choose whether or not they want to carry that message. Given the earlier discussion of doing filtering at the server level, this seems to fit right in.
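Ted's suggestion of a standard labeling header can be sketched in a few lines. No such header was actually standardized in this discussion, so the header name `X-Anonymous` and the policy function are assumptions for illustration; the point is only that once the label exists, the carry-or-drop decision becomes a one-line, per-site policy check.

```python
# Sketch of the proposed labeling scheme.  The header name is a
# hypothetical choice; the discussion specifies only that some
# standard marker should exist so carriers can opt in or out.

ANON_HEADER = "X-Anonymous"

def label_anonymous(message: dict) -> dict:
    """A remailer adds the marker before injecting mail into the network."""
    out = dict(message)
    out[ANON_HEADER] = "yes"
    return out

def carrier_accepts(message: dict, carry_anonymous: bool) -> bool:
    """Server-level policy: drop labelled traffic if this site opts out;
    unlabelled traffic always passes."""
    if message.get(ANON_HEADER) == "yes":
        return carry_anonymous
    return True
```

The same check works equally well in a reader's filter as in a carrier's server, which is why it fits the earlier server-level-filtering discussion.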
On the other hand, if you receive crank calls, you are entitled to call your phone company, and they will make an attempt track down the crank caller and turn over his identity to the police, with the charge of harassment.
But the phone company is not held liable when the call was made from a pay phone.

True; but the phone company is a common carrier. The networks today aren't. This could be changed by legislation, and that's something I would support for networks. However, I doubt that such legislation would actually extend as far as protecting hosts on a network, such as remailer sites. It might happen, but it would definitely be a much harder sell.
On the other hand, if it is true that people will believe statements made anonymously, and so real damage can be done as a result, then the person who has been wronged should have every right to obtain compensation for those damages.
Your statement begs the question of whether anonymous speech can cause "real damage." I will leave this to another discussion.

You misunderstand my argument. My argument is that if anonymous speech doesn't cause "real damage", then your proposed legislation isn't necessary, since real damage is a requirement for a successful libel action. On the other hand, if it does cause "real damage", then your proposed legislation would prevent someone who had been damaged from obtaining redress. So I would argue that such legislation would be bad public policy. - Ted
Perhaps the real problem is that there are a large number of people who are currently using mailing lists and Usenet newsgroups with the expectation that there are currently existing controls on the signal-to-noise levels
Existing controls on the signal-to-noise ratio? However such postulated controls might function in practice, they don't function well enough to make Usenet useful to as many people as its bandwidth could serve. I don't read Usenet any more. I can't find enough useful information in a short enough period of time. I have _no_ expectations about any controls of content on Usenet.

Ted postulates that standards of accountability provide a control over the signal-to-noise level. I grant that. It does prevent the very worst excesses from occurring. It does provide an upper bound on noise in discussion groups. Yet this upper bound is ineffectual. Let us take the widely used analogy of Usenet as a sewer. Reading Usenet is like wading chest-high through the muck. But am I reassured that there is an overflow valve so that it never gets past my chin? Hardly at all. I won't drown, to be sure; what a _slight_ comfort. (For those of you who want a much more graphic depiction of walking through sewers, read the relevant chapters in _Les Miserables_.)
and protection against mail bombs,
I had thought that we had pretty clearly established that attacks on a system of content and of volume were of different natures. Lack of robustness in mail software makes a mailbomb possible, not lack of accountability.
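The distinction drawn here, that a mailbomb is a volume attack and so calls for a volume defense in the mail software itself, can be made concrete. Below is a minimal sliding-window rate limiter, purely a sketch of the robustness argument (no MTA of the era exposed such a hook, and the class name and limits are invented). Note the limitation the debate itself implies: with anonymous mail, the "source" being throttled can only be the injecting host or remailer, never the ultimate author.

```python
# Sketch of volume-based robustness: accept at most `limit` messages
# per `window` seconds from each source, independent of sender
# accountability.  Illustrative only; not a real MTA interface.

import time

class VolumeLimiter:
    def __init__(self, limit: int, window: float):
        self.limit = limit          # max messages per window
        self.window = window        # window length in seconds
        self._arrivals = {}         # source -> list of arrival times

    def accept(self, source: str, now=None) -> bool:
        """Return True if a message from `source` may be delivered now.
        With anonymous mail, `source` is the injecting host or remailer,
        not the (unknowable) original author."""
        now = time.monotonic() if now is None else now
        recent = [t for t in self._arrivals.get(source, [])
                  if now - t < self.window]
        if len(recent) >= self.limit:
            self._arrivals[source] = recent
            return False            # over quota: defer or drop
        recent.append(now)
        self._arrivals[source] = recent
        return True
```

This addresses volume without inspecting content at all, which is exactly Eric's point: no accountability mechanism is involved.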
As an artifact, certainly that can be changed; and you are proposing that we change them. But then, who should bear the cost of this change?
The structures need to be changed for much better reasons than to prevent anonymous attacks. I infer from your arguments that you think that our current communications fora, newsgroups and mailing lists, are not fundamentally broken. I do think they are fundamentally broken. (This doesn't mean that they are completely non-functional.) I think they are fundamentally broken because they do not facilitate human communication as they were intended to. They did when they were small, I grant, but they did not scale well. They even continue to work when small and focused, but very few things with wide interest or large import remain small.

We already have most of the features of anonymity and pseudonymity online, in the systems that already exist. I've made this point before; I'll make it again now. I have never met most of the people I've conversed with online. I expect that I will never meet most of them. The personal responsibility that comes with personal contact is mostly not present online. The negative feedback loops that are normally present in face-to-face conversation are not present online, and it shows.

One of the greatest lacks in online life is the lack of restraint. How many people online do you know who continue to rant about their own positions without engaging in dialectic with another? How many do you know who, even given FAQs, continue to ask newbie questions? How many do you know who jump to answer with the conventional net-foolishness about whatever issue is at hand? (For a concrete example, consider patent legalities.) Lack of restraint causes far more problems than lack of accountability.

We have almost all of the disadvantages of pseudonymity, but hardly any of the advantages. Our correspondents can readily be determined by anyone with the ability to monitor (and that's quite a few people). We therefore cannot conduct our affairs online with the same amount of privacy we can create in the physical world.
There is no assurance, when exposing the corruption of a powerful figure, that one's identity cannot be determined and punitive actions taken. Those who have some sort of taint imputed to them by certain sections of society do not speak freely, out of fear. The virtues of technically secure anonymity outweigh the negative effects. You can flame impersonally as much as you want right now, and there is no recourse. Yet you cannot keep private from your own sysadmin the identities of those with whom you communicate. Anonymity in communications is fundamentally consistent with an open society dedicated to free speech.
To bring this back to the house/anonymous-bullhorn analogy, that would be like deciding to cease considering sound (or rather, lack of sound) a commons, and expecting each homeowner, who up until now enjoyed the relative peace and quiet of their neighborhood, to pay the cost of losing their sightlines and needing to put up expensive shielding.
I will not press the point further than the following. Whereas we cannot change the physics of wave propagation in air, we can change where the cables are laid.
Maybe there are good, sound, policy reasons for making this change. But out of fairness, one would think that the agents of change should be prepared to bear some of cost of that change.
Were there silence before in the neighborhood, I would agree.
And again, no one requires a carrier to carry anonymous messages. Practically speaking, you might easily end up with a situation like the alt.* hierarchy, where only certain subnets agree to exchange anonymous traffic. I suspect this is inevitable in the short term.
Well, this really can only happen if a carrier can easily distinguish anonymous messages from non-anonymous messages.
The simple expedient of a standard header line has already been agreed upon.

Re: crank calls
But the phone company is not held liable when the call was made from a pay phone.
True; but the phone company is a common carrier. The networks today aren't. This could be changed by legislation, and that's something I would support, for networks.
I think that networks will be common carriers, for the same reasons that phone companies became such: that having a common carrier is consistent with freedom of speech in an open society.
However, I doubt that such legislation would actually extend as far as protecting hosts on a network, such as remailer sites.
You can't protect the network unless you *do* protect individual sites. The network as a whole is not a legal entity; only the companies and individuals that run it are. I have put off a reply on the libel issue until I have read up a little on the subject. Eric
Date: Fri, 5 Mar 93 12:41:24 -0800 From: Eric Hughes <hughes@soda.berkeley.edu>

Existing controls on the signal-to-noise ratio? Yet this upper bound is ineffectual. Let us take the widely used analogy of Usenet as a sewer. Reading Usenet is like wading chest high through the muck. But am I reassured that there is an overflow valve so that it never gets past my chin? Hardly at all. I won't drown, to be sure; what a _slight_ comfort.

Touché. Granted, the signal-to-noise ratio on Usenet varies widely. However, some groups are still able to function quite well, although perhaps not as well as they could in an ideal world. Just because they aren't working perfectly isn't an excuse to break them completely, at least not until this mythical positive-reputation technology is implemented, debugged, and deployed all over Usenet. As far as the sewer analogy goes, what you are trying to do is remove the overflow valve *now*, while not providing the drain to actually drain out all of the muck. While some prototype designs have been thrown about, I have yet to hear a coherent, realistic plan for how such a system could be installed on all or most of the Usenet servers and readers *today*.

I had thought that we had pretty clearly established that attacks on a system of content and of volume were of different natures. Lack of robustness in mail software makes a mailbomb possible, not lack of accountability.

However, this mail software is deployed all over the world, and is not going to change anytime soon. And again, I have yet to see a coherent and realistic protocol that will be able to screen out mailbombs while leaving "only the good stuff" at the SMTP layer --- let alone an implementation of the same.
Maybe there are good, sound policy reasons for making this change. But out of fairness, one would think that the agents of change should be prepared to bear some of the cost of that change.
Were there silence before in the neighborhood, I would agree.

There may not have been silence, but nevertheless, if the agents of change are going to increase the average sound level by 50 dB, it is unreasonable to assume that the people who will suffer from this noise increase, and who will have to go out of their way to implement soundproofing, etc., are going to sit back passively and let you screw them.

You can't protect the network unless you *do* protect individual sites. The network as a whole is not a legal entity; only the companies and individuals that run them are.

Sure you can; you can protect regional and national networks such as NEARnet by making them common carriers. I think that would be a fine idea! However, that does not mean that people who connect to that network should also be protected. In the same way, just because Nynex is a common carrier, it doesn't and shouldn't mean that anyone who uses Nynex to place a call is similarly protected from legal liability. If you cause someone damage by your speech, and you maliciously did so knowing that your speech was false, the person you harmed should be able to recover damages from you, whether it is done over the phone or over a TCP/IP network. - Ted
Interesting that the conversation about accountability and free speech has turned toward discussing the weaknesses in Usenet. I've been thinking about Usenet software a lot, and think there are some fundamental methods that could vastly improve the dreary and oft-discussed-lamented-cursed signal-to-noise ratio. I would propose these ideas in some newsgroup devoted to the topic, but these tend to be frequented by fuddy-duddies with too much at stake in the current system, who are completely unimaginative and uninnovative, and interested in yucky stuff like strengthening authentication (in stark contrast to the sheer brilliance in our club). (For an existence proof, look at the brouhaha on anonymity in news.admin.policy.)

Now, I think we should get a thread started on the ultimate news posting software system. Let's recall the totally ad hoc nature of the original Usenet, which just sort of *emerged* because people started writing and running software for it. I fully believe this could happen with `our' system, especially if the systems are "workable", very attractive, and *effective*. I propose to call it MUSENET, because it's what I'm musing on at the moment.

Above I called authentication mechanisms `yucky', and I still believe that they should be avoided, or at least I want to be able to peruse groups with no posting restrictions. But the authentication technique really does improve signal-to-noise ratios. That is because, no matter what anybody tells you, it is really only used for holding users accountable for their posts, to the degree of complaining to their sysadmins. I submit that a high signal-to-noise ratio and total freedom of posting (e.g. anonymity) are mostly mutually exclusive objectives, but unfortunately each is equally desirable.

So, here's the idea. Let USENET continue to ferment in relative `peaceful anarchy', with total freedom in posting. Let's start MUSENET with significant registration mechanisms. Just having an internet account wouldn't cut it.
Some groups might be invitation-only; for others you might fill out an application/background form and current members vote on you, or whatever. The system should allow as much flexibility across groups as possible. Wouldn't it be great if every new user had to pass a multiple-choice test on the group's FAQ? (Sort of like getting a poster's license!) Or if the FAQ were archived along with group postings? Wouldn't it be great to peruse lists of members, their backgrounds or ``electronic resumes'', and their interests? This all should be possible. (Imagine reading a neat post and reading about the accomplishments of the person behind it, where they work, etc.)

Now, imagine that every group also has an associated `metagroup' for discussions about the group itself: whether it should be split, posters that are abusing it and the actions against them, etc. *Built into the software* would be mechanisms for "complaining" about a post. If a user gets too many complaints, depending on the group charter, he might be automatically expelled or suspended. I proposed earlier the idea of a bank account that people can credit or debit based on your postings, with membership dependent on nonbankruptcy! There could be "trials" and "proceedings" against the accused in the metagroup.

Also, mechanisms for tracking article use would be great. People could vote on articles they *liked*, too. Each group would automatically have an associated "supergroup" where the best articles are percolated up, not by posting, but by positive vote mechanisms. It would be a great honor to make it into certain of these groups. In fact, there might be a net-wide "super hall of fame" (or even a "hall of shame"). I'd also like to see a lot of tracking about when articles are saved, how long they are being read, that kind of thing--propagated back to the poster! Can you imagine what kind of effect that would have on quality? (Er, maybe I mean `could'...)
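[Editor's note: the complaint and credit/debit mechanics described above could be sketched roughly as follows. All class names, default balances, and thresholds here are illustrative assumptions, not part of any real MUSENET design.]

```python
# Hypothetical sketch of the "reputation bank account" and automatic
# complaint-suspension mechanisms. Thresholds are per-group, as the
# charter-flexibility idea above suggests.

class Member:
    def __init__(self, name, balance=10):
        self.name = name
        self.balance = balance      # reputation credits
        self.complaints = 0
        self.active = True

class Group:
    def __init__(self, complaint_limit=3):
        self.complaint_limit = complaint_limit
        self.members = {}

    def join(self, name):
        self.members[name] = Member(name)

    def vote(self, name, delta):
        """Readers credit (+) or debit (-) a poster's account."""
        m = self.members[name]
        m.balance += delta
        if m.balance < 0:           # membership dependent on nonbankruptcy
            m.active = False

    def complain(self, name):
        m = self.members[name]
        m.complaints += 1
        if m.complaints >= self.complaint_limit:
            m.active = False        # automatic suspension per group charter

g = Group()
g.join("alice")
g.vote("alice", +5)
g.complain("alice")
print(g.members["alice"].active)    # True: under the complaint limit
g.vote("alice", -20)
print(g.members["alice"].active)    # False: bankrupt, so suspended
```

A real system would of course need vote authentication itself, or the reputation accounts become just another target for abuse.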
There is a tremendous amount of analysis of articles going on *completely behind the scenes* right now, totally separated from the articles themselves. Let's get that beautiful data into cyberspace!

Group charters should be very specific about the mechanisms involved in the particular group, what kind of speech will be tolerated, and how abuses will be dealt with. There should be some way for a group to approve its "official FAQ", or more than one of them. Maybe it would appear first as a regular article, and make it into FAQhood if there are enough positive votes.

I also like the idea of "free-lance moderators" or "free-lance editors". The news servers would not only propagate articles but also meta-articles built by these free-lance editors out of their favorite articles, perhaps in a single group but ideally globally. These editors would be able to create very customized portfolios of their favorite articles, even with their own comments on the stuff, and anyone could read the portfolios instead of the raw unfiltered stuff. I think anyone should be able to become a free-lance moderator. I think many people will.

There should be some way to keep around outstanding articles. I.e., if they get enough votes, they are archived on some machine (ideally, the site they originated from or whatever) and they can be referenced in future articles. I think there ought to be a new "pseudonymous FTP" where anybody with an internet account could set up a part of their directory for archiving their favorite articles, made available to other newsreaders, possibly on the local news server. (My luddite administrators can't seem to deal with anonymous FTP.)

Holy cow, I haven't even gotten to all the cryptography features. Traffic should be encrypted. Everybody has public and private keys with verification. No free posting--if an article is transmitted, it means that it really was written by someone, by strength of their password secrecy.
Hashing on articles to ensure they're untampered, etc.

I think people should get away from the point of view that any restrictions on posting are anti-free-speech. I see a lot of news admins pretend that they don't want more control, and that any such suggestion is an insult to their unimpeachable ethical standards. There is a lot of hypocrisy going on right now. Let's make control legitimate, something *everyone* can exercise. More control is not censorship. It is the means toward improving s/n drastically.

Anonymity should be built into the software for the appropriate groups. *No* tracking (e.g. storing machine routing paths) should be appended to the articles that are posted anonymously. In fact, the news server should act like our lovely remailers in this regard (cloaking/rerouting mechanisms, etc.).

OK, I have to mention hypertext too. What if articles could incorporate GIF pictures or PostScript files? Audio sound? Have push-button pointers to other articles and files and FTP sites? Yowza!

Please don't misconstrue any of this. I don't advocate getting rid of completely free posting areas, forcing everyone to be validated, etc. In fact, I think these systems should always be there, and that they *will* always be frequented even after much better systems with better s/n come along (there may also be a "creep" of outstanding freely-posted articles into the selective groups by people who vouch for them by posting them, and take the consequences for failures of judgement, as determined by voting response).

Whaddya say, cypherpunks? Want to be in on the next communication revolution? Want to mold the onslaught of cyberspace the way you like it, according to your distinct and prophetic vision? All we have to do is put a little prototype code together...
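[Editor's note: the article-hashing idea above is essentially integrity checking by digest. A minimal sketch, assuming SHA-256 as the hash; a real system would also sign the digest with the poster's private key, which the Python standard library does not provide, so only the hashing step is shown.]

```python
# Tamper detection by content digest: the poster publishes a hash
# alongside the article, and any relay or reader can recompute it.

import hashlib

def digest(article: bytes) -> str:
    """Return the SHA-256 digest of an article body as hex."""
    return hashlib.sha256(article).hexdigest()

original = b"Subject: MUSENET\n\nLet's build it."
published = digest(original)            # distributed with the article

# In transit, a relay (or attacker) alters one byte...
tampered = original.replace(b"build", b"burn ")

print(digest(original) == published)    # True:  article intact
print(digest(tampered) == published)    # False: tampering detected
```

Without a signature over the digest, an attacker who can alter the article can also alter the published hash, so the digest alone only protects against accidental corruption, not deliberate forgery.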
Last night I spoke with Mike Godwin of the EFF about the issue of anonymous libel. Mike is not on the list, and I've copied him on this message. Mike knows more about electronic speech issues than pretty much anyone else. Here is my remembrance about what he said: 1. Anonymous libel exists. Just because the speech is anonymous does not mean it can't be libellous. If libellous speech is made, and you can infer the identity of the speaker, you can sue. 2. An anonymous remailer is not liable. In order to be liable for the libellous speech, the operator of the remailer would have to have prior knowledge that the speech was libellous. Since the operation of the remailer is fully automated, prior knowledge is impossible. Those two points are my summary of Mike's opinion. For further clarifications, please post to the list and to Mike. Left out of this message is any consideration on the _realpolitik_ of anonymous remailers: whether others will carry such traffic. I'd like to not fill Mike's inbox with clutter. Eric
Eric Hughes writes:
Last night I spoke with Mike Godwin of the EFF about the issue of anonymous libel. Mike is not on the list, and I've copied him on this message. Mike knows more about electronic speech issues than pretty much anyone else. Here is my remembrance about what he said: <text deleted> 2. An anonymous remailer is not liable. In order to be liable for the libellous speech, the operator of the remailer would have to have prior knowledge that the speech was libellous. Since the operation of the remailer is fully automated, prior knowledge is impossible.
I'd modify that second point slightly--specifically, I'd say that an anonymous remailer *probably* is not liable. There's never been a case of this sort, but current American libel law suggests that the remailer would not be liable. --Mike
participants (6)
- Chuck Lever
- Eric Hughes
- ld231782@longs.lance.colostate.edu
- Mike Godwin
- pmetzger@shearson.com
- Theodore Ts'o