Re: Sandy and I will run a cypherpunks "moderation" experiment in Jan

At 09:37 AM 1/7/97 -0800, Pierre Uszynski wrote:
Rich Graves <rcgraves@disposable.com> very correctly mentions:
1) Moderator liability and anonymous posting.
I agree that this is actually a critical problem with a filtering moderation scheme. Such a scheme appears to provide the capability to filter out possible "copyright violations" posts. From what I remember of the Netcom/CoS case (without going back to the sources), that may mean more liability for the reviewers (and the operator of the machine). That's a major point against simple filtering moderation.
I agree that this raises the spectre of liability for messages passed on, but I'm not sure it's a big problem. I see three broad categories of information which the moderation liability scheme may suppress:

1. Copyrighted items such as newspaper/magazine articles
2. Secrets which are being revealed (e.g., the alleged RC4 source, the Mykotronix trash stuff)
3. Defamation

As to the loss of (1), I'm not heartbroken - much of this information is already placed online by its owners, and can be referred to with a hypertext link or a reference instead of being posted in its entirety. Also, some people (who I don't feel like singling out) currently provide access to third-party copyrighted information, but in a discreet manner. The "copyright violations" poster(s) could provide access in this way, unless they're committed to making John Gilmore and the remailer operators take the heat for someone else's actions.

The loss of (2) is regrettable; but Usenet (and other unmoderated forums) are still available for hit-and-run disclosure of secrets. Also, it's unclear to what extent (2) will be lost. Given the recent California and Federal statutory changes strengthening trade secret protection, I think it's useful to be careful - but it's arguable that neither the alleged RC4 code nor the Mykotronix trash stuff would have been a trade secret violation. Also, let's weigh the value of what'd be lost (how many of these messages do we really get?) against the negative value of the crap we're currently subjected to, and the (speculative) value of posts we don't get because their authors have left the list because of disgust and annoyance.

I'm not going to lose much sleep over (3). I can't call to mind a single instance of interesting or useful defamation that I've seen on the Net.
ichudov@algebra.com (Igor Chudov @ home) responded:
[in another forum] After long thinking, the moderator board has come up with the following solution:
1) We do not know for sure if a certain post violates some copyrights or not [and more in the same line 2,3,4,5]
Would any of this have mattered in Netcom/CoS?
My hunch is that liability for infringement will focus on defendants' actions after they knew or had reason to know that a particular act of copying or distribution was a violation of the copyright owners' rights. Copying data from one place to another is the essence of the net; holding owners of machines (or moderators) responsible for violations they couldn't have stopped is nonsensical.

I predict (but cannot cite cases so holding) that moderators/listowners will be held liable for copyright violations they could have detected and stopped with the exercise of some level of care; but that they will not be held liable where no reasonable amount of effort on their part could have prevented the copyright violation. And I think "reasonable" will depend on the facts at hand. (All of the contributory infringement cases I'm familiar with have involved a defendant who was aware of, if not actively assisting with, the underlying infringement.)
Instead, a system that would forward reviewers' opinions *after the fact* does not have any of these problems. And as we have already mentioned, it is also more powerful (real-time initial feed, easy multiple feedback feeds, fully compatible with anything else...), although it does not reduce bandwidth requirements.
Can you name a software package which runs under Windows or the Mac OS which automatically processes reviewers' opinions against a mailbox of incoming mail? Better yet, can you name a Eudora plug-in which does so? Abandoning an imperfect but workable solution because it's possible to imagine that someone will create something better someday strikes me as silly. Comparing theoretical software/strategies with currently implemented software/strategies is comparing apples and oranges.

--
Greg Broiles | gbroiles@netbox.com | http://www.io.com/~gbroiles
US crypto export control policy in a nutshell: Export jobs, not crypto.

At 4:40 PM -0800 1/7/97, Greg Broiles asks:
Can you name a software package which runs under Windows or the Mac OS which automatically processes reviewers' opinions against a mailbox of incoming mail? Better yet, can you name a Eudora plug-in which does so?
An opportunity for somebody who doesn't already do enough programming.

As for the whole moderation idea, consider officially defining the list to be "occasionally moderated". If the abuse is bad, start moderating it. If there get to be few problems, stop moderating it. (A separate issue is whether to tell anyone which way the list is currently running.)

-------------------------------------------------------------------------
Bill Frantz       | Client in California, POP3 | Periwinkle -- Consulting
(408)356-8506     | in Pittsburgh, Packets in  | 16345 Englewood Ave.
frantz@netcom.com | Pakistan. - me             | Los Gatos, CA 95032, USA

Bill Frantz wrote:
As for the whole moderation idea, consider officially defining the list to be "occasionally moderated". If the abuse is bad, start moderating it. If there get to be few problems, stop moderating it. (A separate issue is whether to tell anyone which way the list is currently running.)
Strongly disagree. That would be too arbitrary. In practice, a system such as Chudov's STUMP reduces the latency and moderator effort considerably, as responsible posters get put onto the white list for auto-approval (to be degraded to hand-moderation if they lose it). For list regulars, it's as if the list were unmoderated. -rich
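The white-list mechanism Rich describes can be sketched in a few lines. This is a hypothetical illustration only, not STUMP's actual code; the addresses and function names here are invented for the example:

```python
# Minimal sketch (not STUMP itself) of white-list moderation as
# described above: posts from known-responsible senders are
# auto-approved, everyone else is queued for hand moderation, and a
# sender who abuses auto-approval is demoted back to the queue.

approved_senders = {"frantz@netcom.com", "gbroiles@netbox.com"}  # example addresses

def route_post(sender, post):
    """Return 'approve' or 'queue' for an incoming post."""
    if sender in approved_senders:
        return "approve"   # list regulars see no moderation delay
    return "queue"         # hand-moderated until trust is earned

def promote(sender):
    """Add a responsible poster to the white list."""
    approved_senders.add(sender)

def demote(sender):
    """Strip auto-approval from a sender who abused it."""
    approved_senders.discard(sender)
```

The point of the design is that latency and moderator effort fall only on unknown or demoted senders; for regulars the list behaves as if unmoderated.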

Rich Graves <rcgraves@disposable.com> very correctly mentions:
1) Moderator liability and anonymous posting.
At 09:37 AM 1/7/97 -0800, Pierre Uszynski agreed:
I agree that this is actually a critical problem with a filtering moderation scheme. Such a scheme appears to provide the capability to filter out possible "copyright violations" posts. From what I remember of the Netcom/CoS case (without going back to the sources), that may mean more liability for the reviewers (and the operator of the machine). That's a major point against simple filtering moderation.
gbroiles@netbox.com (Greg Broiles) continues:
[...] I'm not sure it's a big problem. I see three broad categories of information which the moderation liability scheme may suppress:[...]
I think I went a bit in the wrong direction mentioning Netcom/CoS:

1) The problem is not so much which posts are "legal" or "illegal" as which posts *could* bring in lawsuits, and what effect this has on the moderators. Not everyone evaluates that "Sword of Damocles" threat identically: some argue it is statistically irrelevant, some argue that their pockets are not tempting targets anyway, some are in different countries (and have not noticed that it does not matter anymore), etc... The point is that

2) The above "do I dare approve this post" equation is clearly not the one that determines whether a post is good cypherpunks material or not. I do not want this liability issue to matter in any way in the moderators' ratings (no matter how Sandy and John would themselves resolve it) if we can help it. And we can. And

3) This equation would seriously affect how *I* would offer help in a filtering moderation scheme: I most likely would not (whether or not anyone would be interested in me participating, and my decision for reasons that are not relevant, etc...). Again, that's the wrong reason brought into the discussion: the right reasons would be "do I have the time now?" (hah! ;-), "do I currently read in sync with incoming traffic?", and "do I want to participate?".
Can you name a software package which runs under Windows or the Mac OS which automatically processes reviewers' opinions against a mailbox of incoming mail?[...]
I spend a ridiculous portion of my time fixing the damage caused by software that was used just because "it was there". Just because there is software to do^H^H, sorry, to botch something under Windows is not reason enough to use it, and even less of a reason to limit ourselves to these options. Yes, some people couldn't use the *option* initially, and others could (with ready software or by writing their own). Too bad. As someone else mentioned, that's an opportunity (and there must be a cross-platform java mail reader somewhere that can be modified or written to satisfy lots of platforms at once), and cypherpunks wouldn't be the only forum moving toward "after-the-fact cooperative filtering"...
[Greg Broiles US crypto export control policy in a nutshell: Export jobs, not crypto.]
Great summary.

Pierre
pierre@rahul.net
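The "after-the-fact cooperative filtering" described above — an unmoderated real-time feed plus separately published reviewer ratings, scored locally against each reader's own trust choices — might look something like this sketch. It is a hypothetical illustration; the data shapes and the `visible_messages` name are assumptions, not any existing package:

```python
# Hypothetical sketch of after-the-fact cooperative filtering: the list
# feed itself is unmoderated and real-time; reviewers publish ratings
# afterwards, and each reader's client decides locally which reviewers
# to trust and what score to require.

def visible_messages(messages, ratings, trusted, threshold=0):
    """messages: {msg_id: body}; ratings: iterable of
    (reviewer, msg_id, score) tuples; trusted: set of reviewer names.
    Show a message if its total score from trusted reviewers meets the
    threshold; unrated messages default to visible (score 0)."""
    scores = {msg_id: 0 for msg_id in messages}
    for reviewer, msg_id, score in ratings:
        if reviewer in trusted and msg_id in scores:
            scores[msg_id] += score
    return sorted(m for m, s in scores.items() if s >= threshold)
```

Nothing is suppressed at the source, so there is no "do I dare approve this post" decision and no single party filtering on everyone's behalf; a reader who trusts no reviewers simply sees everything.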

I'm not sure I see why a moderator would have any liability for passing on a trade secret or even a classified military secret so long as the moderator did not have a contractual relationship (or a clearance, as the case may be) with the owner. That's free speech. The moderator might have to cooperate with a subpoena from the original owner seeking to find who stole/released the secret, but that's a different problem. Is there some trade secret theory of an implicit trusteeship that I don't know about?

On the more general subject of moderator liability, I vaguely recall that the CDA had some badly-drafted language that was designed to reduce moderator liability, but that it was so badly worded as to be ambiguous. I'm just back from a month's trip and sorting through stuff, so I don't have time to look it up. Anyone got it handy?

PS I support moderation, at least as an experiment. As to the choice of the moderator, innocent until proven guilty, I say. I personally don't put much store in requiring a moderator to issue a code of practice. Common law and equity will do to evolve a system as it goes along. I prefer a system where the rejected posts are in a segregated list so that they are easy to find. Not everyone has great tools for merging two lists to find the differences. Of course, the moderator liability issue is at its greatest if the moderator creates a "trash" list, also known as a "sue me" list. I do see how the legal position would be better if you have an unmoderated list and a separate "best-of" list that is moderated. Finally, how about a THIRD list to discuss moderation issues so that it doesn't clutter up the "best-of" list?

On Tue, 7 Jan 1997, Greg Broiles wrote:
2. Secrets which are being revealed (e.g., the alleged RC4 source, the Mykotronix trash stuff)
[...]
The loss of (2) is regrettable; but Usenet (and other unmoderated forums) are still available for hit-and-run disclosure of secrets. Also, it's unclear to what extent (2) will be lost. Given the recent California and Federal statutory changes strengthening trade secret protection, I think it's useful to be careful - but it's arguable that neither the alleged RC4 code nor the Mykotronix trash stuff would have been a trade secret violation. Also, let's weigh the value of what'd be lost (how many of these messages do we really get?) against the negative value of the crap we're currently subjected to, and the (speculative) value of posts we don't get because their authors have left the list because of disgust and annoyance.
A. Michael Froomkin        | +1 (305) 284-4285; +1 (305) 284-6506 (fax)
Associate Professor of Law |
U. Miami School of Law     | froomkin@law.miami.edu
P.O. Box 248087            | http://www.law.miami.edu/~froomkin
Coral Gables, FL 33124 USA | It's warm here.

On Wed, 8 Jan 1997, Michael Froomkin - U.Miami School of Law wrote:
I'm not sure I see why a moderator would have any liability for passing on a trade secret or even a classified military secret so long as the moderator did not have a contractual relationship (or a clearance, as the case may be) with the owner. That's free speech. The moderator might have to cooperate with a subpoena from the original owner seeking to find who stole/released the secret, but that's a different problem. Is there some trade secret theory of an implicit trusteeship that i don't know about?
Not to my knowledge. But see the subpoena comment above. I am amazed that no one has suggested a pool of moderators with provisions to blind a given post from attribution to a specific moderator. (Attorneys - what might be the impact of a res ipsa attack on this kind of setup, and, incidentally, on other anonymous pool arrangements?)
As to the choice of the moderator, innocent until proven guilty, I say. I personally don't put much store in requiring a moderator to issue a code of practice. Common law and equity will do to evolve a system as it goes along.
While as far as conduct goes I agree, in defining what will eventually be the list content (and thus what I should or should not use/waste my time typing up), it is an important ex ante condition that a stated policy exist. There are two questions here.

1> Will my material be booted off only because it disturbs the moderator? (I think this is the one answered by your common law reference.)

2> Will my material violate some restriction and be booted "for cause"? (This can only be predicted accurately if there is a stated policy on what constitutes a violation - and then only where the policy has teeth or is generally respected.)

--
Forward complaints to :  European Association of Envelope Manufactures
Finger for Public Key    Gutenbergstrasse 21; Postfach; CH-3001; Bern
Vote Monarchist          Switzerland

Black Unicorn wrote:
On Wed, 8 Jan 1997, Michael Froomkin - U.Miami School of Law wrote:[snippo] I am amazed that no one has suggested a pool of moderators with provisions to blind a given post from attributation to a specific moderator. (Attornies- what might be impact of a Res Ipsa attack on this kind of set up, and incidently, on other anonymous pool arrangements?)
As to the choice of the moderator, innocent until proven guilty, I say. I personally don't put much store in requiring a moderator to issue a code of practice. Common law and equity will do to evolve a system as it goes[mo' snippo]
Here's a perfect example of two concepts, the first very thoughtful (however flawed), and the second not very thoughtful at all. "Innocent until proven guilty" is good for individuals, but it is misconstrued for organizations. Sure, officials who do public service, and moderators/censors who have to make judgements, need to be protected from penalties for common everyday mistakes. But "innocent until proven guilty" is being misapplied here, to suggest advance trust for entities that haven't earned any trust. You might fool some people, but you aren't fooling me.
participants (7)
- Bill Frantz
- Black Unicorn
- Dale Thorn
- Greg Broiles
- Michael Froomkin - U.Miami School of Law
- Pierre Uszynski
- Rich Graves