Distributed Dejanews

Mike Duvos enoch at zipcon.net
Fri Aug 8 16:13:47 PDT 1997

Adam Back <aba at dcs.ex.ac.uk> writes:

 > Recap on old idea of using dejanews and altavista

 > However there is another important reason not to use altavista or
 > dejanews: it's not decentralised.  If something
 > sufficiently hot gets published they'll both be taken off
 > line long enough to figure out how to disable access to
 > eternity.  Even if they can't do it ultimately because of
 > sufficiently advanced public key text steganography, it'll
 > disrupt the service.

If we aren't going to actually shell out money for a worldwide
network of Eternity file servers, we have to use existing
publicly available services as our distributed data haven.

The more places we can stash things, the merrier.

Currently, we post documents to Usenet, and keep them alive by
reposting prior to the server expiring them.  This is robust, not
easily censored, and requires little effort.

I see nothing wrong with having the server also examine Dejanews
for Eternity documents, lightly stegoed to avoid the deletion of
encoded data.  To the extent that Dejanews serves as a more
permanent repository of data than Usenet, fine.  The worst thing
that can happen is that documents get removed, in which case, the
guardian of the document in question returns to keeping it alive
on Usenet.  All this is transparent to people using Eternity to
read documents.
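The guardian behaviour described above can be sketched in a few lines. This is a minimal illustration only, under loud assumptions: the `dejanews_has` and `usenet_repost` helpers are hypothetical stand-ins, and no real Dejanews or NNTP access is implemented.

```python
# A minimal sketch of the "guardian" fallback described above.  The two
# helpers are hypothetical stand-ins -- no real Dejanews or NNTP access
# is implemented here.

def dejanews_has(doc_id):
    """Pretend query: is the (lightly stegoed) document still archived?"""
    return False  # placeholder: assume the archive has dropped it

def usenet_repost(doc_id):
    """Pretend repost: push the document back onto Usenet before expiry."""
    print(f"reposting {doc_id} to Usenet")

def guard(doc_ids):
    """If Dejanews still holds a document, do nothing; otherwise fall
    back to keeping it alive on Usenet ourselves.  Returns the list of
    documents that needed reposting."""
    reposted = []
    for doc_id in doc_ids:
        if not dejanews_has(doc_id):
            usenet_repost(doc_id)
            reposted.append(doc_id)
    return reposted

guard(["eternity-doc-001"])
```

Either way the reader of an Eternity document never sees the difference, which is the point of the transparency claim above.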

Dejanews likely removes encoded data because of storage
requirements, not because of content.  Unless the Eternity
business becomes a significant fraction of Usenet bandwidth,
stego wars with Dejanews are unlikely.

 > A distributed deja-news replacement

 > So what are the storage requirements?  What does dejanews
 > have in their machine room?  (Huge raid server?  How many
 > Gigs?).

Dejanews started in 1995.  By 1996, they had 80 million articles
in 15,000 Usenet newsgroups.  They currently have 109 million
articles.  The Dejanews database is presently 180 gigabytes. They
index all the "most active" Usenet newsgroups, minus binaries and
some other controversial content.

A week of a full Usenet feed is presently just a tad over 20
gigabytes, and is growing by leaps and bounds.

 > I'm wondering if 1000 academic nodes, small and large ISP
 > nodes, and individuals each contributing 1Gb could
 > outperform deja-news.

That's more storage than Dejanews currently has.  Performance is
another issue.

 > That'd be 1Tb.  How long would 1Tb last being consumed
 > at the rate of a full USENET feed?

Less than one year, if you kept everything.
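That estimate follows directly from the feed figure above; a quick check, assuming the ~20 GB/week rate holds steady rather than growing:

```python
# Back-of-the-envelope: how long 1 Tb of pooled storage lasts at a
# full Usenet feed, taking the ~20 GB/week mid-1997 figure as fixed.
capacity_gb = 1000        # 1000 nodes x 1 Gb each
feed_gb_per_week = 20

weeks = capacity_gb / feed_gb_per_week
print(f"{weeks:.0f} weeks")  # -> 50 weeks, just under a year
```

Since the feed is "growing by leaps and bounds", the real figure would be somewhat worse.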

 > On top of distributed news archive, build an eternity
 > service

 > So with the eternal news archive, you now have everything
 > you need to easily build an eternity server.

 > How practical is this?

Not very.  You are tossing out one of the most useful aspects of
Usenet news as a Distributed File System, namely acting as a
"Giant FIFO" of what is interesting during any particular window
of time.

 > How much interest is there likely to be in creating a full
 > USENET archive as a distributed net based effort.

None at all.  Keeping everything that has ever been posted to
Usenet forever would be a giant waste of resources.

What I suspect you need here is the "Distributed File System" of
the future, which could carry Usenet as well as support Web, NFS,
FTP, and all other popular protocols to access its content.

It would be a giant FIFO with hundreds of gigabytes of capacity,
with reliable transport, authentication, encryption, and a
micropayment system for accessing and prolonging the lifetime of
data stored on it.  It would of course be uncensorable, as Usenet
is today, and the Eternity Service could run on top of it, as
well as a lot of other interesting things.
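One way to picture the FIFO-with-micropayments idea is as a toy data structure; everything here, including the class name and the "credits" payment unit, is an illustrative assumption, not a protocol:

```python
import time

class GiantFIFO:
    """Toy model of the proposed DFS: stored data expires FIFO-style
    unless someone pays to prolong its lifetime.  All names and the
    'credits' payment unit are illustrative assumptions."""

    SECONDS_PER_CREDIT = 86400  # assumption: one credit buys one day

    def __init__(self):
        self._data = {}    # key -> bytes
        self._expiry = {}  # key -> expiry timestamp

    def store(self, key, data, credits=1):
        self._data[key] = data
        self._expiry[key] = time.time() + credits * self.SECONDS_PER_CREDIT

    def prolong(self, key, credits):
        """A micropayment extends the document's paid-for lifetime."""
        self._expiry[key] += credits * self.SECONDS_PER_CREDIT

    def expire(self, now=None):
        """Drop everything whose paid-for lifetime has run out,
        returning the keys removed."""
        now = time.time() if now is None else now
        dead = [k for k, t in self._expiry.items() if t <= now]
        for k in dead:
            del self._data[k], self._expiry[k]
        return dead
```

The transport, authentication, encryption, and payment layers are of course the hard parts; the sketch only shows how "pay to prolong" interacts with FIFO expiry.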

 > It's clearly got vastly larger storage requirements than
 > storing the full set of eternity articles posted to USENET.
 > However it has wider uses, and has lots of innocuous reasons
 > to exist.  I guess it's quite an interesting project in its
 > own right.  You could start with just some newsgroups, and
 > build up as a way to bootstrap it.  But it may be harder to
 > generate enough interest to deploy this than to deploy
 > enough eternity servers to build a distributed archive
 > of just eternity articles.

It's certainly interesting to think of what the ultimate DFS for
the Internet might look like, given the various white papers on
NC, thin clients, fat servers, and big pipes.

But this isn't something that is going to happen overnight.  To
the extent that we employ Usenet in interesting new ways as a DFS
today, we create working models of many of these concepts and
demonstrate their feasibility.

 > How soon could it happen?

 > Who knows.  What do people reckon the interest would be in a
 > distributed complete USENET archive for its own sake.  For
 > purposes of the archive cancels would not be applied.
 > Indeed archive the cancels too, and the control messages for
 > everyone to see in perpetuity who did and said what.

Well, the disk manufacturers would probably love it, and want it
replicated every ten feet across the planet.  I don't know about
other people.

 > Perhaps it will be more realistic to build a distributed
 > archive for eternity documents only first.  If it is
 > designed as a separate service, then it could be the same
 > software to do either, just restrict it to
 > alt.anonymous.messages, or just for alt.anonymous.messages
 > passing a filter program.

 > Comments?

First of all, instead of thinking of this as the all-singing,
all-dancing Usenet archive of all time, we would do much
better to think of it as a protocol-independent Network-wide
Distributed File System with configurable characteristics suited
to a wide variety of applications.

If we could put together a prototype with a small number of
nodes, and demonstrate it could carry both Usenet and Eternity as
well as provide NFS3 access to commonly requested Network
binaries, all supported by an integrated micropayment scheme,
that would go a long way towards making it a standard.

Would anyone care to donate some storage, cycles, and a fast pipe
to the Net?

     Mike Duvos         $    PGP 2.6 Public Key available     $
     enoch at zipcon.com   $    via Finger                       $
         {Free Cypherpunk Political Prisoner Jim Bell}
