metrics of eternity service properties

Adam Back aba at dcs.ex.ac.uk
Tue Jan 13 15:30:36 PST 1998




Tim May <tcmay at got.net> writes:
> >> This of course doesn't scale at all well. It is semi-OK for a tiny,
> >> sparse set of reposted items, but fails utterly for larger database
> >> sets. (If and when Adam's reposted volumes begin to get significant,
> >> he will be viewed as
> >> a spammer. :-) )
> >
> >The best criticism of my eternity design to date!  I agree.
> 
> I assume you are serious, and do agree, as it is a very solid
> criticism of the "Eternity as continuous posting to Usenet" model.

Yes.  I think my eternity service in USENET is actually fairly
resistant to attackers determining readers and publishers.  The
limiting factor is the bandwidth consumed.  We have to keep below
some threshold, otherwise news admins may attempt to filter out
eternity articles; and there is some limit beyond which the system
would break down altogether.

One suspects that posting more than around 10Mb-20Mb / day would be
enough to motivate administrators to start filtering.  Some people of
questionable motivation have also made something of a sport of
sending forged cancel messages, and they will be even more easily
motivated: as far as I can tell their motivation is a perverse
enjoyment of censorship, a sense of self-importance, and malice in
perpetrating DoS attacks.
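
As a rough sanity check on the threshold (a sketch only; the 15Mb/day
budget and one-week re-post interval are my assumptions, chosen to be
consistent with the table below):

    # Back-of-envelope steady-state capacity for Eternity USENET.
    # Assumed: we stay below the filtering threshold by posting at
    # most ~15 Mb/day, and each document is re-posted weekly so it
    # is always present in a typical newspool.
    daily_budget_mb = 15.0       # Mb/day posting budget
    repost_interval_days = 7.0   # each document recurs weekly

    # Each document costs (size / interval) Mb/day of budget, so the
    # corpus we can sustain is budget * interval.
    capacity_mb = daily_budget_mb * repost_interval_days
    print(capacity_mb)  # ~105 Mb, of the order of the table's 100 Mb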

> I see several axes to the analysis of the various Eternity schemes.
> 
> -- retrieval time for a customer or client to obtain some set of data,
> ranging from (I assume) ~minutes or less in an Eternity DDS file system to
> ~days or less in a Blacknet system to (I am guessing) ~weeks or months in
> an Adam Back sort of system.

Actually I was aiming for seconds of access time with "Eternity
USENET".  (I am going to dub it "Eternity USENET" -- any name will do
-- because the lack of names for the services under discussion is
becoming awkward.)

However, I can claim seconds only because I aim to keep the data
either in the newspool or in a local copy of recent messages.

The design is similar to TELETEXT, the text-mode information services
broadcast in side bands of TV signals.

The basic topology is the same: we have a broadcast medium over which
data is slowly re-broadcast.  So my design is fairly analogous to a
FAST-TEXT decoder, which keeps a local copy of all the pages and
updates them as the teletext signal sweeps past.
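
A minimal sketch of that proxy-side cache (the article fields and
serial numbering here are my own illustration, not a wire format):

    # FAST-TEXT style cache for an eternity proxy: keep only the
    # newest copy of each document, updating entries as re-posts
    # sweep past in the newsfeed.  Reads are then local, hence the
    # claim of seconds access time.

    cache = {}  # document name -> (serial, body)

    def on_eternity_article(name, serial, body):
        # Called for every eternity article seen in the newsfeed.
        old = cache.get(name)
        if old is None or serial > old[0]:
            cache[name] = (serial, body)

    def lookup(name):
        # Client request, served entirely from the local cache.
        entry = cache.get(name)
        return entry[1] if entry is not None else None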

So the metrics we use to describe the functionality of an eternity
design should include "update time" and "access time".  I'll attempt
a quick table of these to illustrate my understanding of Eternity
BlackNet (BlackNet used to host an Eternity-like service) and
Eternity DDS, as compared to Eternity USENET.


                            E-BlackNet     E-USENET       E-DDS

update delay                2-3 days       2-3 days       minutes
access time                 2-3 days       seconds        minutes
b/width efficiency          good/          poor           intermediate
                            intermediate
nodes                       10             10,000         10,000
capacity                    100 Gb         100 Mb         10 Tb
security                    good           good           intermediate
security dependencies       remailers,     remailers,     custom
                            USENET         USENET         components
ease of deployment          easy           easy           hard

Clearly these figures are order-of-magnitude estimates only, if even
that accurate.

My reasoning is as follows:

Eternity BlackNet:

I am presuming we have third-party BlackNet operators who offer to
buy and sell information, acting as distribution agents.  It takes
around a day to send requests or submissions to an operator via a
dead drop (encrypted material posted to USENET via a remailer).

If requested information is sent to clients via replyable nyms,
bandwidth efficiency is improved.  If information is sent to clients
via dead drops, efficiency drops because the information is broadcast
anew for each request.
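
The request path might look roughly like this (the helpers are stubs
standing in for real machinery -- public-key sealing, a remailer
chain, an NNTP post -- and the group name is merely illustrative):

    # Sketch of an E-BlackNet dead-drop request.  Each stage adds
    # latency, which is where the table's 2-3 day figures come from:
    # remailer chains add hours, USENET propagation around a day.

    def encrypt_to(pubkey, plaintext):         # stub for public-key sealing
        return "<sealed to %s> %s" % (pubkey, plaintext)

    def chain_through_remailers(msg, hops=3):  # stub; real chains add hours
        for i in range(hops):
            msg = "<remailer hop %d> %s" % (i + 1, msg)
        return msg

    def post_to_group(group, msg):             # stub; propagation ~a day
        print("posting to %s:\n%s" % (group, msg))

    def request_document(operator_pubkey, doc_id, reply_nym):
        request = "REQUEST %s\nREPLY-TO %s" % (doc_id, reply_nym)
        post_to_group("alt.anonymous.messages",
                      chain_through_remailers(encrypt_to(operator_pubkey,
                                                         request)))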

Clearly a user could, if he wished to improve his access time for
browsing BlackNet eternity documents, replicate the entire BlackNet
database locally.  (Or perhaps subsets of it, defined as all
documents by a chosen set of authors, all documents with given
keywords, or those with a given third-party rating, etc.)

E-USENET:

This involves re-posting documents periodically to USENET, and
relying on the small size of the data set to ensure that local
eternity proxies can retain a current copy of it.

Variants on this approach can migrate towards the E-DDS approach by
using publicly accessible E-USENET servers, by having E-USENET server
caches share their then larger document stores, and by adding other
submission mechanisms to the pool formed by the set of publicly
accessible E-USENET services.
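
On the publisher side this amounts to a periodic re-post loop,
something like the following (the interval and the posting function
are illustrative):

    # E-USENET publisher loop: re-post every document on a fixed
    # interval so the whole corpus is always in a current newspool.
    import time

    def repost_forever(documents, post, interval_days=7):
        serial = 0
        while True:
            serial += 1
            for name, body in documents.items():
                post(name, serial, body)   # e.g. an NNTP post
            time.sleep(interval_days * 24 * 3600)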

E-DDS:

Economic incentives are used to encourage people to take risks in
hosting controversial documents.  Indirection is used to maintain a
hidden set of data servers, which are only accessed via publicly
known gateways.  (For this indirection function I would recommend
looking at Ian Goldberg and David Wagner's TAZ server, the URL for
which I posted earlier.)
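
As I read it, the indirection works roughly as follows (a sketch of
my understanding, not Ryan's actual design; the server names and the
hashing scheme are invented):

    # E-DDS gateway indirection: clients talk only to well-known
    # gateways; the hidden data servers are known only to gateway
    # state.  This is why "can the attacker become a gateway?" is
    # the critical question -- a rogue gateway learns the server set.
    import hashlib

    HIDDEN_SERVERS = ["server-a", "server-b", "server-c"]  # invented

    def gateway_fetch(doc_id, fetch):
        # Deterministically pick the hidden server replicating doc_id.
        h = int(hashlib.sha1(doc_id.encode()).hexdigest(), 16)
        server = HIDDEN_SERVERS[h % len(HIDDEN_SERVERS)]
        return fetch(server, doc_id)  # the client never learns `server`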

I am unsure how an attacker would be prevented from discovering the
hidden set of data servers.  How do we exclude the attacker from
becoming a gateway server?

I get the impression that the idea is also to make the service a
commercial success as a distributed web hosting system, and to get
away with hosting comparatively small amounts of controversial
material which migrate between servers.  This is a useful model, I
think.

Comments?  Clarifications, Ryan?

Adam






