Re: (eternity) mailing list and activity
[I've Cc'ed this to cypherpunks from discussion on eternity@internexus.net]

Jeff Knox <trax@ix.netcom.com> writes:
I do have a few questions. I am curious as to how I would go about creating a domain service. If I wanted to start a domain service to serve a domain I create, like test.dom, how would I run a server to process the requests, and how would all the other servers around the world cooperate?
Firstly, note that there are three systems going by the name of "eternity service" at present.

Ross Anderson coined the term, and his paper describing his eternity service should be findable somewhere on: http://www.cl.cam.ac.uk/users/rja14/

Next, my eternity prototype, which I put a bit of time into hammering out last year in perl, borrows Ross's name "eternity service" but differs in design: it simplifies things by using USENET as the distributed database / distribution mechanism rather than constructing one with purpose-designed protocols: http://www.dcs.ex.ac.uk/~aba/eternity/

And there is Ryan Lackey's Eternity DDS (where DDS stands, I presume, for Distributed Data Store?). I am not sure of the details of his design beyond that he is building a market for CPU and disk space, and building on top of an existing database to create a distributed database accessible as a virtual web space (amongst other `views'). He gave this URL in his post earlier today: http://sof.mit.edu/eternity/

To answer your question as applied to my eternity design, I will describe how virtual domains are handled in my eternity server design, which is based on USENET as a distributed distribution medium. A prototype implementation and several operational servers are pointed to at: http://www.dcs.ex.ac.uk/~aba/eternity/

In this design the virtual eternity domains are not based on DNS; they are based on hash functions. Because this is really a new mechanism for accessing information, URLs should perhaps have the form: eternity://test.dom/ where a separate distributed domain name lookup database is defined, and a distributed service is utilised to host the document space.
However, because it is more difficult to update browsers to cope with new services (as far as I know; if anyone knows otherwise, I'd like pointers on how to do this), I have represented URLs of this form with: http://test.dom.eternity/

This enables you to implement a local or remote proxy which fetches eternity web pages, because you can (with netscape at least) direct proxying on a per-domain basis. Using the non-existent top level domain .eternity, you can therefore redirect all traffic for that `domain' to a local (or remote) proxy which implements the distributed database lookup based on the URL, and have normal web access continue to function.

My current implementation is based on CGI binaries, so it does not work directly as an eternity proxy. Rather, lookups are of the form: http://foo.com/cgi-bin/eternity.cgi?url=http://test.dom.eternity/

The virtual eternity URLs (URLs of the form http://*.eternity/*) are converted into database index values internally by computing the SHA1 hash of the URL, for example: SHA1( http://test.dom.eternity/ ) = d7fa999054ba70e1ed28665938299061b519a4f7

The database is USENET news, and a distributed archive of it; the database index is stored in the Subject field of the news post. The implementation optionally keeps a cache of news articles indexed by this value on eternity servers. If the required article is not in the cache, it is searched for in the news spool. Currently this is where it stops, so articles would have to be reposted periodically to remain available if they are either not being cached or are flushed from the cache. (At present the cache is never flushed, because the cache replacement policy is not yet implemented.)

An improvement over this would be to add a random cache flushing policy and have servers serve the contents of their caches to each other, forming a distributed USENET news article cache. Another option would be to search public access USENET archives such as dejanews.com for the requested article.
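A minimal sketch of the index derivation described above: the virtual eternity URL is hashed with SHA1, and the hex digest becomes the index stored in the Subject field of the news post. The exact canonical form of the URL the prototype hashes (trailing slash, case, whitespace) is an assumption here, so whether this reproduces the digest quoted above depends on those details.

```python
# Sketch: derive a database index from a virtual eternity URL, assuming
# the SHA1 is taken over the bare URL string with no canonicalisation.
import hashlib

def eternity_index(url: str) -> str:
    """Return the 40-hex-digit SHA1 index for a virtual eternity URL."""
    return hashlib.sha1(url.encode("ascii")).hexdigest()

index = eternity_index("http://test.dom.eternity/")
print(index)  # a 40-hex-digit index, suitable for a Subject: field lookup
```

A server would then search its article cache, and failing that the news spool, for a post whose Subject field carries this digest.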
The problem is really that we would prefer not to keep archives of articles directly on an open access server, because the server's disks could be seized, and the operator could be held liable for the presence of controversial materials.

Wei Dai suggested that documents should be secret-split in a redundant fashion, so that say 2 of 5 shares are required to reconstruct the document. If the shares are distributed across different servers, this ensures that no single server directly holds the information. Ross describes ideas on how to ensure that a server would not know what shares it is holding in his paper, which can be found via his web pages: http://www.cl.cam.ac.uk/users/rja14/

Adam
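The 2-of-5 splitting Wei Dai suggests can be done with Shamir's threshold scheme: encode the secret as the constant term of a random degree-1 polynomial over a prime field, hand each server one point on the polynomial, and interpolate from any 2 points to recover it. This is one standard way to realise the suggestion, not a description of any of the implementations above; the field size and integer encoding are illustrative.

```python
# Sketch: 2-of-5 secret splitting via Shamir's scheme over a prime field.
# Any 2 shares reconstruct the secret; any single share reveals nothing.
import random

P = 2**127 - 1  # a Mersenne prime; the secret must be an integer < P

def split(secret: int, n: int = 5, k: int = 2):
    """Return n shares (x, y); any k of them recover the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789)
assert reconstruct(shares[:2]) == 123456789   # any 2 of the 5 suffice
assert reconstruct([shares[1], shares[4]]) == 123456789
```

With shares spread over five servers, any two cooperating servers can rebuild an article on demand, while a seized disk holds only field elements indistinguishable from random.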
-----BEGIN PGP SIGNED MESSAGE-----

Adam Back writes:
Firstly, note that there are three systems going by the name of "eternity service" at present.
I've heard that there are perhaps as many as two more which are vaguely non-public, as well.
And there is Ryan Lackey's Eternity DDS (where DDS stands, I presume, for Distributed Data Store?). I am not sure of the details of his design beyond that he is building a market for CPU and disk space, and building on top of an existing database to create a distributed database accessible as a virtual web space (amongst other `views'). He gave this URL in his post earlier today:
I'm only using an existing database for reasons of expediency in prototyping. The actual production system will include no commercial code. Oracle is a bloated pig for this kind of thing, too. One of the 'views' for the data will be a virtual database.
[description of how to commit data in Adam Back's Eternity Service implementation elided]
My current design will require some kind of interface between users and eternityspace in order to commit data. Since the pricing/specification/etc. system will be rather sophisticated, I'm also working on a simulation tool to assist users in committing data.

However, it will be possible in my ideal implementation for a user to fill out a form with the duration of storage required, bandwidth/accessibility/uptime requirements, a pro-rated schedule for nonperformance, amount of space needed, amount of computation needed, etc., and attach their object. Then there will be a market-based system which lets people bid on storing that data -- various indices of prices will exist (although since it is multi-axis, they will have to be surfaces, with a large amount of interpolation). Someone will buy it, perhaps resell it in a recursive auction market, etc.

There will be a designated verifier which will make sure the contract is met, using a variety of mechanisms designed to preserve anonymity and prevent fraud, and enforce the contract. Payment will be held in escrow and disbursed at pre-specified intervals, again totally anonymously. Since these contracts are negotiable, there should be a market in selling/trading/etc. old Eternity contracts. As the realities of storing data change, the market will take this into account, revaluing existing and new contracts up or down.

Potentially, one could even use Eternity storage as a kind of currency. The system would seem to be purpose-built for money laundering, once it is big enough that all money going into or out of the system is not monitorable (a threshold which depends on the design of the system and payment scheme as well).

I think I'm going to take a break from my demo writing to work on some public documents. Some parts of my current system are total mock-ups; others don't yet exist. My "two-way anonymous e-cash implementation" is a chit file in /tmp on my machine (heh), and putting data into the system requires serious frobbing.
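The bid-matching step in the market Ryan describes can be sketched very roughly: a storage request carries multi-axis requirements, server operators post bids, and a match takes the cheapest bid satisfying every axis. Everything here is hypothetical illustration -- the field names, the single uptime axis, and the pricing model are all invented, and the real design adds verification, escrow, and resale on top.

```python
# Hypothetical sketch of matching a storage request against server bids.
# Real Eternity DDS pricing is multi-axis (price surfaces, interpolation);
# this collapses it to one price and one uptime axis for illustration.
from dataclasses import dataclass

@dataclass
class Request:
    megabytes: int
    months: int
    min_uptime: float        # e.g. 0.99 required availability

@dataclass
class Bid:
    server: str
    price_per_mb_month: float
    uptime: float            # availability the bidder commits to

def match(request, bids):
    """Cheapest bid meeting every requirement, or None if no bid qualifies."""
    eligible = [b for b in bids if b.uptime >= request.min_uptime]
    if not eligible:
        return None
    best = min(eligible, key=lambda b: b.price_per_mb_month)
    total = best.price_per_mb_month * request.megabytes * request.months
    return best.server, round(total, 2)

bids = [Bid("alpha", 0.05, 0.999), Bid("beta", 0.03, 0.95)]
print(match(Request(100, 12, 0.99), bids))  # -> ('alpha', 60.0)
```

In the full design the winning contract would itself be negotiable, so the matched (server, price) pair becomes a tradeable instrument whose value moves as storage costs change.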
And Oracle is handling most of the coherency/etc. issues right now.

- --
Ryan Lackey
rdl@mit.edu
http://mit.edu/rdl/

-----BEGIN PGP SIGNATURE-----
Version: 2.6.2

iQEVAwUBNLmcw6wefxtEUY69AQH+tgf8Ctsg6UJbGzDhzeXwNKYB6qI+afTnKWkw
sYgyRf/tKEual6yaXymiCBswUzfW39jdqkQ313DPlGQzovCq6AcsXQyhk5H93h+e
V5aI2belCUEUFxJ21WUj++ZtM5vSh0lcFAlz/w3ejuju7Il27uW8vDVfHjOBg65m
oO6F7Wi4Wp2V4B4720KieHLhs9Rg9YGNVsDZPSgfY5KFpcDylpEARW6ipCceBQu+
fLrT+or4T3KVSKR7hSPMxULw/FKarMAz+xpyBXFt8UQMEU0azBVhPU3Ui5VPUZ+o
mVMBiZKDFU4dvQCNygcGtebQ1JUFy3YKq48SuTYBr4FPyYNWMgGCSQ==
=oRpR
-----END PGP SIGNATURE-----
participants (2)
- Adam Back
- Ryan Lackey