That 70's Crypto Show (Scalability and Napster)

Bill Stewart bill.stewart at pobox.com
Wed Dec 27 16:30:01 PST 2000


At 02:42 AM 12/26/00 -0500, dmolnar wrote:
>More than that, if the "tragedy of the commons" really happens for
>Gnutella and Napster and friends, then people will look for ways to avert
>it. Maybe it won't happen ("The Cornucopia of the Commons"), but if
>it does, reputation systems might see some sudden interest. 

Napster itself suffers from a tragedy of the inadequate business
model: it relies on centralized servers with no visible means of
support (other than the hope that "with 20 million users we
should be able to get revenue _somewhere_"), and it faces
potentially exponential growth in its legal costs if it ever
gets any revenue.

They do have a problem related to the tragedy of the commons:
they need servers bigger than the biggest individual servers
they currently support, and a technology that doesn't scale
as well as they'd like. Some parts of it do scale extremely
well, though, and the next level of bottlenecks is still good
enough for pirating music, with users sharing music in
communities of a few hundred thousand, if not good enough for
six billion users.

I suspect the next layer of scalability could be handled
adequately by some good engineering, though perhaps it needs
Real Computer Science, but without a good funding model
it's not likely to get done.  The current model does seem
to port well to the Open-Servers-Not-Run-By-Napster model:
volunteers can run medium-sized servers because the
first level of scalability design was well done, and, as with
Napster-run servers, it's close enough for pirated music,
though it doesn't let you find everything on the
distributed net.

Less Napster-like systems with decentralized servers
have to address scaling problems as well.
Some of them tie their metadata and their transmission methods
closely together; some separate them more cleanly.
Gnutella sounds like it's in trouble: too much needs to
be online, and the original design can't handle a large number
of requests if there are people with slow connections on the net.
It's like a tragedy of the commons where the commons is so
small that everybody has to walk their sheep in single file,
so the slowest or dumbest sheep becomes a bottleneck for
everyone else.
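The flooding that causes the trouble can be sketched in a few
lines (a toy model, not Gnutella's actual wire protocol; the
topology and TTL here are invented for illustration). Every
query is forwarded to every neighbor until its TTL runs out, so
total traffic grows with the whole reachable network rather than
with the number of matches:

```python
from collections import deque

def flood_query(graph, start, ttl):
    """Gnutella-style query flooding (toy model): each peer
    forwards the query to all its neighbors, decrementing a
    TTL, so a single query touches every reachable peer
    within TTL hops."""
    seen = {start}
    frontier = deque([(start, ttl)])
    visited = 0
    while frontier:
        node, t = frontier.popleft()
        visited += 1          # this peer must process the query
        if t == 0:
            continue          # TTL exhausted: stop forwarding
        for peer in graph[node]:
            if peer not in seen:
                seen.add(peer)
                frontier.append((peer, t - 1))
    return visited

# Tiny made-up topology: with a generous TTL, one query reaches
# every peer, so per-query cost scales with network size.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(flood_query(graph, 0, 7))  # → 4 (all four peers see it)
```

Since every query transits every peer on the path, a single slow
link delays everything routed through it, which is the
single-file-sheep problem above.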
Freenet paid more attention to scaling in its design:
it's easy to retrieve stuff if you know where it is,
or to find stuff if it's relatively near you,
and it can cope with not being able to find everything.
On the other hand, it may be harder to find the stuff you want.
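The routing idea can be sketched roughly (a loose caricature of
Freenet's closeness-based routing, not the real protocol; the
node ids, stores, and hop limit are invented for illustration).
Each node forwards a request toward whichever neighbor looks
closest to the wanted key, which is why nearby content is found
quickly and distant content may not be found at all:

```python
def closest_key_route(neighbors, store, start, key, max_hops=8):
    """Sketch of closeness-based routing: forward the request
    to the neighbor whose id is numerically closest to the key,
    giving up if no neighbor is closer than the current node."""
    node, path = start, [start]
    for _ in range(max_hops):
        if key in store.get(node, set()):
            return path            # found the data; return route
        nxt = min(neighbors[node], key=lambda n: abs(n - key))
        if abs(nxt - key) >= abs(node - key):
            return None            # no closer neighbor: give up
        node = nxt
        path.append(node)
    return None                    # hop limit reached

# Toy network: node ids double as key locations;
# node 30 holds the data for key 29.
neighbors = {0: [10, 20], 10: [0, 20, 30],
             20: [10, 30], 30: [20, 40], 40: [30]}
store = {30: {29}}
print(closest_key_route(neighbors, store, 0, 29))  # → [0, 20, 30]
```

Note that each hop only needs local knowledge, so no node has to
index the whole net, but a key with no nearby replica simply
returns nothing, matching the "can cope with not finding
everything" trade-off above.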

>On Mon, 25 Dec 2000, Tim May wrote:
>> In other words, it's time to get crypto out of the math and computer 
>> science departments and put it in the engineering departments where 
>> it belongs.

Some of this may be computer science, some is engineering,
and some is just counting stuff :-)  Some problems, like
scalability or understanding don't-use-the-same-key-twice
attacks on RC4, are Science the first time you learn them,
but they're just engineering after a while. In the same way,
understanding the relationship of a material's tensile strength
to its molecular structure is science, but designing a bridge
so that it doesn't overstress any of its beams is engineering,
and taking occasional samples of bolts and destructively
testing them to make sure they've got the tensile strength
they're supposed to is engineering, or maybe just business
practice (depending on whether you're doing it to make sure
your bridge will perform the way you want or to make sure
your suppliers aren't ripping you off).
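That RC4 pitfall is concrete enough to show in a few lines:
encrypting two messages under the same key lets an eavesdropper
XOR the ciphertexts and cancel the keystream entirely, leaving
the XOR of the two plaintexts. A minimal sketch (real RC4 key
scheduling and output generation; the key and messages are
made-up examples):

```python
def rc4(key, n):
    """Generate n bytes of RC4 keystream for the given key
    (standard KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                     # key scheduling
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                       # keystream output
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Two messages encrypted under the SAME key: the keystream
# cancels, leaking the XOR of the plaintexts to any listener.
p1, p2 = b"attack at dawn", b"retreat now!!!"
ks = rc4(b"secret", len(p1))
c1, c2 = xor(p1, ks), xor(p2, ks)
assert xor(c1, c2) == xor(p1, p2)
```

Once you've seen this trick, avoiding it (unique keys or
per-message IVs) really is just engineering.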

				Thanks! 
					Bill
Bill Stewart, bill.stewart at pobox.com
PGP Fingerprint D454 E202 CBC8 40BF  3C85 B884 0ABE 4639
