Torrenting The Darknets

Zenaan Harkness zen at freedbms.net
Sun May 21 22:24:10 PDT 2017


> On May 21, 2017 4:09 PM, "grarpamp" <grarpamp at gmail.com> wrote:
> 
> > https://yro.slashdot.org/story/17/05/17/1830228/popular-torrent-site-extratorrent-permanently-shuts-down
> > https://torrentfreak.com/extratorrent-shuts-down-for-good-170517/
> >
> > ExtraTorrent is the latest in a series of BitTorrent giants to fall in
> > recent months. Previously, sites including KickassTorrents,
> > Torrentz.eu, TorrentHound and What.cd went offline.
> > Clearnet = Fail.


On Sun, May 21, 2017 at 05:04:34PM -0700, Steven Schear wrote:
> These trackers need to adopt distributed hosting tech, like IPFS or
> NetZero, so there are no single points of pressure/failure and the operator
> IP and identity have a reasonable chance of staying private from technical
> snooping.
> 
> Warrant Canary creator


(Please bottom post. Please please. Thanks in advance...)

There are two primary groups of overlay-net nodes:
	-	"unlimited bandwidth" nodes
	-	"set/specific limit" nodes

The integration of these two node types into any overlay network might
be optimised by treating them algorithmically as distinct node types,
even though most of the network connection/participation params would
otherwise be identical (a rough sketch follows the list below).

	-	Permanently connected, fixed-throughput nodes are ideal for
		distributed storage.

	-	When the "available permanent rate" drops below a certain figure,
		the node may be best used as a DHT-style data store.

	-	The "latency <-> stealth" tradeoff setting is, of course, another
		end-user preference.

	-	Intermittently ("randomly") connected nodes which make available
		significant cache store, and also have a high-speed net
		connection (when connected), should still be readily employable
		by the network overall (it's just algorithms).
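
To make the above concrete, here is a minimal sketch of how an overlay
might assign a role from a node's self-reported parameters. The field
names, thresholds and role names are all hypothetical illustrations,
not taken from any existing network:

# Hypothetical sketch only: assign an overlay role from self-reported
# node parameters.  Thresholds, field names and role names are invented
# for illustration, not taken from any real network.

from dataclasses import dataclass

@dataclass
class NodeParams:
    permanent: bool       # always-on vs intermittently connected
    rate_kbps: int        # sustained upstream rate the user allows
    cache_gb: int         # storage the user donates
    prefer_stealth: bool  # "latency <-> stealth" setting (carried, unused in this toy policy)

def assign_role(n: NodeParams) -> str:
    if n.permanent and n.rate_kbps >= 512:
        return "distributed-store"   # permanent, fixed-throughput nodes
    if n.permanent:
        return "dht-store"           # low sustained rate: small keys/values only
    if n.cache_gb >= 50 and n.rate_kbps >= 5000:
        return "burst-cache"         # intermittent, but fast and roomy while online
    return "client"                  # participates, stores little

print(assign_role(NodeParams(permanent=True, rate_kbps=128,
                             cache_gb=10, prefer_stealth=True)))   # dht-store

The point is only that "unlimited" and "limited" nodes can be routed
to different jobs by a few lines of policy, not that these particular
thresholds are right.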

No matter the generosity of some nodes, ultimately the Freenet and
other discussion histories appear to show that optimizing for the
end-user's personal data-type preferences (e.g. movies, books,
software etc.) is the minimum-friction route to maximum participation
- all algorithms should account for this fact of human nature.

Git shows us (many of) the benefits of a content-addressable middle
layer - whether we build torrents, p2p libraries (clearnet or
darknet), blogs, websites, or other databases, having all content
uniquely and unambiguously addressable (planet-wide) is a facility we
need not discard at this point in our computing history.
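
For concreteness, a content address in the Git sense is just a hash
over a typed, length-prefixed payload. The sketch below reproduces
Git's blob addressing (the object ID Git assigns to a file's contents)
using nothing but the standard library:

import hashlib

def git_blob_id(data: bytes) -> str:
    # Git's object ID for a blob: SHA-1 over "blob <size>\0" followed
    # by the content itself.
    header = b"blob %d\0" % len(data)
    return hashlib.sha1(header + data).hexdigest()

print(git_blob_id(b"hello world\n"))
# 3b18e512dba79e4c8300dd08aeb37f8e728b8dad -- the same ID that
# `git hash-object` prints for a file containing "hello world\n"

Any node that holds bytes hashing to that ID can serve them; the
address says nothing about where the content lives, which is exactly
what a planet-wide store needs.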

When SHA-1 began to dominate and MD5 came to be viewed as "gee, that's
obviously not very good", it was far too easy for the lesson to be
ignored and for random software to hardcode an algorithm assumed to be
"good enough forever" - like, I dunno, Git, for a random example.

So we face the lesson again: no algorithm is certain to withstand the
test of time, and we can almost say with certainty that all
algorithms today could fail the test of theoretical future "quantum
computers".

Primary questions re content-addressability are:

	-	what is Git transitioning to, and is Git's upcoming new hash
		mechanism adequate for global content addressing?

	-	what is robust in the face of hash-algorithm changes? (one
		possible approach is sketched after this list)

	-	what are the interactions between some definitively chosen hash
		system (Git's or otherwise) and other existing systems like
		BitTorrent?
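
On the robustness question, one existing answer is the self-describing
address - IPFS's multihash takes this approach - where every digest
carries an identifier of the algorithm that produced it, so old and
new algorithms can coexist and verification always knows which
function to re-run. A minimal sketch with made-up string tags (the
real multihash table uses compact binary codes):

import hashlib

# Made-up string tags for illustration; the real multihash table uses
# compact binary codes instead.
ALGOS = {"sha1": hashlib.sha1, "sha256": hashlib.sha256}

def make_address(data: bytes, algo: str = "sha256") -> str:
    # A self-describing address: the digest carries the name of the
    # algorithm that produced it.
    return algo + ":" + ALGOS[algo](data).hexdigest()

def verify(address: str, data: bytes) -> bool:
    algo, _, digest = address.partition(":")
    return algo in ALGOS and ALGOS[algo](data).hexdigest() == digest

addr = make_address(b"some content")
print(addr)                          # sha256:<64 hex chars>
print(verify(addr, b"some content")) # True

Retiring a weakened algorithm then means preferring a new entry in the
table for fresh content, while every old address keeps verifying
exactly as before.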

Our juicy future beckons...


