On Sun, May 21, 2017 at 09:27:21PM -0400, grarpamp wrote:
On Sun, May 21, 2017 at 8:04 PM, Steven Schear <schear.steve@gmail.com> wrote:
These trackers
These *websites* are not actually "trackers" and have generally shifted away from providing tracker service ever since legal pressure made bundling the two services riskier. A new, independently operated layer of services has arisen to provide the tracker function, with opentracker as bootstrap / fallback for the DHT, PEX, etc. https://en.wikipedia.org/wiki/Opentracker
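For context, the tracker function that layer still provides is just the plain BitTorrent HTTP announce. A minimal Python sketch - the tracker URL and info_hash below are placeholders, not a real opentracker instance:

    # Minimal BitTorrent HTTP announce against an opentracker-style endpoint.
    # The tracker URL and info_hash are placeholders, not real values.
    import os
    import urllib.parse
    import urllib.request

    TRACKER = "http://tracker.example.org:6969/announce"   # hypothetical opentracker
    info_hash = bytes.fromhex("00" * 20)                    # 20-byte SHA-1 of the torrent's info dict
    peer_id = b"-XX0001-" + os.urandom(12)                  # 20-byte peer id

    params = urllib.parse.urlencode({
        "info_hash": info_hash,      # urlencode percent-escapes the raw bytes
        "peer_id": peer_id,
        "port": 6881,
        "uploaded": 0,
        "downloaded": 0,
        "left": 0,
        "compact": 1,                # ask for the compact 6-byte-per-peer format
    })

    with urllib.request.urlopen(TRACKER + "?" + params) as resp:
        body = resp.read()           # a bencoded dict: interval, peers, ...
    print(body[:200])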
The proper term for these websites is searchable "indexes" (DBs), bundled with community bling forums.
In reality it is the "indexes" that...
need to adopt distributed hosting tech, like IPFS or NetZero, so there are no single points of pressure/failure and the operator IP and identity have a reasonable chance of staying private from technical snooping.
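As a rough illustration of what "distributed hosting" of an index could look like, here is a sketch that pushes a dump of the index DB into a locally running IPFS daemon over its HTTP API. The daemon address is the default, the file name is made up, and publishing this safely/anonymously is a separate problem:

    # Sketch: add an index dump to a local IPFS daemon via its HTTP API (/api/v0/add).
    # Assumes a daemon on the default API port; "index-dump.sqlite" is a placeholder.
    import json
    import urllib.request
    import uuid

    API = "http://127.0.0.1:5001/api/v0/add"
    path = "index-dump.sqlite"

    boundary = uuid.uuid4().hex
    with open(path, "rb") as f:
        data = f.read()

    # hand-rolled multipart/form-data body with a single "file" part
    body = (
        ("--%s\r\n" % boundary).encode()
        + ('Content-Disposition: form-data; name="file"; filename="%s"\r\n' % path).encode()
        + b"Content-Type: application/octet-stream\r\n\r\n"
        + data
        + ("\r\n--%s--\r\n" % boundary).encode()
    )

    req = urllib.request.Request(API, data=body, method="POST")
    req.add_header("Content-Type", "multipart/form-data; boundary=%s" % boundary)
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))   # {"Name": ..., "Hash": ..., "Size": ...}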
The bootstrap of tracking (where to first securely link up with peers for the DHT) also needs to be fixed.
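Today that bootstrap is typically a hard-coded UDP query to a well-known router node - exactly the single point of trust being complained about. A minimal sketch of the mainline-DHT KRPC "ping" (the router host is the conventional one, the node id is random):

    # Sketch: bootstrap into the mainline BitTorrent DHT by pinging a well-known router node.
    import os
    import socket

    def bencode(obj):
        # just enough bencoding for this query (ints, byte strings, dicts)
        if isinstance(obj, int):
            return b"i%de" % obj
        if isinstance(obj, bytes):
            return b"%d:%s" % (len(obj), obj)
        if isinstance(obj, dict):
            return b"d" + b"".join(bencode(k) + bencode(v) for k, v in sorted(obj.items())) + b"e"
        raise TypeError(obj)

    node_id = os.urandom(20)   # our ephemeral 160-bit node id
    ping = bencode({b"t": b"aa", b"y": b"q", b"q": b"ping", b"a": {b"id": node_id}})

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5)
    sock.sendto(ping, ("router.bittorrent.com", 6881))
    reply, addr = sock.recvfrom(1500)   # bencoded {"y":"r","r":{"id": <router's id>}}
    print(addr, reply)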
Yes, and many a seemingly worthy attempt has been made. Consensus is adequate if you are certain you're not "entrapped in a deceptive small-world scenario". Other than that, the only sure option is actual fellow humans, at least one of whom you trust to have a reasonably 'accurate' network view. For those who are unsure, humans are those funny creatures that on occasion wear sneakers to transport USB sticks and other devices between your precious computers - if you check closely, you may find that you yourself, whatever and whoever you consider yourself to be, are in fact also human - but be extra careful and paranoid, as competent deceptions have been known to occur.
"Filesharing" could be designed lots of ways, doesn't have to be "bittorrent protocol" proper. Though if it was compatible with BT clients you'd have millions of instant users / nodes in your encrypted anonymous ecosystem.
Azureus/Vuze (Java) is highly plugin-ready, having many plugins already (excessively many, some would say), and so its plugin API is well and truly street tested over many years.

And anyway, STEP 1: we must have a sane architecture, NOT "overlay net that only runs TCP", for just one example (<AHEM>Tor<COUGH>). I.e., the Unix design philosophy - no more monolithic, independent projects, thanks.

In the latest Git-rev-news newsletter, e.g., the git guys proposed, discussed, and ultimately merged a (C language) "SWAP" macro - and yet nowhere can be seen any mention of "what do the Linux kernel guys do?" - guys, are you serious? I guess for some, programming is just entertainment, a means to enthrall oneself - but I hope some round these parts take at least a mild passing interest in prior art in Unix, Linux and the rest...

Need some discussions about each layer (and the layers themselves), and eliminate that which can be easily eliminated from the design space:

- hash/ID layer: identify nodes, content items, possibly routes, etc. (if you feel competent in creating a competent hash/ID layer, please join the ongoing Git discussion - see previous email for more)
- DHT / basic data store layer (Demonoid / bittorrent genre): various key-data pair lookups and searches (nodes, content items, users, bloggers, etc.)
- network and transport layers - UDP at the least, perhaps ethernet frames?
- distributed cache/data store layer, a la Freenet, GFS, etc.

Principles, protocols:

- version things, so the protocol for a layer can be enhanced / transitioned over time, e.g. Git and ID/GUID hashing (see the sketch below)
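To make the hash/ID layer and the "version things" principle concrete, here is one possible shape of a self-describing, versioned content/node ID. This is a rough sketch only; the algorithm prefix bytes are made-up placeholders, and the real design discussion is the Git one referenced above:

    # Sketch: a self-describing, versioned ID for the hash/ID layer.
    # A prefix byte says which hash function produced the digest, so the layer
    # can be transitioned (e.g. SHA-1 -> SHA-256) without breaking old IDs.
    # The prefix byte values are arbitrary placeholders.
    import hashlib

    HASH_FUNCS = {
        0x01: ("sha1", hashlib.sha1),
        0x02: ("sha256", hashlib.sha256),
    }

    def make_id(content: bytes, algo: int = 0x02) -> bytes:
        _, func = HASH_FUNCS[algo]
        digest = func(content).digest()
        return bytes([algo, len(digest)]) + digest      # <algo><length><digest>

    def parse_id(ident: bytes):
        algo, length = ident[0], ident[1]
        name, _ = HASH_FUNCS[algo]
        return name, ident[2:2 + length]

    content = b"hello, distributed index"
    ident = make_id(content)
    print(ident.hex(), parse_id(ident)[0])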