On Mon, May 15, 2017 at 10:49 AM, Razer <g2s@riseup.net> wrote:
Indie films on darknets mate.
https://en.wikipedia.org/wiki/The_President's_Analyst http://torrentking.eu/movie-1967/the-president-s-analyst-torrents/ http://www.demonoid.click/files/details/3552579/ http://www.demonoid.click/files/download/3552579/ udp://inferno.demonoid.pw:3391/announce
I never had much luck uploading torrents myself.
A functional darknet "torrent" system would have at least just one "tracker"... the darknet internal DHT itself. You'd publish the InfoHash to whatever darknet indexes you like. In this case, the "infohash" is 33fc6b8baa0d74e0c0c96d33a29d11dbbce9edc1 and the "index" for at least this message is the cpunks list, while someone might "seed" it on I2P.... I2PSnark, I2PRufus, I2P-Transmission... which utilize certain backend "trackers" mechanisms not necessarily all distributed yet. But there still desperately needs to be a distributed storage layer that long term automagically backs up the explicit "seeders" which tend to be volatile. At which point the original "OP seeder's" best role is then to just inserting into the storage layer and walking away. "Elective non OP seeders of specific torrent sets" such as with all torrents in Vuze / Transmission seed list, needs to transition to offering darknet storage blocks for all insertions that may happen.
On 05/20/2017 08:42 PM, grarpamp wrote:
On Mon, May 15, 2017 at 10:49 AM, Razer <g2s@riseup.net> wrote:
Indie films on darknets mate.
https://en.wikipedia.org/wiki/The_President's_Analyst http://torrentking.eu/movie-1967/the-president-s-analyst-torrents/ http://www.demonoid.click/files/details/3552579/ http://www.demonoid.click/files/download/3552579/ udp://inferno.demonoid.pw:3391/announce
I never had much luck uploading torrents myself.
A functional darknet "torrent" system would have at least just one "tracker"... the darknet internal DHT itself. You'd publish the InfoHash to whatever darknet indexes you like. In this case, the "infohash" is 33fc6b8baa0d74e0c0c96d33a29d11dbbce9edc1 and the "index" for at least this message is the cpunks list, while someone might "seed" it on I2P.... I2PSnark, I2PRufus, I2P-Transmission... which utilize certain backend "trackers" mechanisms not necessarily all distributed yet.
i2p can be called "a functional darknet torrent system," in that the large majority of traffic crossing that network is torrents. The i2p package includes the router, a browser based torrent client and a simple web server. The two biggest trackers on i2p are Postman and Difftracker, both are stable with good uptimes. I was pleased to note on my last visit that some files I seeded and promoted there about five years ago are still available. I got my copy of The President's Analyst ages ago, I would seed it but alas, my poor overworked computer can't afford the cycles to run i2p alongside all the other crap I am using it for: Graphics and video editing, etc. A more "normal" user won't see a performance hit from i2p unless the system they run it on is already overstressed. :o)
But there still desperately needs to be a distributed storage layer that automagically backs up the explicit "seeders" over the long term, since they tend to be volatile. At that point the original "OP seeder's" best role is simply to insert into the storage layer and walk away. "Elective non-OP seeders of specific torrent sets", such as everyone seeding the torrents in a Vuze / Transmission seed list, need to transition to offering darknet storage blocks for whatever insertions may happen.
On Sat, May 20, 2017 at 10:21 PM, Steve Kinney <admin@pilobilus.net> wrote:
i2p can be called "a functional darknet torrent system,"
on my last visit that some files I seeded and promoted there about five years ago are still available.
Only through the explicit goodwill of human "seeders" actively seeding specifically chosen infohashes. That's nice, and very good to have, for "speed" if nothing else. But the missing link is some form of AI that actually maintains a copy on a distributed redundant storage backend according to whatever demand or insertion parameters... or reasonably forever [1]. Otherwise, when the last human "seeder" ceases seeding, no matter how popular the material was, it dies with them. There's enough slack space on the planet's masses of hard drives for an app to plug into the darknets and do just that. Think of it this way, even grander:

all human knowledge * darknet storage redundancy level <= Npeople * avg unused space

This inequality should be easy to estimate as true. So it can be built... And when it is, people won't mind running it; after all, it's anonymous, encrypted, etc., and they get access to indexes of all knowledge therein, including fave pop music and puppy pictures.

[1] Seeder lifetime is a terrible definition of when to expire something, and it has no facility to even try to maintain a single archive copy for decades.
One of the great weaknesses of torrents (and filesharing systems in general) is the lack of mechanisms to promote persistence. That's why a group of us (including Bram Cohen of BitTorrent, and Bryce Wilcox-O'Hearn (Zooko) of Tahoe-LAFS, MNET and ZCash) created Mojo Nation. Unfortunately, Mojo failed to get follow-on funding due to Napster. Fortunately, the idea of a publishing model (vs. filesharing), with an internal reward system for persistence, was independently re-discovered by MaidSafe, IPFS and ZeroNet. I hope at least one succeeds.

Warrant Canary creator

On May 20, 2017 7:22 PM, "Steve Kinney" <admin@pilobilus.net> wrote:
On 05/20/2017 08:42 PM, grarpamp wrote:
On Mon, May 15, 2017 at 10:49 AM, Razer <g2s@riseup.net> wrote:
Indie films on darknets mate.
https://en.wikipedia.org/wiki/The_President's_Analyst http://torrentking.eu/movie-1967/the-president-s-analyst-torrents/ http://www.demonoid.click/files/details/3552579/ http://www.demonoid.click/files/download/3552579/ udp://inferno.demonoid.pw:3391/announce
I never had much luck uploading torrents myself.
A functional darknet "torrent" system would have at least just one "tracker"... the darknet internal DHT itself. You'd publish the InfoHash to whatever darknet indexes you like. In this case, the "infohash" is 33fc6b8baa0d74e0c0c96d33a29d11dbbce9edc1 and the "index" for at least this message is the cpunks list, while someone might "seed" it on I2P.... I2PSnark, I2PRufus, I2P-Transmission... which utilize certain backend "trackers" mechanisms not necessarily all distributed yet.
i2p can be called "a functional darknet torrent system," in that the large majority of traffic crossing that network is torrents. The i2p package includes the router, a browser based torrent client and a simple web server. The two biggest trackers on i2p are Postman and Difftracker, both are stable with good uptimes. I was pleased to note on my last visit that some files I seeded and promoted there about five years ago are still available.
I got my copy of The President's Analyst ages ago; I would seed it, but alas, my poor overworked computer can't afford the cycles to run i2p alongside all the other crap I am using it for: graphics and video editing, etc. A more "normal" user won't see a performance hit from i2p unless the system they run it on is already overstressed.
:o)
But there still desperately needs to be a distributed storage layer that automagically backs up the explicit "seeders" over the long term, since they tend to be volatile. At that point the original "OP seeder's" best role is simply to insert into the storage layer and walk away. "Elective non-OP seeders of specific torrent sets", such as everyone seeding the torrents in a Vuze / Transmission seed list, need to transition to offering darknet storage blocks for whatever insertions may happen.
Rewards seem nice, yet not everyone who wants to play can pay, or the math overhead is crushing, or it becomes centralized. Definitely worth trying, especially if it fits some usage model.

Another form is to just let the network use whatever CPU, RAM, DISK, NET that you're not currently using, or give it whatever limits you want. In short, set it and forget it. Let the network figure out how to best use your node to support the network. Maybe it's a strictly filesharing network, or a general purpose network. That's the "Hey, I just want to donate this because it's cool, like Seti@Home, etc." approach.

Users' actual use of the network would be through different apps... be it submitting infohashes, or compute jobs, etc.

Does eliminating all the reward tracking overhead provide substantial resources back to support free use? i.e.: Most people and their computer resources sit idle, probably more than enough to provide back whatever multimedia they want to consume. If true, all balances out, no need to bother tracking accounting with a "pay to play" style system?

I like "pay to play" as it offers at least some firm guarantee to the consumer / offeror. But an accounting-free system is more fun, as in free beer :) Hybrids might work too.

https://en.wikipedia.org/wiki/Exabyte https://en.wikipedia.org/wiki/Zettabyte https://en.wikipedia.org/wiki/Yottabyte https://en.wikipedia.org/wiki/Orders_of_magnitude_(data)

100M users donating 10GiB slack space is about 0.93 EiB of non redundant storage, excluding overhead. Example, at 4x redundancy, that probably easily covers lossless versions of all movies (at least 1080p) and all audio (FLAC), all wikipedia, all OS and apps. Approaching mini-NSA scale... not a bad start.
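As a sanity check on those figures (the per-user donation and the 4x redundancy factor are simply the assumptions stated above), a few lines of arithmetic:

# Back-of-the-envelope check of the figures in the post above.
users = 100_000_000          # 100M donating users (assumption from the post)
per_user = 10 * 2**30        # 10 GiB of slack space each
redundancy = 4               # 4x replication (assumption from the post)

raw = users * per_user                   # total donated bytes
usable = raw / redundancy                # unique bytes storable at 4x copies

EiB = 2**60
print(f"raw pool:      {raw / EiB:.2f} EiB")      # ~0.93 EiB
print(f"usable at 4x:  {usable / EiB:.2f} EiB")   # ~0.23 EiB, roughly 238 PiB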
Warrant Canary creator

On May 20, 2017 9:34 PM, "grarpamp" <grarpamp@gmail.com> wrote:

Rewards seem nice, yet not everyone who wants to play can pay, or the math overhead is crushing, or it becomes centralized. Definitely worth trying, especially if it fits some usage model. Another form is to just let the network use whatever CPU, RAM, DISK, NET that you're not currently using, or give it whatever limits you want. In short, set it and forget it. Let the network figure out how to best use your node to support the network. Maybe it's a strictly filesharing network, or a general purpose network. That's the "Hey, I just want to donate this because it's cool, like Seti@Home, etc." approach.

Mojo's internal currency was based on the resources offered, shared or consumed. We even patented it (apparently unknown to MaidSafe; hell, they never even heard about us :)

Users' actual use of the network would be through different apps... be it submitting infohashes, or compute jobs, etc. Does eliminating all the reward tracking overhead provide substantial resources back to support free use?

Probably not.

i.e.: Most people and their computer resources sit idle, probably more than enough to provide back whatever multimedia they want to consume. If true, all balances out, no need to bother tracking accounting with a "pay to play" style system? I like "pay to play" as it offers at least some firm guarantee to the consumer / offeror. But an accounting-free system is more fun, as in free beer :) Hybrids might work too. https://en.wikipedia.org/wiki/Exabyte https://en.wikipedia.org/wiki/Zettabyte https://en.wikipedia.org/wiki/Yottabyte https://en.wikipedia.org/wiki/Orders_of_magnitude_(data) 100M users donating 10GiB slack space is about 0.93 EiB of non redundant storage, excluding overhead. Example, at 4x redundancy, that probably easily covers lossless versions of all movies (at least 1080p) and all audio (FLAC), all wikipedia, all OS and apps. Approaching mini-NSA scale... not a bad start.
On 05/21/2017 12:32 AM, grarpamp wrote:
Rewards seem nice, yet not everyone who wants to play can pay, or the math overhead is crushing, or it becomes centralized. Definitely worth trying, especially if it fits some usage model.
Another form is to just let the network use whatever CPU, RAM, DISK, NET that you're not currently using, or give it whatever limits you want. In short, set it and forget it. Let the network figure out how to best use your node to support the network. Maybe it's a strictly filesharing network, or a general purpose network. That's the "Hey, I just want to donate this because it's cool, like Seti@Home, etc." approach.
Now I think you're describing Freenet. How doth Freenet suck, let me count the ways... Massive computational overhead was the main thing, the last time I tried it, which was ages ago. It really needed its own dedicated box to "just work." But it does distribute files, it increases the availability of more popular ones (via increased redundancy of storage), and it is censorship-resistant because the stored data is distributed, encrypted and anonymized. I think a project that aims to improve on the implementation of the basic ideas in Freenet could be a big winner. :o)
Users' actual use of the network would be through different apps... be it submitting infohashes, or compute jobs, etc.
Does eliminating all the reward tracking overhead provide substantial resources back to support free use?
i.e.: Most people and their computer resources sit idle, probably more than enough to provide back whatever multimedia they want to consume. If true, all balances out, no need to bother tracking accounting with a "pay to play" style system?
I like "pay to play" as it offers at least some firm guarantee to the consumer / offeror.
But an accounting-free system is more fun, as in free beer :)
Hybrids might work too.
https://en.wikipedia.org/wiki/Exabyte https://en.wikipedia.org/wiki/Zettabyte https://en.wikipedia.org/wiki/Yottabyte https://en.wikipedia.org/wiki/Orders_of_magnitude_(data)
100M users donating 10GiB slack space is about 0.93 EiB of non redundant storage, excluding overhead.
Example, at 4x redundancy, that probably easily covers lossless versions of all movies (at least 1080p) and all audio (FLAC), all wikipedia, all OS and apps.
Approaching mini-NSA scale... not a bad start.
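The Freenet behaviour described above in this message, popular content accumulating extra copies, can be sketched very roughly as follows. This is not Freenet's actual algorithm; the replication rule and thresholds are invented purely for illustration.

# Toy sketch of popularity-driven replication, in the spirit of the Freenet
# behaviour described above: the more often a block is requested, the more
# node caches end up holding a copy. The thresholds are made up.
import hashlib
from collections import defaultdict

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}          # block_id -> bytes

class ToyNetwork:
    def __init__(self, nodes):
        self.nodes = nodes
        self.requests = defaultdict(int)   # block_id -> request count

    def insert(self, data):
        block_id = hashlib.sha256(data).hexdigest()
        self.nodes[0].store[block_id] = data      # initial single copy
        return block_id

    def fetch(self, block_id):
        self.requests[block_id] += 1
        holders = [n for n in self.nodes if block_id in n.store]
        if not holders:
            return None
        data = holders[0].store[block_id]
        # crude popularity rule: one extra replica per 10 requests,
        # capped at the number of nodes
        want = min(1 + self.requests[block_id] // 10, len(self.nodes))
        for node in self.nodes:
            if len(holders) >= want:
                break
            if block_id not in node.store:
                node.store[block_id] = data
                holders.append(node)
        return data

net = ToyNetwork([Node(f"n{i}") for i in range(8)])
bid = net.insert(b"popular indie film block")
for _ in range(25):
    net.fetch(bid)
print(sum(bid in n.store for n in net.nodes), "replicas after 25 requests")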
On Sun, 21 May 2017 01:45:20 -0400 Steve Kinney <admin@pilobilus.net> wrote:
Now I think you're describing Freenet. How doth Freenet suck, let me count the ways...
Actually Freenet seems like the best project of its kind. It's not garbage produced by the Pentagon, and it tries to be really decentralized.
massive computational overhead was the main thing,
I never experienced that, although it would be nice if they didn't use Java.
last time I tried it which was ages ago. It really needed its own dedicated box to "just work."
nonsense.
But it does distribute files, increase the availability of more popular ones (via increased redundancy of storage), and is censorship resistant due to distributed storage of data which itself is encrypted and anonymized.
Yes, the concept is pretty 'cypherpunk'.
I think a project that aims to improve on the implementation of the basic ideas in Freenet could be a big winner.
:o)
Users' actual use of the network would be through different apps... be it submitting infohashes, or compute jobs, etc.
Does eliminating all the reward tracking overhead provide substantial resources back to support free use?
i.e.: Most people and their computer resources sit idle, probably more than enough to provide back whatever multimedia they want to consume. If true, all balances out, no need to bother tracking accounting with a "pay to play" style system?
I like "pay to play" as it offers at least some firm guarantee to the consumer / offeror.
But an accounting-free system is more fun, as in free beer :)
Hybrids might work too.
https://en.wikipedia.org/wiki/Exabyte https://en.wikipedia.org/wiki/Zettabyte https://en.wikipedia.org/wiki/Yottabyte https://en.wikipedia.org/wiki/Orders_of_magnitude_(data)
100M users donating 10GiB slack space is about 0.93 EiB of non redundant storage, excluding overhead.
Example, at 4x redundancy, that probably easily covers lossless versions of all movies (at least 1080p) and all audio (FLAC), all wikipedia, all OS and apps.
Approaching mini-NSA scale... not a bad start.
Warrant Canary creator

On May 20, 2017 10:46 PM, "Steve Kinney" <admin@pilobilus.net> wrote:

On 05/21/2017 12:32 AM, grarpamp wrote:
Rewards seem nice, yet not everyone who wants to play can pay, or the math overhead is crushing, or it becomes centralized. Definitely worth trying, especially if it fits some usage model.
Another form is to just let the network use whatever CPU, RAM, DISK, NET that you're not currently using, or give it whatever limits you want. In short, set it and forget it. Let the network figure out how to best use your node to support the network. Maybe it's a strictly filesharing network, or a general purpose network. That's the "Hey, I just want to donate this because it's cool, like Seti@Home, etc." approach.
Now I think you're describing Freenet. How doth Freenet suck, let me count the ways... Massive computational overhead was the main thing, the last time I tried it, which was ages ago. It really needed its own dedicated box to "just work." But it does distribute files, it increases the availability of more popular ones (via increased redundancy of storage), and it is censorship-resistant because the stored data is distributed, encrypted and anonymized. I think a project that aims to improve on the implementation of the basic ideas in Freenet could be a big winner. :o)

Mojo was being developed contemporaneously with Freenet and shares some of its distributed features. It was sort of like Freenet + a resource-based currency. You do not want a filesharing system as it removes any hope of plausible deniability for content.
Users' actual use of the network would be through different apps... be it submitting infohashes, or compute jobs, etc.
Does eliminating all the reward tracking overhead provide substantial resources back to support free use?
i.e.: Most people and their computer resources sit idle, probably more than enough to provide back whatever multimedia they want to consume. If true, all balances out, no need to bother tracking accounting with a "pay to play" style system?
I like "pay to play" as it offers at least some firm guarantee to the consumer / offeror.
But an accounting-free system is more fun, as in free beer :)
Hybrids might work too.
https://en.wikipedia.org/wiki/Exabyte https://en.wikipedia.org/wiki/Zettabyte https://en.wikipedia.org/wiki/Yottabyte https://en.wikipedia.org/wiki/Orders_of_magnitude_(data)
100M users donating 10GiB slack space is about 0.93 EiB of non redundant storage, excluding overhead.
Example, at 4x redundancy, that probably easily covers lossless versions of all movies (at least 1080p) and all audio (FLAC), all wikipedia, all OS and apps.
Approaching mini-NSA scale... not a bad start.
Mojo was being developed contemporaneously with Freenet and shares some of its distributed features. It was sort of like Freenet + a resource-based currency.
True.
You do not want a filesharing system as it removes any hope of plausible deniability for content.
Huh? If it's encrypted and anonymous it's deniable by all, and even billing it as "filesharing" is fine, at least currently, due to the legal free speech uses riding within. Though if you bill it as "illegal copyright infringement", you yourself might take heat for "incitement", but the network itself would still be safe. Such network nodes themselves, like I2P / Tor / Freenet, operate freely because of that principle, and it's been proven out successfully for maybe 15-20 years now. Strongly encrypted + strongly anonymous + decentralized works in this space. Unfortunately, few qualify... Napster, Gnutella, LimeWire, Kazaa, BitTorrent, whatever... when run over clearnet, of course they all get shut down, due to some combination of centralized, not encrypted, not anonymous... no deniability there. Wikipedia is a bit scattered, but here are some references: https://en.wikipedia.org/wiki/Anonymous_P2P
What I meant: if you are holding and sharing an entire file of some really sensitive content, and depend on networking technologies known or assumed to have flaws which can expose your IP address, you have relinquished the ability to deny it. Whereas if this content has been published using something like Freenet, no single user of the content distribution system has more than a fragment of that content, and what they each have is not only encrypted (and you don't have the key) but bit-interleaved, and your software has no idea what part(s) of the content you hold nor where the other parts reside (for that, your software must possess the file's "treasure map", which can be closely held). This offers good plausible deniability.

Warrant Canary creator

On May 21, 2017 2:24 PM, "grarpamp" <grarpamp@gmail.com> wrote:
Mojo was being developed contemporaneously with Freenet and shares some of its distributed features. It was sort of like Freenet + a resource-based currency.
True.
You do not want a filesharing system as it removes any hope of plausible deniability for content.
Huh? If it's encrypted and anonymous it's deniable by all, and even billing it as "filesharing" is fine, at least currently, due to the legal free speech uses riding within. Though if you bill it as "illegal copyright infringement", you yourself might take heat for "incitement", but the network itself would still be safe. Such network nodes themselves, like I2P / Tor / Freenet, operate freely because of that principle, and it's been proven out successfully for maybe 15-20 years now. Strongly encrypted + strongly anonymous + decentralized works in this space. Unfortunately, few qualify... Napster, Gnutella, LimeWire, Kazaa, BitTorrent, whatever... when run over clearnet, of course they all get shut down, due to some combination of centralized, not encrypted, not anonymous... no deniability there. Wikipedia is a bit scattered, but here are some references: https://en.wikipedia.org/wiki/Anonymous_P2P
https://yro.slashdot.org/story/17/05/17/1830228/popular-torrent-site-extratorrent-permanently-shuts-down https://torrentfreak.com/extratorrent-shuts-down-for-good-170517/ ExtraTorrent is the latest in a series of BitTorrent giants to fall in recent months. Previously, sites including KickassTorrents, Torrentz.eu, TorrentHound and What.cd went offline. Clearnet = Fail.
These trackers need to adopt distributed hosting tech, like IPFS or ZeroNet, so there are no single points of pressure/failure and the operator IP and identity have a reasonable chance of staying private from technical snooping.

Warrant Canary creator

On May 21, 2017 4:09 PM, "grarpamp" <grarpamp@gmail.com> wrote:
https://yro.slashdot.org/story/17/05/17/1830228/popular-torrent-site-extratorrent-permanently-shuts-down https://torrentfreak.com/extratorrent-shuts-down-for-good-170517/
ExtraTorrent is the latest in a series of BitTorrent giants to fall in recent months. Previously, sites including KickassTorrents, Torrentz.eu, TorrentHound and What.cd went offline. Clearnet = Fail.
On Sun, May 21, 2017 at 8:04 PM, Steven Schear <schear.steve@gmail.com> wrote:
These trackers
These *websites* are not actually "trackers"; they have generally shifted away from providing tracker service ever since legal pressure made bundling services riskier, and a new, independently operated layer of services providing the tracker function, with opentracker as bootstrap / fallback for the DHT via PEX etc., has arisen. https://en.wikipedia.org/wiki/Opentracker The proper term for these websites is searchable "indexes (DB's)", bundled with community bling forums. In reality it is the "indexes" that...
need to adopt distributed hosting tech, like IPFS or ZeroNet, so there are no single points of pressure/failure and the operator IP and identity have a reasonable chance of staying private from technical snooping.
The bootstrap of tracking (where to first securely link up with peers for the DHT) also needs fixing. "Filesharing" could be designed lots of ways, doesn't have to be "bittorrent protocol" proper. Though if it was compatible with BT clients you'd have millions of instant users / nodes in your encrypted anonymous ecosystem.
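For what it's worth, once the DHT handles peer discovery, the only thing an "index" really has to distribute is a magnet link, which is just the infohash plus optional hints. A minimal sketch (the infohash and tracker are the ones from earlier in this thread; the display name and function name are illustrative):

# A magnet link carries everything a DHT-capable client needs: the infohash
# (urn:btih:...), an optional display name, and optional tracker hints.
from urllib.parse import quote

def magnet(infohash, name=None, trackers=()):
    uri = "magnet:?xt=urn:btih:" + infohash
    if name:
        uri += "&dn=" + quote(name)
    for tr in trackers:
        uri += "&tr=" + quote(tr, safe="")
    return uri

print(magnet(
    "33fc6b8baa0d74e0c0c96d33a29d11dbbce9edc1",
    name="The President's Analyst (1967)",
    trackers=["udp://inferno.demonoid.pw:3391/announce"],
))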
On Sun, May 21, 2017 at 09:27:21PM -0400, grarpamp wrote:
On Sun, May 21, 2017 at 8:04 PM, Steven Schear <schear.steve@gmail.com> wrote:
These trackers
These *websites* are not actually "trackers"; they have generally shifted away from providing tracker service ever since legal pressure made bundling services riskier, and a new, independently operated layer of services providing the tracker function, with opentracker as bootstrap / fallback for the DHT via PEX etc., has arisen. https://en.wikipedia.org/wiki/Opentracker
The proper term for these websites is searchable "indexes (DB's)", bundled with community bling forums.
In reality it is the "indexes" that...
need to adopt distributed hosting tech, like IPFS or ZeroNet, so there are no single points of pressure/failure and the operator IP and identity have a reasonable chance of staying private from technical snooping.
The bootstrap of tracking (where to first securely link up with peers for the DHT) also needs fixing.
Yes, and many a seemingly-worthy attempt has been made. Consensus is adequate if you are certain you're not "entrapped in a deceptive small-world scenario". Other than that, the only sure option is actual fellow humans, at least one of whom you trust to have a reasonably 'accurate' network view. For those who are unsure, humans are those funny creatures that on occasion wear sneakers to transport USB sticks and other devices between your precious computers - if you check closely, you may find that you yourself, whatever and whoever you consider yourself to be, are in fact also human - but be extra careful and paranoid, as competent deceptions have been known to occur.
"Filesharing" could be designed lots of ways, doesn't have to be "bittorrent protocol" proper. Though if it was compatible with BT clients you'd have millions of instant users / nodes in your encrypted anonymous ecosystem.
Azureus/Vuze (Java) is highly plugin-ready, having many plugins already (excessively many, some would say), so its plugin API is well and truly street-tested over many years. And anyway, STEP 1: we must have a sane architecture, NOT "overlay net that only runs TCP", for just one example (<AHEM>Tor<COUGH>). I.e., the Unix design philosophy - no more monolithic, independent projects, thanks.

In the latest Git-rev-news newsletter, e.g., the git guys proposed, discussed, and ultimately merged a (C language) "SWAP" macro - and yet nowhere can be seen any mention of "what do the Linux kernel guys do" - guys, are you serious? I guess for some, programming is just entertainment, a means to enthrall oneself - but I hope some round these parts take at least a mild passing interest in prior art in Unix, Linux and the rest...

We need some discussion about each layer (and the layers themselves), and to eliminate that which can be easily eliminated from the design space:

- hash/ID layer: identify nodes, content items, possibly routes, etc. (if you feel competent in creating a competent hash/ID layer, please join the ongoing Git discussion - see previous email for more)
- DHT / basic data store layer (Demonoid / bittorrent genre): various key-data pair lookups and searches (nodes, content items, users, bloggers, etc.)
- network and transport layers - UDP at the least, perhaps ethernet frames?
- distributed cache / data store layer, a la Freenet, GFS, etc.

Principles, protocols:

- version things, so the protocol for a layer can be enhanced / transitioned over time, e.g. Git and ID/GUID hashing
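As a rough illustration of the hash/ID layer proposed above (an invented sketch, not any particular project's API): every item is addressed by the hash of its own bytes, so whichever peer serves it, the receiver can verify what it got.

# Minimal sketch of a content-addressable store: content id = hash of bytes,
# so nodes, content items and index records are all self-verifying.
import hashlib

class BlobStore:
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()   # content id = hash of bytes
        self._blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self._blobs[cid]
        # self-verifying: a corrupt or forged blob cannot match its own id
        assert hashlib.sha256(data).hexdigest() == cid
        return data

store = BlobStore()
cid = store.put(b"any content item, torrent piece, or index record")
print(cid, store.get(cid)[:16])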
On Sun, May 21, 2017 at 09:27:21PM -0400, grarpamp wrote:
"Filesharing" could be designed lots of ways, doesn't have to be "bittorrent protocol" proper. Though if it was compatible with BT clients you'd have millions of instant users / nodes in your encrypted anonymous ecosystem.
A nice carrot in principle, but let's cleanly separate the layers, like Shrek's onion-boy. A plugin for one's fav BT client is the least of the problem space.
On May 21, 2017 4:09 PM, "grarpamp" <grarpamp@gmail.com> wrote:
https://yro.slashdot.org/story/17/05/17/1830228/popular-torrent-site-extratorrent-permanently-shuts-down https://torrentfreak.com/extratorrent-shuts-down-for-good-170517/
ExtraTorrent is the latest in a series of BitTorrent giants to fall in recent months. Previously, sites including KickassTorrents, Torrentz.eu, TorrentHound and What.cd went offline. Clearnet = Fail.
On Sun, May 21, 2017 at 05:04:34PM -0700, Steven Schear wrote:
These trackers need to adopt distributed hosting tech, like IPFS or ZeroNet, so there are no single points of pressure/failure and the operator IP and identity have a reasonable chance of staying private from technical snooping.
Warrant Canary creator
(Please bottom post. Please please. Thanks in advance...)

There are two primary groups of overlay-net nodes:

- "unlimited bandwidth" nodes
- "set/specific limit" nodes

The integration of these two types of nodes into any overlay network might be optimised, i.e. they can be considered algorithmically as different node types, even though most of the network connection/participation params would otherwise be identical.

- Permanently connected, fixed-throughput-rate nodes are ideal for distributed store.
- When the "available permanent rate" drops below a certain figure, the node may be optimally useful for DHT-style data store.
- The "latency <-> stealth" tradeoff setting is another end-user preference, of course.
- Randomly connected nodes which make available significant cache store, and also have a high-speed net connection (when connected), should still be readily employable by the network overall (it's just algorithms).

No matter the generosity of some nodes, ultimately the Freenet and other discussion histories appear to show that optimizing for the end-user's personal data-type preferences (e.g. movies, books, software, etc.) is the minimum-friction route to maximum participation - all algorithms should account well for this fact of human nature.

Git shows us (many of) the benefits of a content-addressable mid layer - whether we build torrents, p2p libraries (clearnet or darknet), blogs, websites, or other databases, having all content be uniquely and unambiguously addressable (planet-wide) is a facility we need not discard at this point in our computing history.

When SHA began to dominate and MD5 became viewed as "gee, that's obviously not very good", it was far too easy for the lesson to be ignored and for random software to hardcode an algorithm that would be "good enough forever" - like, I dunno, Git, for a random example. So we face the lesson again: no algorithm is certain to withstand the test of time, and we can almost say with certainty that all of today's algorithms could fail the test of theoretical future "quantum computers".

Primary questions re content-addressability are:

- what is Git transitioning to, and is Git's upcoming new hash mechanism adequate for global content addressing?
- what is robust in the face of hash-algorithm changes?
- what are the interactions between some definitively chosen hash system (Git's or otherwise) and other existing systems like bittorrent?

Our juicy future beckons...
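On the "robust in the face of hash-algorithm changes" question, one well-known answer is a self-describing (multihash-style) content id, where the id names its own algorithm so a network can phase algorithms in and out. A minimal sketch; the id format here is invented for illustration:

# Hash-agile content ids: the id carries its algorithm name, so legacy sha1
# ids remain verifiable while new ids move to stronger hashes.
import hashlib

ALGOS = {"sha1": hashlib.sha1, "sha2-256": hashlib.sha256, "blake2b": hashlib.blake2b}

def content_id(data: bytes, algo: str = "sha2-256") -> str:
    return f"{algo}:{ALGOS[algo](data).hexdigest()}"

def verify(cid: str, data: bytes) -> bool:
    algo, _, digest = cid.partition(":")
    if algo not in ALGOS:          # unknown algorithm: cannot verify
        return False
    return ALGOS[algo](data).hexdigest() == digest

blob = b"some archived content"
old_id = content_id(blob, "sha1")        # legacy id still verifiable...
new_id = content_id(blob, "sha2-256")    # ...while new ids use a stronger hash
print(verify(old_id, blob), verify(new_id, blob))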
On 05/21/2017 04:07 PM, grarpamp wrote:
https://yro.slashdot.org/story/17/05/17/1830228/popular-torrent-site-extratorrent-permanently-shuts-down https://torrentfreak.com/extratorrent-shuts-down-for-good-170517/
ExtraTorrent is the latest in a series of BitTorrent giants to fall in recent months. Previously, sites including KickassTorrents, Torrentz.eu, TorrentHound and What.cd went offline. Clearnet = Fail.
Demonoid is welcoming them and registrations are open. https://www.reddit.com/r/DemonoidP2P/comments/6brez4/extratorrent_users_welc...
On Sun, May 21, 2017 at 05:31:00PM -0700, Razer wrote:
On 05/21/2017 04:07 PM, grarpamp wrote:
https://yro.slashdot.org/story/17/05/17/1830228/popular-torrent-site-extratorrent-permanently-shuts-down https://torrentfreak.com/extratorrent-shuts-down-for-good-170517/
ExtraTorrent is the latest in a series of BitTorrent giants to fall in recent months. Previously, sites including KickassTorrents, Torrentz.eu, TorrentHound and What.cd went offline. Clearnet = Fail.
Demonoid is welcoming them and registrations are open.
https://www.reddit.com/r/DemonoidP2P/comments/6brez4/extratorrent_users_welc...
I see what you did there - you snuck a sneaky "P2P" in the URL. Very sneaky. I was about to chastise the lack of any mention of whether this "Demonoid" thing you promote was darknet, or at least "fully P2P"...
On Sun, May 21, 2017 at 07:07:40PM -0400, grarpamp wrote:
https://yro.slashdot.org/story/17/05/17/1830228/popular-torrent-site-extratorrent-permanently-shuts-down https://torrentfreak.com/extratorrent-shuts-down-for-good-170517/
ExtraTorrent is the latest in a series of BitTorrent giants to fall in recent months. Previously, sites including KickassTorrents, Torrentz.eu, TorrentHound and What.cd went offline. Clearnet = Fail.
Ah, grarpamp, the sweet smell of vindication :) "But oh," say the Tor shills, "you'll crash the network" - it's just so sad since "you know, there's no way to properly give back to the network since the network is [WE WANT IT TO BE] so centralised it'll increase our processing load to decloak those nasty, nasty Wikileaks perps we keep chasing". And besides, as we all know, Tor is the only partial anonymity network in existence, "so we all gotta protect it real good like, make sure we don't degrade any of the high speed central nodes the CIA/NSA runs, and my god, especially the directory nodes now that that bloody troublemaker Appelbaum is gone - finally we don't have someone who actually believes the free in 'freedom of speech' to mess up our little play pen".
While it might be agreed that Tor has certain non-code / monetary / political issues, why discriminate on that when the code of all current overlay networks does not do much to defeat the GPAs and Sybils that are well known to be in existence? Let's see something on the market that claims resistance to those, then we can talk secondary issues. Till then, use what you've got for what it's good at, and code what you don't. The world won't have [another] Appelbaum for some while to come. Yet there are others, and welcoming room for more new others... philosophers / visionaries / activists / teachers. https://www.youtube.com/watch?v=pin6DhQee48
On Sun, May 21, 2017 at 6:55 PM, Steven Schear <schear.steve@gmail.com> wrote:
What I meant, if you are holding and sharing an entire file of some really sensitive content and depend on networking technologies known or assumed to have flaws which can expose your IP address you have relinquished ability to deny it.
Yes, if the file isn't encrypted, or if rubberhose decrypt policies are in effect, and the pointer to your node strongly confirms presence or leads to inspection.
Whereas if this content has been published using something like Freenet, no single user of the content distribution system has more than a fragment of that content, and what they each have is not only encrypted (and you don't have the key) but bit-interleaved, and your software has no idea what part(s) of the content you hold nor where the other parts reside (for that, your software must possess the file's "treasure map", which can be closely held). This offers good plausible deniability.
Sure. File sharding is an interesting obfuscation defense-in-depth, but it has *lots* of overhead. If the network is "flawlessly" encrypted and anonymous, as well as the disk storage managed by its nodes, it's probably not needed... users can insert / fetch, or run nodes, safely. Descriptions also depend on whether the design provides both transport and user application all in one (Freenet, Mojo), or just rides on top of an already secure transport network (Ricochet over Tor, IRC over I2P).
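For reference, a minimal sketch of the shard-plus-"treasure map" scheme described above. Real systems (Freenet, Tahoe-LAFS) also encrypt and erasure-code or interleave the pieces; that is omitted here, and the piece size is tiny just for the demo.

# Illustrative sketch of "shards plus a closely-held treasure map": the file
# is cut into fixed-size pieces, each piece is named by its hash, and only
# the manifest (the ordered list of piece hashes) lets anyone reassemble it.
import hashlib

PIECE = 4                     # tiny piece size, just for the demo

def shard(data: bytes):
    pieces = {}
    manifest = []             # the "treasure map": ordered piece hashes
    for i in range(0, len(data), PIECE):
        chunk = data[i:i+PIECE]
        h = hashlib.sha256(chunk).hexdigest()
        pieces[h] = chunk     # what shard-holders store: opaque, unordered
        manifest.append(h)
    return pieces, manifest

def reassemble(pieces, manifest):
    return b"".join(pieces[h] for h in manifest)

pieces, manifest = shard(b"some sensitive published content")
assert reassemble(pieces, manifest) == b"some sensitive published content"
print(len(pieces), "shards; holders without the manifest see only hashes")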
On Sun, May 21, 2017 at 09:50:04PM -0400, grarpamp wrote:
On Sun, May 21, 2017 at 6:55 PM, Steven Schear <schear.steve@gmail.com> wrote:
What I meant, if you are holding and sharing an entire file of some really sensitive content and depend on networking technologies known or assumed to have flaws which can expose your IP address you have relinquished ability to deny it.
Yes, if the file isn't encrypted, or if rubberhose decrypt policies are in effect, and the pointer to your node strongly confirms presence or leads to inspection.
For the next Manning wikileaker (or leak seeder), is the problem space of "number of known trusted peers in a chaff-filled link network model required for 'reasonable protection' against 5-eyes global passive network monitoring" known? Perhaps rather than "peers" do we need to go to "#N trusted peers, each with at least #M trusted peers other than myself"? What about for "global active network monitoring"?

Another way to view this same question: up until now there has been a presumption by some that, for the five-eyes global network monitoring (whatever specific form its bullying presently takes) to be reasonably countered, some level of neighbour-to-neighbour (street-level, physical) network of the people is required. Is that the case or not?

Without being able to at least discuss the problem space reasonably succinctly, it feels like we're grasping at straws in the dark. We know the general problem space; next step, can we reason about it and draw any conclusions?
Whereas if this content has been published using something like Freenet, no single user of the content distribution system has more than a fragment of that content, and what they each have is not only encrypted (and you don't have the key) but bit-interleaved, and your software has no idea what part(s) of the content you hold nor where the other parts reside (for that, your software must possess the file's "treasure map", which can be closely held). This offers good plausible deniability.
Sure. File sharding is an interesting obfuscation defense-in-depth, but it has *lots* of overhead. If the network is "flawlessly" encrypted and anonymous, as well as the disk storage managed by its nodes, it's probably not needed... users can insert / fetch, or run nodes, safely.
Descriptions also depend on whether the design provides both transport and user application all in one (Freenet, Mojo), or just rides on top of an already secure transport network (Ricochet over Tor, IRC over I2P).
Participants (6):
- grarpamp
- juan
- Razer
- Steve Kinney
- Steven Schear
- Zenaan Harkness