Re: [tor-onions] Presentation on Onion Networking at the BCS
On 7/22/19, Alec Muffett <alec.muffett@gmail.com> wrote:
"Why & How you should start using Onion Networking" https://www.youtube.com/watch?v=pebRZyg_bh8
A fine introduction.

Yet how do people, including those involved with or using other projects in the space, compare, contrast, and evaluate this with "Why and how you should start using" and writing for... Onion, I2P, CJDNS, MaidSafe, IPFS, and all the other overlay networks out there and forthcoming, each in its respective "non-exit" mode?

Whether it be for protocol-layer capabilities (HTTPS/TCP/UDP/IPv6), or to achieve application-layer goals... messaging, storage, web-ish, etc.

And how does each network's presence or lack of API interfaces, UDP, broadcast, name layers, or other potential transport and programming models lend itself to app development and eventual widespread adoption and use?

And how, without offering IPv6, or the ultimately better, all-encompassing, modular, even cryptographic AF_OVERLAY interface that all networks could plug into, does anyone expect to get everything interoperable and working together? (A sketch of what such an interface might look like follows below.)

[Note that comparing "traction" via all the other nets' access to Facebook is false, since those nets simply do not offer a simple exit mode to do so as Tor does. What would be fair is if Facebook had CJDNS, I2P, Onion, etc. interfaces, and then comparing those access stats, scaled relative to each project's estimated number of users, advertising/funding impact, *Browser availability, etc.]
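[For illustration only: a minimal sketch of the kind of AF_OVERLAY interface meant above. No such address family exists anywhere; every name here (OverlayTransport, overlay_connect, REGISTRY) is invented for this example.]

    from abc import ABC, abstractmethod

    class OverlayTransport(ABC):
        """One pluggable overlay network (onion, i2p, cjdns, ...)."""

        @abstractmethod
        def connect(self, address: str, port: int): ...

        @abstractmethod
        def listen(self, port: int): ...

    # registry of whichever overlay stacks happen to be installed on this node
    REGISTRY: dict[str, OverlayTransport] = {}

    def overlay_connect(url: str):
        # e.g. overlay_connect("onion://example.onion:80")
        scheme, rest = url.split("://", 1)
        host, _, port = rest.rpartition(":")
        transport = REGISTRY[scheme]   # KeyError = that network isn't plugged in
        return transport.connect(host, int(port))

The point being: apps would code to one interface, and Onion/I2P/CJDNS/etc. would compete as interchangeable back-ends beneath it.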
On Mon, Jul 22, 2019 at 08:55:32PM -0400, grarpamp wrote:
> On 7/22/19, Alec Muffett <alec.muffett@gmail.com> wrote:
> > "Why & How you should start using Onion Networking"
> > https://www.youtube.com/watch?v=pebRZyg_bh8
>
> A fine introduction.
>
> Yet how do people, including those involved with or using other projects in the space, compare, contrast, and evaluate this with "Why and how you should start using" and writing for... Onion, I2P, CJDNS, MaidSafe, IPFS, and all the other overlay networks out there and forthcoming, each in its respective "non-exit" mode?
>
> [...]
A primary foundation is a "trust based" underlying network, like a friend-to-friend (F2F) style network - just a simple IP/ETH packet-delivering low layer. If friend A is physically close, your F2F link to that friend should be physical; if not, an encrypted link tunnel to that friend is created. "Friend" ~= someone who is unlikely to sell you out to govcorp should you exercise your ('absolute right' to) freedom of speech. With no friends, expect to achieve at best ephemeral access to the world's information in any anonymous way.

Just above this base layer is chaff fill - such things must be configurable, since some folks will not pay the price, others will pay it sometimes, and others still may just want a low-rate (e.g. 2 KiB/s) chaff link to cover high-latency, low-bandwidth comms (a sketch of such a link follows at the end of this message).

With this base layer in place, an onion/I2P routing layer can readily be laid over it - but IP/UDP only, of course. Without the lowest layer done right, there shall continue to be endless "new" network designs.

Once a sane base-level network is readily installable/configurable, the next big concept is P2P distributed identity - DHTs, public and private keys for identifying an entity, website, "name", etc. And it ought to go without saying that any identity implementation which provides anything other than 100% control to the end user is doomed to fail. GNS, the GNU Name System, is good food for thought in this regard: the individual is the authority for each target name in his GNS (DNS-like) local name database. Delegation of authority can be built in, but again, any delegation must rest solely in the hands of the end user or end node; fail on this, and watch the roll-out of "new" systems replace your own implementation (a toy name-resolution sketch also follows below).

Without the fundamental (sane) infrastructure in place, we are doomed to an endless series of "new" "distributed" "global" "interplanetary" "solutions" ad nauseam. Once the fundamental sane infrastructure IS in place, then IPFS, GNS, GitTorrent https://blog.printf.net/articles/2015/05/29/announcing-gittorrent-a-decentra... and the like will appear, self-evidently, either ready in large part or needing certain improvements in order to fit in with the sane foundation infrastructure.

For whatever reason, folks often begin at the high layers and thus doom themselves to having new solutions displace their work within a few short years. And some of those who would dedicate swaths of their consciousness to implementing experiments and test cases on the direct path to sane infrastructure are tied up handling the most petty and mundane bullshit, legal, political and otherwise, that humans have devised to consume the consciousness of men of good character who would otherwise bring great technological advancement to this world in a much shorter time than is otherwise the case. C'est la vie...
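[A minimal sketch of the chaff idea above, assuming a fixed-rate, fixed-frame link; the rate constant, frame format, and link.send() interface are all illustrative inventions, not any existing implementation.]

    import os, queue, time

    CHAFF_RATE = 2048   # bytes/sec: the low-rate 2 KiB/s link mentioned above
    FRAME_SIZE = 256    # fixed frames, so data and chaff look identical on the wire

    def chaff_loop(link, outbox: queue.Queue):
        """Send one fixed-size frame per tick: queued data when available,
        random chaff otherwise, so an observer sees a constant-rate stream
        either way. The first byte tags the frame (0x01 data, 0x00 chaff);
        on a real link this tag would sit inside the encrypted payload."""
        interval = FRAME_SIZE / CHAFF_RATE
        while True:
            try:
                body = outbox.get_nowait()
                frame = b"\x01" + body.ljust(FRAME_SIZE - 1, b"\x00")
            except queue.Empty:
                frame = b"\x00" + os.urandom(FRAME_SIZE - 1)
            link.send(frame[:FRAME_SIZE])
            time.sleep(interval)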
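[And a toy illustration of the "user is the authority" naming idea. This is not the actual GNS protocol; the record layout and zone storage here are invented, though the right-to-left resolution and PKEY-style delegation do mirror the GNS concept.]

    # The user's own records always win; a sub-label resolves via a peer's
    # zone only because the user chose to add that delegation record.

    local_zone = {
        "alice": {"type": "A",    "value": "10.1.2.3"},
        "bob":   {"type": "PKEY", "value": "ZKEY-bob-public-key"},  # delegation
    }

    peer_zones = {
        "ZKEY-bob-public-key": {"www": {"type": "A", "value": "10.9.8.7"}},
    }

    def resolve(name: str):
        """Resolve e.g. 'www.bob' right-to-left, one label per zone hop."""
        labels = name.split(".")
        zone = local_zone
        while labels:
            record = zone[labels.pop()]              # rightmost label first
            if record["type"] == "PKEY" and labels:  # follow user-chosen delegation
                zone = peer_zones[record["value"]]
            else:
                return record["value"]

    print(resolve("alice"))    # 10.1.2.3 - the user's own record
    print(resolve("www.bob"))  # 10.9.8.7 - via delegation to bob's zone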
On Mon, Jul 22, 2019 at 08:55:32PM -0400, grarpamp wrote:
> Yet how do people, including those involved with or using other projects in the space, compare, contrast, and evaluate this with "Why and how you should start using" and writing for... Onion, I2P, CJDNS, MaidSafe, IPFS, and all the other overlay networks out there and forthcoming, each in its respective "non-exit" mode?
By reading. And, obviously, comparing. Here's a little summary of a first reading of the IPFS whitepaper intro:

To give due credit, IPFS is a very reasonable start at a content-addressed (git-style), "potentially huge" distributed filesystem, although it seems they have yet to integrate Microsoft's latest "huge objects" support designed for and integrated into git (understandable, given the respective public release dates).

In IPFS we see some good things - simple protocol inclusion in certain lookups, providing ready extensibility (a toy parser sketch appears further below), e.g.:

    # an SCTP/IPv4 connection
    /ip4/10.20.30.40/sctp/1234/

    # an SCTP/IPv4 connection proxied over TCP/IPv4
    /ip4/5.6.7.8/tcp/5678/ip4/1.2.3.4/sctp/1234/

- and the handling of certain attacks with certain crypto primitives and design principles.

And some areas for improvement: in direct contrast to the explicit connection protocols above, the routing is site- or installation-wide; see p. 4 of the IPFS white paper at section 3.3, which gives the routing API upon which IPFS relies. So the routing is pluggable, but not quite in the right way, AFAICT:

    "Note: different use cases will call for substantially different routing
    systems (e.g. DHT in wide network, static HT in local network). Thus the
    IPFS routing system can be swapped for one that fits users' needs. As long
    as the interface above is met, the rest of the system will continue to
    function."

That is, certain types of data may be best (or mandatorily, e.g. in a corporate environment - think LDAP) served "local network only" - but think also of, e.g., a large torrent's file-parts hash map, which at the least must be cached locally (should be obvious).

(But note the obvious: it's probably unwise to think of one implementation layer as a universal API, even if the API applies at multiple layers; e.g. if you want to audit the national voter roll, a copy ought first be downloaded as a single file/zip, or the latency of db lookups will most likely cause massive time overhead.)

Also, there's a self-evident [D]DoS possibility here:

    "The size of objects and use patterns of IPFS are similar to Coral [5]
    and Mainline [16], so the IPFS DHT makes a distinction for values stored
    based on their size. Small values (equal to or less than 1KB) are stored
    directly on the DHT."

The big keynote may be that IPFS is a (first?) decent crack at a huge-scale, git-style, content-addressed data store. Git broke the mold - Merkle DAGs for our (distributed) content are the future. The IPFS focus being limited to a (distributed) filesystem has perhaps provided sufficiently narrow scope to facilitate an actual implementation. Proof of concept is a great place to start.

Stepping back: when distributing any content whatsoever, jurisdiction and legacy government statute fuckery must absolutely be handled as the first order of business, no matter whether you're a blogger in Iran, a website producer in China, or a conservative, peaceful, White-skinned dissident in France. Not to mention the hard-core (from certain real perspectives) Assanges and Snowdens of the world, who bring entire nation states against themselves simply for a little communication... The Guardian (or MSM) leaked the password on the unredacted names, and Assange is still taking the fall (this was well planned/orchestrated, of course).
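[Returning to the multiaddr examples above: a small sketch of parsing those layered address strings. This is a hand-rolled illustration, not the real multiaddr library; among other things, real multiaddr also handles value-less protocols, which this skips.]

    # Split a layered multiaddr like
    #   /ip4/5.6.7.8/tcp/5678/ip4/1.2.3.4/sctp/1234/
    # into (protocol, value) layers. A new protocol just needs an entry
    # here - that simple inclusion is the extensibility being praised.

    KNOWN = {"ip4", "ip6", "tcp", "udp", "sctp"}   # each takes one value

    def parse_multiaddr(addr: str):
        parts = [p for p in addr.split("/") if p]
        layers = []
        while parts:
            proto = parts.pop(0)
            if proto not in KNOWN:
                raise ValueError("unknown protocol: " + proto)
            layers.append((proto, parts.pop(0)))
        return layers

    print(parse_multiaddr("/ip4/5.6.7.8/tcp/5678/ip4/1.2.3.4/sctp/1234/"))
    # [('ip4', '5.6.7.8'), ('tcp', '5678'), ('ip4', '1.2.3.4'), ('sctp', '1234')]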
Even the average mom and pop are unlikely to want a court case from SONY when their tween downloads the latest twerking pop star "music", and likewise for those in the geek dept who look to Dr Who to catch up on cutting-edge theories of the universe and time travel (for strictly educational purposes of course :)

SO, point being: no matter who you are, unless you're an essentially dead plank of wood endlessly striving to get as quietly to death as possible, you're unlikely to avoid coming up against someone else's desire to control you - what you do, what you read and view, and what you attempt to say to other Souls in this shared world. And so we must begin from the premise that everything incoming to, and everything outgoing from, your computer (/phone/etc.) is doing so in an adversarial world; our systems must begin with this fact, and be designed FROM THE OUTSET to handle the shitty desire of other humans to control you and your communications.

This is why onion/garlic routing, chaff fill, F2F, and physical N2N (neighbour-to-neighbour connection networks) must take pride of place as a foundation on which to build out the rest: firstly your identities - public, private and anonymous - and how to actually firewall these from one another; and then the higher layers of distributed content and P2P communications of various sorts.

The lowest IP/packet routing layer must also provide for QoS - control/request + response messages must be top priority, followed by realtime audio, then video, then 'the rest' - per confirmation/provision of the requested bandwidth by your respective peers first, of course...
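[A bare-bones sketch of that QoS ordering; the class names and numeric priorities are illustrative inventions, not from any protocol.]

    import heapq
    import itertools

    # Priority classes per the ordering above: control/request-response
    # first, then realtime audio, then video, then 'the rest'.
    # Lower number = sent sooner.
    PRIORITY = {"control": 0, "audio": 1, "video": 2, "rest": 3}

    class QosQueue:
        """Pop packets strictly by class priority, FIFO within a class."""

        def __init__(self):
            self._heap = []
            self._seq = itertools.count()   # tie-breaker preserving FIFO order

        def push(self, traffic_class: str, packet: bytes):
            heapq.heappush(self._heap,
                           (PRIORITY[traffic_class], next(self._seq), packet))

        def pop(self) -> bytes:
            return heapq.heappop(self._heap)[2]

    q = QosQueue()
    q.push("video", b"frame-1")
    q.push("control", b"keepalive")
    print(q.pop())   # b'keepalive' - queued control preempts queued video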