[tor-onions] Presentation on Onion Networking at the BCS

Zenaan Harkness zen at freedbms.net
Sun Aug 4 20:28:41 PDT 2019


> On Mon, Jul 22, 2019 at 08:55:32PM -0400, grarpamp wrote:
> > Yet how do people, including those involved with or using other
> > projects in the space, compare contrast and evaluate this with
> > "Why and how start using" and writing for... Onion, I2P, CJDNS,
> > MaidSafe, IPFS and all the other overlay networks out there
> > and forthcoming, all in their respective "non-exit" modes?

By reading. And, obviously, comparing.

Here's a little summary of a first reading of the IPFS whitepaper
intro:

To give due credit, IPFS is a very reasonable start on a
content-addressed (git-style) 'potentially huge' distributed
filesystem, although it seems they have yet to integrate Microsoft's
recent "huge objects" support that was designed for and integrated
into git (understandable, given the respective public release dates).

In IPFS we see some good things - simple protocol inclusion in
certain lookups provides ready extensibility, e.g.:

  # an SCTP/IPv4 connection
  /ip4/10.20.30.40/sctp/1234/

  # an SCTP/IPv4 connection proxied over TCP/IPv4
  /ip4/5.6.7.8/tcp/5678/ip4/1.2.3.4/sctp/1234/

We also see sound handling of certain attacks, via well-chosen
crypto primitives and design principles - and some areas for
improvement: in direct contrast to the explicit connection protocols
above, the routing is site- or installation-wide. See p.4 of the
IPFS white paper, section 3.3, which gives the routing API upon
which IPFS relies; so the routing is pluggable, but not quite in the
right way AFAICT:

  "Note: different use cases will call for substantially different
  routing systems (e.g. DHT in wide network, static HT in local
  network). Thus the IPFS routing system can be swapped for one
  that fits users' needs. As long as the interface above is met,
  the rest of the system will continue to function."

That is, certain types of data may be best served (or must be
served, e.g. in a corporate environment - think LDAP) "local
network only" - but think also of e.g. a large torrent's piece-hash
map, which at the least must be cached locally (should be obvious).
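
The swappable-routing idea from the quote can be sketched as an
interface with interchangeable implementations. A minimal sketch -
the names here are hypothetical illustrations, not the paper's exact
API:

```python
from abc import ABC, abstractmethod

class Routing(ABC):
    """Pluggable routing interface (hypothetical names, modeled on
    the swap-in idea of IPFS white paper section 3.3)."""

    @abstractmethod
    def put_value(self, key: bytes, value: bytes) -> None: ...

    @abstractmethod
    def get_value(self, key: bytes) -> bytes: ...

class StaticHT(Routing):
    """Static hash table - suits a small, fully-known local network."""
    def __init__(self):
        self._table = {}

    def put_value(self, key, value):
        self._table[key] = value

    def get_value(self, key):
        return self._table[key]

def store_and_fetch(router: Routing, key: bytes, value: bytes) -> bytes:
    # The rest of the system sees only the Routing interface, so a
    # wide-network DHT could be swapped in without touching this code.
    router.put_value(key, value)
    return router.get_value(key)
```

The point of contention above is that the plug happens per
installation, not per data type.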

(But note the obvious: it's probably unwise to think of one
implementation layer as a universal API - even if the API applies at
multiple layers. E.g. if you want to audit the national voter roll,
a copy ought first be downloaded as a single file/zip, or the latency
of per-record db lookups will most likely cause massive time overhead.)
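
Returning to the connection paths quoted earlier: their
self-describing format is what makes them extensible, since any
protocol stack can be expressed as layered (protocol, value) pairs.
A simplified parsing sketch (assumes every protocol component carries
exactly one value):

```python
def parse_multiaddr(addr: str) -> list[tuple[str, str]]:
    """Split a path like /ip4/1.2.3.4/tcp/5678/ into
    (protocol, value) pairs, outermost transport first."""
    parts = addr.strip("/").split("/")
    # Pair up alternating protocol names and their values.
    return list(zip(parts[0::2], parts[1::2]))

# The proxied example from above decomposes into nested layers:
layers = parse_multiaddr("/ip4/5.6.7.8/tcp/5678/ip4/1.2.3.4/sctp/1234/")
# layers == [('ip4', '5.6.7.8'), ('tcp', '5678'),
#            ('ip4', '1.2.3.4'), ('sctp', '1234')]
```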

Also, there's a self-evident [D]DoS possibility here:

  "The size of objects and use patterns of IPFS are similar
  to Coral [5] and Mainline [16], so the IPFS DHT makes a
  distinction for values stored based on their size. Small values
  (equal to or less than 1KB) are stored directly on the DHT."


The big takeaway may be that IPFS is a (first?) decent crack at a
huge-scale, git-style, content-addressed data store.

Git broke the mold - Merkle DAGs for our (distributed) content are
the future.
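
The content-addressing idea is small enough to sketch in full: the
address of a blob is the hash of its bytes, and a Merkle-DAG node is
just a blob that lists the addresses of its children.

```python
import hashlib

class ContentStore:
    """Minimal git-style content-addressed store: identical content
    dedupes automatically, and tampering changes the address."""
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        addr = hashlib.sha256(data).hexdigest()
        self._objects[addr] = data
        return addr

    def get(self, addr: str) -> bytes:
        data = self._objects[addr]
        # Re-hashing on read verifies integrity for free.
        assert hashlib.sha256(data).hexdigest() == addr
        return data

    def put_node(self, children: list[str]) -> str:
        # A Merkle-DAG node is ordinary content that happens to list
        # the addresses of its children.
        return self.put("\n".join(children).encode())
```

Storing the same bytes twice returns the same address, which is the
dedup property git exploits.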


The IPFS focus being limited to a (distributed) filesystem has
perhaps provided sufficiently narrow scope to facilitate an actual
implementation. Proof of concept is a great place to start.

Stepping back, when distributing any content whatsoever, jurisdiction
and legacy government statute fuckery must absolutely be handled as
the first order of business, no matter whether you're a blogger in
Iran, a website producer in China, or a conservative peaceful White
skinned dissident in France. Not to mention the hard core (from
certain real perspectives) Assanges and Snowdens of the world who
bring against themselves entire nation states, simply for a little
communication... The Guardian (or MSM) leaked the password to the
unredacted names, and Assange is still taking the fall (this was
well planned/orchestrated, of course).

Even the average mom and pop is unlikely to want a court case from
SONY when their tween downloads the latest twerking pop star "music",
and likewise for those in the geek dept who look to Dr Who to catch
up on cutting edge theories of the universe and time travel (for
strictly educational purposes of course :)

SO, point being: no matter who you are, unless you're an essentially
dead plank of wood endlessly striving to get as quietly to death as
possible, you will almost certainly come up against someone else's
desire to control you, what you do, what you read and view, and what
you attempt to say to other Souls in this shared world;

and so we must begin from the premise that everything incoming, and
everything outgoing, from your computer (/phone /etc), is doing so in
an adversarial world;

and so our systems must begin with this fact, and be designed FROM
THE OUTSET to handle the shittiness of other humans wanting to
control you and your communications.

This is why onion/garlic routing, chaff fill, F2F and physical N2N
(neighbour to neighbour connection networks), must take pride of
place as a foundation on which to build out the rest, firstly your
identities - public, private and anonymous - and how to actually
firewall these from one another - and then higher layers of
distributed content and P2P communications of various sorts.

The lowest IP/packet routing layer must also provide for QoS:
control/request + response messages must be top priority, followed
by realtime audio, then video, then 'the rest' - subject, of course,
to prior confirmation/provision of the requested bandwidth by your
respective peers...
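
That priority ordering can be sketched as a simple scheduler - the
class names and numeric priorities are assumptions for illustration,
not any existing network's actual scheme:

```python
import heapq

# Assumed priority classes, per the ordering in the text.
PRIORITY = {"control": 0, "audio": 1, "video": 2, "rest": 3}

class QosQueue:
    """Toy packet scheduler: always dequeues the highest-priority
    class first, FIFO within a class."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving FIFO order per class

    def push(self, klass: str, packet: bytes) -> None:
        heapq.heappush(self._heap, (PRIORITY[klass], self._seq, packet))
        self._seq += 1

    def pop(self) -> bytes:
        return heapq.heappop(self._heap)[2]
```

A real implementation would also need the bandwidth-reservation
handshake with peers mentioned above; this shows only the local
ordering.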



More information about the cypherpunks mailing list