Nextgen G* Traffic Analysis Resistant Overlay Networks (re Tor stinks)

grarpamp grarpamp at gmail.com
Sun Oct 27 22:33:29 PDT 2019


> GPA was monitoring the upload, thus why GPA attacked X (and others in
> their target set), and due to the [temp|full] link dropout (latency
> trough), GPA IDs node X as the uploader.
> But, same problem even if link not dropped, and just a latency trough
> - target X, is now IDed by GPA.
> In both cases a) X's uplink dropped, or b) not dropped,
> the destination of X's upload sees the [temp|permanent] dropout.

No. First principles... on a network with a properly enforced and
regulated background of fill traffic, there are no such observables
for a GPA to monitor, so nothing to attack (which a GPA, being
passive, doesn't do anyway), and no ID to be made.
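
For concreteness, a minimal sketch of what "no such observables"
rests on, assuming each link is clocked at a constant negotiated cell
rate and padding is substituted whenever no real cell is queued (all
names and rates below are illustrative, not any existing implementation):

    import queue
    import time

    CELL_BYTES = 512           # illustrative fixed cell size
    RATE_CELLS_PER_SEC = 200   # negotiated fill rate for this link

    def clocked_sender(link, real_cells):
        # Emit exactly RATE_CELLS_PER_SEC cells whether or not there
        # is real traffic: a queued real cell if available, otherwise
        # padding. On the wire, an upload, an idle link, and a dropped
        # application flow all look identical to a GPA.
        interval = 1.0 / RATE_CELLS_PER_SEC
        while True:
            try:
                cell = real_cells.get_nowait()
            except queue.Empty:
                cell = b"\x00" * CELL_BYTES    # padding cell
            link.send(cell)                    # hypothetical link API
            time.sleep(interval)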

Now if a GAA is an agent running the central server receiving the
upload, and they're fast enough to recognize the content on
their filesystem as such before it completes, they can try to
DoS back their own overlay hops toward the uploader in realtime.

At first that would seem hard to defend against.

Then you realize that again, a proper fill contract aware network
will auto detect and depeer any link that gets DoSed, thus the
upload stops well before it can ever be tracked back.
And the GAA has to literally sweep through the entire overlay, spiking
tens of thousands to millions of nodes (because every node is
a relay, so the search is linear), trying to find the point source whose
disruption makes the server upload stall. And if the user has selected
a higher hopcount for their side, the odds are that much higher that the
GAA will spike one of those intermediate hops first, again depeering the
path and ending the upload before discovery.
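
Two illustrative sketches of the above, with made-up names and
thresholds (neither is a real implementation): a per-link fill
contract check that depeers on a DoS-shaped violation, and the
back-of-envelope odds that a one-node-at-a-time sweep kills the path
before it ever probes the uploader.

    from fractions import Fraction

    def fill_contract_ok(peer, observed_rate, contracted_rate,
                         tolerance=0.10):
        # A link flooded or starved by a GAA falls outside the
        # contracted rate band; depeer it at once, so any circuit
        # through it (and the upload riding it) dies immediately.
        low = contracted_rate * (1.0 - tolerance)
        high = contracted_rate * (1.0 + tolerance)
        if not (low <= observed_rate <= high):
            peer.depeer(reason="fill contract violation")  # hypothetical API
            return False
        return True

    def odds_path_dies_before_discovery(hops: int) -> Fraction:
        # Simplified model: the GAA spikes one node at a time in
        # random order. Among the uploader and its `hops` relays,
        # each is equally likely to be hit first; hitting any relay
        # depeers the path and ends the upload, revealing nothing.
        return Fraction(hops, hops + 1)

    # e.g. 3 hops -> 3/4, 7 hops -> 7/8 chance the trail goes cold first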

The upload might continue, a bit slower, in a multipath
or scatter mixnet, or if the depeered node renegotiates
back in, or if the source repaths another circuit around.
Either way the GAA has discovered nothing and is back
to step one at that point... the depeering problem.

Nodes might publish a number of depeering tolerance parameters,
or network metrics observed of their peers, such that clients could
use them when constructing paths along a continuum from more reliable
to more secure, as desired. The particulars of which observables
should drive depeering thresholds, and whether an overlay can perform
such self management tasks well... all need to be tested out.
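
As a sketch of how clients might consume such published parameters
when picking a path, weighting between a hypothetical
"depeer_strictness" (more secure) and "uptime" (more reliable)
metric; the field names and weighting are invented for illustration:

    import random

    def pick_path(nodes, hops, security_bias=0.5):
        # security_bias near 1.0 favors nodes that depeer aggressively
        # on contract violations; near 0.0 it favors long-uptime,
        # rarely-depeering nodes. Both metrics are assumed to be
        # published by the nodes themselves and normalized to [0, 1].
        def score(n):
            return (security_bias * n["depeer_strictness"]
                    + (1.0 - security_bias) * n["uptime"])
        ranked = sorted(nodes, key=score, reverse=True)
        pool = ranked[:max(hops * 4, hops)]    # keep some path diversity
        return random.sample(pool, hops)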


Regardless, sensitive users should not upload to
any plausibly owned / suspect / scene central servers,
and should instead insert into distributed encrypted
file storage overlays, IPFS, generic surface websites or
hosts, upload encrypted blobs and release the keys
elsewhere, use wifi, vpn, etc.
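
A small sketch of the "upload encrypted blobs, release the keys
elsewhere" pattern using an off-the-shelf AEAD; store() stands in for
whatever overlay or host insert is used and is not a real API:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    def seal_blob(plaintext: bytes):
        # Encrypt locally, push only ciphertext to the storage overlay
        # or surface host, and hand the key out later via a separate
        # channel.
        key = ChaCha20Poly1305.generate_key()
        nonce = os.urandom(12)
        ciphertext = ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)
        locator = store(nonce + ciphertext)    # hypothetical overlay insert
        return key, locator                    # publish key elsewhere, later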

Any candidate network for any nextgen award in
the subject line should raise the bar significantly,
such that only the most sensitive and smallest
number of users would have reason to argue.

That is simply not the case with today's networks.


> incentivize

The incentive is that secure network platforms provide generic
transport for cool apps and services that people want to use.
Most users are not going to hack their nodes to disable giveback,
and/or the network will detect that, or have natural consumption limits.
This again ventures into a generic transport network built for mass
utility vs a network built for some specific app or class of apps... the
former has probably not yet received its fair share of development
consideration over the last decades.
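
One way "the network will detect that" could be approximated is a
simple relayed-vs-consumed ratio check per peer; the threshold and
field names here are invented, not from any deployed design:

    def within_consumption_limit(peer_stats, min_ratio=0.5):
        # Peers that consume far more than they relay (e.g. giveback
        # disabled) drift below the ratio and get rate limited: a
        # natural consumption limit rather than an outright ban.
        relayed = peer_stats["bytes_relayed"]
        consumed = peer_stats["bytes_consumed"]
        return consumed == 0 or relayed / consumed >= min_ratio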

As before, have fun creating and deploying new stuff.

