On Sun, Oct 27, 2019 at 08:50:03PM -0400, grarpamp wrote:
The problem is the node that was attacked with a latency injection attack - he just got attacked, his friends have now dropped him, and the Feds just identified whatever it was he was up/downloading
No, ident requires a timing attack to propagate end-to-end, thereby exposing the end-to-end speakers. Node X, or its path to some other nodes, was attacked; X's relevant peer nodes connected to X detected that disturbance in X's transmissions, and refused to forward on anything X sends (meanwhile the entire overlay is maintaining fill and reclocking, normalizing everything anyway).
And in this case, let's say X was uploading the next helo gunship collateral murder video, was halfway through, is attacked, and does not get to finish uploading. GPA was monitoring the upload, thus why GPA attacked X (and others in their target set), and due to the [temp|full] link dropout (latency trough), GPA IDs node X as the uploader. But the same problem arises even if the link is not dropped and there is just a latency trough - target X is now IDed by GPA. In both cases, a) X's uplink dropped, or b) not dropped, the destination of X's upload sees the [temp|permanent] dropout.
You could cut X's stream off from the left of Y (traffic that Y normally forwards out its right). Y's CPU either creates fill to replace X's bandwidth contract and sends that out its right, or ultimately renegotiates a lower sum of rates with some of its right peers to account for the loss of X on its left. Y is then free to accept new contract proposals on its left, summing up to the rate that X formerly consumed.
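The renegotiation step above can be sketched roughly as follows. All names here (Node, drop_left_peer, the rate units) are illustrative, not part of any actual protocol: when a left peer drops, Y first substitutes chaff to keep its right-side contracts whole, and the dropped rate becomes capacity Y can re-offer on its left.

```python
# Hypothetical sketch only - no real overlay implements exactly this.
class Node:
    def __init__(self, name):
        self.name = name
        self.left = {}    # left peer -> contracted inbound rate (units/s)
        self.right = {}   # right peer -> contracted outbound rate (units/s)

    def drop_left_peer(self, peer):
        """A left peer was cut off: immediately replace its wheat with
        locally generated chaff so outbound rates stay constant, and
        report how much inbound capacity is freed for new contracts."""
        freed = self.left.pop(peer, 0)
        # Right-side contracts are untouched; the shortfall is filled
        # with chaff until/unless rates are renegotiated downward.
        chaff_rate = freed
        return chaff_rate, freed

y = Node("Y")
y.left = {"X": 100, "W": 50}   # X and W feed Y from its left
y.right = {"Z": 150}           # Y forwards the sum out its right to Z
chaff, freed = y.drop_left_peer("X")
# chaff: rate of fill Y's CPU must now synthesize toward Z
# freed: capacity Y can offer to new left-side contract proposals
```

The point of the sketch is the ordering: fill first (so observers of Y's right side see nothing change), renegotiate second, at Y's leisure.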
Yes. That is how we must operate - the remaining (non-attacked) nodes must continue per the chaff fill and renegotiation protocols.
Yes, X got depeered, which sucks for X, at least until X reconnects and starts upholding the policed traffic-timing fill contracts expected of it, but the attack did not succeed in disclosing who was talking to whom end-to-end.
Yes - if X was uploading to a target passively monitored by GPA, GPA should not be able to detect any traffic troughs or dropouts, since the upload target's peers maintain chaff fill in the face of the wheat dropout. Only X is IDed, not also X's upload target. This is certainly an improvement on the status quo. So our problem set is now thankfully reduced to "what can X do to improve his own situation", and the answer appears to be "dark links" of some form.
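A minimal sketch of that chaff fill property, assuming a simple policed constant cell rate per tick (the rate and names are made up for illustration): the link emits exactly the contracted number of cells every tick, wheat first, chaff for the remainder, so a passive observer counting cells sees no trough when the wheat stops.

```python
import os

LINK_RATE = 8  # cells per tick; the policed constant rate (illustrative)

def emit_tick(wheat_queue):
    """Send exactly LINK_RATE cells this tick: queued wheat first,
    then chaff (random padding) for the remainder. An observer
    counting cells sees the same rate whether wheat flows or not."""
    cells = []
    while wheat_queue and len(cells) < LINK_RATE:
        cells.append(("wheat", wheat_queue.pop(0)))
    while len(cells) < LINK_RATE:
        cells.append(("chaff", os.urandom(16)))
    return cells

# Mid-upload tick: some wheat is available to forward.
busy = emit_tick([b"a", b"b", b"c"])
# After the sender is cut off: no wheat at all, pure chaff.
idle = emit_tick([])
assert len(busy) == len(idle) == LINK_RATE  # identical observed rate
```

With link encryption, the wheat/chaff distinction is invisible on the wire; only the cell count and timing are observable, and those never change.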
It's entirely plausible and reasonable that in the decades post-9/11 and post-Snowden, G* may now have laughably trivial end-to-end who-to-who traffic analysis attacks that none of today's overlays are strongly resistant against. Most of today's overlay networks' design-think predates one or both of those revelations and confirmations, and applies little of the new crypto and network research that has evolved since either of them.
If you (or anyone following along) are aware of specific cutting edge research that you believe applies to this problem domain, please of course post a link, and ideally a description. Now is the time to consider the cutting edge and what we can incorporate.
You need
s/you/we/
to come up with projects and overlays whose whitepapers clearly indicate solid resistance measures to G* TA (instead of disclaiming / dodging / burying / ignoring the topic, as is the norm today), and in whose approach external reviewers' analysis whitepapers cannot find fault (certainly at least not to any materially use-case-significant odds of success, unlike with today's overlays).
Logic is logical for a reason. Those who write whitepapers have the time and motivation to do so. We are writing here a whitepaper without polishing it for publishing. Anyone motivated to polish that which we discuss into a whitepaper is encouraged to do so. Short of that, we're coming to conclusions as best we can. Anyone who identifies errors in our logic, unspoken assumptions, etc., is encouraged to name them at the moment you cognize them.
There are probably a variety of designs and tech that can be applied towards that, both for general purpose overlays and app specific overlays.
Have fun creating and deploying them :)
I personally intend to focus primarily, though not exclusively, on a packet switched network layer. To this end we have a few more concepts to lay out and tear apart yet. We should also continue to put forth and discuss certain concepts which app layers can build on, in particular distributed cache incentivization.

A big-I Internet replacement must target both the comms and the content storage, not just one or the other - on occasion the dynamic between two core concepts gives rise to a breakthrough. E.g., if we can sufficiently incentivize decentral opportunistic caching, we are very close to replacing Cloudflare, Akamai and YouTube. And with some DHT, P2P, possibly git style content distribution mech on top of opportunistic intrinsically incentivized caching, we may effectively and readily eliminate the need for all webservers - every end user node can at any time publish anything, so long as he has an audience interested in his publishing, and that audience naturally caches the content, at minimum for the time required to view it and/or decide whether to store it longer term in a local library.

How close we can get to replacing the Internet as we know it is yet to be determined, and there is apparently no inherent reason we cannot achieve this goal. The reason the centralizers exist is the nature of profit maximizing "content producers" holding the tide back against what is now possible. We begin from the present moment. We are not bound by the present manifestations of past intentions. We carve out our collective future with our intentions, will and actions today. Create our world,
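As a toy illustration of the opportunistic caching idea (a purely hypothetical API, not a proposal): content is keyed by its hash, viewing implies caching, and subsequent fetches can be served by any node that has viewed the content rather than by the origin publisher.

```python
import hashlib

class CacheNode:
    """Sketch of opportunistic caching: any node that views content
    keeps it, keyed by content hash, and can serve it onward."""
    def __init__(self):
        self.store = {}  # content hash -> blob

    def view(self, blob):
        # Caching happens as a side effect of viewing.
        key = hashlib.sha256(blob).hexdigest()
        self.store[key] = blob
        return key

    def fetch(self, key):
        return self.store.get(key)

publisher_blob = b"some published content"
n1, n2 = CacheNode(), CacheNode()
key = n1.view(publisher_blob)        # n1 watched it, so n1 now caches it
n2.store[key] = n1.fetch(key)        # n2 fetches from n1, not the origin
```

Content addressing is what makes this safe: n2 can verify the blob against the hash it asked for, so it need not trust n1, and the publisher need not stay online.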