On Sat, Oct 26, 2019 at 05:32:59PM -0400, grarpamp wrote:
On 10/26/19, Punk - Stasi 2.0 <punks@tfwno.gf> wrote:
2005 Low-Cost Traffic Analysis of Tor https://www.freehaven.net/anonbib/cache/torta05.pdf
"By making these assumptions, the designers of Tor believe it is safe to employ only minimal mixing of the stream cells...
...This choice of threat model, with its limitation of the adversaries’ powers, has been a subject of controversy...
...Tor, on the other hand assumes a much weaker threat model..
...we show that even relatively weak adversaries can perform traffic-analysis, and get vital information out of Tor. This means that even non-law-enforcement agencies can significantly degrade the quality of anonymity that Tor provides, to the level of protection provided by a collection of simple proxy servers, or even below."
-------
my comment: the attack is based on monitoring the latency of a node while sending an attacker-controlled stream through it
"Tor exhibits the worst possible behaviour: not enough interference to destroy individ- ual stream characteristics, yet enough to allow the remote measurement of the node’s load."
Maybe some tor fanboi knows if this has been somehow fixed?
Tor, perhaps as a result of years of such papers and posts, is attempting to incorporate some traffic noise; search there for "netflow" and "padding". However, the breadth of its application may not be enough to close out many of the traffic analysis papers. People should look more closely at that work and analyse its results.
Anyway, the article makes it clear that simple cover traffic is not enough to defend against timing attacks.
This may be one of the papers that outlines why, in a fill/padding/chaff/wheat network, or indeed any network, every node also needs to reclock the input it receives into its output;
otherwise an adversary can see the perturbations it applies to a node's input reflected in that node's output, and/or see the natural perturbations the internet itself imparts, and the user's own traffic patterns, carried through end to end without mitigation.
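A rough sketch of what reclocking could look like (Python; the cell size, tick rate, and names are my own illustration, not from any spec): the forwarder never lets the arrival pattern drive the departure pattern; it queues whatever arrives and emits exactly one fixed-size cell per tick, substituting a chaff cell when the queue is empty.

# Sketch of output reclocking: departures happen on a fixed clock regardless
# of how bursty or perturbed the input is.
import queue
import random
import threading
import time

CELL = 512                     # fixed cell size (arbitrary choice)
TICK = 0.01                    # one departure every 10 ms (arbitrary choice)
CHAFF = bytes(CELL)            # sent when no real cell is queued

inbox = queue.Queue()

def bursty_input(n):
    """Stand-in for perturbed / adversarially timed arrivals."""
    for i in range(n):
        inbox.put(f"cell{i}".encode().ljust(CELL, b"\x00"))
        time.sleep(random.random() * 0.05)      # arbitrary jitter and bursts

def reclocked_output(send, slots):
    """Depart exactly one cell per TICK, independent of the arrival pattern."""
    next_slot = time.monotonic()
    for _ in range(slots):
        next_slot += TICK
        try:
            cell = inbox.get_nowait()
        except queue.Empty:
            cell = CHAFF                        # keep the clock; fill the slot
        send(cell)
        time.sleep(max(0.0, next_slot - time.monotonic()))

wire = []
threading.Thread(target=bursty_input, args=(50,), daemon=True).start()
reclocked_output(wire.append, slots=200)
print(len(wire), "cells left the node, all on the same 10 ms clock")

Whatever timing the adversary imposes on the input, the output trace is the same fixed cadence.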
It is also why, as with BitTorrent and some other protocols, overlay nodes should probably drop contracts they have with peer nodes that do not fulfill input performance expectations.
With metrics sharing and collective node kicking, it may be possible to collectively identify stalker nodes and share that information over a longer period of time; as with BitTorrent, don't connect to badly performing nodes.
ie: BitTorrent nodes (clients) ignore peers that send corrupted data... overlays should likewise drop peer nodes that exhibit non-conformant traffic characteristics... such as unclocked or bursty waves that were not agreed to and that would present a risk to observability.
"Bursty" anything, without negotiation of graduated/ stepped up/down link management, must not be permitted in the first instance.
ie: First, don't accept sketchy peers trying to send you suspect waveforms to carry. Second, do not forward anything on through you to another node without conforming and reclocking it. Don't be so busy, or so partition-attacked, that your node can't uphold that and becomes a risk to the net; instead, sleep and reestablish your node into the net later.
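As a toy of that conformance check (thresholds invented purely for illustration): a peer that contracted to deliver one cell every 10 ms gets dropped when its inter-arrival gaps drift or burst beyond tolerance, even if its average rate looks right.

# Toy peer-conformance check: drop peers whose traffic timing breaks the
# agreed clock, the way BitTorrent clients drop peers sending corrupt data.
from itertools import accumulate
from statistics import mean, pstdev

AGREED_GAP_MS = 10.0        # peer contracted to one cell every 10 ms
TOLERANCE_MS = 2.0          # allowed drift of the mean gap (invented)
MAX_JITTER_MS = 3.0         # allowed spread of the gaps (invented)

def conformant(arrivals_ms):
    gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
    if len(gaps) < 2:
        return True                              # not enough data to judge yet
    return (abs(mean(gaps) - AGREED_GAP_MS) <= TOLERANCE_MS
            and pstdev(gaps) <= MAX_JITTER_MS)

steady = [i * 10.0 for i in range(100)]
# Same mean rate, but delivered in bursty waves: nine 2 ms gaps then one 82 ms gap.
bursty = list(accumulate([0.0] + ([2.0] * 9 + [82.0]) * 10))

for name, times in (("steady", steady), ("bursty", bursty)):
    print(name, "->", "keep" if conformant(times) else "drop")

Shared between nodes, scores like this are the raw material for the collective kicking mentioned above.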
For almost everyone today, the first physical hop or link is to their ISP. A GAA performing an active timing attack, say by suspending your internet link for 500 ms, cannot be defended against when you have no other links for onboarding. And if we assume more secure hardware (phones with separation between a non-backdoored application CPU and the radio CPU), a bug-free driver and software stack, and a "difficult to statistically analyse" garlic net, then it is entirely reasonable to assume that our not-so-friendly GPA and GAA stalkers will lean much more heavily on the remaining tools in their toolbox: active link suspension across target sets of end users, bisecting as needed to map end-user nodes to destination/server streams of interest. We are engaged in an arms race of sorts, and we are pushing our adversaries toward the more overt tools in their toolbox; this is a good thing, since the less your enemy can hide, the better.
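To put a number on that bisection threat (a toy model, not a claim about any real deployment): if a GAA can suspend the uplinks of chosen subsets of N candidate users and watch whether a stream of interest stalls, about log2(N) suspensions suffice to map that stream to one user.

# Toy model of active bisection by a GAA: suspend the links of half the
# candidate users, observe whether the watched stream stalls, recurse.
import math
import random

def locate(candidates, stream_owner):
    """Return (identified user, number of link suspensions used)."""
    pool = list(candidates)
    rounds = 0
    while len(pool) > 1:
        rounds += 1
        half = pool[: len(pool) // 2]
        stalled = stream_owner in half      # "suspend" this half, watch the stream
        pool = half if stalled else pool[len(pool) // 2:]
    return pool[0], rounds

users = [f"user{i}" for i in range(1024)]
target = random.choice(users)
found, rounds = locate(users, target)
print(found == target, "- identified in", rounds,
      "suspensions; log2(1024) =", int(math.log2(1024)))

Ten brief outages are enough to pick one user out of a thousand, which is why owning (or at least diversifying) the first hop matters so much.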
Since an overlay network does not yet own the underlying internet hardware (and there is not yet a full-time encrypted link-rate fill HW RFC), this must be done in the node's CPU rather than at the NIC PHY over the cable. You can get closer to Layer 0 with #OpenFabs #OpenHW #MeshNets #GuerrillaNets.
Note also there is a difference in design thinking approaches between say...
- Cover traffic, meaning traffic laid over or filling in gaps in wheat. The wheat ends up being awarded a variety of mental biases and design assumptions in its favor which could prove controlling and exploitable.
Hmm. To what extent can we classify types of wheat - e.g. wheat which is high value and therefore needs QoS maximized, vs wheat which can tolerate burstiness and can therefore be reduced in QoS? Which nodes would you give QoS info to? Phone calls require QoS.
- Base fill traffic, meaning a base of fill [at linerate, fulltime, regulated], such that when wheat is ready to pass, the wheat packet is substituted in for the fill packet. The fill security aspect is mentally biased for, such that it should continue to hold regardless of whatever wheat or active adversary crap is sent over or at it.
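A tiny sketch of that base-fill invariant (illustrative only; slot and cell sizes are arbitrary): the wire schedule is fixed up front, wheat only ever replaces the payload of a slot that would otherwise carry fill, so the trace a passive observer sees is identical whether wheat flows or not.

# Sketch of the "base fill" framing: the observable schedule (times and sizes)
# is committed before any wheat exists; wheat merely swaps in for fill payloads.
SLOT_MS = 10
CELL = 512

def wire_trace(wheat_cells, slots=1000):
    """Return (observer-visible trace, payload stream) for a given wheat load."""
    wheat = iter(wheat_cells)
    trace, payloads = [], []
    for slot in range(slots):
        cell = next(wheat, None)
        if cell is None:
            cell = bytes(CELL)                 # fill; encrypted in a real design
        trace.append((slot * SLOT_MS, CELL))   # all a passive observer gets
        payloads.append(cell)
    return trace, payloads

idle_trace, _ = wire_trace([])
busy_trace, _ = wire_trace([b"q" * CELL] * 300)
print("observable traces identical:", idle_trace == busy_trace)

The point of the framing is exactly that equality: nothing the wheat (or an active adversary feeding you traffic) does can change what appears on the wire.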
Active latency injection, say at your ISP, is the big nut to crack; as you say above, it needs guerrilla nets. Ultimately, if we don't own it, we don't control it, and to the extent we don't control it, it will be used against us. So we have no option but to eventually roll out our own physical links, mesh nets, backhauls, etc. To some degree we will still have to, at least in the early days, use govnet links for international reach (undersea cables are prohibitively expensive, even for most corporations).
There's something to be said for "store and forward" overlays on the order of hours to days... file storage, email / nntp.
Unfortunately they're no good for anything interactive realtime... IRC, urgent messaging, website queries, transactions, voice, video.
We must treat each use case individually, to optimize not only the network but also the apps (yet to be written). Voice and realtime video are a big ask: without your own dark links, and if you're a prime gov target, you're going to be stalked and actively attacked. With some apps, e.g. IRC, although relatively low latency is needed for most folks to use them, the bandwidth is very low (a text message here and there); web forums could actually work this way too, if we strip the heavy content and think "text messages", "+1s", etc.

The heaviest parts of forums are dodgy centralised-SQL-DB type code which does not scale sufficiently, and spitting endless images out of the server's network card. Images are media which should be content addressed (ala git) and thus cached locally. How many times have you been on a forum such as dailystormer and, from one day to the next, EVERY profile image gets re-loaded? This is sheer insanity! Even a custom Firefox dark config ("cache all HTTPS content", "cache forever", etc.) was not sufficient to stop reloads of what should be cached content. So for some server loads we certainly should be able to reduce the load to a trickle, and for others we may be able to completely decentralise, with no server needed; $CURRENT_YEAR git and DHTs hint in this direction.
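A minimal sketch of the content-addressing idea (git-style; all names below are mine): store each image under the hash of its bytes, and a profile picture referenced by its hash never needs to be fetched twice.

# Minimal content-addressed store: blobs are addressed by the SHA-256 of their
# content, so anything referenced by address is fetched from the network at
# most once and served from local cache forever after.
import hashlib
from pathlib import Path

CACHE = Path("cas-cache")
CACHE.mkdir(exist_ok=True)

def put(blob: bytes) -> str:
    """Store a blob under its own hash; return the address."""
    addr = hashlib.sha256(blob).hexdigest()
    path = CACHE / addr
    if not path.exists():
        path.write_bytes(blob)
    return addr

def get(addr: str, fetch) -> bytes:
    """Serve from cache; only call the (expensive) fetch on a true miss."""
    path = CACHE / addr
    if path.exists():
        return path.read_bytes()
    blob = fetch()                       # hits the network exactly once per blob
    assert hashlib.sha256(blob).hexdigest() == addr, "content does not match address"
    path.write_bytes(blob)
    return blob

avatar = b"\x89PNG...profile image bytes..."
addr = put(avatar)
print("profile image lives at", addr, "- reloading the forum page never refetches it")

A page that references the hash instead of a mutable URL cannot force a reload, which is the property the browser cache settings above failed to give.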
There's also the consideration of whether an interactive realtime network should itself provide, if needed, a store-and-forward layer, or treat that as just another app riding on top, as all realtime nets do today.
I don't understand the comparison you are drawing (or rather, the consideration you are putting forward) here. A sane network is not based on TCP. A sane network is based on UDP or IP or some lower version of "independent packets or frames". How successful can distributed reputation (namely "warnings") be at identifying and isolating "bad", e.g. stalking gov, nodes? "Know your peer."
Or whether to cripple either s+f or realtime to preclude use of the other over them. ie: Tor is somewhat crippled by being TCP-only, whereas I2P/Tor+OnionCat, CJDNS, and Phantom enable UDP etc. by providing a full IPv6 stack. Other overlays have similar restrictions and capabilities, most not intentionally preclusive, but instead just "We want to create a net to do X (say send an email)", and the net ended up being useless for anything else.
Keep in mind, once you have a cryptographically strong analysis resistant general purpose overlay transport network (ie: internet stack), you can run anything over it.
Don't be afraid to create an AF_OVERLAY if needed to accomplish that. If the overlay network is truly good, people will get today's applications ticketed, patched, and compiled with that option to interface with it.
There's been a bit of a resurgence of user-space networking, TCP/IP stacks, etc. in recent years; the most performant I have come across so far is SNABB (might have posted a link ~1 year ago; see one of the iqnets urls files attached).