Chaff might really only be "expensive" if 1) monetary: the user chose to pay for it under a metered plan,
except that unmetered plans are a scam.
Yes, if you don't get the physical line rate, or whatever else the marketing tries to bullshit.
And that's the whole point. I think it's safe to assume that 'backbones' can't carry chaff traffic. If a substantial number of people tried to use their 'unmetered' plans to transmit chaff, the nsa-network would grind to a halt.
Not really. Any substantial number of users on a nextgen overlay, plus the sum of all Tor, I2P, etc., are not even a blip compared to the masses on clearnet and their sum of BitTorrent, YouTube, Netflix, etc. And such overlays will have settings, in conjunction with the OS packet filters, to let each user's 1 Mbps, or whatever ISP feed they have, be allocated dynamically between clearnet and overlay as they see fit. People need to get out of the thought-blocking mindset that chaff fill implies unusable saturation of every line-rate pipe on the planet, including their own. It doesn't, at all.
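A rough sketch of that allocation knob, with all names made up here for illustration: the overlay runs at a constant user-chosen rate, and the OS packet filter hands the remainder to clearnet.

# Sketch of per-user bandwidth partitioning; hypothetical names throughout.
# The overlay gets pinned at a fixed user-chosen rate, and the OS packet
# filter (e.g. a tc/HTB class, not shown here) gets the rest for clearnet.

LINK_BPS    = 1_000_000   # the user's ISP feed, here 1 Mbps
OVERLAY_BPS = 300_000     # user-chosen constant overlay allocation

def overlay_frame_interval(frame_bytes: int = 512) -> float:
    """Seconds between fixed-size overlay frames at OVERLAY_BPS.
    The overlay always emits at this rate (wheat or chaff), so the
    clearnet share is simply LINK_BPS - OVERLAY_BPS."""
    return (frame_bytes * 8) / OVERLAY_BPS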
2) actual low latency: in order to prevent timing attacks, packets need to be reclocked, which means adding delay, which results in higher 'latency'.
Also, depending on the nature of the input, reclocking may not necessarily imply additional average delay, as packets and the gaps between them might simply be normalized, randomized, and/or redistributed within the same overall sum.
the only way to do that is by introducing more delay.
If A sends through B to some C at 1 pps on average, distributed within 1 s of jitter, B has plenty of CPU, time, and space on its outbound wire to C to reclock that to 1 pps average within 0.01 s of jitter, or to apply its own random jitter while still meeting 1 pps. CPUs operate in GHz, so yes, each packet is held up in that processing for some minimal amount of time, like microseconds. However, so long as the node does that within the line rate, or at least the lesser rate the node has committed to upholding, the bps passed over the link doesn't change. The crypto operations and relay routing are responsible for more "delay" than anything else; after all, the Internet is commonly 10+ hops and 35-175+ msec. In those regards, a background of 1 Mbps of chaff traffic yielding to 1 Mbps of wheat on demand feels exactly the same to the user application as 0 Mbps of quiet yielding to 1 Mbps of wheat. Yet with the former the network is using all its otherwise uselessly idle CPUs and NICs to provide TA-resistant cover, and with the latter you're screwed.
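To be explicit about the mechanism, a minimal reclocker sketch, hypothetical names throughout: it drains an inbox at a fixed output clock, wheat if queued, chaff otherwise, so the output timing carries nothing about the input timing, and the added delay stays bounded by the clock period rather than accumulating.

import os, queue, time

def reclock(inbox: "queue.Queue[bytes]", send, interval: float = 1.0):
    """Emit exactly one fixed-size packet per `interval` seconds.
    Output timing is thus independent of input timing: wheat goes
    out if any is queued at the tick, otherwise chaff fills the slot."""
    next_tick = time.monotonic()
    while True:
        next_tick += interval
        time.sleep(max(0.0, next_tick - time.monotonic()))
        try:
            pkt = inbox.get_nowait()   # wheat waiting at this tick
        except queue.Empty:
            pkt = os.urandom(512)      # no wheat, send chaff
        send(pkt)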
fucktards who want to download 100mbs in 2 seconds with no 'latency'. Such assholes need re-education.
True. And people considering designs for TA-resistant overlays should probably self-educate on how ATM network cell switching and clocking works, regarding how wheat and chaff could then be placed in those cells and paths made through them. i.e.: Today's opensource SW devs in their cute corporate dayjob cube farms are lucky to have seen the end of their ethernet cable and the socket(2) manpage, let alone to have physical root in telecom satcom bunkers jammed with random gear, so it's not unexpected that their designs might overlook some useful research areas.
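For those who never touched it: ATM clocks fixed 53-byte cells (48-byte payload, 5-byte header) onto the wire continuously, with idle cells filling every empty slot. A hypothetical sketch of the analogous wheat/chaff cell framing:

import os

CELL_PAYLOAD = 48   # ATM-style fixed payload (53-byte cell minus 5-byte header)

def to_cells(wheat: bytes) -> list:
    """Segment a wheat message into fixed-size cells, padding the tail,
    so every cell on the wire is the same size regardless of content."""
    return [wheat[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
            for i in range(0, len(wheat), CELL_PAYLOAD)]

def idle_cell() -> bytes:
    return os.urandom(CELL_PAYLOAD)   # chaff cell for otherwise-empty slots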
any low-latency web onion router - could not defeat The Man
This still seems to lack proof, and perhaps cannot actually be claimed without it.
That's not what I quoted from scum-master syverson.
Quotes in papers that discount or dismiss areas of potential research not yet explored to at least as much breadth and depth as other areas, with known attack surfaces, that are then chosen, coded, and deployed to users, are probably suspect. Even if a plainly disclaimed, "tradeoffs made", "only good enough for cat videos" network is built for 1B whiny fucktards, that's no reason not to design and build a much more secure one for the, say, 1M that might want it. The dev effort, fame, and user benefits are about the same. So why has nothing been built today using whatever new and reinforced knowledge has accumulated since the 20-year-old designs, and whatever may be within reach of new research? Alternatively, how can Tor, I2P, etc. today possibly be the best that can be done, such that there are no worthwhile gains left for any new network to make?
As to how much 'latency' a better system would introduce, that's an 'open question'.
My guess is that a network more TA- and Sybil-resistant than today's overlays... one within which IRC and low-bitrate voice and video comms are usable... is entirely possible. There may even be some number of "more resistant" network designs, for generic transport of multi-application data, that could be explored. Though each design by its nature might not be capable of integrating some tech of the others, so long as each is relatively equally better, within a factor band above today's, each could be deployed as needed.
Also, I forgot to mention the obvious fact that using 3 chained proxies, aka 'onion routing', instead of a direct connection generates an amount of 'latency' that can't be avoided.
Direct connections may be hard to hide, thus all overlays over the Internet that attempt to hide connections don't use DCs. An SDR radio network, being its own sort of layer-0 and a more physically mobile capable network, has more opportunity to exploit ephemeral direct connections. That each hop adds the latency of the respective physical distance and HW/SW stack... should be obvious.
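Back-of-envelope on that per-hop cost, with illustrative numbers only: light in fiber covers roughly 200 km per msec, so each hop adds its propagation distance plus whatever the stack and relay crypto eat.

# Rough one-way circuit latency; all constants are assumptions, not measurements.
FIBER_KM_PER_MS = 200.0   # ~2/3 c in glass
STACK_MS        = 0.5     # assumed per-hop HW/SW forwarding overhead
CRYPTO_MS       = 1.0     # assumed per-hop relay crypto cost

def circuit_latency_ms(hop_km: list) -> float:
    """Sum propagation plus per-hop overheads over each link in the path."""
    return sum(km / FIBER_KM_PER_MS + STACK_MS + CRYPTO_MS for km in hop_km)

print(circuit_latency_ms([1000, 1000, 1000]))   # three 1000 km links: ~19.5 ms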