Low latency means that only a few seconds of traffic need to be considered. Web means that users generate lots of traffic that repeats in time-defined patterns. These make traffic analysis resistance hard.
Without constant fill upon which they ride, hiding within... yes, of course there's little resistance there. Yes, "seconds of traffic" refers to the global buffer space and time required for a *PA to find a solution within it. That seems to become much harder when, instead of watching discrete mouse clicks propagate as pulsing bumps through TCP like a rat through a snake, your vampire buffer is filled with normalized cell traffic.
Adding dummy cover traffic does not help until you use impractical levels of cover traffic ... better to spend limited spare traffic resources on padding
Adding fill on top is different from maintaining an existing background of fill that yields, on demand, to wheat needing carriage.
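A minimal sketch of that difference, assuming fixed-size cells and a constant per-link clock (names, sizes, and rates here are illustrative, not any deployed design): the link ticks at a fixed rate no matter what, and real cells merely take slots that chaff would otherwise occupy.

    import os
    import queue
    import time

    CELL_BYTES = 512        # illustrative fixed cell size
    CELL_INTERVAL = 0.01    # illustrative clock: one cell per 10 ms

    wheat = queue.Queue()   # real payload cells queued by applications

    def clocked_link(send):
        # Emit exactly one cell per tick: wheat if queued, chaff otherwise.
        # An observer sees the same rate and cell size either way.
        while True:
            deadline = time.monotonic() + CELL_INTERVAL
            try:
                cell = wheat.get_nowait()      # wheat displaces chaff in its slot
            except queue.Empty:
                cell = os.urandom(CELL_BYTES)  # ciphertext-like dummy fill
            send(cell)
            time.sleep(max(0.0, deadline - time.monotonic()))

The link's observable rate never changes whether wheat is present or not; nothing is added on top, chaff is simply displaced.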
does not help until you use impractical levels of cover traffic
This is steeped in "OMG won't someone think of the bandwidth". They assume nodes don't manage their own ability to keep their CPU and pipe to the ISP above water, that an intelligent and well-configured net is not possible whether automatically or manually, that nodes can't contract links among themselves to ensure processing headroom, that it's just balls-to-the-wall until it congests itself into packet loss and the whole thing melts down. When trying to design new things, coming at it with OMG oldtalk tends to keep those areas from being freshly explored anew.
padding to make files the same size
That assumes trying to build yet another network arbitrarily restricted to a file-transfer application, instead of first trying to create a general-purpose transport network that will serve many applications.
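To make that distinction concrete, a sketch under the same assumptions as above: a general-purpose transport chunks any application stream, file or not, into uniform cells and pads only the tail, so "same size" happens at the cell layer rather than per-file (the 2-byte length header and sizes are illustrative).

    CELL_PAYLOAD = 510   # illustrative: 512-byte cell minus a 2-byte length header

    def to_cells(stream: bytes) -> list:
        # Chunk any application stream into fixed-size cells, zero-padding
        # the last one. A watcher sees uniform cells, not file sizes.
        cells = []
        for i in range(0, len(stream), CELL_PAYLOAD):
            chunk = stream[i:i + CELL_PAYLOAD]
            header = len(chunk).to_bytes(2, "big")   # count of real bytes
            cells.append(header + chunk.ljust(CELL_PAYLOAD, b"\x00"))
        return cells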
Dithering timing doesn't really help much against The Man's computing resources, at least until you get to something that is not low latency.
Dithering, reclocking, jitter... on every link regardless of designed latency... may serve to reduce or eliminate the ability to follow a timing pattern, observed on or injected into one link, as it gets repeated node to node across the net to the other side. It's a finer-grained complement to the overall background fill that masks out the bigger problems and observables.
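A sketch of that per-link re-timing, with an illustrative 5 ms bound on per-cell delay: each hop decouples its output spacing from its input spacing, so an injected timing pattern degrades at every repeat instead of riding intact across the net.

    import random
    import time

    MAX_JITTER = 0.005   # illustrative per-hop bound, 5 ms

    def rejittering_relay(recv, send):
        # Dither each cell's forwarding time. Combined with constant fill,
        # the stronger form is full reclocking: emit on the link's own
        # clock (as in clocked_link above) rather than on arrival times.
        while True:
            cell = recv()
            time.sleep(random.uniform(0.0, MAX_JITTER))
            send(cell)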
It should be noted that NSA do not say they can break TOR in practice
Yes they do, it is the very subject of this thread, search the net... the PDF reflects thinking and process dating from pre-2007... while Tor's design hasn't changed since then, it's without question that NSA's has, to the tune of $Billions... people are utterly fucktarded if they don't think NSA and the rest can point-click that shit 12 years later in 2019... "don't tell anyone it is broken, so people keep using it"... https://edwardsnowden.com/wp-content/uploads/2013/10/tor-stinks-presentation...
In all the "Dark Web" busts I have read about, there has been no evidence presented of a general break in TOR. Maybe they can't (or just don't) break it.
Parallel construction is a well-known, available tactic to preserve valuable tools and clean up faulty evidence and illegal practice. So is sharing tips out to other entities. It's TOP SECRET, so don't expect to see it on its face. Though I definitely remember a leaked slide deck where a form of PC tipping was indicated as being used in drug cases. Someone else will have to link to that one.
any low-latency web onion router - could not defeat The Man
Crackheaded visions are hard to articulate, but here are some nodes, more or less randomly pathed through as needed by the communicating endpoints... https://www.youtube.com/watch?v=RGfr-KgWiiQ amongst which bucket brigades are constantly carrying things (though the brigade should probably span each link entirely, such that no segment of track is left entirely unoccupied between nodes)... https://www.youtube.com/watch?v=8qDXcQcj1fc capacities vary and can integrate, switch in/out/on/off, and route around problem track as needed and to enforce negotiated contracts and expectations... https://www.youtube.com/results?search_query=sdh+atm