At 01:40 PM 8/25/2013, coderman wrote:
> and to StealthMonger's point about latest generation mix networks for best privacy, why not instead focus on building low latency protocols that are resistant to traffic analysis and confirmation?
Because "low latency protocols that are resistant to traffic analysis" is a really really hard problem. Even doing "high latency protocols that are resistant to traffic analysis" is a really hard problem. "Building" them is a mere application of sufficiently advanced technology, right?
> make them datagram based; utilize user space stacks
Datagrams don't give you any useful anonymity, because any decent ISP is going to block forged-source packets, but they do give you a bit more flexibility about timing, which is important for defending against traffic analysis.

The standard warning about using them for an application is that it's extremely tempting to use them to reinvent TCP badly, because TCP really does a lot of things you want, and in a security context it's also tempting to reinvent TLS badly.

Some other problems with them are that you need to get firewalls to allow them through, unless you disguise them as other protocols, like the horribly evil things Dan Kaminsky regularly does to DNS, and if you don't disguise them then they stick out like a sore thumb on any IDS or netflow analyser. And if you're the only person using that protocol, you're not hiding from traffic analysis.

It's a lot easier to hide if you implement your datagrams as http/https transactions of some kind (see the sketch below), but building a bunch of relay nodes to pass those transactions along ends up reinventing Tor.

Putting them in user space is just fine, and mostly more portable. It's hard to get millisecond-level latency if you do that, but you can't hide from traffic analysis with latency that low anyway.
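As a rough illustration of that http/https framing idea, here's a minimal Python sketch that carries one opaque datagram per HTTPS POST, so on the wire it looks like ordinary web traffic. The relay endpoint (relay.example.net) and the framing convention (raw payload in the POST body, any reply datagram in the response body) are made up for illustration, not any existing protocol:

    import urllib.request

    # Hypothetical relay endpoint -- an assumption for illustration only.
    RELAY_URL = "https://relay.example.net/tunnel"

    def send_datagram(payload: bytes) -> bytes:
        """Carry one opaque datagram to the relay inside an ordinary
        HTTPS POST; any reply datagram comes back in the response body."""
        req = urllib.request.Request(
            RELAY_URL,
            data=payload,
            headers={"Content-Type": "application/octet-stream"},
            method="POST",
        )
        # To a firewall or netflow analyser this is just another
        # HTTPS transaction, not a distinctive custom protocol.
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read()

    reply = send_datagram(b"\x00\x01hello")

Note the tradeoff: every datagram costs a full HTTP round trip, which is part of why you won't get millisecond latency out of something like this, but as above, you can't hide from traffic analysis at that latency anyway.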