Re: [tor-relays] clarification on what Utah State University exit relays store ("360 gigs of log files")
While reducing network traffic to various accounting schemes such as netflow may enable some attacks, look at just one field of it... bytecounting.

Assume you've got a nice global view courtesy of your old bed buddies AT&T, Verizon, Sprint, etc., in addition to your own bumps on the cables. You know the IPs of all Tor nodes (and I2P, etc.), so you group them into one "cloud" of overlay IPs. For the most part, any traffic into that cloud from an IP on the left, after it bounces around inside, must terminate at another IP on the right. There are roughly 7000 relays, but because many of them are aggregable at the ISP/colohouse, peering and other good vantage point levels, you don't need 7000 taps to see them all.

You run your client and start loading and unloading the bandwidth of your target in one-hour duty cycles for a few days. Meanwhile, record the bytecount every minute for every IP on the internet into some RRD. There are only about 2.8 billion IPv4 addresses in BGP [Potaroo]; some usage research says about 1.3 billion of the 2.6 billion in BGP are actually in use [Carna Census 2012]. IPv6 is minimal, but worth another 2.8 billion if mapped today. Being generous at 3.7 billion users (half the world [ITU]), that's 2^44 64-bit datapoints every three days... 128 TiB.

Now, can you crunch those 3.7B curves to find the one whose bytecount deltas match those of your datapump? How fast can you speed it up? And can you find Tor clients of clearnet services using a similar method, since you are not the datapump there? And what if you're clocking out packets and filling all the data links on your overlay net 24x7x365, such that any demand loading is now forced to ride unseen within instead of bursting out the seams?
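For concreteness, here is a rough sketch of the matching step that implies (hypothetical code; it assumes you already have per-minute byte counts for your own datapump and for each candidate IP over the same window, and simply cross-correlates the deltas):

# Hypothetical sketch of the bytecount-matching step described above.
# Assumes two per-minute byte-count series over the same window: one for the
# traffic you pumped at the target, one per candidate IP pulled from the RRD.
import numpy as np

def correlation_score(pump_bytes: np.ndarray, candidate_bytes: np.ndarray) -> float:
    """Pearson correlation of per-minute byte-count deltas."""
    dp = np.diff(pump_bytes.astype(float))
    dc = np.diff(candidate_bytes.astype(float))
    if dp.std() == 0.0 or dc.std() == 0.0:
        return 0.0          # a flat curve can't match the duty-cycle pattern
    return float(np.corrcoef(dp, dc)[0, 1])

def rank_candidates(pump_bytes: np.ndarray, candidates: dict) -> list:
    """candidates: {ip: per-minute byte counts}; returns (ip, score) sorted by score."""
    scores = {ip: correlation_score(pump_bytes, series)
              for ip, series in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Scale check: 3 days of one-minute samples is 4320 points per IP; at 8 bytes
# per point across ~3.7e9 addresses that is on the order of 2^47 bytes, ~128 TiB.

In practice you would shard the search and match the duty-cycle pattern directly rather than use a naive Pearson score, but the shape of the computation is the same.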
On 08/28/2015 03:24 AM, grarpamp wrote:
While reducing network traffic to various accounting schemes such as netflow may enable some attacks, look at just one field of it... bytecounting.
Assume you've got a nice global view courtesy of your old bed buddies AT&T, Verizon, Sprint, etc and in addition to your own bumps on the cables.
You know the IP's of all Tor nodes (and I2P, etc). So you group them into one "cloud" of overlay IP's. For the most part any traffic into that cloud from an IP on the left, after it bounces around inside, must terminate at another IP on the right.
There are roughly 7000 relays, but because many of them are aggregable at the ISP/colohouse, peering and other good vantage point levels, you don't need 7000 taps to see them all.
[ etc, right on target AFAIK ]

Global observer attacks can be augmented by owning a substantial number of the routers: all hosted at one facility, but globally distributed via transparent VPN connections running on a variety of platforms all over the world. These router instances would be somewhat customized to facilitate manipulation of traffic via a purpose-built hypervisor with a plugin architecture for monitor functions. Since code names aren't supposed to be related to the named thing in any way, we can't call this Hydra.

In terms of real world threats, I think it's safe to say that TOR "Hidden Services" aren't very well hidden from motivated adversaries who can deploy global observation and/or global infiltration attacks: the persistence, fixed physical location and interactive availability of a hidden service make it a fat, dumb, happy sitting target for any major State's military and police intelligence service that takes an interest in identifying the host and its operators IRL.

:o/
From: Steve Kinney <admin@pilobilus.net>
In terms of real world threats, I think it's safe to say that TOR "Hidden Services" aren't very well hidden from motivated adversaries who can deploy global observation and/or global infiltration attacks: the persistence, fixed physical location and interactive availability of a hidden service make it a fat, dumb, happy sitting target for any major State's military and police intelligence service that takes an interest in identifying the host and its operators IRL.

I have seen references to the idea of giving 'everyone' the option of having their router implement Tor. And I mention this because I'd like to see more about this idea. A modern router presumably has plenty of CPU power/memory capacity to do Tor. And, particularly since we are entering the era of gigabit fiber internet services (for reasonable prices; say $70 per month), there will be an ever-larger number of people who will be in the position to host a relay node. What's needed is to convince router manufacturers that they "must" transition to Tor-by-default routers. Wouldn't we like to see a million high-throughput nodes appear?

Jim Bell
On 08/28/2015 12:46 PM, jim bell wrote:
From: Steve Kinney <admin@pilobilus.net>
In terms of real world threats, I think it's safe to say that TOR "Hidden Services" aren't very well hidden from motivated adversaries who can deploy global observation and/or global infiltration attacks: the persistence, fixed physical location and interactive availability of a hidden service make it a fat, dumb, happy sitting target for any major State's military and police intelligence service that takes an interest in identifying the host and its operators IRL.
I have seen references to the idea of giving 'everyone' the option of having their router implement Tor. And I mention this because I'd like to see more about this idea. A modern router presumably has plenty of CPU power/memory capacity to do Tor. And, particularly since we are entering the era of gigabit fiber internet services (for reasonable prices; say $70 per month), there will be an ever-larger number of people who will be in the position to host a relay node. What's needed is to convince router manufacturers that they "must" transition to Tor-by-default routers. Wouldn't we like to see a million high-throughput nodes appear? Jim Bell
To convince router manufacturers that they must transition to TOR-by-default routers is a tall order: that's a big commercial market with a small number of dominant players, inherently aligned with conservative a.k.a. Fascist interests. Global many-to-many communications is correctly perceived as a threat to the political and economic dominance of State and Corporate institutions, formerly assured by central control of mass-scale communications for censorship and propaganda purposes. In this context, mass surveillance is an adaptive response that seeks to counter-balance the "liberating" impact of the Internet by enabling early identification and effective manipulation of emergent mass movements and ad hoc leadership cadres.

TOR is a weapon; the U.S. State Department funds it to support the destabilizing impact of counter-censorship and counter-surveillance technology on other, more overtly repressive regimes. However, high-profile busts of Hidden Service users indicate that TOR is not quite effective enough to defeat U.S. network surveillance assets, at least not where fixed high-value targets are concerned. This is consistent with U.S. policy objectives with regard to the strength of all cryptographic applications.

We are told that the TOR Project favors convenience and speed over security, because this is necessary to build a large enough user base to make the system effective. That does not entirely make sense, as favoring security over speed and convenience would make the system effective regardless of the size of its user base. It makes more sense to imagine that the TOR Project would lose its Federal funding and become a target for effective harassment and manipulation by Federal security services if TOR's security were upgraded to be resistant to U.S. surveillance capabilities.

"Everybody knows" that effective resistance to traffic analysis of an encrypted low-latency anonymizing network requires a constant flow of traffic, padded as necessary with dummy packets to maintain a constant throughput when an endpoint is idle. This deprives observers of the ability to match the endpoints of any given session by analyzing the timing and number of packets at entry and exit nodes. But nobody implements effective cover traffic: the reasons given for this deficiency include concerns about bandwidth limitations and processor overhead. 20 years ago these barriers were real; today, not so much. I2P users have the option of hosting enough torrents to keep cover traffic unrelated to their other uses of that network going; this is not as effective as padding traffic to maintain a uniform flow, but way better than no cover traffic. TOR actively discourages file sharing, "because" this would cause bandwidth and processor overhead problems.

I believe it would be much easier to persuade the TOR Project to implement cover traffic, or to create a next-generation TOR network that does, than to persuade router makers to support today's other-than-best-practices TOR network by default. But I'm not sure that this can be done by any project based in a U.S.-controlled jurisdiction, as it would be contrary to the National Interest.
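For concreteness, a minimal sketch of what constant-flow padding looks like (hypothetical; a fixed cell size and tick interval, not the actual padding machinery of TOR or I2P):

# Hypothetical sketch of constant-rate link padding: exactly one fixed-size
# cell leaves the link every tick, whether or not real data is queued, so an
# observer sees the same byte count per minute regardless of actual use.
import os
import queue
import threading
import time

CELL_SIZE = 512      # bytes per cell (assumed)
TICK = 0.01          # seconds between cells -> a constant ~51 KB/s on the wire

outbound: "queue.Queue[bytes]" = queue.Queue()

def send_cell(cell: bytes) -> None:
    """Placeholder for the encrypted link write."""
    pass

def padded_sender(stop: threading.Event) -> None:
    """Emit one cell per tick; pad with random bytes when idle."""
    while not stop.is_set():
        try:
            cell = outbound.get_nowait()
        except queue.Empty:
            cell = os.urandom(CELL_SIZE)   # dummy cell, opaque on the wire
        send_cell(cell[:CELL_SIZE].ljust(CELL_SIZE, b"\x00"))
        time.sleep(TICK)

The cost is obvious: the link burns that bandwidth around the clock whether or not anyone is talking, which is exactly the objection raised in the replies below.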
:o/
On 8/28/15, Steve Kinney <admin@pilobilus.net> wrote:
... "Everybody knows" that effective resistance to traffic analysis of an encrypted low-latency anonymizing network requires a constant flow of traffic, padded as necessary with dummy packets to maintain a constant through-put when an endpoint is idle. This deprives observers of the ability to match the endpoints of any given session by analyzing the timing and number of packets at entry and exit nodes.
this is one approach, "zero knowledge" mixes. there are interesting research avenues around low-latency, traffic-analysis-resistant techniques. they're more complicated, of course, and in fact it is this complexity that is to blame rather than any conspiracy.
But nobody implements effective cover traffic: The reasons given for this deficiency include concerns about bandwidth limitations
effective cover traffic for a zero-knowledge mix is significant: to be effective, a traditional mix produces a bandwidth explosion among participants. i challenge you to show an effective mix protocol without this bandwidth explosion that does not also introduce a break in the guarantee of anonymity.
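a toy back-of-the-envelope of what that explosion means (the per-node rates below are assumed, not measured): every participant pays the padded rate around the clock, so the aggregate cost scales with the size of the population rather than with actual use.

# Toy arithmetic with assumed numbers: aggregate cost of constant-rate
# padding versus actual demand across a population of participants.
participants = 7_000           # roughly the relay count cited earlier in the thread
padded_rate_mbps = 10          # constant per-node rate needed to mask peak use (assumed)
avg_demand_mbps = 0.5          # actual average per-node demand (assumed)

padded_total = participants * padded_rate_mbps   # 70,000 Mb/s, paid 24x7
demand_total = participants * avg_demand_mbps    # 3,500 Mb/s on average

print(f"padded: {padded_total:,} Mb/s ({padded_total / demand_total:.0f}x real demand)")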
and processor overhead. 20 years ago these barriers were real, today not so much. I2P users have the option of hosting enough torrents to keep cover traffic unrelated to their other uses of that network going;
wrong. I2P does not provide traffic analysis resistance, nor a defense against an active attacker.
this is not as effective as padding traffic to maintain a uniform flow, but way better than no cover traffic.
wrong. "way better" way too generous. this is just wrong. part of the problem is that active attacks and traffic analysis are so hugely effective. the defense of "adding some torrents" is misguided wishful thinking.
TOR actively discourages file sharing, "because" this would cause bandwidth and processor overhead problems.
again, it's more complicated: not just technical but legal.
I believe it would be much easier to persuade the TOR Project to implement cover traffic, or to create a next generation TOR network that does, than to persuade router makers to support today's other than best practices TOR network by default. But I'm not sure that this can be done by any project based in a U.S. controlled jurisdiction, as it would be contrary to the National Interest.
Tor research continues. however, solving low-latency, traffic-analysis-resistant anonymity is much harder than just "implement cover traffic"! in fact, you need to solve half a dozen hard problems at once, including how to define an appropriate level of cover traffic over selected links.

best regards
Making all routers do Tor by default is not an economical proposal. Besides, would you really want that many Tor nodes that are easily exploited by The Man?
participants (5)
- coderman
- grarpamp
- jim bell
- Lodewijk andré de la porte
- Steve Kinney