Building a new Tor that can resist next-generation state surveillance
Tor is an imperfect privacy platform. Ars meets the researchers trying to replace it.

Since Edward Snowden stepped into the limelight from a hotel room in Hong Kong three years ago, use of the Tor anonymity network has grown massively. Journalists and activists have embraced the anonymity the network provides as a way to evade the mass surveillance under which we all now live, while citizens in countries with restrictive Internet censorship, like Turkey or Saudi Arabia, have turned to Tor in order to circumvent national firewalls. Law enforcement has been less enthusiastic, worrying that online anonymity also enables criminal activity.

Tor's growth in users has not gone unnoticed, and today the network first dubbed "The Onion Router" is under constant strain from those wishing to identify anonymous Web users. The NSA and GCHQ have been studying Tor for a decade, looking for ways to penetrate online anonymity, at least according to leaked Snowden documents. In 2014, the US government paid Carnegie Mellon University to run a series of poisoned Tor relays to de-anonymise Tor users. A 2015 research paper outlined an attack effective, under certain circumstances, at decloaking Tor hidden services (now rebranded as "onion services"). Most recently, 110 poisoned Tor hidden service directories were discovered probing .onion sites for vulnerabilities, most likely in an attempt to de-anonymise both the servers and their visitors.

[Image: the now-famous "Tor stinks" slide that was part of the Snowden trove of leaked docs.]

Cracks are beginning to show. A 2013 analysis by researchers at the US Naval Research Laboratory (NRL), who helped develop Tor in the first place, concluded that "80 percent of all types of users may be de-anonymised by a relatively moderate Tor-relay adversary within six months." Despite this conclusion, the lead author of that research, Aaron Johnson of the NRL, tells Ars he would not describe Tor as broken; the issue is rather that it was never designed to be secure against the world's most powerful adversaries in the first place. "It may be that people's threat models have changed, and it's no longer appropriate for what they might have used it for years ago," he explains. "Tor hasn't changed, it's the world that's changed."

[continues with analysis and new tech examples for many pages...]

Also, https://www.facebook.com/cnn/videos/10156083409206509/
https://arstechnica.com/security/2016/08/building-a-new-tor-that-withstands-... Forgot to put the link above.
On Fri, Feb 17, 2017 at 12:45:50AM -0500, grarpamp wrote:
https://arstechnica.com/security/2016/08/building-a-new-tor-that-withstands-...
Forgot to put the link above.
Anyone here able to evaluate the merits of the proposed new architectures? Or do we have to wait for the proof after pudding is served?
If you must use tor, it's best to combine it with a good, multi-hop VPN. I prefer i2p (there's now a fully C++ version for those who don't trust Java) and cjdns.

On Fri, Feb 17, 2017 at 12:42 AM, Eugen Leitl <eugen@leitl.org> wrote:
On Fri, Feb 17, 2017 at 12:45:50AM -0500, grarpamp wrote:
https://arstechnica.com/security/2016/08/building-a-new-tor-that-withstands-next-generation-state-surveillance/
Forgot to put the link above.
Anyone here able to evaluate the merits of the proposed new architectures? Or do we have to wait for the proof after pudding is served?
On Sat, Feb 18, 2017 at 09:46:44PM -0800, Steven Schear wrote:
If you must use tor, it's best to combine it with a good, multi-hop VPN. I prefer i2p (there's now a fully C++ version for those who don't trust Java) and cjdns.
Now there's an open door for discussing "trust" :)

C++ might be more performant ("might"), and similarly "might" be more secure. Neither is a certainty, and C++ can certainly be worse on the security front. DJB's approach to software development looked "extremely defensive" from my admittedly minimal reading some years back, and that's plain C. Certain fundamentals will always be required, no matter the implementation language: design by composition rather than inheritance, minimal coupling between modules or libraries at the API boundary, deterministic input validation, and so on. Algorithmic and protocol security are another matter again.

Sorry for the ranting, but just as "character" is hopelessly overloaded in Java, "security" is an overloaded term too, not useful without a lot of qualification. Of course.

Good luck,
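P.S. For illustration only, a toy Python sketch of two of those fundamentals, composition over inheritance and whitelist-style input validation. The class names and the v2 onion-address check are invented for this example, not taken from any project mentioned in this thread.

import re

class Transport:
    """Moves raw bytes; nothing inherits from it, so it can be swapped out freely."""
    def send(self, data: bytes) -> None:
        print("sending %d bytes" % len(data))

class Message:
    """Composition: a Message *has a* Transport rather than *is a* Transport."""
    # Deterministic, whitelist-style check (shape of a v2 .onion address), not a blacklist.
    DEST_RE = re.compile(r"^[a-z2-7]{16}\.onion$")

    def __init__(self, transport: Transport, dest: str, body: bytes):
        if not self.DEST_RE.match(dest):
            raise ValueError("bad destination")   # reject anything not explicitly valid
        self.transport, self.dest, self.body = transport, dest, body

    def deliver(self) -> None:
        self.transport.send(self.body)

Message(Transport(), "3g2upl4pq6kufc4m.onion", b"hello").deliver()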
On Fri, Feb 17, 2017 at 3:42 AM, Eugen Leitl <eugen@leitl.org> wrote:
Anyone here able to evaluate the merits of the proposed new architectures?
There are some websites out there listing / ranking overlay networks in tickmark feature and buzzword bingo tables. I don't know of any project actually sitting down to analyze their overall design and operation at any level of depth, i.e.: "We kinda know what tor's doing with its routing, and how to break it or not, now what about network x's routing?" The sites just tick off 'uses onion / packet / garlic / mix routing', 'uses crypto x', etc, as found on the parent project website, and that's it.
Or do we have to wait for the proof after pudding is served?
Tor has been serving pudding for years, and has a small but relevant number of whitepapers outstanding against it, at least a few of which range from hard to unfixable outside of architecture. Every tool will have some weakness somewhere; some you can live with or fix, some you can't.

Guessing that today's biggest ignored threats to overlays are:
1) GPA's and GAA's (global passive and global active adversaries), operating at the wire level.
2) Who exactly is running the network nodes.
n) What else ???

If that's reasonable, then any project trying to address these should get a closer look. There also needs to be some project doing serious digging into disappearances, shutdowns, and court cases, working the darknet forums and lawyers and dockets, looking for any inexplicably dead canaries arising from each active overlay network.

Reviewing designs... designing against threats... tracking proof... three areas. Do it, get funding, make yourself a star.
On Feb 19, 2017, at 12:59 AM, grarpamp <grarpamp@gmail.com> wrote:
On Fri, Feb 17, 2017 at 3:42 AM, Eugen Leitl <eugen@leitl.org> wrote: Anyone here able to evaluate the merits of the proposed new architectures?
There are some websites out there listing / ranking overlay networks in tickmark feature and buzzword bingo tables.
Got any links you recommend for this? (I haven't googled it yet..)
I don't know of any project actually sitting down to analyze their overall design and operation at any level of depth, i.e.: "We kinda know what tor's doing with its routing, and how to break it or not, now what about network x's routing?" The sites just tick off 'uses onion / packet / garlic / mix routing', 'uses crypto x', etc, as found on the parent project website, and that's it.
Or do we have to wait for the proof after pudding is served?
Tor has been serving pudding for years, and has a small but relevant number of whitepapers outstanding against it, at least a few of which range from hard to unfixable outside of architecture. Every tool will have some weakness somewhere; some you can live with or fix, some you can't.
Guessing that today's biggest ignored threats to overlays are: 1) GPA's and GAA's, operating at the wire level. 2) Who exactly is running the network nodes. n) What else ???
I think it's healthy that at least everyone is aware tor has these weaknesses, and that if a GPA wants to find you, they probably will.. What concerns me are possible weaknesses that fall under your "What else?" category, although /hopefully/ there isn't a lot to that, with all the effort that has been put into showing tor's weak spots. What also concerns me is: are the developers actually engaged in new ideas to address #1 and #2, or are they more worried about the browser bundle??
If that's reasonable, then any project trying to address these should get a closer look.
There also needs to be some project doing serious digging into disappearances, shutdowns, and court cases, working the darknet forums and lawyers and dockets, looking for any unexplainably dead canaries arising from each active overlay network.
Reviewing designs... designing against threats... tracking proof... three areas. Do it, get funding, make yourself a star.
On Sun, Feb 19, 2017 at 12:59:24AM -0500, grarpamp wrote:
Reviewing designs... designing against threats... tracking proof... three areas. Do it, get funding, make yourself a star.
Does theory allow anonymity in the presence of a sufficiently powerful network adversary? What are the disadvantages of better anonymity? (Using a one-time device isn't cheap, and requires finding a device.)
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 02/19/2017 08:04 AM, Georgi Guninski wrote:
On Sun, Feb 19, 2017 at 12:59:24AM -0500, grarpamp wrote:
Reviewing designs... designing against threats... tracking proof... three areas. Do it, get funding, make yourself a star.
Does theory allow anonymity in the presence of a sufficiently powerful network adversary?
I think that depends on the performance of the anonymous networking tool in question. Generally speaking, higher speed and capacity equate to lower security. High-bandwidth, low-latency connected protocols present the worst case scenario; low-bandwidth, high-latency unconnected protocols present the best case scenario.

As an example, providing "normal" HTTP performance on an anonymous overlay network (the Tor scenario) presents a huge attack surface. An adversary who can observe the majority of the physical network infrastructure all at once can use traffic analysis to trace connections from end to end; a lesser adversary could stand up enough routing nodes to become the majority owner of the overlay network and both passively observe and actively manipulate traffic, achieving the same goals as a global observer at a tiny fraction of the cost. (VPN connections from a cloud server farm to numerous remote hosts solve the problem of running centrally controlled nodes that /appear/ to be independently operated.)

At the opposite end of the scale, imagine a network of NNTP servers that carry only PKI-encrypted posts, distributing everything posted to all users. The users' local installations would try their owner's keys against /all/ the messages, writing those that decrypt to an inbox folder. Here, traffic analysis and/or majority ownership of nodes would be more or less useless; one good attack would be to overwhelm the network with a flood of bogus message traffic. Countermeasures to this attack could include a web-of-trust arrangement, with nodes configured to only store and forward messages signed by "trusted" users; at least this would force an adversary to do some work to flood the network with garbage.
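A minimal sketch of that trial-decryption idea, in Python, with PyNaCl sealed boxes chosen purely for illustration. The library choice, names, and two-user setup are assumptions of this example, not part of any deployed system; a real design would also pad every post to a uniform size so ciphertext length leaks nothing about the recipient.

# Requires PyNaCl (pip install pynacl); library choice is an assumption for this sketch.
from nacl.public import PrivateKey, SealedBox
from nacl.exceptions import CryptoError

# Two users; the "network" simply floods every post to everybody.
alice, bob = PrivateKey.generate(), PrivateKey.generate()

def post(recipient_pubkey, plaintext: bytes) -> bytes:
    # Anyone may encrypt to a published key; the ciphertext carries no addressing metadata.
    return SealedBox(recipient_pubkey).encrypt(plaintext)

flood = [
    post(alice.public_key, b"for alice"),
    post(bob.public_key, b"for bob"),
    post(bob.public_key, b"also for bob"),
]

def inbox(private_key, flood):
    # Trial-decrypt every message; keep the ones that open, silently drop the rest.
    box, mine = SealedBox(private_key), []
    for ciphertext in flood:
        try:
            mine.append(box.decrypt(ciphertext))
        except CryptoError:
            pass  # not ours; behaviour is identical either way, so reading leaks nothing
    return mine

print(inbox(bob, flood))    # [b'for bob', b'also for bob']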
What are the disadvantages of better anonymity? (Using a one-time device isn't cheap, and requires finding a device.)
I believe it is reasonable to expect better anonymity to /always/ involve performance hits in latency, bandwidth, and local resource usage, relative to "normal" routing protocols.

In practical terms, today's anonymizing technology /probably/ imposes sufficient delays on the identification of users and who is talking to whom that physical anonymity - i.e. making only brief connections to open wireless routers at locations where one is not seen coming and going - should provide "really good" anonymity. Of course one must prevent the hardware from leaking identifiers via RF or TCP/IP vectors.

Conversely, repeatedly using anonymizing network protocols from one location provides cover against low powered adversaries, while top tier adversaries who by definition will know "who you are and who you communicate with" may be restrained from hostile action by their reluctance to disclose the existence of "sensitive sources and methods." That is, until or unless they find your activities /really/ annoying, and spend a little money / take a little risk setting you up for a series of unfortunate events that won't be attributed to them. :o/

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iQEcBAEBAgAGBQJYqcVoAAoJEECU6c5XzmuqJPEIALZ4u2wDz8rY9f+xP+vlGxLs
+tLwmeQsmT7kdiD0yKlzItyeWk58O0yeeptdihvs/nxGMrlI3MPjeVspzKCQL+03
S3ynjScVtSVv2W96v0HIMOCIBcVMyaOaSsUD89F9yB+RNotg16nze3WvF80HtULp
xiz3E9okFIwN7eQ4+7q0n0tyc+y5HEwArczfDU1hZDj8j4anMxVWhHEzJ6Bwtavg
pePdqh/+d10ocoYXxiE1k0aSahhXWa27xn8dQ9ynBW3oS+tE+Z4eA/XrwZ8oAKez
+3NyGeEAEVNNngeK06mgH1ewdn5AHVMBA86l56kA5t5LUos6yqUhJa2+MQ4EhQk=
=0gju
-----END PGP SIGNATURE-----
it's healthy that at least everyone is aware tor has these weaknesses
"overlays" means any given overlay, or all of them, not exclusively tor.
are the developers actually engaged in new ideas to address #1 and #2
The overlays with large user bases in production use today all originated from earlier schools of thought, formed well before Snowden publicly proved the threats above once and for all. This doesn't mean those schools are invalid or didn't have such adversaries well in mind. It's simply that today, the design whitepapers of any overlay network (certainly any new network) will be expected to devote pages to whatever ability they have to nullify those threats. In other words, people will be actively looking for those abilities as features now.
Generally speaking, higher speed and capacity equates to lower security. High bandwidth, low latency connected protocols present the worst case scenario; low bandwidth, high latency unconnected protocols present the best case scenario.
While generally true as a historical summary, this isn't necessarily so. It seems possible to build a low-latency, high-bandwidth overlay that keeps a GPA from observing who is talking to whom, and when: just babble all the time while idle, and yield whenever some other traffic needs to talk through you. GAA's are a totally different bitch and contain many different possible threats under one acronym; the historical summary probably carries more weight against those types. It's hard to obtain high bandwidth or low latency over a low-bandwidth or high-latency network (unless you parallelize the low-bandwidth links), while running low-bandwidth or high-latency traffic over a high-bandwidth, low-latency network could be interesting.
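A toy sketch of that "babble while idle" idea in Python, using made-up names and only the standard library; encryption, link framing, and circuit handling are all omitted, so treat it as an illustration of constant-rate cover traffic rather than a design:

import os, queue, time

CELL = 512                    # fixed cell size so real and dummy cells look identical on the wire
outbound = queue.Queue()      # application data waiting to go out

def pad(payload: bytes) -> bytes:
    # truncate or pad every real payload to exactly one cell
    return payload.ljust(CELL, b"\x00")[:CELL]

def babble(send, ticks, interval=0.05):
    # Emit exactly one cell per interval: real traffic when queued, random chaff otherwise.
    # A wire-level observer sees the same constant rate either way.
    for _ in range(ticks):
        try:
            cell = pad(outbound.get_nowait())   # yield to real traffic...
        except queue.Empty:
            cell = os.urandom(CELL)             # ...otherwise keep babbling
        send(cell)
        time.sleep(interval)

outbound.put(b"GET /index.html")
babble(lambda cell: print("sent", len(cell), "bytes"), ticks=5)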
NNTP
... does a pretty poor job of hiding the original poster's injection event before it's had a chance to cascade far enough through the network. All depends on your needs.
participants (7)

- Eugen Leitl
- Georgi Guninski
- grarpamp
- John Newman
- Steve Kinney
- Steven Schear
- Zenaan Harkness