and not a single Tor hacker was surprised...
Scientists detect “spoiled onions” trying to sabotage Tor privacy network
Rogue Tor volunteers perform attacks that try to degrade encrypted connections.
by Dan Goodin - Jan 21 2014, 2:42pm PST
http://arstechnica.com/security/2014/01/scientists-detect-spoiled-onions-try...

or reason #16256 to crypto end to end...

---

Computer scientists have identified almost two dozen computers that were actively working to sabotage the Tor privacy network by carrying out attacks that can degrade encrypted connections between end users and the websites or servers they visit.

The "spoiled onions," as the researchers from Karlstad University in Sweden dubbed the bad actors, were among the 1,000 or so volunteer computers that typically made up the final nodes that exited the Tor—short for The Onion Router—network at any given time in recent months. Because these exit relays act as a bridge between the encrypted Tor network and the open Internet, the egressing traffic is decrypted as it leaves. That means operators of these servers can see traffic as it was sent by the end user. Any data the end user sent unencrypted, as well as the destinations of servers receiving or responding to data passed between an end user and server, can be monitored—and potentially modified—by malicious volunteers. Privacy advocates have long acknowledged the possibility that the National Security Agency and spy agencies across the world operate such rogue exit nodes.

The paper—titled Spoiled Onions: Exposing Malicious Tor Exit Relays—is among the first to document the existence of exit nodes deliberately working to tamper with end users' traffic (a paper with similar findings is here). Still, it remains doubtful that any of the 25 misconfigured or outright malicious servers were operated by NSA agents. Two of the 25 servers appeared to redirect traffic when end users attempted to visit pornography sites, leading the researchers to suspect they were carrying out censorship regimes required by the countries in which they operated. A third server suffered from what researchers said was a configuration error in the OpenDNS server.

The remainder carried out so-called man-in-the-middle (MitM) attacks designed to degrade encrypted Web or SSH traffic to plaintext traffic. The servers did this by using the well-known sslstrip attack designed by researcher Moxie Marlinspike or another common MitM technique that converts unreadable HTTPS traffic into plaintext HTTP. Often, the attacks involved replacing the valid encryption key certificate with a forged certificate self-signed by the attacker.

"All the remaining relays engaged in HTTPS and/or SSH MitM attacks," researchers Philipp Winter and Stefan Lindskog wrote. "Upon establishing a connection to the decoy destination, these relays exchanged the destination's certificate with their own, self-signed version. Since these certificates were not issued by a trusted authority contained in TorBrowser's certificate store, a user falling prey to such a MitM attack would be redirected to the about:certerror warning page."
From Russia with love
The 22 malicious servers were among about 1,000 exit nodes that were typically available on Tor at any given time over a four-month period. (The precise number of exit relays regularly changes as some go offline and others come online.) The researchers found evidence that 19 of the 22 malicious servers were operated by the same person or group of people. Each of the 19 servers presented forged certificates containing the same identifying information. The virtually identical certificate information meant the MitM attacks shared a common origin. What's more, all the servers used the highly outdated version 0.2.2.37 of Tor, and all but one of the servers were hosted in the network of a virtual private system provider located in Russia. Several of the IP addresses were also located in the same net block.

The researchers caution that there's no way to know that the operators of the malicious exit nodes are the ones carrying out the attacks. It's possible the actual attacks may be carried out by the ISPs or network backbone providers that serve the malicious nodes. Still, the researchers discounted the likelihood of an upstream provider of the Russian exit relays carrying out the attacks for several reasons. For one, the relays relied on a diverse set of IP address blocks, including one based in the US. The relays also frequently disappeared after they were flagged as untrustworthy, the researchers noted.

The researchers identified the rogue volunteers by scanning for exit relays that replaced valid HTTPS certificates with forged ones. Attacks that instead present forged certificates validly signed by a compromised certificate authority, such as the one used in 2011 to monitor 300,000 Gmail users, wouldn't be detected using the methods devised by the researchers. The researchers don't believe the malicious nodes they observed were operated by the NSA or other government agencies.

"Organizations like the NSA have read/write access to large parts of the Internet backbone," Karlstad University's Winter wrote in an e-mail. "They simply do not need to run Tor relays. We believe that the attacks we discovered are mostly done by independent individuals who want to experiment."

While the confirmation of malicious exit nodes is important, it's not particularly surprising. Tor officials have long warned that Tor does nothing to encrypt plaintext communications once they leave the network. That means ISPs, remote sites, VPN providers, and the Tor exit relay itself can all see the communications that aren't encrypted by end users and the parties they communicate with. Tor officials have long counseled users to rely on HTTPS, e-mail encryption, or other methods to ensure that traffic receives end-to-end encryption.

The researchers have proposed a series of updates to the "Torbutton" software used by most Tor users. Among other things, the proof-of-concept software fix would use an alternative exit relay to refetch all self-signed certificates delivered over Tor. The software would then compare the digital fingerprints of the two certificates. It's feasible that the changes might one day include certificate pinning, a technique for ensuring that a certificate presented by Google, Twitter, and other sites is the one authorized by the operator rather than a counterfeit one. Several hours after this article went live, Winter published a blog post titled What the "Spoiled Onions" paper means for Tor users.
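The detection idea in the paper and the proposed Torbutton fix come down to the same check: fetch a site's certificate over more than one circuit and compare what comes back. The sketch below is not the researchers' scanner or the Torbutton patch, just a minimal illustration of that check; it assumes a local Tor SOCKS proxy on 127.0.0.1:9050, the third-party PySocks package, and a hypothetical target host.

```python
# Minimal sketch: compare the TLS certificate a host presents over two
# different Tor circuits. Different SOCKS credentials ask Tor (with its
# default IsolateSOCKSAuth behavior) to put the streams on separate circuits.
import hashlib
import ssl
import socks  # PySocks, assumed installed: pip install PySocks

def cert_fingerprint_via_tor(host, port=443, circuit_label="a"):
    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050, username=circuit_label, password="x")
    s.settimeout(60)
    s.connect((host, port))
    # Verification is disabled on purpose: we want the certificate actually
    # presented, even if a malicious exit swapped in a self-signed forgery.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with ctx.wrap_socket(s, server_hostname=host) as tls:
        der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

host = "example.com"  # hypothetical target
fp1 = cert_fingerprint_via_tor(host, circuit_label="circuit-one")
fp2 = cert_fingerprint_via_tor(host, circuit_label="circuit-two")
print("match" if fp1 == fp2 else "MISMATCH: one of the exits may be tampering")
```

A mismatch is a hint rather than proof; sites behind load balancers or CDNs can legitimately rotate certificates, so any real tool needs a trusted baseline or whitelist to compare against.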
To verify though, this has no effect on someone using tor and staying on .onion sites or if you are using https end-to-end right?

Honestly, if you use Tor and don't use SSL that seems like laziness to me and deserves to be caught.
On Wed, Jan 22, 2014 at 7:12 AM, Kelly John Rose <iam@kjro.se> wrote:
To verify though, this has no effect on someone using tor and staying on .onion sites or if you are using https end-to-end right?
correct.
Honestly, if you use Tor and don't use SSL that seems like laziness to me and deserves to be caught.
i would agree, and i would also show some sympathy towards the unsuspecting. anything cypherpunks can do to ensure end to end crypto everywhere by default is another MitM and eavesdropping attack denied....

(someone should write more about using client-side certificates as a method to thwart SSL MitM with a CA signing transparent proxy adversary upstream. aka BlueCoat with "enterprise certificate" injected or private key pilfer.)

best regards,
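Since coderman invites it, here is a minimal sketch of the server side of that idea, with hypothetical file names: a TLS context that only accepts client certificates issued by the site's own signer. A CA-forging transparent proxy can show the client a convincing server certificate, but it cannot produce the client's private-key signature over its own handshake, so it cannot complete a mutually authenticated session toward the real server.

```python
# Sketch only (file names are hypothetical): require client certificates
# signed by the site's own signer, so a transparent MitM proxy with a forged
# CA certificate cannot quietly terminate and re-originate the TLS session.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server_cert.pem", keyfile="server_key.pem")
ctx.load_verify_locations(cafile="site_client_ca.pem")  # only the site's own signer is trusted
ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails unless the client proves possession of its key
```

Detecting the failure on the client side and telling the user what actually happened is the harder part, which is roughly what the later discussion in this thread is about.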
On Wednesday, 22 January 2014 07:44:16, coderman wrote:
(someone should write more about using client-side certificates as a method to thwart SSL MitM with a CA signing transparent proxy adversary upstream. aka BlueCoat with "enterprise certificate" injected or private key pilfer.)
About this. Is there a way to serve 2 (or more) certificates for a given HTTPS server/domain? What I would like to have is a way to:
- serve a proper, vanilla SSL certificate bought from some provider for the general public accessing my service;
- serve a different cert (for example, using MonkeySphere) for those that do not trust (and with good reasons) major CA's.

This would have to work for the *same* domain on the *same* webserver. I haven't yet seen a way to do this, so this might need implementing, but maybe somebody here has heard about something along these lines?

-- Pozdr rysiek
Hi,
About this. Is there a way to serve 2 (or more) certificates for a given HTTPS server/domain? What I would like to have is a way to: - serve a proper, vanilla SSL certificate bought from some provider for the general public accessing my service; - serve a different cert (for example, using MonkeySphere) for those that do not trust (and with good reasons) major CA's.
This would have to work for the *same* domain on the *same* webserver. I haven't yet seen a way to do this, so this might need implementing, but maybe somebody here has heard about something along these lines?
Like the Sovereign or TACKed keys perhaps?

<https://www.eff.org/deeplinks/2011/11/sovereign-keys-proposal-make-https-and-email-more-secure>
<http://arstechnica.com/security/2012/05/ssl-fix-flags-forged-certificates-before-theyre-accepted-by-browsers/>

-- Katana
On Wednesday, 22 January 2014 18:47:12, katana wrote:
Hi,
About this. Is there a way to serve 2 (or more) certificates for a given HTTPS server/domain? What I would like to have is a way to: - serve a proper, vanilla SSL certificate bought from some provider for the general public accessing my service; - serve a different cert (for example, using MonkeySphere) for those that do not trust (and with good reasons) major CA's.
This would have to work for the *same* domain on the *same* webserver. I haven't yet seen a way to do this, so this might need implementing, but maybe somebody here has heard about something along these lines?
Like the Sovereign or TACKed keys perhaps?
<https://www.eff.org/deeplinks/2011/11/sovereign-keys-proposal-make-https-and-email-more-secure>
<http://arstechnica.com/security/2012/05/ssl-fix-flags-forged-certificates-before-theyre-accepted-by-browsers/>
Thanks! -- Pozdr rysiek
On Wed, Jan 22, 2014 at 06:05:51PM +0100, rysiek wrote:
On Wednesday, 22 January 2014 07:44:16, coderman wrote:
(someone should write more about using client-side certificates as a method to thwart SSL MitM with a CA signing transparent proxy adversary upstream. aka BlueCoat with "enterprise certificate" injected or private key pilfer.)
About this. Is there a way to serve 2 (or more) certificates for a given HTTPS server/domain? What I would like to have is a way to: - serve a proper, vanilla SSL certificate bought from some provider for the general public accessing my service; - serve a different cert (for example, using MonkeySphere) for those that do not trust (and with good reasons) major CA's.
This would have to work for the *same* domain on the *same* webserver. I haven't yet seen a way to do this, so this might need implementing, but maybe somebody here has heard about something along these lines?
How secure is Bitcoin's ECDSA?

My thought is that doing a *new* encrypted transport (or re-purposing SSL) and using the exact same ECDSA keys that are already being used as Bitcoin addresses would make it more likely that an attacker would just go after the money rather than waste time on MITM, and it's a lot more likely that average users would care to upgrade. This, I conjecture, would result in a generally much stronger deployment of crypto to end-users.
On Jan 23, 2014 6:13 AM, "rysiek" <rysiek@hackerspace.pl> wrote:
About this. Is there a way to serve 2 (or more) certificates for a given HTTPS server/domain? What I would like to have is a way to:
- serve a proper, vanilla SSL certificate bought from some provider for the general public accessing my service;
- serve a different cert (for example, using MonkeySphere) for those that do not trust (and with good reasons) major CA's.

This would have to work for the *same* domain on the *same* webserver. I haven't yet seen a way to do this, so this might need implementing, but maybe somebody here has heard about something along these lines?
There are a lot of things like this, but the big question is: how does the user indicate to you which cert they want?

If it was via pubca.x.com or privca.x.com - that's easy just put the different certs in the different sites. But otherwise, you have to rely on quirks.

TLS allows you to send different certs to different users, but this is based off the handshake and is for algorithm agility - not cert chaining. EG I send ECDSA signed certs if I know you can handle them, and RSA if not.

You can also send two leaf certs, two cert chains, a cert and garbage, a cert and a stego message - whatever. This is the closest to what you want, but this is undefined behavior. Browsers may build a valid chain off the public CA, and monkeysphere off the private* and it works perfect... Or the browser may pop an invalid cert warning. It's undefined behavior. You'll have to test, see what happens, and hope chrome doesn't break when it updates every week.

-tom

* I realize monkey sphere doesn't use a private CA, just using it as an example.
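To make the "algorithm agility" point concrete, here is a rough sketch using Python's ssl module, with hypothetical file names. It is not a recommended deployment: whether one context really keeps both leaf certificates depends on the OpenSSL build underneath, which is exactly the "test and hope" caveat above.

```python
# Sketch of serving two leaf certificates from one endpoint and letting the
# client's offered cipher suites decide which comes back (a TLS <= 1.2
# mechanism, since 1.2 cipher suites still name the authentication algorithm).
# File names are hypothetical; OpenSSL keeps at most one certificate per key
# type in a context, so loading an RSA chain and an EC chain can coexist.
import ssl

# server side: an RSA chain (the CA-bought "vanilla" cert) and an ECDSA chain
# (the alternative cert, e.g. one published out of band)
srv = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
srv.load_cert_chain("vanilla_rsa_cert.pem", "vanilla_rsa_key.pem")
srv.load_cert_chain("alt_ecdsa_cert.pem", "alt_ecdsa_key.pem")

# client side: a "power user" agent that only offers ECDSA-authenticated
# suites, steering the server toward the alternative certificate
cli = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
cli.maximum_version = ssl.TLSVersion.TLSv1_2
cli.set_ciphers("ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384")
cli.check_hostname = False
cli.verify_mode = ssl.CERT_NONE  # validation handed to the alternative trust path instead
```

Note that an ordinary browser also offers ECDSA suites, so splitting the two populations cleanly would need server-side cipher preferences or some other signal; it is a quirk, as Tom says, not a clean mechanism.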
Hi there,

On Thursday, 23 January 2014 00:47:48, Tom Ritter wrote:
There are a lot of things like this, but the big question is: how does the user indicate to you which cert they want?
Can't they just get both certs and accept the one that works for them? I.e. John Doe would just accept the "vanilla" SSL cert; Joe R. Hacker's browser would have these blocked, but could accept a Monkeysphere-based one.
If it was via pubca.x.com or privca.x.com - that's easy just put the different certs in the different sites.
The idea is to have the same site.
But otherwise, you have to rely on quirks.
Ah, yes, quirks. ;)
TLS allows you to send different certs to different users, but this is based off the handshake and is for algorithm agility - not cert chaining. EG I send ECDSA signed certs if I know you can handle them, and RSA if not.
Oh, this is good. Differentiating between "vanilla" certs and "advanced/really secure" Monkeysphere-based certs via ciphers is neat. Thanks for the idea!
You can also send two leaf certs, two cert chains, a cert and garbage, a cert and a stego message - whatever. This is the closest to what you want, but this is undefined behavior.
Mhm.
Browsers may build a valid chain off the public CA, and monkeysphere off the private* and it works perfect... Or the browser may pop an invalid cert warning. It's undefined behavior. You'll have to test, see what happens, and hope chrome doesn't break when it updates every week.
So, sticking to the ciphersuite hack, which is elegant and bound to work. Thanks a bunch. -- Pozdr rysiek
On 01/22/14 16:44, coderman wrote:
On Wed, Jan 22, 2014 at 7:12 AM, Kelly John Rose <iam@kjro.se> wrote:
To verify though, this has no effect on someone using tor and staying on .onion sites or if you are using https end-to-end right?
correct.
Honestly, if you use Tor and don't use SSL that seems like laziness to me and deserves to be caught.
i would agree, and i would also show some sympathy towards the unsuspecting. anything cypherpunks can do to ensure end to end crypto everywhere by default is another MitM and eavesdropping attack denied....
(someone should write more about using client-side certificates as a method to thwart SSL MitM with a CA signing transparent proxy adversary upstream. aka BlueCoat with "enterprise certificate" injected or private key pilfer.)
Dear coderman,

Client certificates are part of my answer to MitM attacks. The other part is to forget about third-party CA's.

1.
The trick is to have each (web-)site sign the client certificates for their own users. Users sign up for a site by creating a fresh public/private keypair, inventing an account name, and creating a CSR containing just that: the account name and the public key.

The site's own Certificate Signer (local authority) checks to see if the user's chosen account name is unique and if so signs the certificate and returns it in the same response.

The site's web server is configured to only accept their own client certificates signed by their own Signer. Each site only accepts their own certificates. In addition to that, the server sports a server-certificate that has been signed by the site's Signer.

When the user connects to the site, the user agent first connects without presenting any client certificates. Ie, anonymously. The agent will then offer the user to log in at the site. But it only offers those certificates that have been signed by the same local authority.

The client certificate becomes the identity of the client, while the site's Certificate Signer Root Certificate becomes the identity of the site.

The MitM protection so far is all-or-nothing. The user can only be MitM'ed if Mallory sits in between all the time, right from the first connection. However, there are several mitigation strategies.

2.
The first mitigation strategy is for the site-owner to publish the Site's Local Signer Root Certificate in the DNSSEC-record. I realise that "true cypherpunks" don't like centralised systems but bear with me, here it is part of the solution.

The user agent does a DNSSEC lookup and validates the signature tree up to the pinned DNSSEC root key. This limits MitM attacks to those who have a copy of that root key, ie, state-level spooks. This lookup needs only be done once, before the first connect.

The second mitigation strategy is an independent global append-only log of created client certificates. Whenever a user agent receives a certificate, it submits it to this global log. Every once in a while, the agent queries the log for all certificates bearing the account name that the user has chosen. There must be exactly one answer.

To improve security at first contact, the agent queries the log for the expected value of the site's Certificate Signer Root certificate. There must be only one.

This list must be cryptographically protected against tampering. Ideally it is a distributed, decentralised global effort. The downside of this second approach is that it still needs to be designed; the DNSSEC-approach can be used right now.

The combination of DNSSEC and the Log makes it even more robust. The DNSSEC effectively specifies the intentions of the site owner; the log measures the reality. These two should match.

3.
So far, I haven't mentioned Tor. When you use this protocol, you are protected against spoiled onions. The exit nodes won't have access to any site's private key, so they cannot fake a certificate that matches the client certificates.

When an exit node creates a fake certificate for a site, the user agent either interprets that as a new site (and offers the user to create an account), or it detects that the server certificate does not match the certificate that it has remembered for this site and raises an alarm.

As users change Tor exit nodes regularly, there can't be a MitM at each connection.

4.
As every connection is encrypted and authenticated, Tor traffic does not stand out from non-Tor traffic. Even if people use this protocol to connect to Facebook and spill their lives there, they are helping activists to hide their traffic better.

5.
Using this protocol, we can create an introduction-service that lets total strangers exchange and validate each other's public keys. And from there bootstrap other secure channels.

Coderman (and others), does this appeal to you? See http://eccentric-authentication.org/ (via Tor, if you want) to read more. I'd love to hear comments.

With regards, Guido Witmond.
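For readers who want the signup step (point 1) in concrete terms, here is a compressed sketch using the Python cryptography package. It is not Guido's implementation; the account name, validity period, and the in-memory "uniqueness check" are all stand-ins.

```python
# Sketch of the signup flow described above: the client makes a fresh keypair
# and a CSR carrying only an account name; the site's own signer (no
# third-party CA) checks uniqueness and returns a signed client certificate.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# client side: fresh keypair, CSR containing just the chosen account name
client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (x509.CertificateSigningRequestBuilder()
       .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice")]))
       .sign(client_key, hashes.SHA256()))

# site side: the local signer checks the name is unused, then signs and returns
taken = set()  # stand-in for the site's account database
signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # demo-only signer key
signer_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.org Local Signer")])
account = csr.subject.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
assert account not in taken, "account name already in use"
taken.add(account)
client_cert = (x509.CertificateBuilder()
               .subject_name(csr.subject)
               .issuer_name(signer_name)
               .public_key(csr.public_key())
               .serial_number(x509.random_serial_number())
               .not_valid_before(datetime.datetime.utcnow())
               .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
               .sign(signer_key, hashes.SHA256()))
print(client_cert.public_bytes(serialization.Encoding.PEM).decode())
```

The web server is then configured to require client certificates and to trust only this signer; points 2 and 3 above add the DNSSEC record and the append-only log so that a first contact through a hostile exit can be caught.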
On Sat, Jan 25, 2014 at 7:53 AM, Guido Witmond <guido@witmond.nl> wrote:
... Client certificates are part of my answer to MitM attacks.
The other part is to forget about third-party CA's.
my heart a twitter already! (these are the key points, and you hit them first.)
See http://eccentric-authentication.org/ to read more.
I'd love to hear comments.
i've come across this on other lists, and will one day provide a better response. my initial feedback relates to:

- supported suites. NULL encryption is still a valid TLS mode!
- end-point security (each site acting as a CA is like every bitcoin user acting as a bank. you've elevated the threat model on the unsuspecting.)
- Namecoin and other decentralized alternatives to DNSSEC.

best regards,
On 01/25/14 20:09, coderman wrote:
On Sat, Jan 25, 2014 at 7:53 AM, Guido Witmond <guido@witmond.nl> wrote:
... Client certificates are part of my answer to MitM attacks.
The other part is to forget about third-party CA's.
my heart a twitter already!
(these are the key points, and you hit them first.)
Lurking at several cryptography mailing lists, gave me some hints :-)
See http://eccentric-authentication.org/ to read more.
I'd love to hear comments.
i've come across this on other lists, and will one day provide a better response. my initial feedback relates to:
- supported suites. NULL encryption is still a valid TLS mode!
1st. Although NULL encryption is a problem, I expect that most crypto-toolkit developers will disable these in their default configuration. From there it will bubble up the stack into the distributions. That's a lesson that the NSA has taught us: make defaults safe!

2nd. There is nothing in eccentric authentication that specifies one branch of public key mathematics over another. I deliberately leave the choice of either RSA, EC, or others out. As I'm not a cryptographer, I can't make that decision. I do specify what I expect the protocol needs to accomplish. It's up to the experts to match the appropriate parts. My prototype used RSA/TLS/DNSSEC.
- end-point security (each site acting as a CA is like every bitcoin user acting as a bank. you've elevated the threat model on the unsuspecting.)
Not really. Each site signs only for itself. There is no need to trust anything else than your own systems (or the hoster who does the work for you). That trust level is already needed for every current web site.

In fact, with a proper setup, the Root certificate's private key for the site does not live at the server, for signing, it uses a subRoot.

Now when the site gets hacked, the hackers can create more accounts for themselves or invalidate other peoples' accounts. But the attackers can never impersonate any of the sites user accounts at other sites, as these use their own signing key. I believe it is more safe than hashing passwords.

The more worrisome part is the end-users' computers. The Posix model is not designed to protect users against themselves, although every user expects that to be the case. Things like microkernels, Capsicum, Qubes-OS, Genode, POLA, and least-authority designs are in DIRE need.
- Namecoin and other decentralized alternatives to DNSSEC.
DNSSEC might be just as difficult as IPsec, or its private key might have already been leaked to NSA due to compromised hardware. We need to have alternatives. The eccentric-protocol can use other global unique naming schemes. The requirements are: easy and cheap enough so every website can get a unique and human memorize-able name. Namecoin might fit the requirements, or GNS (GnuNet). I hope this sparks the curiosity. With regards, Guido.
On Sun, Jan 26, 2014 at 9:44 AM, Guido Witmond <guido@witmond.nl> wrote:
... Although NULL encryption is a problem, I expect that most crypto-toolkit developers will disable these in their default configuration... There is nothing in eccentric authentication that specifies one branch of public key mathematics over another. I deliberately leave the choice of either RSA, EC, or others out. As I'm not a cryptographer, I can't make that decision. I do specify what I expect the protocol needs to accomplish. It's up to the experts to match the appropriate parts. My prototype used RSA/TLS/DNSSEC
fair enough; my position is that this is insufficient and passes the buck. many don't agree. said another way: security is everyone's responsibility! everyone should encourage and enforce strong defaults, strong suites, and accept no less. (i pay bribes in bitcoin to adopt this position ;)
In fact, with a proper setup, the Root certificate's private key for the site does not live at the server, for signing, it uses a subRoot.
this is better; although perhaps more cumbersome key-management wise. good key management is always cumbersome, it seems!
Now when the site gets hacked, the hackers can create more accounts for themselves or invalidate other peoples' accounts. But the attackers can never impersonate any of the sites user accounts at other sites, as these use their own signing key. I believe it is more safe than hashing passwords.
absolutely better than storing hashed passwords. how many people generate long, random, unique passwords for every site?
The eccentric-protocol can use other global unique naming schemes. The requirements are: easy and cheap enough so every website can get a unique and human memorize-able name. Namecoin might fit the requirements, or GNS (GnuNet).
GNet NS is locally scoped to each peer as of my understanding, so not quite a strong global unique naming scheme. i do believe on further reading that Namecoin would work, and am looking at this further... thanks for the responses and clarifications! best regards,
On Saturday, 25 January 2014 16:53:19, Guido Witmond wrote:
Coderman (and others), does this appeal to you?
That makes sense. I'll have to look into it more.
See http://eccentric-authentication.org/ (via Tor, if you want) to read more.
Thanks. -- Pozdr rysiek
To be fair, literally no one who works on Tor or Tor-related projects is surprised. This is addressed at nearly every talk, nearly every workshop, and people are pretty open about it as a feature of the landscape. That most of these are low-speed exits is pretty telling.

Most bad exits are designed to inject/replace ads, which is pretty stupid. If you catch someone doing this, share their ad code so they can be reported to ad networks and lose the money they were trying to make.

However, the balance of probability has it that any given user is likely to be fine. Enforce https and don't accept random certificate errors. If you're getting a certificate error, click New Identity and you'll find that most magically disappear (and those that don't are typically issues with the website itself - don't patronize poorly-secured websites).

What do I mean by "balance of probability"? Well, if you use Tor, there's about a 90% chance you'll pass through an exit run by someone I know, and a quite good chance that you'll specifically exit through a Torservers node. My point being that the only real answer to this problem is network diversity. If you're concerned about "spoiled onions," run a node! Don't have the time/money/interest? Donate to Torservers or Nos Oignons or Noisetor so that they can run more exit nodes.

~Griffin
participants (8)
- coderman
- Griffin Boyce
- Guido Witmond
- katana
- Kelly John Rose
- rysiek
- Tom Ritter
- Troy Benjegerdes