For the interim, the solution might be an extension that, besides pushing PFS (and alerting when it doesn't work), would cache the cert hashes (or more) and allow a browser (e.g. Firefox) to run with all CAs untrusted, then do verification on a per-site basis. The big hole in web page security is that beyond the page itself there is extra content like JavaScript and CSS. So, for example, https://amazon.com might be accepted, but https://images-na.cdn.azws.com is in the background, ready to rewrite the entire page. And the page will be broken until you manually "view source", open each link, and allow the cert/CA/page for the JavaScript/CSS/images/metadata.
On Thu, Jul 25, 2013 at 09:01:46PM -0400, tz wrote:
> For the interim, the solution might be to have an extension that besides pushing PFS (and alerting when it doesn't work) would cache the Cert hashes or more and allow a browser (e.g. firefox) to run with all CAs as untrusted, but then do a verification on a per-site basis.
> The big hole in web page security is that there is the web page, then there is the extra info like javascript and css.
> So, for example, https://amazon.com might be accepted, but https://images-na.cdn.azws.com is in the background ready to rewrite the entire page.
> And the page will be broken until you manually "view source" and open a link and allow the cert/CA/page for the javascript/css/images/metadata.
I've run my primary browser with no trusted CAs, manually TOFUing certificates for sites, for months on end. It's slightly easier than "view source" to use Control-Shift-K (in Firefox) and reload the page, then watch for resource load errors in the console. Some fairly small adjustments to browser UIs would make this use case much easier.

The biggest problem is that Firefox's SSL exception implementation only allows a single certificate per hostname, so load-balanced hosts such as p.twimg.com which toggle between multiple valid certificates are annoying. (I also VPN this browser through a fairly trusted datacenter, so I'm not TOFUing over the local WLAN, of course.)

It's fairly helpful to use SSL errors as a firewall to help me avoid accidentally loading sites whose TOS I refuse to accept, such as G+ and Facebook. It also functions as a primitive adblock for some sites, since you don't have to accept the certificates for doubleclick.net et al.

-andy
Sorry for being slow, but what is TOFUing?

On Fri, Jul 26, 2013 at 8:27 AM, Andy Isaacson <adi@hexapodia.org> wrote:
> I've run my primary browser with no trusted CAs, manually TOFUing certificates for sites, for months on end. It's slightly easier than "view source" to use control-shift-K (in Firefox) and reload the page,
Trust On First Use. It's a trust model where you trust the key the first time you grab it and keep using that, instead of relying on a certificate authority or anything like that. It's used for SSH, iirc, though I could be wrong. The idea behind it is that unless the attacker performs a MITM the first time and every time thereafter, you'll at least notice the attack, and likely prevent it.

I was going to provide a Wikipedia link, but I couldn't seem to find one, other than this one hidden in a user page: https://en.wikipedia.org/wiki/User:Dotdotike/Trust_Upon_First_Use

On 07/26/2013 09:06 AM, tz wrote:
Sorry for being slow, but what is TOFUing?
On Fri, Jul 26, 2013 at 8:27 AM, Andy Isaacson <adi@hexapodia.org <mailto:adi@hexapodia.org>> wrote:
>> I've run my primary browser with no trusted CAs, manually TOFUing certificates for sites, for months on end. It's slightly easier than "view source" to use control-shift-K (in Firefox) and reload the page,
> TOFU... It's used for SSH iirc, though I could be wrong.
No, you're right. That single, assumed-legitimate, first-introduced key is trusted and used for all subsequent encounters. Any later unvalidated change in key would indicate suspect brokenness. Authentication of that former key, via any particular mechanism, is a secondary bonus. For instance, you may first check that mail to a given fingerprint gets you to the mail/context you expect. Then a web search of that fingerprint may yield independent bloggers affirming their similar experience, and some reasonable trust of that key is established. Though it is encouraged that such lone keys be signed by some web of trust that you can then reach. This new environment of weak CAs will, one hopes, yield a stronger, more articulated sense of what we are all signing for each other.
Perhaps the best way would be an indicator that PFS is active. Think EV certs - they push that blue is safer than green. If Chrome and Firefox and the others would simply try PFS first and indicate in a conspicuous way, like EV certs, that "you are safe but could be safer" vs. "safest possible", it would help push adoption.
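Such an indicator only needs to know whether the negotiated cipher suite used an ephemeral key exchange. A rough sketch, using Python's standard `ssl` module: the `offers_pfs` heuristic over OpenSSL-style cipher names is my own simplification, not an official classification.

```python
import socket
import ssl

def offers_pfs(cipher_name):
    """Heuristic: does this cipher suite name imply an ephemeral key
    exchange? TLS 1.3 suites (names starting "TLS_") are always
    ephemeral; for TLS 1.2 and below, look for (EC)DHE key exchange."""
    return (cipher_name.startswith("TLS_")
            or "ECDHE" in cipher_name
            or cipher_name.startswith("DHE-")
            or "EDH" in cipher_name)

def probe(host, port=443):
    """Connect, complete the TLS handshake, and report the negotiated
    cipher plus whether it provides forward secrecy."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name = tls.cipher()[0]  # e.g. "ECDHE-RSA-AES128-GCM-SHA256"
            return name, offers_pfs(name)
```

A browser extension would do the equivalent classification on the connection's security info and color the indicator accordingly.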
What problem are we solving, exactly? No eavesdropping is simple enough. No MITM is not preventable without information known to come from the intended source.

Presently we have "all-knowers" called certificate authorities. We trust them as a collective, not individually. Their security depending on their collective is a fatal mistake. The idea of an all-knower is very, very convenient for the design of these systems. Yet, is it required? Surely there must be a distributed, not decentralized*, approach that works to spread information with certainty.

The problem then lies with the link between the security record (signature, proof of private key) and the name record (DNS). Simply signing the DNS records would be enough, but then the DNS records must be provided properly. This is moving the problem - and moving it to the DNS provider, which also suffers from the centralization weakness that persists in such decentralized arrangements.

Having a DHT in which several known friends are anchored might allow that DHT to "vote" on the subject. Every node accumulates the votes from its trusted neighbors and goes with what the majority agrees on. Heuristic, but typically functional. And we swat two flies with one blow: SDNS (Secure Distributed Name Server), a mapping from name to signed machine location data. In this future the overhead for security is as big as the signature on the SDNS record, plus the encryption and decryption of the data itself.

--Lewis

*the current approach defies the boundary between centralized and decentralized. I believe that, in practice, we could better describe it as centralized.
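The neighbor-voting idea above can be sketched as a pure function. Everything here is hypothetical: `resolve_by_vote`, the neighbors-as-callables shape, and the strict-majority rule are just one way to make the heuristic concrete.

```python
from collections import Counter

def resolve_by_vote(name, neighbors):
    """Ask each trusted neighbor for its record of `name` and accept
    only an answer that a strict majority of neighbors agrees on.
    Each neighbor is modeled as a callable: name -> signed record."""
    votes = Counter(lookup(name) for lookup in neighbors)
    record, count = votes.most_common(1)[0]
    # Require count > len(neighbors) / 2, i.e. a strict majority;
    # otherwise refuse to resolve rather than guess.
    return record if count * 2 > len(neighbors) else None
```

With two honest anchors and one liar, the honest record wins; with a split vote, the resolver returns nothing, which matches the "heuristic, but typically functional" caveat.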
There are two problems.

First, CA and/or TOFU, or notaries, or some other kind of acceptance of the certificates. That is a large issue, and the CA model is broken. It would be even more convenient not to bother with any authentication, encryption, and passwords at all, but if we are going to bother with it, it may as well be actually secure. We need not trust the CAs collectively - the difficulty comes when there are lots of different certs for the same site, but I might trust a Google domain cert signed with a Google signing cert over one signed by DigiNotar.

Second, ephemeral keys are generally not escrowed, but, if I understand correctly, if the key exchange does not have perfect forward secrecy, the traffic is recorded, and the original private keys are exposed (subpoenaed, hacked, broken), then any session is exposed as well. Note that the exposure of one private key unlocks ALL such recorded sessions. This would apply even if I generate my own keypair and private cert.

On Sat, Jul 27, 2013 at 5:56 PM, Lodewijk andré de la porte <l@odewijk.nl> wrote:
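The forward-secrecy point can be seen in a toy Diffie-Hellman exchange: the secrets that derive the session key are ephemeral and discarded, so a recorded transcript cannot later be unlocked by any long-term private key. The parameters below (a small Mersenne prime, generator 3) are for illustration only and would be completely unsafe in practice.

```python
import secrets

# Toy finite-field Diffie-Hellman. P is the Mersenne prime 2^61 - 1;
# real deployments use much larger, standardized groups.
P = 2**61 - 1
G = 3

def dh_session():
    """One ephemeral key exchange. Returns the wire-visible public
    values and the shared session key."""
    a = secrets.randbelow(P - 2) + 2      # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 2      # Bob's ephemeral secret
    A = pow(G, a, P)                      # sent on the wire
    B = pow(G, b, P)                      # sent on the wire
    key_alice = pow(B, a, P)
    key_bob = pow(A, b, P)
    assert key_alice == key_bob           # both sides derive the same key
    # a and b go out of scope here. An eavesdropper who recorded (A, B)
    # must solve a discrete logarithm to recover the key; no subpoenaed
    # long-term private key will ever unlock this transcript.
    return A, B, key_alice
```

Contrast this with RSA key transport, where the session key is encrypted under the server's long-term public key: there, the recorded ciphertext plus the one private key yields every past session.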
> What problem are we solving, exactly? No eavesdropping is simple enough. No MITM is not preventable without information known to come from the intended source. Presently we have "all knowers" called certificate authorities. We trust them as a collective not individually. Their security depending on their collective is a fatal mistake. The idea of an all-knower is very, very convenient for the design of these systems.
participants (5)
- Andy Isaacson
- grarpamp
- Justin Tracey
- Lodewijk andré de la porte
- tz