Lavabit and End-point Security
I find some comfort in the fact that they had to serve papers to Lavabit to get the information they wanted. To me this says Lavabit's security was so good they couldn't back door his machines. Or, maybe it was cover-up, to get the information "legally." But I'm guessing they really couldn't get what they wanted. I'd love to see some kind of write-up by Ladar about how he did this...maybe even a book. I expect he was just doing all the standard things any sys-admin should be doing. It would be great to see it all written down in one place, though, as a case study with details. Even better: Edward Snowden as a co-author.
2013/8/9 Sean Alexandre <sean@alexan.org>
Or, maybe it was cover-up, to get the information "legally." But I'm guessing they really couldn't get what they wanted.
This. They don't want to show people what power they have. So they use the "most public method", letters. They are very, very, very aware of what you might guess. You have to remember they could legally prevent him from saying he even received letters; they have done so in the past. Why haven't they now? Might it have to do with your assumptions? Or is it as innocent as genuinely not wanting to cause more harm than needed? Do you think the NSA is innocent?
On Sat, Aug 10, 2013 at 12:42:16PM +0200, Lodewijk andré de la porte wrote:
2013/8/9 Sean Alexandre <sean@alexan.org>
Or, maybe it was cover-up, to get the information "legally." But I'm guessing they really couldn't get what they wanted.
This. They don't want to show people what power they have. So they use the "most public method", letters. They are very, very, very aware of what you might guess. You have to remember they could legally prevent him from saying he even received letters, they have done so in the past.
Why haven't they now? Might it have to do with your assumptions? Or is it as innocent as genuinely not wanting to cause more harm than needed?
Do you think the NSA is innocent?
I can't really argue with that. I think it's very possible this is just "parallel construction," where they want to cover their tracks and say they got things "legally." Still, I have to hope it's possible to run a service such as Lavabit and have it be so locked down that it can't be backdoored. Nothing can be 100% secure, but secure enough that it's very, very unlikely.

I'd like to see a github project that has scripts (puppet?) to take a fresh Debian box and lock it down as much as possible, running only ssh. Those scripts could be used to create a CTF box sitting out on the open Internet, for others to try and hack into. Pen test it to death. Update the scripts. Make the config as perfect as possible.

Then others could take those scripts and add more modules to them, for other services: exim, dovecot, apache, roundcube. People could pick and choose which they want to run. Put different boxes out there, as other CTF machines to pentest. Make it fun. Give people rewards, or some kind of recognition, if they can break into the box.

"Encryption works," we know. End-point security's the weak link. This could be a way to shore that up. Thoughts?
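To make the proposal concrete, here is a minimal sketch of what the first module of such a lockdown repo might look like. Everything here is hypothetical illustration - the file names, the output directory, and the particular sshd/sysctl knobs are example choices, not a vetted baseline - and the script only stages config fragments for review rather than applying them to a live system:

```shell
#!/bin/sh
# Sketch of a lockdown-script module for a fresh Debian-like box running
# only ssh. It GENERATES hardened config fragments into a staging directory
# so they can be reviewed (and puppet-ized) before deployment; it does not
# touch the running system. All settings shown are illustrative examples.
set -eu

OUT="${1:-./hardened-configs}"   # staging directory for generated fragments
mkdir -p "$OUT"

# SSH: key-only auth, no root logins, no forwarding.
cat > "$OUT/sshd-hardening.conf" <<'EOF'
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
X11Forwarding no
AllowTcpForwarding no
EOF

# Kernel: a few basic network and information-leak hardening knobs.
cat > "$OUT/sysctl-hardening.conf" <<'EOF'
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
kernel.kptr_restrict = 2
kernel.dmesg_restrict = 1
EOF

echo "staged config fragments in $OUT - review before installing"
```

A CTF deployment could then apply the reviewed fragments with puppet or plain scp, and the repo's history would double as the "case study with details" asked for above.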
It's usually easier to gain access to a resource by exploiting those who have the perms you seek. On Aug 10, 2013 1:37 PM, "Sean Alexandre" <sean@alexan.org> wrote:
Still, I have to hope it's possible to run a service such as Lavabit and have it be so locked down that it can't be backdoored. [...] "Encryption works," we know. End-point security's the weak link. This could be a way to shore that up. Thoughts?
It's usually easier to gain access to a resource by exploiting those who have the perms you seek. These types of competitions are neat; skilled attackers aren't really incentivized to sink 0days on CTF games when there's a huge payoff for responsible disclosure - not to mention the potential payoff of malicious use of an Apache code exec. Your best bet is relying on operating systems with a good track record, using a capability-based security model (PaX + grsec on *nix). Routine administrative bits: least privileges, patches, hardened binaries, isolation.
On Fri, Aug 9, 2013 at 7:43 AM, Sean Alexandre <sean@alexan.org> wrote:
... this says Lavabit's security was so good they couldn't back door his machines....
I'd love to see some kind of write-up by Ladar about how he did this...maybe even a book.
i've been contemplating a write up about this, but the problem is once you advertise your methods they become less effective. there really is "security through obscurity" in this sense; when at a resource disadvantage, every little bit counts...

if i were to summarize what i have found effective against dedicated and resourceful attackers (again, i can't go into details :) this would be the top 5:

1. use a common distro, but rebuild critical components - bootloader, initramfs, openssl, openssh, the kernel, gnutls, libgmp, use 64bit, etc.

2. use isolation and RBAC - Qubes, VirtualBox, VMware, Parallels - and remember that VM escapes are available and expected. defense in depth can never be too deep.

3. use constrained network access - identify anomalies, control bandwidth, firewall ingress and egress aggressively. this implies constant monitoring to detect such events. (another exercise left to the reader)

4. rootkit and backdoor your own systems - use the dirty tricks to observe and constrain your system before someone else uses dirty tricks to compromise your system.

5. don't forget physical security - this is the universal oversight and most effective end run around all other operational and technical security measures. there is a reason physical access so often implies "game over" and why black bag jobs are still and will continue to be effective against all targets.

perhaps more later,
About physical access - there is one non-physical solution to this - hide the location of the server behind Tor, proxies etc. Seems to work remarkably well for The Pirate Bay. I can't imagine it's that big a secret where the packets are routed from the current proxy to the current physical host, but seemingly NSA-type resources have not been brought to bear against it. Step one for the attacker is to find it. Maybe physical tamper detection can wipe the RAM - cold reboot when the cage is unlocked or the box is opened - and immediately switch to the backup server in a different Tor-hidden physical location.

One thing that occurs to me: aside from the laundering of NSA tip-offs to the FBI etc. with faked plausible trails, which has been reported on lately, there was an aspect that they would be hesitant to reveal what they could tap, correlate etc., or under what circumstances they would abuse national security (military) resources for various levels of criminal activity (major, organized to minor, petty, or political misuse). But given that Snowden did the world a favour in disclosing the illegal activities of the NSA and global partners, people now know what they are doing, or can better imagine it and not discount it as paranoia; consequently, maybe once the dust has settled they will feel freer to feed ever more petty or political or corporate-espionage-related information. After all, they'd no longer be risking knowledge of information capability, or political willingness. Everyone pretty much figures they're in it up to their elbows with corporate espionage (Boeing vs Airbus wiretaps), minor crimes with fabricated evidence trails (maybe they won't even bother fabricating them in future) and perhaps the political stuff, though that is really evil and anti-democratic (e.g. tea-party member IRS audits, blackmail etc).

It seems to me companies need to delegate code review and signing to a civil society charitable organization with smart use of jurisdictions.
E.g. Germany (Chaos Computer Club code-signing Silent Circle code?), Switzerland, Iceland, or pseudonymous but high-reputation individuals or groups. Or privacy groups which may have a clearer disinterest and immunity from financial blackmail (like the USG cancelling contracts if an ISP, internet service, or software company doesn't fold to an NSL or other extra-legal threats). Or maybe EFF, Privacy International etc. Via their lawyers they could retain a highly competent and pseudonymous team of technical reviewers; code signing by that team would let companies that care demonstrate their commitment to providing end-to-end secure services to their users - and, if the practice became popular, companies would owe users an explanation of why their service was not protected by independent-review-based code signatures.

Adam

On Sun, Aug 11, 2013 at 02:27:54AM -0700, coderman wrote:
5. don't forget physical security - this is the universal oversight and most effective end run around all other operational and technical security measures. there is a reason physical access so often implies "game over" and why black bag jobs are still and will continue to be effective against all targets.
Torrents show. Bitcoin shows. Common protocol, many clients, graceful as possible failures, distributed everything. Else you'll always have a centralized something that can get broken.

The alternative answer is that you're dealing with two problems: political problems, from gag-order-ish affairs to licenses that prevent you from doing it, and operational problems - the implanted code, the coerced backdoor.

Political problems call for political solutions. Distributing everything is an approach to evade them. Just like we can write code, we can write a legal structure for our entities. Mega is doing just that. Put the right thing in the right country, evade certain punishable things but deal with their use cases. Basically you're looking at a system of laws, and you're programming a way to not be subject to them. You mustn't forget that laws move, however slowly (like dealing with changing APIs).

Operational problems are historically dealt with by controlling the people working on the project. You should get those with iron loyalty and confidence in the greater good you're doing. That's nearly impossible to be sure about, and NSL-type things make it excruciatingly hard for them. Then layering, rounds of approval, people approving in different nations, etc. - which is a combined political and physical means of dealing with the problem.

I'm pretty sure that at the moment it is unfeasible to produce code that doesn't contain backdoors. Formal proofs are touchy and hard to read. Code gets complicated and large. Backdoors are elaborate and sneaky. But the political problems can be dealt with. And minimizing the code that can contain backdoors is also a good idea. You could also go for the never-done-in-production testing method where you have two (or more) distinct implementations of the same thing, and you see if the results are totally correct.
That way someone would have to hide two backdoors, for two different programs, in the same payload without breaking the program the backdoor is not meant for. There's ways. It's a lot of work.
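A toy version of that two-implementation idea, for flavor: compute the same result with two independent codebases and accept it only when they agree, so a backdoor would have to subvert both identically. (The choice of SHA-256, and of coreutils and Python's hashlib as the two "distinct implementations", is just an illustration.)

```shell
#!/bin/sh
# Two independent implementations of the same computation (SHA-256 here):
# GNU coreutils' sha256sum and Python's hashlib. Refuse the result unless
# both agree - a payload would have to carry two different backdoors, one
# per codebase, producing identical wrong output, to slip through.
set -eu
printf 'hello\n' > ./payload.txt

A=$(sha256sum ./payload.txt | awk '{print $1}')
B=$(python3 -c "import hashlib; print(hashlib.sha256(open('./payload.txt','rb').read()).hexdigest())")

if [ "$A" = "$B" ]; then
    echo "$A" > ./digest.txt
    echo "implementations agree: $A"
else
    echo "MISMATCH: $A vs $B - refusing result" >&2
    exit 1
fi
```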
On Wednesday, 21 August 2013, at 13:20:53, Lodewijk andré de la porte wrote:
Torrents show. Bitcoin shows.
Common protocol, many clients, graceful as possible failures, distributed everything.
Else you'll always have a centralized something that can get broken.
This is so very true. Decentralisation is the only way to go, IMVHO. And the lower the network level we can decentralise at, the better. I'd like to see decentralisation-in-depth happening. As in: decentralised, peer-to-peer communication services in a dynamically routed network built on top of a physical mesh. With that in mind I love what Project Byzantium is doing, for example. The elements are slowly getting into place; at some point we will get there, I'm sure. -- Pozdr rysiek
Alexander Galloway wrote a wonderful text on decentralized control titled Protocol: How Control Exists After Decentralization. Worth the read. -lee On Aug 21, 2013 8:54 PM, "rysiek" <rysiek@hackerspace.pl> wrote:
On Wednesday, 21 August 2013, at 13:20:53, Lodewijk andré de la porte wrote:
Torrents show. Bitcoin shows. Common protocol, many clients, graceful as possible failures, distributed everything. [...]
This is so very true. Decentralisation is the only way to go, IMVHO. [...] -- Pozdr rysiek
This is so very true. Decentralisation is the only way to go, IMVHO. And the lower the network level we can decentralise at, the better.
I like the decentral model. But I wonder about how to affirmatively deny an influx of attacking nodes overtaking the network. It surely cannot be relegated to the simple user? So that seems 'hard' to me. For example, I think Tor may remain centralish rather than pure dhtish for that purpose. But what if the centrality was undertaken anonymously by some voting humans (or their analytic nodes)? Their track record could certainly be public yet anonymous therein. You would at that point be trusting/subscribing to their record, purely, as opposed to dht or some other means, purely. What would p2p-hackers@ have to say on this?
On Thu, Aug 22, 2013 at 01:51:16AM -0400, grarpamp wrote:
I like the decentral model. But I wonder about how to affirmatively deny an influx of attacking nodes overtaking the network. It surely cannot be relegated to the simple user? So that seems 'hard' to me.
You need each node reputation stored in a global distributed tamper-proof publishing system, obtained and acted upon by global quorum. This is not easy, but Bitcoin and Tahoe LAFS show how to build a more trusted network from untrusted components.
For example, I think Tor may remain centralish rather than pure dhtish for that purpose. But what if the centrality was undertaken
If you want to scale to millions if not billions of nodes, what are your options?
anonymously by some voting humans (or their analytic nodes)? Their track record could certainly be public yet anonymous therein. You would at that point be trusting/subscribing to their record, purely, as opposed to dht or some other means, purely. What would p2p-hackers@ have to say on this?
2013/8/25 Eugen Leitl <eugen@leitl.org>
For example, I think Tor may remain centralish rather than pure dhtish for that purpose. But what if the centrality was undertaken
If you want to scale to millions if not billions of nodes, what are your options?
You must have a really stupid P2P network if it doesn't scale. Can you even still call it p2p if it doesn't scale?
On Sun, Aug 25, 2013 at 5:39 PM, Lodewijk andré de la porte <l@odewijk.nl> wrote:
... You must have a really stupid P2P network if it doesn't scale. Can you even still call it p2p if it doesn't scale?
replicate broadcast functionality (most suited to wireless transmissions) in the unicast datagram model and you have p2p that doesn't scale. remember first gen gnutella?
On 08/22/2013 02:58 AM, Lee Azzarello wrote:
Alexander Galloway wrote a wonderful text on decentralized control titled Protocol: How Control Exists After Decentralization. Worth the read.
Really? From the MIT Press's blurb: "In Protocol, Alexander Galloway argues that the founding principle of the Net is control, not freedom, and that the controlling power lies in the technical protocols that make network connections (and disconnections) possible. He does this by treating the computer as a textual medium that is based on a technological language, code. Code, he argues, can be subject to the same kind of cultural and literary analysis as any natural language; computer languages have their own syntax, grammar, communities, and cultures. Instead of relying on established theoretical approaches, Galloway finds a new way to write about digital media, drawing on his backgrounds in computer programming and critical theory. "Discipline-hopping is a necessity when it comes to complicated socio-technical topics like protocol," he writes in the preface." Oh dear. Stephan
On Sun, Aug 11, 2013 at 5:27 AM, coderman <coderman@gmail.com> wrote:
if i were to summarize what i have found effective against dedicated and resourceful attackers (again, i can't go into details :) this would be the top 5:
1. use a common distro, but rebuild critical components - bootloader, initramfs, openssl, openssh, the kernel, gnutls, libgmp, use 64bit, etc.
By "rebuild" do you mean compile it yourself or are you talking full-up review and rewrite? The former should be no problem for anyone capable of setting up a secure hosting service. The latter is probably beyond the means of small teams in any commercially reasonable timeframe. -- Neca eos omnes. Deus suos agnoscet. -- Arnaud-Amaury, 1209
some questions, some answers, ... On Sun, Aug 11, 2013 at 2:27 AM, coderman <coderman@gmail.com> wrote:
... 1. use a common distro, but rebuild critical components - bootloader, initramfs, openssl, openssh, the kernel, gnutls, libgmp, use 64bit, etc.
this means rebuilding hardened versions of these libraries from source: excluding insecure cipher suites in an OpenSSL build, for example, or altering architecture optimizations and supported features in others. the goal is that an exploit targeted at a vanilla distribution will more likely fail with an observable error or crash, rather than succeed silently. many exploits are very brittle in this respect, with any change in symbol offsets or capabilities rendering them completely ineffective.
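As one hedged illustration of the OpenSSL case, the snippet below stages a build recipe that compiles whole legacy protocol families out at configure time (the version number, target, and flag set are examples only, not coderman's actual configuration, and old releases may live under a different download path). Staging the recipe as a file keeps it reviewable before anything is fetched or built:

```shell
#!/bin/sh
# Stage (not run) a rebuild recipe for a hardened OpenSSL: legacy protocols
# and weak ciphers are removed at compile time, so exploits tuned to the
# stock distro binary hit missing features or shifted symbol offsets.
# Version, target, and flags below are illustrative examples.
set -eu
V="${OPENSSL_V:-3.0.13}"

cat > ./build-openssl.sh <<EOF
#!/bin/sh
set -eu
curl -O "https://www.openssl.org/source/openssl-$V.tar.gz"
tar xf "openssl-$V.tar.gz"
cd "openssl-$V"
# Strip whole feature families out at compile time, not runtime:
./Configure linux-x86_64 \
    no-ssl3 no-tls1 no-tls1_1 no-weak-ssl-ciphers no-comp \
    --prefix=/opt/hardened-openssl
make -j"\$(nproc)"
make install_sw
EOF
chmod +x ./build-openssl.sh
echo "staged build plan in ./build-openssl.sh - review before running"
```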
2. use isolation and RBAC, Qubes, VirtualBox, VMWare, Parallels, remember that VM escapes are available and expected. defense in depth can never be too deep.
virtualization implies chained exploits for full compromise. combined with the above you've drastically increased the cost of a successful attack with modest effort. the likelihood of detection (by appearing vulnerable yet not being so) is also increased. remember that VMMs and hypervisors are themselves potentially vulnerable software systems suitable for hardening and customization.
3. use constrained network access - identify anomalies, control bandwidth, firewall ingress and egress aggressively. this implies constant monitoring to detect such events. (another exercise left to the reader)
data exfiltration can be very visible via network behavior if you're paying attention. cross-referencing connection state in your upstream router vs. the local OS view of sockets can identify discrepancies where a compromise has concealed covert connections. malware communicating directly on an ethernet or wireless adapter outside of the OS is also visible at this juncture.
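The cross-referencing step can be sketched roughly as follows. This assumes you can export a flow list from your upstream router as plain "src:port dst:port" lines (how you pull that export is router-specific and left open, as in the original "exercise to the reader"); if no export is supplied, the script fabricates one containing a stand-in covert flow so the diff has something to show:

```shell
#!/bin/sh
# Diff the local kernel's view of established TCP flows (via ss) against a
# flow list exported by an upstream router. Flows the router sees but the
# local OS does not are candidates for concealed covert connections.
set -eu
ROUTER_FLOWS="${1:-./router-flows.txt}"

# Demo fallback: fabricate a router export with one flow the local OS
# cannot know about (stand-in for a concealed connection).
[ -f "$ROUTER_FLOWS" ] || printf '10.0.0.5:4444 203.0.113.9:443\n' > "$ROUTER_FLOWS"

# Local view: "laddr:port raddr:port", one flow per line, sorted.
ss -Htn state established 2>/dev/null \
  | awk '{print $3, $4}' | sort -u > ./local-flows.txt
sort -u "$ROUTER_FLOWS" > ./router-flows.sorted

# Lines only in the router's list: traffic invisible to the local OS.
comm -13 ./local-flows.txt ./router-flows.sorted > ./suspect-flows.txt
cat ./suspect-flows.txt
```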
4. rootkit and backdoor your own systems - use the dirty tricks to observe and constrain your system before someone else uses dirty tricks to compromise your system.
this is mostly a variant of #1 at a kernel / system level. like notepad.exe connecting to the internet, there are some syscall, file access, and network requests which are clearly anomalous and indicators of compromise.
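One concrete way to codify the "notepad.exe connecting to the internet" class of indicator on Linux is an auditd ruleset that flags network syscalls from binaries that have no business on the network, plus writes to boot-critical paths. The binary paths below are arbitrary examples, and this script only writes the rules file for review rather than loading it:

```shell
#!/bin/sh
# Stage auditd rules that flag anomalous behavior: outbound connect() calls
# from normally-offline binaries, and writes under /boot (bootloader
# tampering). Binary paths are illustrative; tailor to your own baseline.
set -eu
OUT="${1:-./90-anomaly.rules}"

cat > "$OUT" <<'EOF'
## Log every connect() made by binaries that should never touch the network.
-a always,exit -F arch=b64 -S connect -F exe=/usr/bin/less -k net-anomaly
-a always,exit -F arch=b64 -S connect -F exe=/usr/bin/gpg -k net-anomaly
## Log writes and attribute changes under /boot.
-w /boot -p wa -k boot-tamper
EOF

echo "wrote audit rules to $OUT (install under /etc/audit/rules.d/ and run augenrules --load)"
```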
5. don't forget physical security - this is the universal oversight and most effective end run around all other operational and technical security measures. there is a reason physical access so often implies "game over" and why black bag jobs are still and will continue to be effective against all targets.
this is a storied tangent unto itself... last but not least: you must develop a routine of continuous hardening and improvement. these steps are not done once and finished; they are elements within a larger strategy of operational rigor defending against motivated and capable attackers. asking for my "hardened linux build" is missing the point entirely!
On Sun, Aug 11, 2013 at 05:45:02AM -0700, coderman wrote:
some questions, some answers, ...
Thanks. I appreciate your point about how "security through obscurity" factors into this. I wonder, though, about putting as much of this as possible online somewhere, with tutorials, scripts, forums, etc. that your more typical sys admin could find and use. They might not have everything, but enough to make their services 99.99% secure. Those that provide the info would probably still keep some things to themselves, and be 99.9999% secure. Included in the scripts and info would be ways to record artifacts of an exploit, and quickly and securely store them where they could be used to patch. The cost for dropping a 0day on a service provider goes through the roof.
On Sun, Aug 11, 2013 at 10:39:55AM -0400, Sean Alexandre wrote:
your more typical sys admin could find and use. They might not have everything, but enough to make their services 99.99% secure. Those that provide the info would probably still keep some things to themselves, and be 99.9999% secure.
Security doesn't work that way. Keeping your system secure is like walking a tightrope across a gorge filled with ravenous tigers every morning. There are a billion ways to fuck up and get owned/eaten by the tigers, and asking someone who's successfully walked the tightrope every day for 40 years "tell me your secret?" completely misses the point. The expert can share advice and point out when you're about to step off the tightrope, but no kind of advice can substitute for your own caution and experience. Pretending that a magic balance bar, or a magic technique that can be applied without careful thought, or a magic shoe that will make you stick to the rope, will save you is the kind of thing that works in a fairy tale but not in real life. The analogy breaks down, though, because in fact you can get totally owned, through and through; exfiltrated, impersonated, and strung up by a prosecutor before a secret grand jury before you even learn that your security has failed. At least the tiger has the courtesy of giving you pain when you fail. -andy
On Sun, Aug 11, 2013 at 08:55:42AM -0700, Andy Isaacson wrote:
Security doesn't work that way. Keeping your system secure is like walking a tightrope across a gorge filled with ravenous tigers every morning. There are a billion ways to fuck up and get owned/eaten by the tigers, and asking someone who's successfully walked the tightrope every day for 40 years "tell me your secret?" completely misses the point.
The expert can share advice and point out when you're about to step off the tightrope, but no kind of advice can substitute for your own caution and experience. Pretending that a magic balance bar, or a magic technique that can be applied without careful thought, or a magic shoe that will make you stick to the rope, will save you is the kind of thing that works in a fairy tale but not in real life.
I'm simply advocating for resources that would help sys admins develop the skills they need. Nothing more.
I disagree with the tightrope analogy. The problem with security is that if it's open, it's really open. I think it's better to compare security to skin. The more skin you hide, the less easily it'll get poked through; but if you miss some spots you can still lose all the blood unless you have something to keep it in. I suppose comparing the server to the human body is more explanatory. So I tend to ask "are there any holes anywhere?" or "where is my armor the thinnest?". And I've found that "other people's software" is the major hole in everything. Honestly, it's hardly ever the code you write yourself that's the problem. Also because the hard lifting is done for you - but the point remains that there's something about big kernels and systems packed with bulging software packages that's just... hell. So much skin. So hard to check.
one last cautionary tale: some time back i used the techniques discussed to harden some Android phones brought with me into a hostile environment. i had kernel level protections in place, hardened the system configuration and services, pared down apps to the minimum and constrained their access to the file system and network. this was months of effort.

the first adversarial encounter went very well in my favor - all of the attempts to exploit my devices were thwarted at these various layers and via these protections, with the sole exception of a Google Voice Search hack that kept voice search active in an "open mic night" eavesdropping capability. this was quickly nullified via kill -STOP (Android won't re-spawn an app that is already running, and a stopped process proved quite effective at halting this repeated invocation of search used to capture audio.)

fast forward to round two, and i doubled down on the kernel, system, and application level protections. even more scrutiny is applied to applications to avoid the misuse of legitimate functionality for malicious purpose. i am feeling confident!

... and then a baseband exploit easily walks under all of my protections at every layer, completely and fully 0wning my devices, with the only hint at anything amiss being the elevated thermal dissipation and power consumption from the radios performing data transmission, all while the Android OS believed the devices were silent in airplane mode.

[informative interlude: software defined transceivers should be in every hacker toolbox; radio level attacks are otherwise invisible to you. they are also useful for many other purposes, perhaps one day even providing a solution to the untrustworthy proprietary firmware and baseband systems crammed into every mobile device these days.]
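The kill -STOP trick in the anecdote rests on standard POSIX job control and can be demonstrated on any Unix-like system with a throwaway process (here a background sleep standing in for the misbehaving app): a stopped process still exists, so a supervisor that only re-spawns dead processes leaves it alone, yet it can no longer do anything.

```shell
#!/bin/sh
# Freeze a process with SIGSTOP instead of killing it: the process still
# exists (so nothing re-spawns it) but is suspended and cannot run. The
# 'sleep' below is a stand-in for a misbehaving app.
set -eu
sleep 300 &
PID=$!

kill -STOP "$PID"
sleep 1                                         # let the state change settle

STATE=$(ps -o state= -p "$PID" | tr -d ' ')
echo "pid $PID state after SIGSTOP: $STATE"     # 'T' means stopped, not dead
echo "$STATE" > ./stop-state.txt

kill -KILL "$PID"                               # clean up the demo process
```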
--- incidentally, this also demonstrates why IOMMU / VT-d guest isolation of devices on the host bus is very useful, as a vulnerable NIC could otherwise provide complete access to privileged memory and interfaces just like the baseband exploit above... assuming your CPU itself is trustworthy! "trusting trust" continues to be a persistent and difficult problem, leaving us all vulnerable to some degree or another - it's just a function of cost and skill to compromise. turtles all the way down! ;P
On Sun, Aug 11, 2013 at 1:28 PM, coderman <coderman@gmail.com> wrote:
... and then a baseband exploit easily walks under all of my protections at every layer, completely and fully 0wning my devices,
"I'm sorry. My responses are limited. You must ask the right questions." weaponized baseband exploits are difficult, expensive, architecture specific, and not used capriciously. this, among other reasons, is why there is such a dearth of information on them despite being proven exploitable with a wide attack surface for many years. related: """ Rupp said state-sponsored attackers are already using baseband processor attacks in airports but declined to go into details beyond saying that attacks could be carried out without the need to trick smartphones owners into opening an email or visiting a malicious website. Attacks might involve building a rogue GSM base-station from commodity hardware or run from the infrastructure of a 'co-operative" telco. It might also be possible to run attacks against baseband processors of phones using Wi-Fi or Bluetooth interfaces, according to GSMK Cryptophone. "Once you have control over the app CPU, you can in principle use that to load any code you want from the network," Rupp explained. "Since you have already successfully escalated your privileges on the system, no user interaction is necessary." """ http://www.theregister.co.uk/Print/2013/03/07/baseband_processor_mobile_hack... "Baseband Attacks: Remote Exploitation of Memory Corruptions in Cellular Protocol Stacks" https://www.usenix.org/system/files/conference/woot12/woot12-final24.pdf "Anatomy of contemporary GSM cellphone hardware" https://gnumonks.org/trunk/presentation/2010/gsm_phone-anatomy/gsm_phone-ana... "Cellular baseband security" https://smartech.gatech.edu/handle/1853/43766 "Run-time firmware integrity verification: what if you can't trust your network card" http://cansecwest.com/csw11/Duflot-Perez_runtime-firmware-integrity-verifica...
On Sun, Aug 11, 2013 at 2:27 AM, coderman <coderman@gmail.com> wrote:
... 4. rootkit and backdoor your own systems - use the dirty tricks to observe and constrain your system before someone else uses dirty tricks to compromise your system.
a good presentation which suggests this technique, among other useful ideas: "Attack Driven Defense" http://www.slideshare.net/zanelackey/attackdriven-defense
never let a good thread die! some interesting discussion on opsec in this thread: https://www.schneier.com/blog/archives/2013/08/opsec_details_o.html i would note that claims about the documents and encryption key are weasel worded by UK and denied by all parties. if you can't observe a channel, DoS it, ...
participants (12)
- Adam Back
- Andy Isaacson
- coderman
- Eugen Leitl
- grarpamp
- Lee Azzarello
- Lodewijk andré de la porte
- rysiek
- Sean Alexandre
- Stephan Neuhaus
- Steve Furlong
- Travis Biehn