I'm partial to Joanna Rutkowska's statement that "Security by Isolation" is the best course for -users- of software [in addition to all the patching and whatever]. Developers of that software are, ultimately, responsible for securing their stuff. As an aside - separating your complex system into multiple trust zones is, from a development standpoint, de rigueur for secure design.

Security heads have long been decrying cgi-bin. Most of the reason is that the threat surface is insane - for binaries, user input isn't running in some sort of VM [php, perl, ruby, node.js, etc.] and exists in memory entangled with executable instructions. Injection attacks are, of course, old hat. The daemons could have done some hand-holding in this respect before passing headers off to ENV variables. The issue is that 'restricted chars' was never defined by a standard interface between daemon and cgi-bin script; the called program has a completely arbitrary set of restricted chars. /bin/bash, of course, isn't written to withstand env attacks, since the calling user normally controls the env and bash is executed under that user's privileges. So it is, as a matter of course, inevitable to find vulnerability here: one process isolates the client from the env, modifies the env according to the user's whims, and then hands off to a sub-process that trusts the env implicitly.

It is very unlikely that any TLA 'created' this vulnerability. The notion is entirely incredible. The existence of vulnerability in such a design is immediately obvious to anyone who takes more than a cursory look at it. That isn't to say that this specific attack was trivial to identify - rather, from an architecture standpoint, it should be evident that the handoff between httpd and cgi-bin is a location of extreme vulnerability.
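To make that handoff concrete, here is a minimal sketch in Python of what a CGI-style daemon does with request headers. The HTTP_* naming convention is the standard CGI one (RFC 3875); everything else - the function names, the script path, and the example header value - is illustrative, not taken from any particular httpd.

    # Illustrative sketch (not any real httpd's code): a CGI-style daemon
    # copying request headers verbatim into a child process's environment.
    import subprocess

    def cgi_environment(headers):
        # RFC 3875 convention: 'User-Agent: foo' becomes HTTP_USER_AGENT=foo.
        # Note there is no sanitization step - no standard even says what a
        # "restricted char" would be here.
        env = {"GATEWAY_INTERFACE": "CGI/1.1", "PATH": "/usr/bin:/bin"}
        for name, value in headers.items():
            env["HTTP_" + name.upper().replace("-", "_")] = value
        return env

    def run_cgi(script, headers):
        # The attacker-influenced env is handed to a sub-process that trusts
        # it implicitly.  If that sub-process is (or later spawns) an
        # unpatched bash, a value shaped like a function definition is parsed
        # at startup and the trailing command gets executed.
        return subprocess.run([script], env=cgi_environment(headers),
                              capture_output=True, text=True)

    # Hypothetical request: with an unpatched bash anywhere behind
    # /cgi-bin/status.sh (the path is made up), /usr/bin/id runs before the
    # script's first line ever executes.
    print(run_cgi("/var/www/cgi-bin/status.sh",
                  {"User-Agent": "() { :; }; /usr/bin/id"}).stdout)

The hand-holding the daemons could have done would live inside cgi_environment() - rejecting or escaping values that look like function definitions - but with no standard interface defining restricted chars, every daemon/script pair is left to its own assumptions.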
On a related note: Mirage OS looks like it's on a promising tack:
[1]http://www.xenproject.org/developers/teams/mirage-os.html

-Travis

On Sat, Sep 27, 2014 at 12:49 PM, Lodewijk andré de la porte <[2]l@odewijk.nl> wrote:

> Know what you code, and what you run. Don't be fooled by words and shapes; code does what code does, that is all. We seriously need a way to detach code from mental models to expose hidden features.
>
> Basically, all computer law is rubbish, because everything you run on your computer, exploits and all, is something you run by choice. But there's no way you could validate the sheer bulk of that code. If you want to really solve security flaws, it'll involve somehow validating the possibilities of the code you run. It's a discipline that touches on visualization, automated testing and simplification - simplification meaning reducing possible states and "execution paths", and just making code easier to comprehend.
>
> The problem is that there's either no market for "truly secure" computing, or there's just nobody filling the gap. Banks with their Cobol are laughed at, mostly, and accused of lacking innovation. They do lack innovation in the technical field, and Cobol is definitely not an ideal language. But "truly secure" is worth a lot to them. A formally verified L4 (seL4) is a step in the right direction, but it catches a lot of wind from people saying it's still imperfect and therefore worthless.
>
> I'm utterly bored by code review. Maybe it'd be better if there were some nicer tools to help out. I'm really sure someone has great recommendations regarding this. (That don't even require Cobol :)

--
[3]Twitter | [4]LinkedIn | [5]GitHub | [6]TravisBiehn.com | [7]Google Plus

References

   1. http://www.xenproject.org/developers/teams/mirage-os.html
   2. mailto:l@odewijk.nl
   3. https://twitter.com/tbiehn
   4. http://www.linkedin.com/in/travisbiehn
   5. http://github.com/tbiehn
   6. http://www.travisbiehn.com/
   7. https://plus.google.com/+TravisBiehn