As long as 'fixing the problem' is perceived to be more expensive than 'applying patches', the problem will persist. Meanwhile the rest of the cognoscenti can air-gap key-signing activities and use managed code / safe languages or throwaway VMs. Troy, as an end user you may want to pursue Qubes for your daily OS if you're interested in 'browsing the internet' and 'remaining reasonably safe from state-level attackers.'
-Travis
On Sat, Sep 27, 2014 at 9:57 PM, Troy Benjegerdes <hozer@hozed.org> wrote:
So every once in a while I have fits of plausible paranoia, which lead me to second-guess the motives of everyone arguing that it's 'so hard' to simplify things by doing something like removing bash from Debian.
The fastest way for me to remove potential vulnerabilities in bash's environment handling is to remove the code, and the side effect of this, as a policy, will hopefully be more standard and maintainable code rather than stuff that happens to rely on the specific implementation of a shell.
So yeah, it's probably unlikely this was intentionally created; however, I think it's a good bet it was intentionally exploited at least once, and there is plenty of motivation for some social engineering to try to keep it exploitable.
My hope, however, is that inter-departmental competition inside TLA agencies and within nation-states is sufficient that at least one security head has similarly plausible paranoid delusions and decides to fund OpenBSD and/or a bash-free Debian in the short term, and some of the more interesting security-by-isolation stuff on the horizon.
On Sat, Sep 27, 2014 at 02:48:24PM -0400, Travis Biehn wrote:
I'm partial to Joanna Rutkowska's statement that "Security by Isolation" is the best course to follow for -users- of software [in addition to all the patching and whatever].
Developers of that software are, ultimately, responsible for securing it. As an aside - separating your complex system into multiple trust zones is, from a development standpoint, de rigueur for secure design.
Security heads have long been decrying cgi-bin. Most of the reason is that the threat surface is insane - for binaries you have user input that isn't running in some sort of VM [php, perl, ruby, node.js, etc.] and that sits in memory entangled with executable instructions.
Injection attacks are, of course, old hat. The daemons could have done some hand-holding in this respect before passing headers off to ENV variables.
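To make that handoff concrete, here's a rough Python sketch - not any real daemon's code, the names and the filter are mine - of how request headers become HTTP_* environment variables and where that hand-holding could have happened:

    import re

    # Shellshock-style payloads start the header value with a bash
    # function definition: "() {". This is a blunt, illustrative check.
    SUSPICIOUS = re.compile(r"^\s*\(\)\s*\{")

    def cgi_environment(headers):
        """Map request headers to CGI-style HTTP_* meta-variables,
        dropping values that look like function definitions."""
        env = {}
        for name, value in headers.items():
            if SUSPICIOUS.match(value):
                continue  # the hand-holding the daemon could have done
            env["HTTP_" + name.upper().replace("-", "_")] = value
        return env

    print(cgi_environment({
        "User-Agent": "() { :;}; /bin/cat /etc/passwd",  # dropped
        "Accept": "text/html",                           # kept as HTTP_ACCEPT
    }))

Filtering like that is a band-aid, of course, which is more or less the point that follows.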
The issue is that 'restricted chars' was never defined by a standard interface between the daemon and the cgi-bin script; the called script has a completely arbitrary set of restricted chars. /bin/bash, of course, isn't written to withstand env attacks, since the calling user controls the env and bash is executed under that user's privileges. So it is, as a matter of course, inevitable to find vulnerability in a design where one process isolates the client from the env, modifies the env according to the user's whims, and then passes off to a sub-process that trusts the env implicitly.
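That implicit trust is the part the parent could at least narrow. A sketch, with invented names and an assumed allow-list, of spawning the child with an explicit, minimal environment instead of whatever the parent accumulated - it limits what the child sees, though it doesn't save a /bin/bash that parses the values it does receive:

    import subprocess

    def run_cgi(script, request_env):
        # Only hand the child the variables it actually needs; this
        # allow-list is an assumption for illustration.
        allowed = {"REQUEST_METHOD", "QUERY_STRING", "HTTP_ACCEPT"}
        clean_env = {"PATH": "/usr/bin:/bin"}  # fixed, not inherited
        clean_env.update(
            (k, v) for k, v in request_env.items() if k in allowed)
        # env=clean_env: nothing from the parent's (attacker-influenced)
        # environment leaks into the sub-process implicitly.
        return subprocess.run(
            [script], env=clean_env, capture_output=True, check=True
        ).stdout

    # e.g. run_cgi("./status.cgi", cgi_environment(headers))  # paths invented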
It is very unlikely that any TLA 'created' this vulnerability; the notion is entirely incredible. The existence of vulnerability in such a design is immediately obvious to anyone who takes more than a cursory look at it. That isn't to say that this specific attack was trivial to identify - it is to say that, from an architecture standpoint, it should be evident that the handoff between httpd and cgi-bin is a location of extreme vulnerability.
On a related note: Mirage OS looks like it's on a promising tack: http://www.xenproject.org/developers/teams/mirage-os.html
-Travis
On Sat, Sep 27, 2014 at 12:49 PM, Lodewijk andré de la porte <l@odewijk.nl> wrote:
Know what you code, and what you run. Don't be fooled by words and shapes; code does what code does, that is all.
We seriously need a way to detach code from mental models to expose hidden features. Basically, all computer law is rubbish because everything you run on your computer, exploits and all, is something you run by choice. But there's no way you could validate the sheer bulk of that code. If you really want to solve security flaws, it'll involve somehow validating the possibilities of the code you run.
It's a discipline that touches on visualization, automated testing and simplification - simplification meaning reducing possible states and "execution paths", and just making code easier to comprehend.
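As a toy illustration of the "reducing possible states" point - once the state space is explicit and small, checking can be exhaustive rather than sampled (the little state machine here is invented for the example):

    from enum import Enum
    from itertools import product

    class Door(Enum):
        LOCKED = 1
        CLOSED = 2
        OPEN = 3

    TRANSITIONS = {
        (Door.LOCKED, "unlock"): Door.CLOSED,
        (Door.CLOSED, "lock"):   Door.LOCKED,
        (Door.CLOSED, "open"):   Door.OPEN,
        (Door.OPEN, "close"):    Door.CLOSED,
    }

    def step(state, event):
        # Unknown (state, event) pairs are no-ops; the table above is the
        # *whole* behaviour, which is what makes checking it tractable.
        return TRANSITIONS.get((state, event), state)

    # Exhaustively check one invariant over every (state, event) pair:
    # a locked door never ends up open in a single step.
    for state, event in product(Door, ["unlock", "lock", "open", "close"]):
        assert not (state is Door.LOCKED and step(state, event) is Door.OPEN)
    print("invariant holds over the entire state space")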
The problem is that there's either no market for "truly secure" computing, or there's just nobody filling the gap. Banks with their Cobol are mostly laughed at and accused of lacking innovation. They do lack innovation in the technical field, and Cobol is definitely not an ideal language, but "truly secure" is worth a lot to them. A validated L4 is a step in the right direction, but it catches a lot of flak from people saying it's still imperfect and therefore worthless.
I'm utterly bored by code review. Maybe it'd be better if there were some nicer tools to help out. I'm really sure someone has great recommendations regarding this. (That don't even require Cobol :)
--
---------------------------------------------------------------------------- Troy Benjegerdes 'da hozer' hozer@hozed.org 7 elements earth::water::air::fire::mind::spirit::soul grid.coop
Never pick a fight with someone who buys ink by the barrel, nor try to buy a hacker who makes money by the megahash
-- Twitter <https://twitter.com/tbiehn> | LinkedIn <http://www.linkedin.com/in/travisbiehn> | GitHub <http://github.com/tbiehn> | TravisBiehn.com <http://www.travisbiehn.com> | Google Plus <https://plus.google.com/+TravisBiehn>