I'm partial to Joanna Rutkowska's statement that "Security by Isolation" is the best course for -users- of software to follow [in addition to all the patching and whatever].
Developers of that software are ultimately responsible for securing their stuff. As an aside - separating your complex system into multiple trust zones is, from a development standpoint, de rigueur for secure design.
Security heads have long been decrying cgi-bin. Most of the reason is that the threat surface is insane - for binaries, user input isn't mediated by some sort of VM [php, perl, ruby, node.js, etc.]; it exists in memory entangled with executable instructions.
Injection attacks are, of course, old hat. The daemons could have done some hand-holding in this respect before passing headers off to ENV variables.
The issue is that 'restricted chars' was never defined by a standard interface between daemon and cgi-bin script. Each called program has its own, completely arbitrary set of restricted chars.
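To make the handoff concrete: the CGI spec (RFC 3875) tells the daemon how to *name* the variables - prefix `HTTP_`, uppercase, dashes to underscores - but says nothing about restricting the *bytes* in the value. A minimal sketch of that mapping (the function name is mine, not from any real httpd):

```python
def headers_to_env(headers):
    """Map client-supplied HTTP headers to CGI meta-variables,
    per the RFC 3875 naming rule: HTTP_ + uppercase, '-' -> '_'.
    Note: the value passes through completely unfiltered."""
    env = {}
    for name, value in headers.items():
        env["HTTP_" + name.upper().replace("-", "_")] = value
    return env

# A hostile header value - shell metacharacters and all - survives intact:
hostile = {"User-Agent": "() { :; }; echo whatever"}
print(headers_to_env(hostile))
# {'HTTP_USER_AGENT': '() { :; }; echo whatever'}
```

The daemon has done its naming job to spec, yet the value field is an arbitrary attacker-chosen string - exactly the "no standard set of restricted chars" gap.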
/bin/bash, of course, isn't written to withstand env attacks - since the calling user controls the env, and bash is executed under that user's privileges.
So finding vulnerability there is, as a matter of course, inevitable: one process isolates the client from the env, modifies the env according to the user's whims, and then hands off to a sub-process that trusts the env implicitly.
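That whole trust-boundary handoff fits in a few lines. A sketch (the `serve_request` shape is hypothetical, standing in for any httpd + cgi-bin pair) - the parent splices client bytes into the env, and the child receives them verbatim:

```python
import os
import subprocess

def serve_request(headers):
    """Hypothetical daemon: copy own env, splice in client-controlled
    values, then exec a child that trusts the env implicitly."""
    env = dict(os.environ)
    for name, value in headers.items():
        env["HTTP_" + name.upper().replace("-", "_")] = value
    # The child here just prints the variable back; on pre-patch bash,
    # merely *exporting* a value beginning "() {" into a spawned bash
    # was enough - the trailing commands ran during startup parsing.
    out = subprocess.run(
        ["/bin/sh", "-c", 'printf "%s" "$HTTP_USER_AGENT"'],
        env=env, capture_output=True, text=True,
    )
    return out.stdout

payload = "() { :; }; echo not-a-user-agent"
print(serve_request({"User-Agent": payload}))
```

The point is that nothing between the socket and the sub-process ever questions the value - the isolation exists in the process diagram, not in the data flow.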
It is very unlikely that any TLA 'created' this vulnerability. The notion is entirely incredible. The existence of vulnerability in such a design is immediately obvious to anyone who takes more than a cursory look at it. That isn't to say this specific attack was trivial to identify - rather that, from an architecture standpoint, it should be evident that the handoff between httpd and cgi-bin is a location of extreme vulnerability.
-Travis