"a skilled backdoor-writer can defeat skilled auditors"?

Stephan Neuhaus stephan.neuhaus at tik.ee.ethz.ch
Thu Jun 5 00:16:24 PDT 2014


On 2014-06-04, 20:22, Andy Isaacson wrote:
> On Wed, Jun 04, 2014 at 08:50:14AM -0400, Tom Ritter wrote:
>> On 4 June 2014 01:54, Stephan Neuhaus <stephan.neuhaus at tik.ee.ethz.ch>
>> wrote:
>>> If you fail the audit, it's your duty as a professional auditor to
>>>  provide evidence that there is something actually wrong with the
>>> software.  It's OK to single out some pieces of code for closer
>>> inspection because of code smells, but if you try your darnedest to find
>>> something wrong with it and can't, then either the code is OK or you're
>>> not good enough an auditor.  In either case, you can flag the code, you
>>> can recommend rewriting it according to what you think is better style,
>>> but you can't in good conscience fail the audit.
> 
> Stephan,
> 
> I strongly disagree.  There are implementations that are Just Too
> Complicated and are Impossible To Audit.  Such implementation choices
> *do*, empirically, provide cover for bugs; and as we as a society build
> more and more software into the fabric of our life-critical systems it's
> imperative that "the implementor liked this complexity and refuses to
> change it" gives way to the larger goals at stake.  The auditor
> absolutely must have leeway to say "no you don't get to write your own
> string processing, you are going to use the standard ones."

I think that we are mostly in agreement, except perhaps in wording. We
both agree that auditors rarely "pass/fail" software in a binary
fashion.  And as I wrote, the auditor absolutely has the leeway to
recommend rewriting.

But my gripe was with the "automatic fail" in the original post,
which I called "going too far".  If you do go that far (i.e., don't
just recommend changes, but "fail" the audit), your verdict must be
founded on evidence.  For example, if it were actually true that
complexity, "empirically, provides cover for bugs", that would be a
perfectly good argument in favour of failing an audit.  It's just
that I've worked for a few years in precisely this field, and all the
studies I saw simply failed to show the necessary correlations.  (The
best study I know of, by Yonghee Shin and Laurie Williams, shows rho
<= 0.3, and that on the vulnerability-infested Mozilla JavaScript
engine; a rank correlation of 0.3 is weak by any conventional
reading.  See
http://collaboration.csc.ncsu.edu/laurie/Papers/p47-shin.pdf)  This
shows, I think, that auditors must be extra careful not to confuse
folklore with evidence.
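
(For the curious: the rho in studies like this is usually Spearman's
rank correlation.  Here is a minimal C sketch of how one computes it;
the per-file numbers are invented for illustration and are NOT data
from the Shin/Williams paper.)

#include <math.h>
#include <stdio.h>

#define N 8

/* Invented per-file numbers, for illustration only: a complexity
 * metric and a vulnerability count. */
static const double complexity[N] = { 3, 12, 7, 25, 9, 14, 2, 30 };
static const double vulns[N]      = { 0,  1, 0,  2, 1,  0, 0,  3 };

/* Midranks: rank = (# smaller) + ((# equal) + 1) / 2. */
static void ranks(const double *x, double *r, int n)
{
    for (int i = 0; i < n; i++) {
        int less = 0, equal = 0;
        for (int j = 0; j < n; j++) {
            if (x[j] < x[i]) {
                less++;
            } else if (x[j] == x[i]) {
                equal++;
            }
        }
        r[i] = less + (equal + 1) / 2.0;
    }
}

/* Pearson's correlation coefficient. */
static double pearson(const double *a, const double *b, int n)
{
    double ma = 0, mb = 0, num = 0, da = 0, db = 0;
    for (int i = 0; i < n; i++) {
        ma += a[i];
        mb += b[i];
    }
    ma /= n;
    mb /= n;
    for (int i = 0; i < n; i++) {
        num += (a[i] - ma) * (b[i] - mb);
        da  += (a[i] - ma) * (a[i] - ma);
        db  += (b[i] - mb) * (b[i] - mb);
    }
    return num / sqrt(da * db);
}

int main(void)
{
    double rc[N], rv[N];
    ranks(complexity, rc, N);
    ranks(vulns, rv, N);
    /* Spearman's rho is Pearson's r applied to the ranks. */
    printf("rho = %.3f\n", pearson(rc, rv, N));
    return 0;
}

(Compile with cc -lm.  A rho near 1 would mean that complexity ranks
track vulnerability ranks almost perfectly, which is precisely what
the published studies do not find.)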

You can say "this code is too complex for me to audit", and you can add
"this should give you food for thought and you should consider rewriting
it in a simpler style", but *as the auditor* you cannot say "I fail the
code because I can't audit it" unless auditability was a design
requirement.  (The *owners* of the code of course have far more
options, but we were talking about this from the auditor's
perspective, and the OP talked about an "automatic fail" if the code
turned out to have certain smells.  If a smell isn't backed up by
evidence, it's just a personal prejudice or folklore.  Which,
incidentally, would be excellent new terms to replace "Best Practice" in
many cases.)

Again, please note that I agree with you that auditability and
simplicity (braces even for single-line if-statements, library
functions rather than self-made string libraries, and all these
other things) ought to be design requirements, especially for
security-critical software, because they make auditing easier.  It's
just that if they weren't, you can fault the design requirements
(though that may be outside your remit as auditor), but you can't
"automatically fail" the implementation.

Fun,

Stephan


