Message of 04/06/14 00:58 From: "Andy Isaacson"
On Wed, Jun 04, 2014 at 12:35:20AM +0200, rysiek wrote:
In short, several very skilled security auditors examined a small Python program, about 100 lines of code, into which three bugs had been inserted by the authors. There was an "easy," a "medium," and a "hard" backdoor. There were three or four teams of auditors.
1. One auditor found the “easy” and the “medium” ones in about 70 minutes, and then spent the rest of the day failing to find any other bugs.
2. One team of two auditors found the “easy” bug in about five hours, and spent the rest of the day failing to find any other bugs.
3. One auditor found the “easy” bug in about four hours, and then stopped.
4. One auditor either found no bugs or else was on a team with the third auditor — the report is unclear.
See Chapter 7 of Yee’s report for these details.
I should emphasize that I personally consider these people to be extremely skilled. One possible conclusion that could be drawn from this experience is that a skilled backdoor-writer can defeat skilled auditors. This hypothesis holds that only accidental bugs can be reliably detected by auditors, not deliberately hidden bugs.
Anyway, as far as I understand it, the bugs you folks left in were accidental bugs that you then deliberately didn't fix, rather than bugs that you intentionally made hard to spot.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
https://blog.spideroak.com/20140220090004-responsibly-bringing-new-cryptogra...
I have no problem believing it is thus, but can't help wondering if there are any ways to mitigate it.
My mitigation would be to make auditing a default-deny rule, rather than a default-allow.
Security auditing needs to be a holistic analysis, starting by re-engaging with the requirements, verifying that the design is a sensible and minimal approach to addressing the requirements, and verifying that the implementation is a sensible, safe, auditable, version controlled, approach to the design.
If the auditor at any point says "Well, I wouldn't have *recommended* that you implement your JSON parsing in ad-hoc C with pointer arithmetic and poor and misleading comments, but I can't find any *bugs* so I guess it must be OK" then that is an immediate fail.
This is the default deny: we default to assuming the system is insecure, and any sign that this might be true results in a failure.
Versus the current auditing method of default-allow: we run the audit, and if no *concrete* exploits or bugs are found before the auditors run out of time, then we trumpet that the system "has passed its audit".
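To make the contrast concrete, here is a minimal Python sketch of the two postures applied to an untrusted JSON message (the schema and field names are hypothetical, purely for illustration). The default-deny parser rejects anything it does not explicitly recognise; the default-allow parser accepts whatever happens to parse.

import json

# Hypothetical whitelist: the only fields and values a message may contain.
ALLOWED_FIELDS = {"user": str, "action": str}
ALLOWED_ACTIONS = {"read", "write"}

def parse_default_deny(raw):
    """Default-deny: anything not explicitly permitted is an error."""
    msg = json.loads(raw)
    if not isinstance(msg, dict):
        raise ValueError("top-level value must be an object")
    if set(msg) != set(ALLOWED_FIELDS):
        raise ValueError("unexpected or missing fields: %r"
                         % (set(msg) ^ set(ALLOWED_FIELDS)))
    for field, expected_type in ALLOWED_FIELDS.items():
        if not isinstance(msg[field], expected_type):
            raise ValueError("field %r has the wrong type" % field)
    if msg["action"] not in ALLOWED_ACTIONS:
        raise ValueError("unknown action %r" % (msg["action"],))
    return msg

def parse_default_allow(raw):
    """Default-allow: accept whatever parses; unknown fields ride along silently."""
    return json.loads(raw)

# parse_default_deny('{"user": "alice", "action": "read"}')              -> accepted
# parse_default_deny('{"user": "alice", "action": "read", "admin": 1}')  -> ValueError

The point is not that the second function contains a bug you can point to today; it is that nothing in it states what the input is supposed to look like, so an auditor has nothing to judge it against.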
Only if the design is sane, the implementation is sane, the development team is following best practices and defensive coding strategies, with a cryptographically and procedurally audited edit trail (immutable git commit logs signed and committed to write-once media) in a development environment that is safe by default rather than risky by default ...
... then you *might* have a chance of catching the intentional backdoor inserted by the APT malware on your team member's workstation.
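The signature half of that edit trail is something you can check mechanically. Below is a minimal Python sketch (a hypothetical script, not an existing tool) that walks a repository's history with "git log" and exits non-zero unless every commit carries a good GPG signature; getting the verified log onto write-once media remains a procedural step outside the script.

#!/usr/bin/env python3
"""Fail unless every commit on the current branch has a good GPG signature.

A sketch of the "cryptographically audited edit trail" idea; archiving
the verified log to write-once media is a separate, procedural step.
"""
import subprocess
import sys

def commit_signature_statuses(repo="."):
    # %H = commit hash, %G? = signature status (G good, B bad, N none, ...)
    out = subprocess.run(
        ["git", "-C", repo, "log", "--format=%H %G?"],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        parts = line.split()
        # Very old git versions print nothing at all for unsigned commits.
        yield parts[0], (parts[1] if len(parts) > 1 else "N")

def main():
    bad = [(c, s) for c, s in commit_signature_statuses() if s != "G"]
    for commit, status in bad:
        print("commit %s: signature status %s" % (commit, status))
    sys.exit(1 if bad else 0)

if __name__ == "__main__":
    main()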
Current efforts in this direction fall *very* far short of the utopia I describe.
-andy
Your proposal would cause 99% of software currently in use to be rejected, and would drive development costs up so astronomically that they would rival medical research. It would also smother the hopes of all the people who see coding and computers as a way out of poverty; outsourcing work to Asia would grind to a halt, for example. I agree your proposal is good and doable, yet at a cost the world doesn't wish to pay. It probably wouldn't reduce innovation, though; it might even increase it. It would also filter out all the incompetents and posers, forcing them to adapt or to see flipping burgers at McDonald's with other eyes ...