Closed source more secure than open source

Anonymous nobody at remailer.privacy.at
Thu Jul 4 17:06:06 PDT 2002


Ross Anderson's paper at
http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/toulouse.pdf
has been mostly discussed for what it says about the TCPA.  But the
first part of the paper is equally interesting.

The author analyzes the security implications of software development
using open source vs closed source.  He sets up a mathematical model
for the number of bugs remaining after a certain amount of testing.
Based on this model, he finds that both open and closed source development
methodologies are equally secure.

However, his model relies on simplifications and assumptions which are
quite unrealistic.  A more careful analysis shows that closed source
is the superior development method.

Essentially, the model assumes that each bug has a certain independent
probability of being found by testers, its own "MTBF".  Based on this
model it turns out that the probability of a security failure after time
t is inversely proportional to t.
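This 1/t behavior can be illustrated with a small simulation (a sketch of the general idea, not code from the paper; the log-uniform spread of bug discovery rates is an assumption made here for illustration):

```python
import math
import random

random.seed(1)

# Illustrative sketch: each bug has an independent exponential
# discovery time with rate lam (1/MTBF), and rates are spread
# log-uniformly over several orders of magnitude (an assumption).
# The expected residual failure rate at time t is the summed hazard
# of the bugs not yet found, which then behaves like K/t.

N = 100_000                 # number of seeded bugs
LO, HI = 1e-4, 1e2          # assumed range of discovery rates
rates = [math.exp(random.uniform(math.log(LO), math.log(HI)))
         for _ in range(N)]

def residual_failure_rate(t):
    """Expected failure rate contributed by bugs still unfound at time t."""
    return sum(lam * math.exp(-lam * t) for lam in rates)

# Multiplying the testing time by ten divides the failure rate by
# roughly ten, i.e. the failure rate is roughly proportional to 1/t:
r1 = residual_failure_rate(1.0)
r10 = residual_failure_rate(10.0)
print(r1 / r10)   # ~10
```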

He then writes, "Consider now what happens if we make the tester's job
harder.  Suppose that after the initial alpha testing of the product,
all subsequent testing is done by beta testers who have no access to the
source code, but can only try out various combinations of inputs in an
attempt to cause a failure.  If this makes the tester's job on average
L times harder, so the bugs are L times more difficult to find... then
the probability that the system will fail the next test is..." inversely
proportional to t*L.  "In other words, the system's failure rate has just
dropped by a factor of L, just as we would expect."

The result is that, with access to the source code, bugs are L times
easier to find, but they are also removed L times faster.  This
corresponds to the open source model.  With closed source there is no
access to the code, so bugs are L times harder to find, but they are
removed L times more slowly.  The two effects cancel: open source and
closed source are equivalent in terms of how fast bugs are found, and
therefore both will be equally vulnerable to exploiters of security bugs.
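The paper's equivalence claim is just this cancellation in miniature.  As a sketch (K, L, and t are free illustrative parameters, not values from the paper):

```python
# Sketch of the equivalence argument, not code from the paper.
# Under the model, the bug discovery rate at time t is K/t when source
# is available and K/(L*t) when it is not -- and this applies to
# defenders and attackers alike.

K, L, t = 1.0, 10.0, 50.0        # illustrative values (assumptions)

open_tester,   open_attacker   = K / t,       K / t        # both read source
closed_tester, closed_attacker = K / (L * t), K / (L * t)  # neither does

# Attackers and defenders are slowed down by the same factor, so the
# attacker/defender ratio -- the system's effective exposure -- is
# identical in both cases:
assert open_attacker / open_tester == closed_attacker / closed_tester
```

This is exactly where the post's objection bites: the equivalence holds only if the factor of L applies equally to testers and attackers.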

There are several problems with this analysis.  First, it is really not
true that external beta testers will be slowed down significantly by
lack of access to source code.  For most programs, source code will be of
no benefit to external testers, because they don't know how to program.
Someone who is testing a spreadsheet or word processor will derive
virtually no benefit from access to the source code.  They will have
no choice but, as described above, to "try out various combinations of
inputs in an attempt to cause a failure."  This is true whether or not
the source code is available.

Therefore the rate at which (external) testers find bugs does not vary
by a factor of L between the open and closed source methodologies,
as assumed in the model.  In fact the rates will be approximately equal.

Another problem is that there are really three groups of parties involved
here: developers, external testers, and attackers.  Attackers, who are
trying to find breaks in software, are often highly motivated and skilled.
They can read code.  For them, the factor of L does come into play.
If they have access to the source code, they can find bugs L times faster
than if they don't, in accordance with the author's model.

The result is that once a product has gone into beta testing and then into
field installations, the rate of finding bugs by authorized testers will
be low, decreased by a factor of L, regardless of open or closed source.
But the rate of finding bugs by unauthorized, skilled attackers will be
affected by the availability of source.  Closed source will impair their
effectiveness by a factor of L, just as with the testers, so the model in
the paper is accurate in that case.  But open source benefits attackers;
they can find bugs at a rate of 1/t, while the authorized testers are
finding bugs at the slower rate of 1/(t*L).  The open source case will
leave more bugs available for attack, and the attackers can use the source
code to find them more quickly.  Therefore open source is more vulnerable
to attack, and closed source is the superior development method.
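The argument above can be cast as a toy race model (illustrative, under this post's assumptions rather than the paper's): external testers get no speedup from source in either case, while attackers do.  Each bug is a race between an attacker and a tester with exponential discovery times; a bug counts as exploitable if the attacker finds it first.

```python
import random

random.seed(2)

# Toy race model under this post's assumptions (not from the paper):
# external testers are never helped by source, attackers are.  With
# exponential discovery times, the attacker wins the race for a given
# bug with probability r_attack / (r_attack + r_fix).

L = 10.0            # assumed difficulty factor without source access
N = 20_000          # number of bugs simulated

def exploited_fraction(attacker_rate, tester_rate):
    """Fraction of bugs the attacker finds before the tester does."""
    wins = 0
    for _ in range(N):
        t_attack = random.expovariate(attacker_rate)
        t_fix = random.expovariate(tester_rate)
        if t_attack < t_fix:
            wins += 1
    return wins / N

# Testers work at rate 1/L in both cases; only the attacker changes.
open_src = exploited_fraction(1.0, 1.0 / L)        # attacker reads source
closed_src = exploited_fraction(1.0 / L, 1.0 / L)  # attacker does not

print(open_src, closed_src)   # roughly L/(L+1) vs 1/2
```

Under these assumptions the attacker wins the race for roughly L/(L+1) of the bugs when source is open, but only about half when it is closed, which is the asymmetry the post is describing.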

The one class of programs where this is not true would be those for which
the external testers benefit from having source available, which would
be programs where the testers are programmers; i.e., development tools.
For these programs the testers and attackers would both be affected in the
same way by availability of source.  But for most programs, attackers will
gain much more by having source available than the beta testers would.





More information about the cypherpunks-legacy mailing list