[Cryptography] The Case for Formal Verification

coderman coderman at gmail.com
Fri Aug 30 12:19:39 PDT 2013


On Thu, Aug 29, 2013 at 11:50 PM, Eugen Leitl <eugen at leitl.org> wrote:
> ...
> Much of what has changed is proof technology, and it is a
> technology. The tools for doing formal verification are now, for the
> first time, just barely usable for real work on interesting programs,
> and getting better all the time...
>
> There are usually several arguments against formal verification:...
> 1) We don't know what to specify, so what help does proving a buggy
> specification do us?

this is the crux; the place where the human meets the machine is
always a large, evolving, complicated attack surface, e.g. usability
and design-level requirements and behavior.
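
a toy Coq sketch of that problem (my illustration, not anyone's real
spec): the proof below goes through, the specification is satisfied,
and the program is still useless, because the human forgot to say
"the output is a permutation of the input".

Require Import List.
Import ListNotations.

(* the spec: "the output is sorted" ... and that is all it says *)
Inductive sorted : list nat -> Prop :=
  | sorted_nil  : sorted []
  | sorted_one  : forall x : nat, sorted [x]
  | sorted_cons : forall (x y : nat) (l : list nat),
      x <= y -> sorted (y :: l) -> sorted (x :: y :: l).

(* a "sort" that satisfies it perfectly *)
Definition bogus_sort (l : list nat) : list nat := [].

Theorem bogus_sort_meets_spec : forall l, sorted (bogus_sort l).
Proof. intros l. unfold bogus_sort. apply sorted_nil. Qed.

(* the machine checked everything it was asked to check; the missing
   permutation clause is a human problem, not a prover problem *)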

in the order of precedence of security risks, much bigger holes must
be addressed before formal verification provides a return on the time
invested.

if you're building verified compilers, or microkernels, or core
libraries, this doesn't apply to you. ;)

i want seL4 in a Qubes isolation model, a formally verified CryptoBox, ...


> 2) Who would bother writing a proof vastly larger than their program?

this makes no sense to me; it's patently absurd on its face.

by the same logic, why do we test code with clusters larger than the
build systems that produced it? nobody counts that against testing.

why do we exist? ...

the utility of quality measures cannot be judged on superficial
metrics like "size in GB" or "processor hours".  anyone using this
argument as a disqualifier is not qualified to make such an
assessment.
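
for a sense of proportion, a toy Coq sketch (mine, purely
illustrative): the program below is a handful of lines, the proofs are
most of the file, and the proofs are still the part you would least
want to throw away.

Require Import List.
Import ListNotations.

(* the program *)
Fixpoint rev' {A} (l : list A) : list A :=
  match l with
  | [] => []
  | x :: xs => rev' xs ++ [x]
  end.

(* the proofs: already longer than the program, and this is a toy *)
Lemma rev'_app_distr : forall A (l1 l2 : list A),
  rev' (l1 ++ l2) = rev' l2 ++ rev' l1.
Proof.
  induction l1 as [| x xs IH]; intros; simpl.
  - rewrite app_nil_r. reflexivity.
  - rewrite IH, app_assoc. reflexivity.
Qed.

Lemma rev'_involutive : forall A (l : list A), rev' (rev' l) = l.
Proof.
  induction l as [| x xs IH]; simpl.
  - reflexivity.
  - rewrite rev'_app_distr. simpl. rewrite IH. reflexivity.
Qed.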



> 3) You can't prove programs of interesting size anyway.
> ...
> For 3 ("you can't prove anything big enough to be useful!"), the Quark
> project:
> http://goto.ucsd.edu/quark/
> showed you don't need to prove a program of interesting size. You can
> defend millions of lines of buggy code with a "software firewall" made
> of formally verified code.

this is a great approach and fits in well with other
security-through-isolation, defense-in-depth measures.  combining the
strengths of formal verification at critical core points within a
system, and then leveraging that robust core to isolate, constrain,
and mediate between higher-level applications, seems the most
reasonable and tractable path, with the best return on time invested.
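
a toy sketch of that shape (my own illustration, not Quark's actual
code): a tiny mediator at the boundary with one proved property that
everything behind it gets to rely on.  the message type and policy
bound are hypothetical placeholders.

Require Import List Arith.
Import ListNotations.

Section Firewall.
  Definition msg := list nat.      (* stand-in for a wire message *)
  Variable max_len : nat.          (* hypothetical policy bound *)

  (* drop anything over the limit before it reaches the protected side *)
  Definition mediate (inbox : list msg) : list msg :=
    filter (fun m => Nat.leb (length m) max_len) inbox.

  (* the one theorem the rest of the system is allowed to assume *)
  Lemma mediate_sound : forall (inbox : list msg) (m : msg),
    In m (mediate inbox) -> length m <= max_len.
  Proof.
    unfold mediate.
    induction inbox as [| x rest IH]; simpl; intros m.
    - intros Hin. contradiction.
    - destruct (Nat.leb (length x) max_len) eqn:Hx; intros Hin.
      + destruct Hin as [Heq | Hin'].
        * subst m. exact (proj1 (Nat.leb_le _ _) Hx).
        * apply IH. exact Hin'.
      + apply IH. exact Hin.
  Qed.
End Firewall.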

if i had a wishlist it would be:
- 64bit CompCert (not just 64bit int support :)
- verified virtualization isolation model (an seL4-based, Qubes-like system?)
- verified crypto_sign_edwards25519sha512batch and
crypto_sign_nistp256sha512ecdsa implementations (a sketch of the kind
of top-level statement i mean follows after this list)
- verified compression, regexp, and other common libraries that are
useful at the security boundary between isolated domains or
applications.
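
for the crypto_sign items, here is a sketch (entirely hypothetical
names) of the kind of top-level statement i would want a verified
implementation to export.  this is functional correctness only;
unforgeability is a separate, harder cryptographic argument.

Module Type SIG_SCHEME.
  Parameters pubkey seckey message signature : Type.
  (* "these keys were generated together" *)
  Parameter keypair_ok : pubkey -> seckey -> Prop.
  Parameter sign   : seckey -> message -> signature.
  Parameter verify : pubkey -> message -> signature -> bool.
  Axiom verify_sign : forall pk sk m,
    keypair_ok pk sk -> verify pk m (sign sk m) = true.
End SIG_SCHEME.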

some of the work done for Quark might be partially applicable to some
of the above, but most of its verification is browser-specific
(related to things like messaging and tab isolation, proper cookie
handling, socket communication, etc.).

where's the github for Coq kernels?




> So, if you're interested, how do you get started doing such things?
> ...
> Coq is, sadly, needlessly hard for the beginner. It has poor
> documentation, bad error messages and bad error behavior. These are
> not inherent problems, they're problems just with this instance of
> things -- people could build better if there was enough interest, and
> I hope that as these technologies become more popular people will
> build far better versions of the tools.

some other good resources:

ProofWeb: http://prover.cs.ru.nl/login.php
 particularly the courses available in its online interface to Coq.

frama-c: http://frama-c.com/
 i just came across this; it looks quite useful, but i have not used
it in any depth yet...



> ... we need more people in the world experimenting with
> verification if we're going to get truly trustworthy software going
> forward.


Lemma stating_the_obvious :
 (* formal verification as a useful component of defense in depth is
self-evident *)
 True.
Proof. trivial. Qed.


“The future is here. It's just not widely distributed yet.”
-- Gibson



