bitcoin as a global medium of exchange

Riad S. Wahby rsw at jfet.org
Thu Nov 28 12:19:44 PST 2013


Sean Lynch <seanl at literati.org> wrote:
> On Mon, Nov 25 2013, coderman wrote:
> > surely there is prior art?
> 
> I just remembered, there is in fact prior art, though it's state of the
> art, as in "nearly practical." They're called SNARKs: Succinct
> Non-interactive ARguments of Knowledge. You can take any computation and
> annotate it sufficiently that whoever runs the computation can generate
> a (constant size) proof that they did it correctly in (nearly) constant
> time. Andrew Miller posted about it on the Tahoe-LAFS mailing list. See
> http://eprint.iacr.org/2013/507.pdf for one implementation.
>
> I'm not sure if you could use this for a Bitcoin-style problem, though,
> since I have no idea if the difficulty can be adjusted smoothly. It's
> interesting that we can now take ALL computational problems and turn
> them into the "easy to verify" variety, however.

Interesting to see this mentioned here. Mike Walfish and Andrew Blumberg
have written a nice survey on the work in this area:
    http://eccc.hpi-web.de/report/2013/165
Justin Thaler provides another perspective:
    http://mybiasedcoin.blogspot.com/2013/09/guest-post-by-justin-thaler-mini-survey.html

I can also add a little context to this conversation---I've actually
done an independent re-implementation of the work of Ben-Sasson et al
that Sean linked above.

At a high level, my take is that this area has huge potential, but
neither this work nor other projects in the area (summarized below) are
really practical---yet. The overhead for the party constructing the
proof is at least 3 orders of magnitude more than the cost of running
the computation directly. The work of Ben-Sasson et al costs more like 6
orders of magnitude! (In principle, though, they are paying this price
to achieve greater computational generality.)

The work required of the party verifying the computation is also 
non-trivial: while *checking* the answer is quick, these systems (with
one exception) require pre-processing work on the computation being
verified, and the cost of this is several orders of magnitude larger
than simply computing the result directly. (Note, however, that this
cost can be amortized by outsourcing the same computation over many
different inputs.)
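
To make the amortization point concrete, here is a back-of-the-envelope
model in Python. The specific constants are made up for illustration
(they fall within the ranges quoted above); only the shape of the
calculation matters.

    # Illustrative amortization model; the constants are assumptions,
    # not measurements from any particular system.
    native_cost = 1.0                  # one run of the computation, normalized
    setup_cost  = 1e4 * native_cost    # one-time pre-processing (assumed)
    check_cost  = 1e-3 * native_cost   # per-instance answer check (assumed)

    # Outsourcing n instances beats local execution once
    #   setup_cost + n * check_cost < n * native_cost
    break_even = setup_cost / (native_cost - check_cost)
    print(f"setup amortized after ~{break_even:.0f} instances")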

But there is some real hope here: the rate of progress is impressive
(two years ago, overhead was closer to 20 orders of magnitude than to
3!), and there are contexts where the extra work is worth the cost.
Perhaps one of these is a cryptocurrency that relies on proof of
"useful" work.

With regard to smooth difficulty adjustment: since the goal of most of
these projects is to encode arbitrary computations, the problems of
difficulty adjustment and proof of work become orthogonal. The
underlying computation can be tuned however you like, and the proof
machinery is indifferent to that choice.
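
As a concrete (and entirely hypothetical) illustration of that
orthogonality, here is a Python sketch in which the difficulty knob is
just a size parameter on the computation, while the argument system is
treated as a black box. The prove/verify stubs below stand in for any
of the systems described later in this message; they are not a real
API.

    # Sketch only: `prove` and `verify` are stand-ins for a real argument
    # system; a deployment would replace them with actual crypto.
    def prove(f, args, result):
        return b"short proof"           # placeholder

    def verify(f, args, result, proof):
        return proof == b"short proof"  # placeholder; ~constant-time check

    def useful_work(n):
        # Placeholder computation whose cost grows with n; difficulty is
        # adjusted (smoothly) by picking n, with no change to the proof layer.
        return sum(i * i for i in range(n)) % (2**61 - 1)

    n = 10**6                           # the difficulty knob
    result = useful_work(n)
    proof = prove(useful_work, (n,), result)
    assert verify(useful_work, (n,), result, proof)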

For those wanting a bit more detail:

Broadly speaking, there are four research groups working on this stuff:
  - Ben-Sasson et al at Technion/Tel Aviv/MIT
        http://scipr-lab.org/
  - Parno et al at Microsoft Research
        http://research.microsoft.com/apps/pubs/default.aspx?id=180286
  - Thaler et al at Harvard
        http://people.seas.harvard.edu/~jthaler/PracticalVerifiedComputation.html
  - Walfish et al at UT Austin/NYU (full disclosure: I work with these guys)
        http://cs.utexas.edu/pepper/

The work of Thaler et al is somewhat distinct from the other three
because, while it is extremely efficient for certain computations,
it is not general: the efficiency improvements it offers are limited
to computations with a regular underlying structure. However, when
this requirement is met, it has very low overhead compared to the
other three systems, and it involves no costly pre-processing.

The three other groups have built systems that compile from a subset 
of C to a form whose result can be encoded into a probabilistically
checkable proof. The resulting PCP is huge, so in all three systems the
verifier never receives the proof itself. Rather, the proof is queried
at a few locations, and the responses are used for verification. The
method of querying is one distinguishing factor between these systems.
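
A toy example of that access pattern: the "proof" below is a table the
verifier never reads in full; instead it samples a few random entries
and checks them. (Real PCPs need error-correcting encodings to make
such spot checks sound; this sketch elides all of that and is only
meant to show why the full proof never has to travel.)

    import random

    # Toy constraint system: each triple (i, j, k) asserts
    # assignment[i] + assignment[j] == k.
    constraints = [(0, 1, 7), (1, 2, 9), (2, 3, 11)]
    assignment  = [3, 4, 5, 6]   # the prover's proof (huge, in practice)

    def spot_check(num_queries=2):
        for i, j, k in random.sample(constraints, num_queries):
            if assignment[i] + assignment[j] != k:
                return False
        return True

    print(spot_check())          # True: the sampled constraints hold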

Ben-Sasson et al and Parno et al share common ancestry in the work of
Gennaro et al, also at Microsoft Research.
    http://research.microsoft.com/apps/pubs/default.aspx?id=180285
In both cases, the query/response protocol takes the form of a
noninteractive argument. Basically, the queries are encrypted and
bundled into the
description of the computation ahead of time, after which the
computation can be run by providing an input and requesting an answer.
These systems also support public verifiability, allowing the answer to
be checked by anyone holding the public verification key.
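
The resulting interface looks roughly like the following skeleton (the
names are mine, not any system's actual API); each system fills in the
bodies with its own crypto:

    def keygen(computation):
        """One-time setup: the PCP queries are encrypted and bundled with
        the computation. This is the expensive pre-processing step
        discussed above. Returns (evaluation_key, verification_key);
        the latter can be published."""
        raise NotImplementedError

    def prove(evaluation_key, x):
        """Run by the untrusted worker: returns the output y together
        with a short proof of correct execution on input x."""
        raise NotImplementedError

    def verify(verification_key, x, y, proof):
        """Cheap check, run by anyone holding the verification key;
        much faster than re-executing the computation."""
        raise NotImplementedError

    # Usage pattern: set up once, then prove/verify many times.
    #   ek, vk = keygen(my_computation)   # expensive, amortized
    #   y, pi  = prove(ek, x)             # worker, per input
    #   assert verify(vk, x, y, pi)       # anyone with vk, cheap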

The work of Ben-Sasson et al achieves more generality by encoding
computations as the execution of a virtual microprocessor, then
verifying correct execution of the processor. This system supports 
full C semantics and achieves an efficient abstraction for verified RAM
(again, at a penalty of 3 *additional* orders of magnitude beyond the
other systems).
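
A toy version of that encoding: execution becomes a trace of machine
states, and correctness of the whole run reduces to a purely local,
per-step check that each state follows from its predecessor. (Their
virtual machine, TinyRAM, is of course far richer than the two-field
toy below, and the per-step check is compiled to arithmetic constraints
rather than run directly.)

    # Toy machine: state = (program counter, accumulator).
    PROGRAM = [("add", 3), ("add", 4), ("halt", 0)]

    def step(state):
        pc, acc = state
        op, arg = PROGRAM[pc]
        if op == "add":
            return (pc + 1, acc + arg)
        return (pc, acc)               # halt: state is a fixed point

    def trace_is_valid(trace):
        # The prover supplies the whole trace; verifying it is a set of
        # independent, local checks, one per step.
        return all(step(trace[t]) == trace[t + 1]
                   for t in range(len(trace) - 1))

    print(trace_is_valid([(0, 0), (1, 3), (2, 7), (2, 7)]))   # True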

The UT/NYU group have also adapted a simplification of Gennaro et al,
but their system really rests upon refinements of the efficient argument
systems of Ishai et al.
    http://www.cs.ucla.edu/~rafail/PUBLIC/79.pdf
In this case, verification involves two separate interactions, one that
establishes a commitment to the proof and one that queries it. Though it
requires interaction, the advantage of this approach is that it is
cryptographically simpler. Concretely, the crypto is one of the most
expensive parts of the system, so this simplification results in good
performance relative to the other systems.
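
A toy version of that two-phase shape, with the crypto drastically
simplified: the prover first commits to every entry of the proof
vector, and only then does the verifier choose its queries, so the
prover cannot tailor the proof to them. (The real protocol commits in a
linearly homomorphic way so the verifier can make linear queries over
the vector; this sketch supports only point queries, which is enough to
show the flow.)

    import hashlib, os, random

    proof_vector = [17, 2, 9, 40, 3]   # stand-in for the (huge) proof

    # Phase 1: prover commits to each entry (blinded hash commitments).
    blinds = [os.urandom(16) for _ in proof_vector]
    commitments = [hashlib.sha256(b + str(v).encode()).hexdigest()
                   for b, v in zip(blinds, proof_vector)]

    # Phase 2: verifier queries positions; prover opens; verifier checks
    # each opening against the earlier commitment.
    for i in random.sample(range(len(proof_vector)), 2):
        blind, value = blinds[i], proof_vector[i]   # prover's opening
        assert hashlib.sha256(blind + str(value).encode()).hexdigest() \
               == commitments[i]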

The UT/NYU group have also developed verifiable storage abstractions,
including the ability to verify computations (in zero knowledge) on
hidden state for which you hold a cryptographic commitment.
    http://eprint.iacr.org/2013/356

-=rsw


