Alleged "microkernel mathematically proven to be bug free"

Lodewijk andré de la porte l at odewijk.nl
Tue Jul 29 09:08:04 PDT 2014


>
> Care to elaborate? I can mathematically prove any theorem, then
> implement it in C and segfault the kernel. How is this different?


These proofs tie a formal specification to the actual code. Look into the
sort of proofs they use. Basically they're saying "the code does exactly
what we said it has to do; even if we accidentally told it to go skynet on
us, it will do that perfectly". Things like segfaults would pretty clearly
fall outside what was specified.
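
To give the flavour (a toy sketch in Lean; hedged: this is nothing like
their real proof system or the actual kernel code, just the shape of
"spec as a proposition, implementation as code, theorem tying the two
together"):

    -- Specification: the result is the larger of the two inputs.
    def spec (a b r : Nat) : Prop := r ≥ a ∧ r ≥ b ∧ (r = a ∨ r = b)

    -- Implementation: the thing a bug could hide in.
    def impl (a b : Nat) : Nat := if a ≥ b then a else b

    -- Machine-checked proof that impl meets spec for *all* inputs.
    theorem impl_meets_spec (a b : Nat) : spec a b (impl a b) := by
      unfold spec impl
      by_cases h : a ≥ b
      · rw [if_pos h]; omega
      · rw [if_neg h]; omega

An implementation that can crash for some input simply would not admit
a proof like that against a specification that covers all inputs.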

But of course the specification itself may contain bugs, in the sense that
it differs from what the humans actually intended. That's also something
they address; just read about their proofs and proof system.

(This is all IIRC from the last time I checked it, which is quite a while
ago.)




2014-07-29 16:14 GMT+02:00 Georgi Guninski <guninski at guninski.com>:

> On Mon, Jul 28, 2014 at 07:04:05PM +0200, Lodewijk andré de la porte wrote:
> > 2014-07-28 18:35 GMT+02:00 Georgi Guninski <guninski at guninski.com>:
> >
> > > > I have a rough outline for a "cloud computing grid" that verifiably:
> > >
> > > verifiably? _really_ provable stuff is a very scarce resource IMHO
> > > (especially in crypto. do you need crypto? Do you need P \ne NP?).
> > >
> >
> > Yes, it needs crypto. I said verifiably because "proving" that an
> > untamperable hardware box (IOW: you can only plug it in, nothing else)
> > is what the spec says it is, is, well, impossible.
> >
>
> Don't think what you wish is possible, but
> don't mind you dreaming :)
>

Aha, but it is possible :)

Dream along if you like!

> IIRC _theoretically_ it is possible to run
> a trusted program on untrusted hardware
> with the catch that you need some additional
> "proof" - this was a paper I can't find at the
> moment.
>

I think you're talking about trusted computing, and I'm not talking about
trusted computing :). I've seen some examples of trusted computing, but the
operation set was never Turing complete. It might work, actually, but I
don't see how.
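
That said, the general flavour of "an answer from an untrusted machine
plus something cheap to check" is easy to show with a toy in Python
(this is not the paper Georgi means, and nothing like real
verifiable-computation schemes, just the idea):

    # Toy only: some problems are expensive to solve but cheap to verify,
    # so an untrusted machine can hand back the answer as its own "proof".

    def untrusted_worker(n):
        # Expensive part, run on hardware we do not trust: find a factor.
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        raise ValueError("no nontrivial factors")

    def cheap_check(n, answer):
        # Cheap part, run locally: checking a factorisation is one multiply.
        p, q = answer
        return 1 < p < n and 1 < q < n and p * q == n

    n = 1009 * 2003
    answer = untrusted_worker(n)   # done "out there"
    print(cheap_check(n, answer))  # done at home -> True

You only ever trust the cheap local check, never the machine that did
the expensive work.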

The system I'm talking about is not just a software system. It also
involves hardware. The hardware will be created centrally, in batches,
probably by a third party and certainly checked by several independent
observers to make sure it conforms to an open-sourced specification.

The devices from those batches can be plugged in and maintained by
untrusted parties. IOW: they're shipped off to individuals who'll plug them
in anywhere. The devices are tamper-proofed to such an extent that this is
possible without exposing the sensitive data kept on them. The software
that runs on the devices (as verified by the third parties present at
manufacture) cannot snoop on the programs running on the system.

The devices network together, likely over the Internet but maybe also over
radio. They onion-route all traffic, etc. etc. Programs run on them, but
the devices cannot know what those programs are doing.
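
To sketch just the onion-layering part in code (a toy in Python, using
Fernet from the `cryptography` package purely as a stand-in cipher; the
real devices, ciphers and framing are of course not specified here, and
build_onion/peel_layer are made-up names):

    import json
    from cryptography.fernet import Fernet

    def build_onion(route_keys, next_hops, payload):
        # Wrap `payload` so each relay, peeling one layer, sees only its
        # own next hop. Encrypt from the exit inward, so the first
        # relay's layer ends up outermost.
        onion = payload
        for key, next_hop in reversed(list(zip(route_keys, next_hops))):
            layer = json.dumps({"next": next_hop,
                                "data": onion.decode()}).encode()
            onion = Fernet(key).encrypt(layer)
        return onion

    def peel_layer(key, onion):
        # What one relay does: strip its layer, learn the next hop,
        # pass the rest on untouched.
        layer = json.loads(Fernet(key).decrypt(onion).decode())
        return layer["next"], layer["data"].encode()

    keys = [Fernet.generate_key() for _ in range(3)]   # one key per relay
    hops = ["relay-2", "relay-3", "exit"]              # forwarding targets
    onion = build_onion(keys, hops, b"hello grid")
    for k in keys:
        nxt, onion = peel_layer(k, onion)
        print("forward to:", nxt)
    print("payload:", onion)

Each relay can decrypt only its own layer, so it learns the next hop
and nothing about the payload or the rest of the route.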

They're not updateable. They will all slowly fail, and to make sure there
are still places where you can run your software, there are generations: a
next batch forms a different computing grid.

I'm repeating bits and pieces, but I want to be sure you can see that it's
a hardware design made useful by a software solution to fault-proofing and
program "location".