Alleged "microkernel mathematically proven to be bug free"
news: http://www.theregister.co.uk/2014/07/28/aussie_droneprotecting_hackerdetecti...
site: http://sel4.systems/

AFAICT they used Isabelle for the proofs. Coq sucks a lot (not counting its developers).
<3

I have a rough outline for a "cloud computing grid" that verifiably:

* Exists on many hardware nodes that can be run by untrusted third parties, as the hardware is "tamper evident" and the "cloud computing grid" is self-healing and self-distributing.
* Can run programs in a way that their geographic location is unknowable - Onion routing, amongst other features, causes programs to seemingly run on every node in the network (a toy sketch follows at the end of this message).
* Cannot inspect what runs on it - the microkernel programs cannot read what the programs are doing, nor can programs contact one another unless they are instances (that potentially run on another hardware node).
* Is fault tolerant, automatically scaling, etc.
* Can still be addressed from the Internet or other networks.

You could run something like your Bitcoin wallet in it. It's really geodistributed, automatically fault tolerant, etc. There's no way to tamper with the programs running, except as specified before a cluster was created.

Right now you're always stuck with an insane amount of local law, hardware issues (and features), fault tolerance, accessibility, etc. With this you could run a program in a nigh-perfect vacuum. Your program would be networked, but not in any specific place. Nobody can know what it does, but it can compute and it can communicate as it wishes.

Depending on the grid (grids have to be manufactured in one go, checked by several third parties, distributed, set and forget) there's some pricing scheme or whatnot.

But it absolutely requires a verified microkernel. I'm *very* excited to see that we're making progress towards it.

If you're a secure kernel developer, Onion routing expert, fantastic C coder or technology investor, and this triggered your interest, please contact me!
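To give a flavour of the Onion-routing point above, here's a toy sketch in C. The XOR "cipher" and the fixed per-hop keys are stand-ins for real authenticated encryption and key negotiation; this only shows the wrap-and-peel structure (intermediate hops see only gibberish):

#include <stdio.h>

#define MSG_LEN 28
#define HOPS 3

/* Toy stand-in cipher: XOR with one key byte per layer. */
static void xor_layer(unsigned char *buf, int len, unsigned char key)
{
    for (int i = 0; i < len; i++)
        buf[i] ^= key;
}

int main(void)
{
    unsigned char keys[HOPS] = {0x5a, 0xc3, 0x17};  /* per-hop keys */
    unsigned char packet[MSG_LEN] = "run my program, tell no one";

    /* The sender wraps one layer per hop, in reverse hop order. */
    for (int hop = HOPS - 1; hop >= 0; hop--)
        xor_layer(packet, MSG_LEN, keys[hop]);

    /* Each relay peels exactly one layer with its own key and learns
     * only the next hop; only the final node sees the plaintext. */
    for (int hop = 0; hop < HOPS; hop++) {
        xor_layer(packet, MSG_LEN, keys[hop]);
        printf("after hop %d: %s\n", hop, (char *)packet);
    }
    return 0;
}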
I have a rough outline for a "cloud computing grid" that verifiably:
verifiably? _really_ provable stuff is a very scarce resource IMHO (especially in crypto. do you need crypto? Do you need P \ne NP?).
But it absolutely requires a verified microkernel. I'm *very* excited to see that we're making progress towards it.
Is it really progress? There are assumption-free proofs of False in most formal proof systems.

Design flaws like \lor instead of \land pass the verification process and later are considered design flaws, not bugs in the proof, IMHO.

Knuth quote: "Beware of bugs in the above code; I have only proved it correct, not tried it."
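For illustration, a toy of the \lor-for-\land problem in Lean (the names are invented): the prover happily certifies the too-weak spec, so the flaw survives verification.

-- Design flaw: && was intended, || was written.
def authorized (hasKey isAdmin : Bool) : Bool :=
  hasKey || isAdmin

-- This "correctness" theorem verifies, yet admins without keys get in:
theorem admin_alone_gets_in : authorized false true = true := rfl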
2014-07-28 18:35 GMT+02:00 Georgi Guninski <guninski@guninski.com>:
I have a rough outline for a "cloud computing grid" that verifiably:
verifiably? _really_ provable stuff is a very scarce resource IMHO (especially in crypto. do you need crypto? Do you need P \ne NP?).
Yes, it needs crypto. I said verifiably because "proving" that an untamperable hardware box (IOW: you can only plug it in, nothing else) is what the spec says it is, is, well, impossible. You'll depend on some observers, audits and, where possible, physical impossibilities, to show that it indeed is what it is. It's not proof, but with trust you can verify it. It's as good as it gets, but no better. I don't think that'd surprise anyone.

Part of the design involves generations. Eventually you'll have a generation for which something is questionable in a certain way, and nothing to do about it. Generations will slowly die as the hardware breaks and the network becomes too small (has too few resources) to uphold anonymity and fault-tolerance guarantees. Perhaps a generation will be hybrid, with black boxes mailed to people and satellites in space. That generation would be expected to deteriorate fast in the first 100 years, and then hardly deteriorate at all for many after that.

All these different grids will have some sort of resource pricing scheme, which may guarantee a certain runtime, or may be subject to monthly renewals at new market prices or whatnot. It's a very interesting part of the design.

But yes, the anonymity profits /greatly/ from crypto. Interestingly, I think it could well work without crypto, whereas hardly any other design could. It'd lose a lot of convenience though! And P = NP doesn't mean crypto is useless, just less nice. At the very least there'll be OTPs! Then there may be (quantum) coupled RNGs that can generate secure OTPs. I think there'll always be ways the newer generations of this idea can continue to function.
But it absolutely requires a verified microkernel. I'm *very* excited to see that we're making progress towards it.
Is it really progress? There are assumption-free proofs of False in most formal proof systems.

Design flaws like \lor instead of \land pass the verification process and later are considered design flaws, not bugs in the proof, IMHO.
Knuth quote: "Beware of bugs in the above code; I have only proved it correct, not tried it."
It's progress, because now all we need is to specify safely, or find a way to prove the specifications themselves. The rest is, somewhat and perhaps only experimentally, operational. It'll depend on how well we use it, but it is a step in the right direction!
Theory-wise, a Turing-complete system can emulate any other Turing-complete system unless the emulatee has an ace up its sleeve; something unique to the hardware that cannot be simulated without breaking authenticity. This is probably only *theoretically* possible with quantum computation, but it's *practically* possible with tamperproof hardware: TCMs used for good instead of evil, maybe?

On 28 July 2014 18:04:05 GMT+01:00, "Lodewijk andré de la porte" <l@odewijk.nl> wrote:
2014-07-28 18:35 GMT+02:00 Georgi Guninski <guninski@guninski.com>:
I have a rough outline for a "cloud computing grid" that verifiably:
verifiably? _really_ provable stuff is a very scarce resource IMHO (especially in crypto. do you need crypto? Do you need P \ne NP?).
Yes, it needs crypto. I said verifiably because "proving" that an untamperable hardware box (IOW: you can only plug it in, nothing else) is what the spec says it is, is, well, impossible.
You'll depend on some observers, audits and, where possible, physical impossibilities, to show that it indeed is what it is. It's not proof, but with trust you can verify it.
It's as good as it gets, but no better. I don't think that'd surprise anyone.
Part of the design involves generations. Eventually you'll have a generation for which something is questionable in a certain way, and nothing to do about it. Generations will slowly die as the hardware breaks and the network becomes too small (has too few resources) to uphold anonymity and fault-tolerance guarantees.

Perhaps a generation will be hybrid, with black boxes mailed to people and satellites in space. That generation would be expected to deteriorate fast in the first 100 years, and then hardly deteriorate at all for many after that.

All these different grids will have some sort of resource pricing scheme, which may guarantee a certain runtime, or may be subject to monthly renewals at new market prices or whatnot. It's a very interesting part of the design.

But yes, the anonymity profits /greatly/ from crypto. Interestingly, I think it could well work without crypto, whereas hardly any other design could. It'd lose a lot of convenience though! And P = NP doesn't mean crypto is useless, just less nice. At the very least there'll be OTPs! Then there may be (quantum) coupled RNGs that can generate secure OTPs. I think there'll always be ways the newer generations of this idea can continue to function.
But it absolutely requires a verified microkernel. I'm *very* excited to see that we're making progress towards it.
Is it really progress? There are assumption-free proofs of False in most formal proof systems.

Design flaws like \lor instead of \land pass the verification process and later are considered design flaws, not bugs in the proof, IMHO.
Knuth quote: "Beware of bugs in the above code; I have only proved it correct, not tried it."
It's progress, because now all we need is to specify safely, or find a way to prove the specifications themselves. The rest is, somewhat and perhaps only experimentally, operational. It'll depend on how well we use it, but it is a step in the right direction!
On Mon, Jul 28, 2014 at 07:04:05PM +0200, Lodewijk andré de la porte wrote:
2014-07-28 18:35 GMT+02:00 Georgi Guninski <guninski@guninski.com>:
I have a rough outline for a "cloud computing grid" that verifiably:
verifiably? _really_ provable stuff is a very scarce resource IMHO (especially in crypto. do you need crypto? Do you need P \ne NP?).
Yes, it needs crypto. I said verifiably because "proving" that an untamperable hardware box (IOW: you can only plug it in, nothing else) is what the spec says it is, is, well, impossible.
I don't think what you wish for is possible, but I don't mind you dreaming :)

IIRC _theoretically_ it is possible to run a trusted program on untrusted hardware, with the catch that you need some additional "proof" - this was in a paper I can't find at the moment.
Care to elaborate? I can mathematically prove any theorem, then implement it in C and segfault the kernel. How is this different?
These proofs tie a meta-specification to the actual code. Look into the sort of proofs they use. Basically they're saying "the code does what we said it has to. Even if we accidentally told it to go skynet on us, it will do so perfectly". Things like segfaults would pretty clearly not be as specified.

But of course the specification may also contain bugs, as opposed to the human-intended specification. This is also something they explain; just read about their proofs and proof system. (This is all IIRC from the last time I checked, which is quite a while ago.)
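As a toy of what "tying a meta-specification to the code" means - in Lean rather than the Isabelle/HOL they actually use, with invented definitions, and assuming Lean 4's built-in omega tactic:

-- Abstract spec: what the function must do.
def spec (n : Nat) : Nat := n + n

-- Concrete implementation: how it is actually computed.
def impl (n : Nat) : Nat := 2 * n

-- The refinement theorem: the implementation meets the spec. seL4
-- proves statements of this shape through several layers, down to
-- the semantics of the C code.
theorem impl_refines_spec (n : Nat) : impl n = spec n := by
  unfold impl spec
  omega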
2014-07-28 18:35 GMT+02:00 Georgi Guninski <guninski@guninski.com>:

On Mon, Jul 28, 2014 at 07:04:05PM +0200, Lodewijk andré de la porte wrote:
2014-07-28 18:35 GMT+02:00 Georgi Guninski <guninski@guninski.com>:
I have a rough outline for a "cloud computing grid" that verifiably:
verifiably? _really_ provable stuff is a very scarce resource IMHO (especially in crypto. do you need crypto? Do you need P \ne NP?).
Yes, it needs crypto. I said verifiably because "proving" that an untamperable hardware box (IOW: you can only plug it in, nothing else) is what the spec says it is, is, well, impossible.
I don't think what you wish for is possible, but I don't mind you dreaming :)
Aha, but it is possible :) Dream along if you like!

IIRC _theoretically_ it is possible to run a trusted program on untrusted hardware, with the catch that you need some additional "proof" - this was in a paper I can't find at the moment.
I think you're talking about trusted computing, and I'm not talking about trusted computing :). I've seen some examples of trusted computing, but the operation set was never Turing-complete. It might work, actually, but I don't see how.

The system I'm talking about is not just a software system. It also involves hardware. The hardware will be created centrally, in batches. Probably by a third party, and certainly checked by several independent observers to make sure it goes according to an open-sourced specification. The devices from those batches can be plugged in and maintained by untrusted parties. IOW: they're shipped off to individuals who'll plug them in anywhere. The devices are tamper-proofed to such an extent that that is possible without exposing the sensitive data kept on them.

The software that (as verified by the third parties present at manufacture) runs on the devices cannot snoop on the programs running on the system. The devices network together, likely over the Internet but maybe also radio. They Onion route all traffic. Etc. etc. Programs run on them, but the devices cannot know what a program is doing. They're not updateable. They will all slowly fail, and to make sure there are still places you can run your software, there are generations. So a next batch forms a different computing grid.

I'm repeating bits and pieces, but I want to be sure that you can see that it's a hardware design made useful by a software solution to fault-proofness and program "location".
These proofs tie a meta-specification to the actual code. Look into the sort of proofs they use. Basically they're saying "the code does what we said it has to. Even if we accidentally told it to go skynet on us, it will do so perfectly". Things like segfaults would pretty clearly not be as specified.
By browsing the proofs, I have the impression they don't depend on the C code (and Isabelle parsing C appears non-trivial, though possible).

If I backdoor the C code of the kernel, will the proof fail?
On Jul 29, 2014 6:14 PM, "Georgi Guninski" <guninski@guninski.com> wrote:
These proofs tie a meta-specification to the actual code. Look into the sort of proofs they use. Basically they're saying "the code does what we said it has to. Even if we accidentally told it to go skynet on us, it will do so perfectly". Things like segfaults would pretty clearly not be as specified.
By browsing the proofs, I have the impression they don't depend on the C code (and Isabelle parsing C appears non-trivial, though possible).
If I backdoor the C code of the kernel, will the proof fail?
Isn't that the whole idea? It would really change the thing up if there's a backdoor!
On Tue, Jul 29, 2014 at 06:36:19PM +0200, Lodewijk andré de la porte wrote:
On Jul 29, 2014 6:14 PM, "Georgi Guninski" <guninski@guninski.com> wrote:
These proofs tie a meta-specification to the actual code. Look into the sort of proofs they use. Basically they're saying "the code does what we said it has to. Even if we accidentally told it to go skynet on us, it will do so perfectly". Things like segfaults would pretty clearly not be as specified.
By browsing the proofs, I have the impression they don't depend on the C code (and Isabelle parsing C appears non-trivial, though possible).
If I backdoor the C code of the kernel, will the proof fail?
Isn't that the whole idea? It would really change the thing up if there's a backdoor!
/*
 * Copyright 2014 under AGPLv3, Troy Benjegerdes, <add your name>
 * as a derivative work of http://q3u.be/patent/q3ube
 * claim 9, A process for producing a new physical computer hardware design
 * utilizing a software program running on an existing design that contains all
 * software and specifications to produce a derivative physical design.
 * (recursive hardware)
 */

Well, technically, I suspect the proof would never complete running, because the hash signature(s) of the backdoored kernel will not match the signature etched in silicon on the open-source hardware, and it will immediately wipe any crypto keys when there is a kernel hash check failure. Of course, if you fix any of the backdoors that existed when the silicon mask was generated, the hash check will fail too. C'est la vie.

/*
 * end copyright notice
 */

You'll need to hurry up and get your backdoors into kernels, compilers, and EDA tools before we get this implemented. I think you've got 5-15 years yet before that window closes. I predict various intelligence agencies will eventually work together to validate the trusted open-source hardware, either because they actually understand the value, or because activist hacker groups rewrite their legislative funding mandates using existing backdoors.

Lodewijk, please write this stuff down (preferably as patent claims) so we have the original open-source hardware prior art documented and protected before some corporate IP pirates find it.
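In toy C, the boot check I have in mind looks something like this. ROM_KERNEL_HASH and the FNV-1a hash are placeholders; real silicon would etch a proper cryptographic digest:

#include <stdint.h>
#include <stddef.h>

/* Placeholder for the value etched into the silicon at fabrication. */
#define ROM_KERNEL_HASH 0x1234567890abcdefULL

/* Toy FNV-1a hash, standing in for a real cryptographic digest. */
static uint64_t fnv1a(const uint8_t *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* Zero the key store before refusing to boot a modified kernel. */
static void wipe_keys(volatile uint8_t *keys, size_t len)
{
    for (size_t i = 0; i < len; i++)
        keys[i] = 0;
}

/* Returns 0 and continues boot only if the kernel image matches the
 * etched hash; any patch - backdoor and bugfix alike - wipes the keys. */
int verify_kernel_or_wipe(const uint8_t *image, size_t image_len,
                          volatile uint8_t *key_store, size_t key_len)
{
    if (fnv1a(image, image_len) != ROM_KERNEL_HASH) {
        wipe_keys(key_store, key_len);
        return -1;
    }
    return 0;
}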
On Tue, 2014-07-29 at 19:14 +0300, Georgi Guninski wrote:
These proofs tie a meta-specification to the actual code. Look into the sort of proofs they use. Basically they're saying "the code does what we said it has to. Even if we accidentally told it to go skynet on us, it will do so perfectly". Things like segfaults would pretty clearly not be as specified.
By browsing the proofs, I have the impression they don't depend on the C code (and Isabelle parsing C appears non-trivial, though possible).
If I backdoor the C code of the kernel, will the proof fail?
In short, as others have said, you're wrong about the lack of dependency. See: http://www.nicta.com.au/pub?doc=7371
On Mon, Jul 28, 2014 at 06:05:47PM +0200, Lodewijk andré de la porte wrote:
<3
I have a rough outline for a "cloud computing grid" that verifiably:
* Exists on many hardware nodes that can be run by untrusted third parties, as the hardware is "tamper evident" and the "cloud computing grid" is self-healing and self-distributing.
* Can run programs in a way that their geographic location is unknowable - Onion routing, amongst other features, causes programs to seemingly run on every node in the network.
* Cannot inspect what runs on it - the microkernel programs cannot read what the programs are doing, nor can programs contact one another unless they are instances (that potentially run on another hardware node).
* Is fault tolerant, automatically scaling, etc.
* Can still be addressed from the Internet or other networks.
You could run something like your Bitcoin wallet in it. It's really geodistributed, automatically fault tolerant, etc. There's no way to tamper with the programs running, except as specified before a cluster was created.
Right now you're always stuck with an insane amount of local law, hardware issues (and features), fault tolerance, accessibility, etc.
With this you could run a program in a nigh-perfect vacuum. Your program would be networked, but not in any specific place. Nobody can know what it does, but it can compute and it can communicate as it wishes. Depending on the grid (grids have to be manufactured in one go, checked by several third parties, distributed, set and forget) there's some pricing scheme or whatnot.

But it absolutely requires a verified microkernel. I'm *very* excited to see that we're making progress towards it.

If you're a secure kernel developer, Onion routing expert, fantastic C coder or technology investor, and this triggered your interest, please contact me!
I would be very interested in contract work for running Linux on L4 microkernels. You can see my experience here: https://lkml.org/lkml/1999/3/7/59

But if you give me the code, and I can emulate the hardware with qemu, I can extract keys/code/whatever from anything running inside the 'secure' microkernel.

If you want the 'tamper evident' hardware, it must be open-source hardware, and you will need on the order of at least a million USD to fabricate the silicon, and then another million to offer cash prizes at DEFCON for anyone who can hack the silicon & microkernel. So if you can secure 2 to 10 million, let's get it done.

Otherwise, talking about the theory of a formal-methods 'microkernel' running on today's buggy, broken, pre-trojaned hardware is unethical and dangerous. Do you know what your network card firmware is doing? Neither do I, but I bet the NSA and Chinese intelligence will have updates for you when the 'secure' kernel comes out.
This is probably only *theoretically* possible with quantum computation, but it's *practically* possible with tamperproof hardware: TCMs used for good instead of evil, maybe?
What I'm planning doesn't have much in the way of uniqueness of hardware. The only unique thing is.. wait.. shouldn't I patent this first?

The whole hardware device will be as trusted as any TPM would be. And probably more trustworthy than any TPM that exists now. 't Is a tricky thing. The programs run in a vacuum; even they can't know where they run. That changes the purpose of a "TPM" significantly.

2014-07-28 19:53 GMT+02:00 Troy Benjegerdes <hozer@hozed.org>:
On Mon, Jul 28, 2014 at 06:05:47PM +0200, Lodewijk andré de la porte wrote:
<3
If you're a secure kernel developer, Onion routing expert, fantastic C coder or technology investor, and this triggered your interest, please contact me!
I would be very interested in contract work for running Linux on L4 microkernels.
There will be a need for running software on the secure platform, but I suspect resources will be so pricey that Linux will be too fat. I'm not really sure about that either way, so don't pin me down on it. In keeping with microkernels, it would be nice to make wrappers for portions of the Linux APIs, also to ease porting.
You can see my experience here https://lkml.org/lkml/1999/3/7/59
But if you give me the code, and I can emulate the hardware with qemu, I can extract keys/code/whatever from anything running inside the 'secure' microkernel.
I'm not entirely sure what you mean by this. At least in my design there is absolutely no unique (platform) information available; that includes platform/application keys. If you need them, you'll have to make them in your own userspace.
If you want the 'tamper evident' hardware it must be open-source hardware and you will need on the order of at least a million USD to fabricate the silicon,
From what I heard from those that make mining hardware, about half a million is enough. But I'm afraid trustable silicon is only part of the problem. The trustable silicon will already require some sort of minimalism to allow for validation, but it gets worse. To prevent data observation, the rest of the casing will have to be able to self-destruct (think harddrives coated in thermite), and therefore self-observe, in known and unknown ways. It also mustn't radiate patterns, which will be exceedingly hard to prove. It will involve Faraday cages and the like. Such hardware design has been attempted before, but it is probably still less effort than the software.
and then another million to offer cash prizes at DEFCON for anyone that can hack the silicon & microkernel.
I was thinking of a few millions thrown at companies banks could trust, to validate proofs, check circuits, observe fabrication, etc. and eventually undersign responsibility for the validity of the end result.
So if you can secure 2 to 10 million, let's get it done.
I don't think that'd be enough. It requires a small horde of experts and a significant support infrastructure. On top of that, the idea is exceedingly hard to convey. The idea that the cloud can be more secure than an in-house dedicated server is very foreign. It was to me, when I thought it up. The upside is that the income model is very simple. Just charge for resources on the computing grids.
Otherwise talking about the theory of a formal-methods 'microkernel' running on today's buggy broken pre-trojaned hardware is unethical and dangerous.
It's not *that* unethical and dangerous, it just wouldn't work! How can you claim to stay afloat when you're underwater? The hardware that makes up the grid will probably be a collection of simplistic components bought in VHDL form. These simple, trusted components will likely be in a shroud of non-vital components that do some practical stuff, like connecting to the Internet. Fresh ground to find those stupid patents, too.
Do you know what your network card firmware is doing? Neither do I, but I bet the NSA and Chinese intelligence will have updates for you when the 'secure' kernel comes out.
Are you really prepared to see how deep the rabbit hole goes <http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/intel-amt-design-remote-management-even-when-systems-powered-off.html> ?
On Mon, Jul 28, 2014 at 08:37:44PM +0200, Lodewijk andré de la porte wrote:
This is probably only *theoretically* possible with quantum computation, but it's *practically* possible with tamperproof hardware: TCMs used for good instead of evil, maybe?
What I'm planning doesn't have much in the way of uniqueness of hardware. The only unique thing is.. wait.. shouldn't I patent this first?
Yes. Look at http://q3u.be/patent/q3ube/ for an example, and if you want to use AGPLv3 I will send you the source files.
The whole hardware device will be as trusted as any TPM would be. And probably more trustworthy than any TPM that exists now. 't Is a tricky thing. The programs run in a vacuum; even they can't know where they run. That changes the purpose of a "TPM" significantly.
2014-07-28 19:53 GMT+02:00 Troy Benjegerdes <hozer@hozed.org>:
On Mon, Jul 28, 2014 at 06:05:47PM +0200, Lodewijk andré de la porte wrote:
<3
If you're a secure kernel developer, Onion routing expert, fantastic C coder or technology investor, and this triggered your interest, please contact me!
I would be very interested in contract work for running Linux on L4 microkernels.
There will be a need for running software on the secure platform, but I suspect resources will be so pricey that Linux will be too fat. I'm not really sure about that either way, so don't pin me down on it. In keeping with microkernels, it would be nice to make wrappers for portions of the Linux APIs, also to ease porting.
You can see my experience here https://lkml.org/lkml/1999/3/7/59
But if you give me the code, and I can emulate the hardware with qemu, I can extract keys/code/whatever from anything running inside the 'secure' microkernel.
I'm not entirely sure what you mean by this. At least in my design there is absolutely no unique (platform) information available; that includes platform/application keys. If you need them, you'll have to make them in your own userspace.
If you want the 'tamper evident' hardware it must be open-source hardware and you will need on the order of at least a million USD to fabricate the silicon,
From what I heard from those that make mining hardware, about half a million is enough. But I'm afraid trustable silicon is only part of the problem. The trustable silicon will already require some sort of minimalism to allow for validation, but it gets worse. To prevent data observation, the rest of the casing will have to be able to self-destruct (think harddrives coated in thermite), and therefore self-observe, in known and unknown ways. It also mustn't radiate patterns, which will be exceedingly hard to prove. It will involve Faraday cages and the like. Such hardware design has been attempted before, but it is probably still less effort than the software.
and then another million to offer cash prizes at DEFCON for anyone that can hack the silicon & microkernel.
I was thinking of a few millions thrown at companies banks could trust, to validate proofs, check circuits, observe fabrication, etc. and eventually undersign responsibility for the validity of the end result.
So if you can secure 2 to 10 million, let's get it done.
I don't think that'd be enough. It requires a small horde of experts and a significant support infrastructure. On top of that, the idea is exceedingly hard to convey. The idea that the cloud can be more secure than an in-house dedicated server is very foreign. It was to me, when I thought it up.
The upside is that the income model is very simple. Just charge for resources on the computing grids.
Otherwise talking about the theory of a formal-methods 'microkernel' running on today's buggy broken pre-trojaned hardware is unethical and dangerous.
It's not *that* unethical and dangerous, it just wouldn't work! How can you claim to stay afloat when you're underwater? The hardware that makes up the grid will probably be a collection of simplistic components bought in VHDL form. These simple, trusted components will likely be in a shroud of non-vital components that do some practical stuff, like connecting to the Internet. Fresh ground to find those stupid patents, too.
Do you know what your network card firmware is doing? Neither do I, but I bet the NSA and Chinese intelligence will have updates for you when the 'secure' kernel comes out.
Are you really prepared to see how deep the rabbit hole goes <http://www.intel.com/content/www/us/en/intelligent-systems/intel-technology/intel-amt-design-remote-management-even-when-systems-powered-off.html> ?
I got far enough that I decided it was far easier to say the hell with it. If I stop worrying and live my life assuming someone is always watching and *I do not care*, the paranoia dissipates rather quickly.

What I actually care about, though, is having the open source hardware so I can actually *fix* old equipment, instead of reverse-engineering all the 35 layers of remote backdoors that break all the time.
On Mon, Jul 28, 2014 at 06:26:50PM +0300, Georgi Guninski wrote:
news: http://www.theregister.co.uk/2014/07/28/aussie_droneprotecting_hackerdetecti...
site: http://sel4.systems/
I'm on their announcement list. Good news; the question is who's going to pick it up and build a distro around it. Anyone here use Qubes OS?
AFAICT they used Isabelle for the proofs.
Coq sucks a lot (not counting its developers).
I didn't spend much time on this, just browsed some proofs. The proofs appear not to depend on the C code, AFAICT. Would trojanizing the C code invalidate the proofs?

The Haskell stuff appears not to contain all the info about the C code.

On Mon, Jul 28, 2014 at 06:26:50PM +0300, Georgi Guninski wrote:
news: http://www.theregister.co.uk/2014/07/28/aussie_droneprotecting_hackerdetecti... site: http://sel4.systems/
AFAICT they used Isabelle for the proofs.
Coq sucks a lot (not counting its developers).
That seems suspect. Most obvious security flaws in C are due to misimplemented C, right? Not the algorithms themselves? So the kernel can be theoretically secure but still be packed with buffer overflows and pointer errors?

I'll wait until someone redoes it in a language designed for safe systems programming, like Rust. :)

On 29/07/14 13:41, Georgi Guninski wrote:
I didn't spend much time on this, just browsed some proofs. The proofs appear not to depend on the C code, AFAICT. Would trojanizing the C code invalidate the proofs?

The Haskell stuff appears not to contain all the info about the C code.
On Mon, Jul 28, 2014 at 06:26:50PM +0300, Georgi Guninski wrote:
news: http://www.theregister.co.uk/2014/07/28/aussie_droneprotecting_hackerdetecti... site: http://sel4.systems/
AFAICT they used Isabelle for the proofs.
Coq sucks a lot (not counting its developers).
On 29/07/14 14:55, Lodewijk andré de la porte wrote:
2014-07-29 15:03 GMT+02:00 Cathal Garvey <cathalgarvey@cathalgarvey.me>:
So the kernel can be theoretically secure but still be packed with buffer overflows and pointer errors?
No.
Care to elaborate? I can mathematically prove any theorem, then implement it in C and segfault the kernel. How is this different?
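Concretely, something like this toy C: the "algorithm" (copy a greeting) is trivially correct on paper, and the program still stomps its own stack.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[8];
    /* Correct algorithm, broken implementation: the source string
     * does not fit, so strcpy writes far past the end of buf. */
    strcpy(buf, "provably correct, practically broken");
    printf("%s\n", buf);
    return 0;
}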
On Tue, Jul 29, 2014 at 03:55:38PM +0200, Lodewijk andré de la porte wrote:
2014-07-29 15:03 GMT+02:00 Cathal Garvey <cathalgarvey@cathalgarvey.me>:
So the kernel can be theoretically secure but still be packed with buffer overflows and pointer errors?
No.
Yes, I have such a kernel and such a proof ;)
When you can write an interrupt or page fault handler in rust, let me know.

On Tue, Jul 29, 2014 at 02:03:28PM +0100, Cathal Garvey wrote:
That seems suspect. Most obvious security flaws in C are due to misimplemented C, right? Not the algorithms themselves? So the kernel can be theoretically secure but still be packed with buffer overflows and pointer errors?
I'll wait until someone redoes it in a language designed for safe systems programming, like Rust. :)
On 29/07/14 13:41, Georgi Guninski wrote:
I didn't spend much time on this, just browsed some proofs. The proofs appear not to depend on the C code, AFAICT. Would trojanizing the C code invalidate the proofs?

The Haskell stuff appears not to contain all the info about the C code.
On Mon, Jul 28, 2014 at 06:26:50PM +0300, Georgi Guninski wrote:
news: http://www.theregister.co.uk/2014/07/28/aussie_droneprotecting_hackerdetecti... site: http://sel4.systems/
AFAICT they used Isabelle for the proofs.
Coq sucks a lot (not counting its developers).
Lol, I haven't even written "hello world" in it yet, but I believe this is exactly the sort of future use-case Rust is for. Perhaps I'm mistaken, but surely someone's dreamt up a more modern take on C with memory safety that can be used instead?

On 29/07/14 15:31, Troy Benjegerdes wrote:
When you can write an interrupt or page fault handler in rust, let me know.
On Tue, Jul 29, 2014 at 02:03:28PM +0100, Cathal Garvey wrote:
That seems suspect. Most obvious security flaws in C are due to misimplemented C, right? Not the algorithms themselves? So the kernel can be theoretically secure but still be packed with buffer overflows and pointer errors?
I'll wait until someone redoes it in a language designed for safe systems programming, like Rust. :)
On 29/07/14 13:41, Georgi Guninski wrote:
I didn't spend much time on this, just browsed some proofs. The proofs appear not to depend on the C code, AFAICT. Would trojanizing the C code invalidate the proofs?

The Haskell stuff appears not to contain all the info about the C code.
On Mon, Jul 28, 2014 at 06:26:50PM +0300, Georgi Guninski wrote:
news: http://www.theregister.co.uk/2014/07/28/aussie_droneprotecting_hackerdetecti... site: http://sel4.systems/
AFAICT they used Isabelle for the proofs.
Coq sucks a lot (not counting its developers).
Go read http://jvns.ca/blog/2014/03/12/the-rust-os-story/ and start writing your own malloc.

If we had an L4-compatible kernel written in Rust, that would be pretty damn cool. Now I just need to find someone to bill to play with this...
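The "write your own malloc" step can start as small as a bump allocator - toy C, fixed static arena, never frees, and it assumes align is a power of two:

#include <stddef.h>
#include <stdint.h>

#define HEAP_SIZE (64 * 1024)

static uint8_t heap[HEAP_SIZE];
static size_t next_free;

/* Hand out chunks from a static arena; good enough for early kernel
 * bring-up, before a real allocator exists. Never frees. */
void *bump_alloc(size_t size, size_t align)
{
    size_t p = (next_free + align - 1) & ~(align - 1);  /* round up */
    if (p + size > HEAP_SIZE)
        return NULL;                                    /* arena exhausted */
    next_free = p + size;
    return &heap[p];
}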
On Tue, Jul 29, 2014 at 03:58:25PM +0100, Cathal Garvey wrote:

Lol, I haven't even written "hello world" in it yet, but I believe this is exactly the sort of future use-case Rust is for. Perhaps I'm mistaken, but surely someone's dreamt up a more modern take on C with memory safety that can be used instead?
On 29/07/14 15:31, Troy Benjegerdes wrote:
When you can write an interrupt or page fault handler in rust, let me know.
On Tue, Jul 29, 2014 at 02:03:28PM +0100, Cathal Garvey wrote:
That seems suspect. Most obvious security flaws in C are due to misimplemented C, right? Not the algorithms themselves? So the kernel can be theoretically secure but still be packed with buffer overflows and pointer errors?
I'll wait until someone redoes it in a language designed for safe systems programming, like Rust. :)
On 29/07/14 13:41, Georgi Guninski wrote:
I didn't spend much time on this, just browsed some proofs. The proofs appear not to depend on the C code, AFAICT. Would trojanizing the C code invalidate the proofs?

The Haskell stuff appears not to contain all the info about the C code.
On Mon, Jul 28, 2014 at 06:26:50PM +0300, Georgi Guninski wrote:
news: http://www.theregister.co.uk/2014/07/28/aussie_droneprotecting_hackerdetecti... site: http://sel4.systems/
AFAICT they used Isabelle for the proofs.
Coq sucks a lot (not counting its developers).
participants (7)
- Cathal (Phone)
- Cathal Garvey
- Eugen Leitl
- Georgi Guninski
- Lodewijk andré de la porte
- Ted Smith
- Troy Benjegerdes