Re: [Cryptography] "/dev/random is not robust"
----- Forwarded message from John Gilmore <gnu@toad.com> -----

Date: Mon, 14 Oct 2013 18:53:58 -0700
From: John Gilmore <gnu@toad.com>
To: dj@deadhat.com
Cc: tytso@mit.edu, Cryptography List <cryptography@metzdowd.com>, cryptography@randombit.net
Subject: Re: [Cryptography] "/dev/random is not robust"
Message-Id: <201310150153.r9F1rwqQ011302@new.toad.com>

I'll be the first to admit that I don't understand this paper. I'm just an engineer, not a mathematician. But it looks to me as though the authors are academics who invent their own construction for a random number generator, prove that /dev/random is not the same as their construction, suggest that /dev/random be revised to use it, and then show how much faster their construction is. All in all, it reads like a pitch for their method, not a serious critique of /dev/random.

They labeled one of their security notions "robustness", but it doesn't mean what you think the word means. It's defined by a mess of Greek letters like this:

  Theorem 2. Let n > m and ℓ, γ* be integers. Assume that G : {0,1}^m → {0,1}^(n+ℓ) is a deterministic (t, ε_prg)-pseudorandom generator. Let G = (setup, refresh, next) be defined as above. Then G is a ((t', q_D, q_R, q_S), γ*, ε)-robust PRNG with input, where t' ≈ t and ε = q_R (2 ε_prg + q_D ε_ext + 2^(-n+1)), as long as γ* ≥ m + 2 log(1/ε_ext) + 1 and n ≥ m + 2 log(1/ε_ext) + log(q_D) + 1.

Yeah, what he said!

Nowhere do they seem to show that /dev/random is actually insecure. What they seem to show is that it does not meet the "robustness" criterion that they arbitrarily picked for their own construction.

Their key test is on pages 23-24, and it begins with "After a state compromise, A (the adversary) knows all parameters." The comparison STARTS from the premise that the enemy has already figured out all of the hidden internal state of /dev/random. The weakness they then point out is that, when new incoming randomness arrives with mis-estimated entropy, /dev/random doesn't necessarily recover over time from having had its entire internal state somehow compromised.

This is not very close to what "/dev/random is not robust" means in English. Nor is it close to what others might assume the paper claims, e.g. "/dev/random is not safe to use".

John

PS: After attending a few crypto conferences, I realized that academic pressures tend to encourage people to write incomprehensible papers, apparently because if nobody reading a paper can understand it, the authors look like geniuses. But when presenting at a conference, if nobody in the crowd can understand the slides, they look like idiots. So the key to understanding somebody's incomprehensible paper is to read the slides and watch the talk, 80% of which is often the background needed to decode the notation invented in the paper. I haven't seen either the slides or the talk for this paper.

----- End forwarded message -----
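To make the setting of that pages 23-24 test concrete, here is a minimal toy sketch. The SHA-256-based mix() and output() functions and the 16-bit event size are assumptions made for illustration, not the kernel's actual pool code or the paper's construction; the point is only that an attacker who already knows the state, and who can see some output, can stay synchronized as long as each refresh adds a small, guessable amount of entropy.

# Toy model only: mix() and output() are SHA-256 stand-ins, not the kernel's
# actual pool arithmetic; the 16-bit "event" is an assumed entropy estimate.
import hashlib
import os
import random

def mix(state: bytes, event: bytes) -> bytes:
    # stand-in for the pool's mixing/refresh step
    return hashlib.sha256(state + event).digest()

def output(state: bytes) -> bytes:
    # stand-in for extracting visible output bits from the pool
    return hashlib.sha256(b"out" + state).digest()[:8]

ENTROPY_BITS = 16                  # each refresh is worth only 2^16 guesses

state = os.urandom(32)             # the generator's secret internal state...
known_state = state                # ...which the adversary has already learned

# One low-entropy refresh happens, and one output leaks (e.g. in a protocol).
event = random.getrandbits(ENTROPY_BITS).to_bytes(2, "big")
state = mix(state, event)
leak = output(state)

# The adversary brute-forces the small refresh and re-learns the new state.
for guess in range(2 ** ENTROPY_BITS):
    candidate = mix(known_state, guess.to_bytes(2, "big"))
    if output(candidate) == leak:
        known_state = candidate
        break

assert known_state == state        # the compromise survives the "fresh" entropy

The paper's robustness notion asks, roughly, that once enough genuine entropy has been fed in, this kind of tracking must become infeasible, even if the entropy estimates along the way were wrong.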
Seems like their standard is from Barak & Halevi, “A model and architecture for pseudo-random generation with applications to /dev/random.”

– Resilience: an adversary must not be able to predict future PRNG outputs even if he can influence the entropy source used to initialize or refresh the internal state of the PRNG;

– Forward security (resp. backward security): an adversary must not be able to predict past (resp. future) outputs even if he can compromise the internal state of the PRNG.

Seems like a lovely defense-in-depth goal, but I'd really like to read the original paper to figure out why I should care.
--
~j
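For orientation, these notions are stated over a "PRNG with input" given by the triple (setup, refresh, next) that appears in Theorem 2 above. The following sketch of that interface uses hash-based state transitions that are purely illustrative assumptions (neither the paper's construction nor the kernel's code); it is only meant to pin down which call each notion constrains.

# Illustrative only: the SHA-256 transitions are assumptions for this sketch.
import hashlib
import os

class ToyPRNG:
    def setup(self) -> None:
        self.state = os.urandom(32)        # initialize the internal state

    def refresh(self, inp: bytes) -> None:
        # Resilience concerns this call: outputs should stay unpredictable
        # even if the adversary influences the inputs mixed in here.
        self.state = hashlib.sha256(self.state + inp).digest()

    def next(self, nbytes: int = 32) -> bytes:
        # Forward security concerns this call: the state is overwritten with
        # a one-way image of itself, so a later state compromise does not
        # reveal outputs that were produced earlier.
        out = hashlib.sha256(b"output" + self.state).digest()[:nbytes]
        self.state = hashlib.sha256(b"update" + self.state).digest()
        return out

# Backward security (and the paper's stronger "robustness") is about the other
# direction: after self.state is compromised, enough honest refresh() calls
# should make future next() outputs unpredictable again.
g = ToyPRNG()
g.setup()
g.refresh(os.urandom(16))
print(g.next(16).hex())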
Their objectives are not that bad, though you do have to wonder whether "adversary got root and copied your RNG state" is that realistic - if they can do that, won't they just install a rootkit or take your private key out of RAM? But Schneier and co. designed Yarrow, and its newer replacement Fortuna, to those design objectives. I'd guess there is open source for that. If it's clean code, maybe someone should have a go at beating Torvalds over the head with logic until he replaces it. It does make a difference, even for bootstrap issues.

Years ago at ZKS I tried watching /dev/random while generating entropy, and there were clearly some not-nice things about the way inputs were added. For example, if you start a server or boot a machine with a known initialized state, and the machine is talking on the network releasing RNG output, and the entropy - what little there is - is added incrementally so that you can brute-force the gap (e.g. at most 20 bits of entropy added in one lump), then you can infer what the new RNG state is. If the rate of entropy addition is low enough, and the computer is too chatty with RNG outputs over the network (e.g. maybe it gives you some in TCP sequence numbers, or on demand just by hitting it on various ports), you can then retain knowledge of its state perpetually.

That's what Fortuna (and before it Yarrow) are about: you collect entropy into a lump before you add it, to protect against that problem.

Adam
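A rough sketch of that "collect a lump, then reseed" idea follows. The 128-bit threshold, the single pool, and the SHA-256 mixing are simplifications assumed here; real Fortuna spreads events over 32 round-robin pools precisely so it never has to trust the entropy estimates.

# Simplified accumulate-then-reseed sketch (not real Fortuna or Yarrow code).
import hashlib
import os

RESEED_BITS = 128                  # don't touch the key until ~2^128 guesses

class Accumulator:
    def __init__(self):
        self.key = os.urandom(32)  # generator key (assume it was compromised)
        self.pool = hashlib.sha256()
        self.est_bits = 0

    def add_event(self, data: bytes, est_bits: int) -> None:
        # Events are hashed into the pool but NOT mixed into the key yet,
        # so an observer of outputs gets nothing to brute-force against.
        self.pool.update(data)
        self.est_bits += est_bits
        if self.est_bits >= RESEED_BITS:
            # One big catastrophic reseed: the attacker must now guess the
            # whole accumulated lump at once, not 20 bits at a time.
            self.key = hashlib.sha256(self.key + self.pool.digest()).digest()
            self.pool = hashlib.sha256()
            self.est_bits = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(self.key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

acc = Accumulator()
for _ in range(10):
    acc.add_event(os.urandom(4), est_bits=16)   # small events pool up silently
print(acc.random_bytes(16).hex())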
participants (3)
- Adam Back
- Eugen Leitl
- Joseph Holsten