[Cryptography] /dev/random is not robust

Eugen Leitl eugen at leitl.org
Thu Oct 17 01:23:58 PDT 2013


----- Forwarded message from Theodore Ts'o <tytso at mit.edu> -----

Date: Wed, 16 Oct 2013 15:11:12 -0400
From: Theodore Ts'o <tytso at mit.edu>
To: Jerry Leichter <leichter at lrw.com>
Cc: Sandy Harris <sandyinchina at gmail.com>, Cryptography <cryptography at metzdowd.com>
Subject: Re: [Cryptography] /dev/random is not robust
Message-ID: <20131016191112.GD11604 at thunk.org>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Wed, Oct 16, 2013 at 01:58:31PM -0400, Jerry Leichter wrote:
> > None of this matters much if the enemy does not already have root on
> > your system.
> Backwards security is a prerequisite for building PFS, among other
> things.  Without it, if an attacker seizes your system, he can (in
> principle, but we're considering *potential capabilities*, not what
> we know how to do in detail today) "run the random number generator
> backwards"....

Actually, the scenario you describe above is called "forward security"
in the referenced paper.  That's granted.  It's also not a problem
with Linux's /dev/random, so what you are complaining about is a straw
man argument.

What I think folks (including myself) are much less convinced by is
the importance of worrying about the other attacks detailed in the
paper, where the attacker is presumed to be able to control the
entropy sources to some arbitrary extent.  In the case of Linux's
/dev/random, that means the adversary would be able to control the
exact timing of interrupts in the system in such a way that the
entropy estimators would be fooled.
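
To make that concrete, the credit for an interrupt is derived from
timing deltas.  Below is a stripped-down sketch of that style of delta
estimator (not the kernel's actual code; the names and the 11-bit cap
are illustrative).  Perfectly regular, attacker-chosen timings earn no
credit, but attacker-*known* jitter still does -- which is exactly the
hole the paper's model pokes at:

#include <stdint.h>

static uint64_t last_time;
static int64_t last_delta, last_delta2;

/* Return an entropy credit, in bits, for one timing sample. */
static int credit_for_sample(uint64_t now)
{
    int64_t delta  = (int64_t)(now - last_time);
    int64_t delta2 = delta - last_delta;
    int64_t delta3 = delta2 - last_delta2;

    last_time   = now;
    last_delta  = delta;
    last_delta2 = delta2;

    /* Use the smallest absolute first/second/third-order delta as the
     * "surprise" in this sample. */
    if (delta  < 0) delta  = -delta;
    if (delta2 < 0) delta2 = -delta2;
    if (delta3 < 0) delta3 = -delta3;
    if (delta2 < delta) delta = delta2;
    if (delta3 < delta) delta = delta3;

    /* Credit roughly log2 of that delta, capped. */
    int bits = 0;
    while (delta > 1 && bits < 11) {
        delta >>= 1;
        bits++;
    }
    return bits;
}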

> I'm amazed and disturbed by the nature of the responses to this
> paper.  They are *indistinguishable* from the typical PR blather we
> get from every commercial operation out there when someone reports a
> potential attack: It's just theory, it can't be translated into
> practice, we have multiple layers of security so even if one of them
> can be attacked the others still protect you, yadda yadda yadda.

So if someone would like to make a concrete suggestion, I'm certainly
willing to entertain a specific proposal.  I'll note first of all that
FreeBSD's use of Yarrow still uses an entropy estimator, and so it
doesn't answer the paper's complaint that "all entropy estimators are
crap, and we shouldn't trust any RNG that uses an entropy estimator".

I've also said that I might be willing to add some arbitrary threshold
where reads from /dev/urandom will block until we estimate that we've
received a certain amount of entropy --- although I'd first like to
make sure we gather as much entropy as possible, so we don't block for
too long.  There are some real practical problems on certain embedded
platforms where we don't have access to high-resolution CPU counters,
which as far as I'm concerned is a higher priority.
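
(For what it's worth, userspace can already approximate that behavior
on kernels of this era: block on a one-byte read from /dev/random --
which waits until the estimator has credited some entropy -- and then
switch to /dev/urandom.  A rough sketch, with the obvious caveat that
the gate is still the very estimator the paper distrusts:)

#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

static int read_random_after_seed(unsigned char *buf, size_t len)
{
    unsigned char probe;

    int rfd = open("/dev/random", O_RDONLY);     /* blocks on read ... */
    if (rfd < 0)
        return -1;
    if (read(rfd, &probe, 1) != 1) {             /* ... until entropy is credited */
        close(rfd);
        return -1;
    }
    close(rfd);

    int ufd = open("/dev/urandom", O_RDONLY);    /* then draw freely */
    if (ufd < 0)
        return -1;
    ssize_t n = read(ufd, buf, len);
    close(ufd);
    return (n == (ssize_t)len) ? 0 : -1;
}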

This, however, is still going to cause academics to kvetch about how
we are hopeless, since they disbelieve in entropy estimators and
insist on an attack model where the attacker can arbitrarily influence
interrupt timing.

Furthermore, in the area of the cold start problem, which is the much
more real and practical problem, Fortuna doesn't help, since in a
cold start scenario it has no idea when it's safe to allow someone to
draw from Fortuna --- doing that would rely on an entropy estimator,
which the academics disbelieve in!
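
To spell that out: Fortuna only starts producing output after its
first reseed, and the trigger for that first reseed is just a raw byte
count on pool 0 -- itself a crude entropy estimate.  A sketch of that
gate (the 64-byte MinPoolSize is from the Ferguson/Schneier
description; the structure and names here are illustrative):

#define MIN_POOL_SIZE 64          /* MinPoolSize, in bytes */

struct fortuna {
    unsigned long pool0_len;      /* raw event bytes fed into pool 0 */
    int seeded;
};

/* May we answer a request for random bytes yet?  At cold start the
 * only thing standing between "no" and "yes" is this byte count. */
static int fortuna_may_generate(struct fortuna *f)
{
    if (!f->seeded && f->pool0_len >= MIN_POOL_SIZE)
        f->seeded = 1;            /* first reseed unlocks the generator */
    return f->seeded;
}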


> There's always been a strain of anti-academy bias - even
> anti-intellectualism - in the OSS community.  It's highly
> undesirable.  There's good academic work, and there's bad academic
> work.  Even within the domain of good academic work, some is
> of practical interest today, and some isn't.  (And some that isn't
> of interest today may be tomorrow.)

I prefer to call it "healthy skepticism".  As I've said elsewhere, I
recall the huge focus on formal verification of computer programs, and
the very large number of trees that were killed over that particular
academic fad.  If we consider the use of techniques that have
_actually_ improved security: valgrind, ASLR, static code analysis,
how many of them have actually come from academics?  The only one I
can think of is Coverity, and even there, most of the work was done in
a commercial setting, *not* an academic one.

(In another area, the academic focus on real-time scheduling (at least
five years ago, when I was focusing on that as the technical lead of a
Real-Time Linux effort at IBM in support of the US Navy's DDX-1000
next generation destroyer) was completely divergent from what we were
actually using in industry.  Why?  Because by the time the perfect
real-time scheduling algorithms were finished running, especially in a
dynamic environment, you were pretty much guaranteed to miss *all* of
your deadlines, and the missile which had popped above the radar
horizon unexpectedly would have long ago exploded amidships!)

Going back to the random number generators, there are a lot of
practical issues, such as making sure the entropy collection is fast
enough that downstream kernel engineers and device driver maintainers
don't just turn off the entropy collection.  This can sometimes happen
in embedded kernels, and you might not ever know that this has
happened.  This is not an academic concern, but it's a real one.
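
The lighter-weight interrupt collection mentioned in #3 below is in
this spirit: a few XORs and rotates in a small per-CPU "fast pool" on
each interrupt, with an occasional, conservatively credited spill into
the main input pool.  A simplified sketch, modeled loosely on the
kernel's fast_mix() idea -- the constants, names, and the spill stub
are illustrative, not the real kernel interfaces:

#include <stdint.h>

struct fast_pool {
    uint32_t w[4];
    unsigned count;
};

/* Stand-in for feeding the main input pool; the real code credits at
 * most a bit or so of entropy for the whole batch. */
static void mix_into_main_pool(const void *buf, unsigned len, int bits)
{
    (void)buf; (void)len; (void)bits;          /* illustration only */
}

static inline uint32_t rol32(uint32_t x, int r)
{
    return (x << r) | (x >> (32 - r));
}

/* Called from the interrupt path: cheap enough that driver maintainers
 * have no excuse to turn it off. */
static void fast_mix_irq(struct fast_pool *p, uint32_t cycles, uint32_t irq)
{
    p->w[0] ^= cycles;
    p->w[1] ^= irq;
    p->w[2] ^= rol32(p->w[0], 15) + p->w[3];
    p->w[3] ^= rol32(p->w[1], 7)  + p->w[2];
    p->w[0]  = rol32(p->w[0], 11) ^ p->w[2];
    p->w[1]  = rol32(p->w[1], 19) ^ p->w[3];

    if (++p->count >= 64) {                    /* spill rarely, not per IRQ */
        mix_into_main_pool(p->w, sizeof(p->w), 1);
        p->count = 0;
    }
}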

On another front, I recently noticed that on my Debian Testing box,
the OpenSSL libcrypto library is apparently not using /dev/urandom or
/dev/random by default.  Hence, if you don't have a ~/.rnd file, any
public/private key pairs that you might generate would have no entropy
at all!  How did I notice this?  Because I added a kernel trace point
so I could monitor how much /dev/random was being used by various
userspace programs.  I was originally concerned about overuse of
/dev/urandom where it wasn't needed, but then I discovered that
"openssl genrsa" and "ssh-keygen" are apparently not using
/dev/urandom or /dev/random at all(!!!).

(Fortunately this does not appear to be the case on Debian Stable, so
it looks like a recent regression.  Or maybe it's a misconfiguration
on my end, but (a) I'm getting lost trying to figure out the maze of
twisty compile-time and run-time configuration options of OpenSSL,
complicated by the Debian packaging system, and (b) even if it is
somehow "my fault", it shouldn't be that easy for things to silently
end up with no entropy at all.)

So quite frankly, I have much bigger fish to fry, and I have to
prioritize my time, since I don't get paid to maintain the Linux
random number generator and so my time is not unlimited.

> This kind of "shoot the messenger" approach is just plain wrong.
> Look at the definition of robustness they come up with and tell me
> what parts of it aren't things you'd *like* to get in your RNG, if
> you could.  Can you come up with anything beyond hand-waving to show
> that the Linux RNG actually provides you with those properties?

#1.  I've read through the paper, and there is nothing new in it
     that I consider something I'd like in the RNG.  I do care
     about what they call "forward security", where compromise
     of the random state pool does not compromise past results.
     That's not a new requirement, and it's one which I'm satisfied
     that we meet.

#2.  I'm not obligated to prove anything to *you*.  I don't get paid,
     and my prospects for promotion do not go up, by spending hours
     writing a peer-reviewed paper.  So if you are demanding a formal
     proof, in the form of an academic paper, instead of "mere
     handwaving", you can demand anything you want.  I get demands for
     free programming effort for pet features in my OSS code all the
     time, and I know how to handle such requests/demands.

#3.  Let me turn this around, and ask *you* to give me concrete
     suggestions about changes you'd like to make, preferably in the
     form of a patch.  I'll tell you right away that both Fortuna and
     Yarrow, which use crypto hashing in the entropy mixing step, are
     going to be non-starters from a performance point of view.  It's
     not hypothetical when I talk about embedded shops demanding that
     their Linux kernel developers disable entropy collection.  That
     has actually happened, and I've engaged, on LKML, the embedded
     kernel developer who got pressure from his management to do this,
     in order to try to address that particular concern.  To that end,
     I've made the entropy collection even lighter weight in a change
     that will be merged during the next kernel merge window, and I
     believe I've done it in a way that preserves our security
     properties.

So when you ask me to worry about a hypothetical attack where an
adversary might be able to control all interrupt timing, while I'm
dealing with an actual attack where the adversary (also known as the
product manager :-) is demanding that entropy collection be disabled,
please don't be offended when I don't take you all that seriously.

Especially when the "academically approved" RNG's don't fare all that
well in a world where all entropy estimators are f*cked and interrupt
timing and other entropy sources are subject to fine-grained control
by the remote attacker.  (Even Fortuna, if you are worried about the
cold start problem.)

Regards,

					- Ted

P.S.  If you actually read the /dev/random source code, and take a
look at the git commit logs, you'll see that I have made changes in
response to academic papers, and I make sure to give them full credit
in the comments.  My bias only comes because I've seen so much
academic work which has very little relationship to the problems that
I need to worry about as a practicing engineer.
_______________________________________________
The cryptography mailing list
cryptography at metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

----- End forwarded message -----
-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B  47EE F46E 3489 AC89 4EC5


