Re: /dev/random for FreeBSD [was: Re: /dev/random for Linux]
Something I didn't mention earlier; we felt that letting the unwashed masses read /dev/*random was not a good idea, as they could deplete the pool of entropy all too easily for attack purposes.
That should be a system administration issue. If someone wants to make /dev/random readable only by root at their site, that's their business. I don't see any point in trying to enforce that in the kernel code.
It's not in the kernel; this is just the permissions on /dev/*random.
I don't agree that restricting read access is useful. First of all, if the pool of entropy is depleted, someone who tries to obtain entropy by reading /dev/random will know that they didn't get enough entropy. So assuming a program that actually checks return values from system calls, this is at worst a denial of service attack, and there are much easier ways of performing those sorts of attacks: "while (1) fork()", for example.
Hmm. Lemme think about this...
Secondly, making /dev/random only readable by "privileged programs" means that people won't be able to compile their own version of PGP that can take advantage of the random number generator. Instead, they would have to use a setuid version of PGP, and I'm quite sure PGP wasn't written such that it would be safe to turn on its setuid bit.
How about SetGID? We were going for 660 root.kmem.
Finally, even if you did have trustworthy applications which you could setuid and only allow those programs to have access to /dev/random, someone who repeatedly ran those applications could still end up depleting the pool of entropy.
So in the general case I would advise that /dev/random be left world readable, since you *do* want general user programs to have access to high quality random numbers.
Ponder... I'll put this forward.
Again, /dev/random can be set to whatever permissions the system administrator wants. Secondly, writing to /dev/random merely adds randomness to the pool, via the mixing algorithm. It won't actually permit people to *set* the state of the pool, and assuming that the state of the pool is not known before the write operation, writing to it won't allow the user to know what the state is after the write operation.
What happens if some attacker does: for (;;) { write_to_devrandom(NULL); check_to_see_if_state_is_crackable(); } ? "Gut feel" suggests to me that large amounts of "predicted" input might be worse than the normal sort of system noise you have been using.
And, for race condition reasons, something which I need to implement soon is an ioctl(), usable only by root, that simultaneously updates the entropy estimate *and* submits data to be mixed into the pool. (Why this is necessary should be obvious after a few minutes thought.)
Clue me in - I'm not quite with you? :-)
Are you sure about this? The stochasticity of this would be pretty hefty. Not only would our attacker have to get the _time_ that the interrupt occurred (if it interrupted our machine), he would then have to process in brute-force mode all possible times in his error range. What is more, more interrupts are coming in...
I didn't say that it would be trivial for an attacker to do this, but it's certainly *doable*. Some of the network traffic analyzers that have been made available (I think Sandia National Labs has one that does this) record, down to millisecond accuracy, when a packet was sniffed on the network.
Is this millisecond accuracy quantifiable in terms of bits of entropy? If so, the ethernet is surely safe?
For this reason, people shouldn't really trust initializing PGP's random number generator over a network connection, since it is possible for an adversary to obtain very high quality timings of when your telnet or rlogin packets appeared on the network, and hence be able to guess (within some error range) the inter-keystroke timings which PGP used to initialize its random number generator.
The adversary might have to try a large number of possibilities, but if the number of possibilities is less than a brute-force search, you definitely have a weakness --- a fact which Netscape learned to its embarrassment a few weeks ago.
Again, if you can quantify the number of possibilities into bits of entropy, your code is good. Depending on current technology, this may have to change.

M
--
Mark Murray
46 Harvey Rd, Claremont, Cape Town 7700, South Africa   +27 21 61-3768 GMT+0200
Finger mark@grumble.grondar.za for PGP key
In message <199510302148.XAA00832@grumble.grondar.za>, Mark Murray writes: [...]
I don't agree that restricting read access is useful. First of all, if the pool of entropy is depleted, someone who tries to obtain entropy by reading /dev/random will know that they didn't get enough entropy. So assuming a program that actually checks return values from system calls, this is at worst a denial of service attack, and there are much easier ways of performing those sorts of attacks: "while (1) fork()", for example.
Hmm. Lemme think about this...
When /dev/random doesn't have "enough" entropy left, does reading from it return an error, or block? I would strongly suggest blocking, as the non-blocking behaviour is not really all that useful. Either can simulate the other, but I think it comes down to:

non-blocking worst-case: a program calls /dev/random, doesn't get randomness, ignores the error code, poorly protects some valuable thing, and as a result the valuable thing gets stolen.

blocking worst-case: a program calls /dev/random, waits a long time to get random numbers, the user curses the slow machine/program, the valuable thing gets sent late, but is not stolen.

non-blocking best-case failure: a program calls /dev/random, doesn't get randomness, informs a smart user, who finds the bad guy sucking all the bits from /dev/random and has them ejected from the system.

blocking best-case failure: same as the worst-case.

In other words, blocking's worst case is lots better, and its best case is worse. But the blocking best case can be transformed into the non-blocking best-case failure by clever programming (threads, or fork, or sigalarm), and the people who do this are far more likely to actually try to issue a good error message than the people who get non-blocking by default.
Date: Mon, 30 Oct 1995 21:59:14 -0500
From: "Josh M. Osborne" <stripes@va.pubnix.com>

When /dev/random doesn't have "enough" entropy left, does reading from it return an error, or block? I would strongly suggest blocking, as the non-blocking behaviour is not really all that useful.

It acts like many character devices and named pipes in that if there is no entropy available at all, it blocks. If there is some entropy available, but not enough, it returns what is available. (A subsequent read will then block, since no entropy will then be available.)

Actually, what's currently in Linux doesn't work precisely like this, but it will soon. After talking to a number of people on both sides of the blocking vs. non-blocking camp, this seemed to be a suitable compromise. At least one Major Workstation Vendor is planning on using this behavior for their /dev/random, to appear in a future OS release. If we all can standardize on this behavior, it'll make application writers' jobs that much easier.

- Ted
Blocking vs. non-blocking is a standard issue in the design of U*X devices. Standard solution: make it block by default, and accept an IOCTL to put it in non-blocking mode. There's even a POSIX way to do this:

flags_or_err = fcntl(fd, F_GETFL, 0);
{check for error}
res = fcntl(fd, F_SETFL, flags_or_err | O_NONBLOCK);
{check for error}
Besides non-blocking, it's very useful sometimes to support SIGIO/SIGURG for as many devices as possible. I know only too well that Sybase CT_lib uses this for Async mode. (I just tracked down what appears to be an HPUX process group bug preventing the signals from being delivered...) In any case, using SIGIO is a whole parallel method to using a select loop, and although it seemed like a hack when I found out they were using it, it has some elegance since they chain to other possible signal handlers in case other io descriptors are ready.
Blocking vs. non-blocking is a standard issue in the design of U*X devices. Standard solution: make it block by default, and accept an IOCTL to put it in non-blocking mode. There's even a POSIX way to do this:

flags_or_err = fcntl(fd, F_GETFL, 0);
{check for error}
res = fcntl(fd, F_SETFL, flags_or_err | O_NONBLOCK);
{check for error}
sdw
--
Stephen D. Williams 25Feb1965 VW,OH (FBI ID)
sdw@lig.net http://www.lig.net/sdw Consultant, Vienna,VA Mar95- 703-918-1491W
43392 Wayside Cir., Ashburn, VA 22011 OO/Unix/Comm/NN
ICBM/GPS: 39 02 37N, 77 29 16W home, 38 54 04N, 77 15 56W
Pres.: Concinnous Consulting, Inc.; SDW Systems; Local Internet Gateway Co.; 28May95
Date: Mon, 30 Oct 1995 23:48:24 +0200 From: Mark Murray <mark@grondar.za>
Secondly, making /dev/random only readable by "privileged programs" means that people won't be able to compile their own version of PGP that can take advantage of the random number generator. Instead, they would have to use a setuid version of PGP, and I'm quite sure PGP wasn't written such that it would be safe to turn on its setuid bit.
How about SetGID? We were going for 660 root.kmem.

Bad idea; anyone who can run PGP could then get instant access to kmem:

cd /tmp
ln -s /dev/kmem foo
pgp -e tytso foo
rm foo
pgp foo.pgp
Again, /dev/random can be set to whatever permissions the system administrator wants. Secondly, writing to /dev/random merely adds randomness to the pool, via the mixing algorithm. It won't actually permit people to *set* the state of the pool, and assuming that the state of the pool is not known before the write operation, writing to it won't allow the user to know what the state is after the write operation.
What happens if some attacker does: for (;;) { write_to_devrandom(NULL); check_to_see_if_state_is_crackable(); } ? "Gut feel" suggests to me that large amounts of "predicted" input might be worse than the normal sort of system noise you have been using.

But keep in mind that what we're doing is XOR'ing the input data into the pool. (Actually, it's a bit more complicated than that. The input is XOR'ed in with a CRC-like function, generated by taking an irreducible polynomial in GF(2**128). But for the purposes of this argument, you can think of it as XOR.) So since you don't know what the input state of the pool is, you won't know what the output state of the pool is. Also, you never get to see the actual state of the pool, even when you read out numbers from /dev/random. What you're getting is a *hash* of the pool. So if you can actually implement check_to_see_if_state_is_crackable(), then you've found a weakness in MD5 (or SHA, to which I'll probably be switching in the near future).
And, for race condition reasons, something which I need to implement soon is an ioctl(), usable only by root, that simultaneously updates the entropy estimate *and* submits data to be mixed into the pool. (Why this is necessary should be obvious after a few minutes thought.)
Clue me in - I'm not quite with you? :-)

Consider this scenario:

1) Process one writes randomness to /dev/random.
2) Process two immediately consumes a large amount of randomness using /dev/urandom, so that the effective randomness is now zero.
3) Process one uses the ioctl() to bump the entropy count by the amount of randomness added in step 1. Unfortunately, that entropy was already consumed in step 2.
I didn't say that it would be trivial for an attacker to do this, but it's certainly *doable*. Some of the network traffic analyzers that have been made available (I think Sandia National Labs has one that does this) record, down to millisecond accuracy, when a packet was sniffed on the network.
Is this millisecond accuracy quantifiable in terms of bits of entropy? If so, the ethernet is surely safe?

Well, no. If you're only using the 100Hz clock as your timing source, the adversary will have a better timebase than you do. So you may be adding zero bits of entropy that can't be deduced by the adversary. This is even worse in the PGP keyboard timing case, since the adversary can almost certainly measure your incoming packets at a better time resolution than the timing resolution most programs have. Far too many Unix systems only make a 100Hz clock available to user mode, even if there is a better quality high-resolution timing device in the kernel (for example, the Pentium cycle counting register).

Again, if you can quantify the number of possibilities into bits of entropy, your code is good. Depending on current technology, this may have to change.

The problem is that doing this requires making assumptions about the capabilities of your adversary. Not only do these change over time, but certain adversaries (like the NSA) make it their business to conceal their capabilities, for precisely this reason. So I like to be conservative and use limits which are imposed by the laws of physics, as opposed to the current level of technology. Hence, if the packet arrival time can be observed by an outsider, you are at real risk in using network interrupts as a source of entropy. Perhaps it requires building a very complicated model of how your Unix scheduler works, how much time it takes to process network packets, and so on, but I have to assume that an adversary can model that very precisely if they work hard enough at it. People may disagree as to whether or not this is possible, but it's not prevented by the laws of physics; merely by how much effort someone might need to put in to model a particular operating system's networking code.
In any case, that's why I don't like depending on network interrupts. Your paranoia level may vary. - Ted
participants (5)
- Josh M. Osborne
- Mark Murray
- Mike_Spreitzer.PARC@xerox.com
- sdw@lig.net
- Theodore Ts'o