Re: /dev/random for FreeBSD [was: Re: /dev/random for Linux]
How about SetGID? We were going for 660 root.kmem.
Bad idea; anyone who can run PGP could then get instant access to kmem
Fooey. Of course. Scratch that plan.
    cd /tmp
    ln -s /dev/kmem foo
    pgp -e tytso foo
    rm foo
    pgp foo.pgp
Eeeeek!
? "Gut feel" suggests to me that large amounts of "predicted" input might be worse than the normal sort of system noise you have been using.
But keep in mind that what we're doing is XOR'ing the input data into the pool. (Actually, it's a bit more complicated than that. The input is XOR'ed in with a CRC-like function, generated by taking an irreducible polynomial in GF(2**128). But for the purposes of this argument, you can think of it as XOR.) So since you don't know what the input state of the pool is, you won't know what the output state of the pool will be.
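To make the argument concrete, here is a toy model of that mixing step in C. The pool size, rotation, and tap positions below are purely illustrative stand-ins for the driver's irreducible polynomial, not the actual code:

```c
#include <stdint.h>

#define POOL_WORDS 32          /* illustrative pool size, not the driver's */

/* Mix one input word into the pool: XOR it with a few "tap" words,
 * CRC-style, rotate, and XOR the result back in at a rotating position.
 * Because every operation is XOR-linear, known input mixed into an
 * unknown pool can never cancel out the unknown state. */
static void mix_word(uint32_t pool[POOL_WORDS], int *pos, uint32_t in)
{
    int i = *pos = (*pos + 1) % POOL_WORDS;
    in ^= pool[(i + 7) % POOL_WORDS];    /* illustrative tap positions */
    in ^= pool[(i + 13) % POOL_WORDS];
    in ^= pool[(i + 26) % POOL_WORDS];
    pool[i] ^= (in << 1) | (in >> 31);   /* rotate left 1, XOR into pool */
}
```

The point of the XOR structure is exactly the one above: an attacker feeding known data into the pool cannot erase whatever unknown state is already there.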
I chatted with a colleague at work, and he helped bend my mind right. I had the mistaken notion that adding lots of data would "overflow" and "dilute" the entropy to an attackable state.
Is this millisecond accuracy quantifiable in terms of bits of entropy? if so, the ethernet is surely safe?
Well, no. If you're only using as your timing the 100Hz clock, the adversary will have a better timebase than you do. So you may be adding few or even zero bits of entropy which can't be deduced by the adversary.
In a 386 or a 486 (under FreeBSD at least) there is a 1 MHz clock available. How would _this_ be? On the Pentium there is the <whatsit?> register which will give the board's oscillator (or 90 MHz) I believe.
This is even worse in the PGP keyboard timing case, since the adversary almost certainly can find a better time resolution to measure your incoming packets when compared to the timing resolution that most programs have. Far too many Unix systems only make a 100Hz clock available to the user mode, even if you have a better quality high resolution timing device in the kernel (for example, the Pentium cycle counting register).
Ah yes - _that_ register. :-)

What then is a body to do? Preserve all _verifiable_ randomness like gold? Dish it out under some quota? A denial of service attack would be

    forever { cat /dev/random > /dev/null }

Severely limiting most decent folk's chance at getting PGP to work.

Right now I am considering making a piece of cheap hardware to deliver noise to a digital input. (Electronics is a stagnant hobby of mine.) Interested? I may knock up a prototype in a month or so...
The problem is that in order to do this requires making assumptions about what the capabilities of your adversary are. Not only does this change over time, but certain adversaries (like the NSA) make it their business to conceal their capabilities, for precisely this reason.
Can they predict thermal noise in a cheap transistor? ]:->
So I like to be conservative and use limits which are imposed by the laws of physics, as opposed to the current level of technology. Hence, if the packet arrival time can be observed by an outsider, you are at real risk in using the network interrupts as a source of entropy. Perhaps it requires building a very complicated model of how your Unix scheduler works, and how much time it takes to process network packets, etc. ---- but I have to assume that an adversary can very precisely model that, if they were to work hard enough at it.
This is a strong argument for some form of specialised noise source. I have read of methods of getting this from turbulent air flow in a hard drive (an RFC, I believe).
People may disagree as to whether or not this is possible, but it's not prevented by the laws of physics; merely by how much effort someone might need to put in to be able to model a particular operating system's networking code. In any case, that's why I don't like depending on network interrupts. Your paranoia level may vary.
If I was running Fort Knox, I'd probably use radioactive decay... (From my experience working at a cyclotron facility - these SOB's are _*RANDOM*_)

M
--
Mark Murray
46 Harvey Rd, Claremont, Cape Town 7700, South Africa
+27 21 61-3768 GMT+0200
Finger mark@grumble.grondar.za for PGP key
Mark Murray writes:
Can they predict thermal noise in a cheap transistor? ]:->
As Perry pointed out in the last round on hardware noise generators, they may not be able to predict it, but they *may* be able to generate a field which will *influence* it. It's difficult to know for sure if your noise source is really random, and to what degree.
Date: Tue, 31 Oct 1995 19:15:35 +0200 From: Mark Murray <mark@grondar.za>
Is this millisecond accuracy quantifiable in terms of bits of entropy? if so, the ethernet is surely safe?
Well, no. If you're only using as your timing the 100Hz clock, the adversary will have a better timebase than you do. So you may be adding few or even zero bits of entropy which can't be deduced by the adversary.
In a 386 or a 486 (under FreeBSD at least) there is a 1 MHz clock available. How would _this_ be? On the Pentium there is the <whatsit?> register which will give the board's oscillator (or 90 MHz) I believe.

What's HZ set at for FreeBSD? Most of the x86 Unixes have generally used HZ set at 100, because the interrupt overhead on a x86 isn't cheap, and so you want to limit the number of clock interrupts. You can sample the timing clock, but it turns out to be rather expensive to do so; several I/O instructions, which will require several delays if they have to go through your 8 MHz ISA bus. We've moved away from using the hardware clock on the 386 because of the overhead concerns. On the Pentium, we use the clock cycle counter.

What then is a body to do? Preserve all _verifiable_ randomness like gold? Dish it out under some quota? A denial of service attack would be

Well, verifiable randomness really is like gold. It's a valuable resource. On a time-sharing system, where you really want to equitably share *all* system resources, perhaps there should be a quota system limiting the rate at which a user is allowed to "consume" randomness. On the other hand, most Unix systems *aren't* great at doing this sort of resource allocation, and there are enough other ways of launching denial of service attacks. "while (1) fork();" will generally bring most systems to their knees, even in spite of limitations on the number of processes per user. Most Unix systems don't protect against one user grabbing all available virtual memory. And so on....

    forever { cat /dev/random > /dev/null }

Severely limiting most decent folk's chance at getting PGP to work.

If you have such a "bad user" on your system, and the PGP /dev/random code is written correctly, it will only be a denial of service attack.
But it'll be possible to identify who the bad user is on your system, and that person can then be dealt with, just as you would deal with some user that used up all of the virtual memory on the system trying to invert a 24x24 matrix, or some such ---- in both scenarios, the ability for another user to run PGP is severely limited. There's nothing special about /dev/random in this sense; it's just another system resource which can be abused by a malicious user, just like virtual memory or process table slots.
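Such a quota, had anyone built one, might be nothing more than a per-user token bucket. The sketch below is entirely hypothetical (no Unix of the era implemented a randomness quota; the structure and field names are invented for illustration):

```c
#include <stdint.h>

/* Hypothetical per-user quota on /dev/random reads: a token bucket
 * that refills at `rate` bytes per second, up to `burst` bytes. */
struct rand_quota {
    uint64_t tokens;   /* bytes the user may still read right now */
    uint64_t burst;    /* bucket capacity */
    uint64_t rate;     /* refill rate, bytes per second */
    uint64_t last;     /* time of last refill, in seconds */
};

/* Grant up to `want` bytes at time `now`; returns how many the caller
 * may actually read.  A user looping on /dev/random is throttled to
 * `rate` bytes/sec instead of starving everyone else. */
static uint64_t quota_grant(struct rand_quota *q, uint64_t now, uint64_t want)
{
    uint64_t earned = (now - q->last) * q->rate;
    q->last = now;
    q->tokens = (q->tokens + earned > q->burst) ? q->burst : q->tokens + earned;
    uint64_t give = (want < q->tokens) ? want : q->tokens;
    q->tokens -= give;
    return give;
}
```

Under this scheme the "forever { cat /dev/random > /dev/null }" attacker still wastes his own quota, but other users' reads keep succeeding at their own rate.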
So I like to be conservative and use limits which are imposed by the laws of physics, as opposed to the current level of technology. Hence, if the packet arrival time can be observed by an outsider, you are at real risk in using the network interrupts as a source of entropy. Perhaps it requires building a very complicated model of how your Unix scheduler works, and how much time it takes to process network packets, etc. ---- but I have to assume that an adversary can very precisely model that, if they were to work hard enough at it.
This is a strong argument for some form of specialised noise source. I have read of methods of getting this from turbulent air flow in a hard drive (an RFC, I believe).

Yes, ultimately what you need is a good hardware number generator. There are many good choices: radioactive decay, noise diodes, etc. I'm not entirely comfortable with the proposal of using air flow turbulence from a hard drive, myself, because the person who suggested this still hasn't come up with a decent physical model which tells us how many bits of true entropy this system really provides. What Don Davis did was to develop more and more sophisticated models, and demonstrated that his more sophisticated models weren't able to explain the "randomness" that he observed in the time that it took to complete certain disk requests. However, that doesn't prove that the "randomness" is really there; it's just that he couldn't explain it away. It might be that the NSA has a much better model than Don Davis was able to come up with, for example, and the amount of randomness from air turbulence really is a lot less than one might expect at first glance.

Short of good hardware sources, the other really good choice is unobservable inputs. Hence, the Linux driver is hooked into the keyboard driver, and the various busmice drivers. Those are really wonderful sources of randomness, since they're generally not observable by an adversary, and humans tend to be inherently random. :-)

- Ted
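The conservative crediting this implies, of the kind the Linux driver applies to keyboard and mouse timings, can be sketched as a delta-based estimator. This is a simplified illustration, not the driver's actual code; the constants and the cap are made up for the example:

```c
#include <stdint.h>

/* Sketch of a conservative entropy estimator for event timings: take
 * first-, second-, and third-order deltas of successive event times,
 * and credit at most floor(log2(smallest delta)) bits, capped.  A
 * perfectly regular source (constant deltas) is credited zero bits. */
struct timer_state { uint32_t last, d1, d2; };

static int credit_bits(struct timer_state *s, uint32_t now)
{
    uint32_t d1 = now - s->last;      /* first-order delta  */
    uint32_t d2 = d1 - s->d1;         /* second-order delta */
    uint32_t d3 = d2 - s->d2;         /* third-order delta  */
    s->last = now; s->d1 = d1; s->d2 = d2;

    uint32_t m = (d1 < d2) ? d1 : d2; /* smallest delta wins */
    if (d3 < m) m = d3;

    int bits = 0;
    while (m > 1 && bits < 12) { m >>= 1; bits++; }  /* floor(log2), capped */
    return bits;
}
```

The higher-order deltas are what make the estimate conservative: a metronomic source, even one with a huge first-order delta, collapses to a zero second-order delta and earns no credit, while genuinely jittery human input does.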
In article <199510311715.TAA05821@grumble.grondar.za>, Mark Murray <mark@grondar.za> wrote:
forever { cat /dev/random > /dev/null }
Severely limiting most decent folk's chance at getting PGP to work.
Ideally, if two processes are trying to read /dev/random at the same time, both would get data at half speed. Doesn't it work that way already?

--
Shields.
participants (4)
- Mark Murray
- Scott Brickner
- shields@tembel.org
- Theodore Ts'o