Re: [dsfjdssdfsd] Any plans for drafts or discussions on here? (fwd)
Interesting thread going on at dsfjdssdfsd@ietf.org. Forwarded for our collective interest and amusement.

//Alif

--
Those who make peaceful change impossible, make violent revolution inevitable. An American Spring is coming: one way or another.

---------- Forwarded message ----------
Date: Thu, 23 Jan 2014 23:38:07 +0100
From: Krisztián Pintér <pinterkr@gmail.com>
To: Michael Hammer <michael.hammer@yaanatech.com>
Cc: "dsfjdssdfsd@ietf.org" <dsfjdssdfsd@ietf.org>, "ietf@hosed.org" <ietf@hosed.org>
Subject: Re: [dsfjdssdfsd] Any plans for drafts or discussions on here?

Michael Hammer (at Thursday, January 23, 2014, 9:49:32 PM):
> This may get off-topic, but are there good software tools for testing
> entropy that could help applications determine whether the underlying
> system is giving them good input?
Disclaimer: I'm no expert; this is just what I've gathered. (I'm pretty much interested in randomness.)

Short answer: no. Long answer: in some situations, yes.

If you are handed a bunch of data, all you can do is try different techniques to put an upper limit on the entropy. For example, you can calculate the Shannon entropy assuming independent bits. Then you can hypothesize some interdependence and see whether you can compress the data, applying different lossless compression methods. The best compression you find puts an upper limit on the entropy, but never a lower limit.

You can only do better if you have an idea about the process that created the data. For example, you might assume it is mostly thermal noise, and that thermal noise has some particular frequency distribution, energy, and so on. Within this assumption, you can determine the entropy content by measurement. But at this point you are prone to two errors: 1) your assumption might be wrong, and 2) your physical model might overestimate the unpredictability of the given system. An example of the former: the signal might be largely controllable by external EM interference, in which case you are measuring not noise but attacker-controlled data. An example of the latter: a smartass scientist might come up with a better physical model for thermal noise.

It is also important to note that entropy is observer-dependent: we are really talking about the entropy as seen by the attacker. But it is not straightforward to assess what is actually visible to an attacker and what is not, and observation methods improve with time.

_______________________________________________
dsfjdssdfsd mailing list
dsfjdssdfsd@ietf.org
https://www.ietf.org/mailman/listinfo/dsfjdssdfsd
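A minimal Python sketch of the two upper-bound techniques described above: per-byte Shannon entropy under an independence assumption, and the size achieved by lossless compression. Both figures only bound the entropy from above. The function names, sample data, and the choice of zlib are illustrative assumptions, not anything specified in the thread.

```python
import math
import os
import zlib
from collections import Counter

def shannon_bits_per_byte(data: bytes) -> float:
    # Shannon entropy assuming independent, identically distributed bytes.
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def compressed_bits_per_byte(data: bytes) -> float:
    # Bits per input byte after zlib at maximum effort; another upper bound.
    return 8 * len(zlib.compress(data, 9)) / len(data)

sample = os.urandom(65536)                 # stand-in for the data under test
biased = bytes(b & 0x0F for b in sample)   # deliberately low-entropy variant

for name, data in (("urandom", sample), ("biased", biased)):
    print(f"{name:8s} shannon={shannon_bits_per_byte(data):.3f} "
          f"zlib={compressed_bits_per_byte(data):.3f} bits/byte")
```

For truly random input the zlib figure can slightly exceed 8 bits per byte because of format overhead, which is consistent with its role as an upper bound only; neither number can certify a lower bound on entropy.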
From: Jim Bell <jamesdbell8@yahoo.com>

Consider this: suppose I handed you the digits of pi from the millionth digit to the two-millionth digit and asked you to determine whether they are 'random'. By many tests, you'd conclude that they are random (or at least 'normal': http://en.wikipedia.org/wiki/Normal_numbers). But in reality they are highly non-random, precisely because they are a million sequential digits of pi. You wouldn't know that, though, if you didn't know that.

Jim Bell
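Bell's observation is easy to reproduce. Here is a small Python sketch, assuming the third-party mpmath package and using 10,000 digits rather than a million to keep it quick, that runs a chi-square frequency test on consecutive digits of pi; the test finds nothing unusual, even though the sequence is completely deterministic.

```python
from collections import Counter
from mpmath import mp  # third-party arbitrary-precision library

N = 10_000           # digits to test; Bell's example uses a million
mp.dps = N + 10      # decimal precision, with a small safety margin
digits = str(mp.pi)[2:N + 2]   # drop the leading "3." and keep N digits

counts = Counter(digits)
expected = N / 10
chi_sq = sum((counts[str(d)] - expected) ** 2 / expected for d in range(10))

# With 9 degrees of freedom, a statistic below ~16.9 is unremarkable at
# the 5% level, so the digits pass this (admittedly weak) frequency test.
print(f"chi-square statistic: {chi_sq:.2f}")
```

To test the exact slice Bell describes, raise mp.dps past two million and take the digits starting at the millionth position; the frequency statistics should come out just as unremarkable.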
On Fri, Jan 24, 2014 at 2:00 PM, Jim Bell <jamesdbell8@yahoo.com> wrote:
> But in reality they are highly non-random, precisely because they are a
> million sequential digits of pi. You wouldn't know that, though, if you
> didn't know that.

Practically, would it matter? Maybe. If an attacker knew that you were using pi as your "random" stream, I guess that would reduce your "random" stream to a stream cypher with a key of about 24 bits. There are a lot of random-appearing number sequences. Are there enough to add a significant number of bits to the effective key? Against an attacker with the resources to compute and store the first billion digits of a lot of sequences? Meh.

I'd started this response with the plan to argue that a slice of pi is good enough for practical purposes, but I convinced myself otherwise. It's only good enough for security-by-obscurity. Meh.

--
Neca eos omnes. Deus suos agnoscet. -- Arnaud-Amaury, 1209
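One way to read Furlong's 24-bit figure: if the only secret is the starting offset into the digits of pi, the effective key length is log2 of the number of offsets the attacker must try. A back-of-the-envelope sketch in Python (the candidate offset ranges are illustrative, not from the thread):

```python
import math

# Effective key bits when the secret is just a starting offset into pi.
for offsets in (10**6, 2**24, 10**9):
    print(f"{offsets:>13,} candidate offsets -> "
          f"{math.log2(offsets):4.1f}-bit effective key")
```

Even a billion candidate offsets amounts to only about a 30-bit key, trivially brute-forced, which supports Furlong's conclusion that a slice of pi buys obscurity rather than security.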
participants (3)

- J.A. Terranson
- Jim Bell
- Steve Furlong