hi,

Okay - I need 5 bits to represent 32 coins. I count them as coin 0, coin 1, ..., coin 31. If it is a perfectly random fair coin-throwing experiment, then 50 percent of them will be heads. So I know that 16 of them will be heads.

What we do is simply place all 32 coins on the table in a row or column. I look at the first coin and determine if it is a head or a tail. I repeat the same process until I count 16 heads. If I count 15 heads at coin 31, then I can't reduce the entropy. However, if I count 16 heads at coin 30, then I don't have to check coin 31; I already know it's a tail, so I have less than 5 bits of entropy. So if it is a perfectly random experiment, I wouldn't get 16 heads before I look at coin 31, which is the last coin - and that's what you said, isn't it?

So how did Chaitin get to compress the information from K instances of the Turing machine in http://www.cs.umaine.edu/~chaitin/summer.html under the sub-section "Redundant"? He says:

"Is this K bits of mathematical information? K instances of the halting problem will give us K bits of Turing's number. Are these K bits independent pieces of information? Well, the answer is no, they never are. Why not? Because you don't really need to know K yes/no answers, it's not really K full bits of information. There's a lot less information. It can be compressed. Why?"

If the input programs are truly random, there is no redundancy, and that's a contradiction to the claim in the paper.

Thanks.
Regards Sarath.
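A minimal sketch of the entropy arithmetic in question (Python, standard library only; illustrative, not from the thread): 32 independent fair tosses carry 32 bits, and a guarantee of exactly 16 heads would only cut that to log2(C(32, 16)) bits - a real but small saving.

# 32 unconstrained fair tosses versus tosses known to contain exactly
# 16 heads: the entropy drops from 32 bits to about 29.16 bits.
import math

print(math.log2(2 ** 32))            # 32.0 bits with no constraint
print(math.log2(math.comb(32, 16)))  # ~29.16 bits if "exactly 16 heads" is known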
It's simple, if I am correct. The redundancy simply makes you care less about the specific instance you are looking at.
To represent 32 coins, I need 5 bits of information. Since the experiment is truly random, I know half of them will be heads, so in this case, using 5 bits of information, I can determine all the coins that are heads and all that are tails.
Same deal, unless you are counting pairs, in which case you cannot distinguish between the members of a pair. You need an extra bit to tell a head from a tail.
So the question is: what is the minimum number of bits, or entropy, required to determine which coins are heads and which coins are tails - is it 5 bits or 6 bits of information?
With 5 bits, you can count to 31, so you need 6.
Just my two tails.
On Sunday, August 17, 2003, at 03:19 AM, Sarad AV wrote:
hi,
Okay - I need 5 bits to represent 32 coins. I count them as coin 0, coin 1, ..., coin 31. If it is a perfectly random fair coin-throwing experiment, then 50 percent of them will be heads.
So I know that 16 of them will be heads.
I hope you are not saying that you think there will always be 16 heads and 16 tails! Your comment below seems to suggest you think this is so. If so, you need to spend a lot of time thinking about probability.
What we do is simply place all 32 coins on the table in a row or column. I look at the first coin and determine if it is a head or a tail. I repeat the same process until I count 16 heads. If I count 15 heads at coin 31, then I can't reduce the entropy. However, if I count 16 heads at coin 30, then I don't have to check coin 31; I already know it's a tail, so I have less than 5 bits of entropy.
How does knowing what has already come before tell you that coin 31 is a tail without your having to look at it to see? It certainly sounds to me that you have a very weird, and very wrong, concept of probability.

--Tim May

"A democracy cannot exist as a permanent form of government. It can only exist until the voters discover that they can vote themselves money from the Public Treasury. From that moment on, the majority always votes for the candidate promising the most benefits from the Public Treasury, with the result that a democracy always collapses over loose fiscal policy, always followed by dictatorship." --Alexander Fraser Tyler
hi,

Hope you can help on this.

--- Tim May <timcmay@got.net> wrote:
I hope you are not saying that you think there will always be 16 heads and 16 tails!
In a perfectly random experiment, how many tails and how many heads do we get?

Thanks.
Regards Sarath.
Sarad AV wrote:

In a perfectly random experiment, how many tails and how many heads do we get?

We don't know - or it wouldn't be random :)

For a sufficiently large sample you *should* see roughly equal numbers of heads and tails in the average case - but: for 32 coins in 2^32 tests you should see:

one occurrence of all heads (and one of all tails)
32 occurrences of one tail, 31 heads (and 32 of one head, 31 tails)
496 occurrences of two tails, 30 heads (and likewise for two heads)
and so forth up the chain

None of these are guaranteed - it *is* random, after all - but given a sufficiently large number of tests, statistically you should see the above.
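A minimal sketch of the counts above (Python, standard library only): each specific 32-coin pattern has probability 2^-32, so in 2^32 trials you expect each class of pattern about C(32, k) times, where k is the number of tails.

# Expected occurrences of "k tails out of 32" in 2^32 trials of 32 fair coins.
import math

for k in range(4):
    print(k, math.comb(32, k))   # 1, 32, 496, 4960 expected occurrences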
hi,

Thank you - one more question. Will the information obtained from the 2^32 tests have a zero compression rate? If one of the occurrences should yield all heads and one occurrence yields all tails, there appears to be scope for compression. If the output is random, then it will have no mathematical structure, so I shouldn't be able to compress it at all.

Regards Sarath.

--- Dave Howe <DaveHowe@gmx.co.uk> wrote:
for a sufficiently large sample you *should* see roughly equal numbers of heads and tails in the average case - but: for 32 coins in 2^32 tests you should see: one occurrence of all heads (and one of all tails), 32 occurrences of one tail, 31 heads (and 32 of one head, 31 tails), 496 occurrences of two, and so forth up the chain. None of these are guaranteed - it *is* random, after all - but given a sufficiently large number of tests, statistically you should see the above.
On Monday, August 18, 2003, at 06:37 AM, Sarad AV wrote:
hi,
Thank you - one more question. Will the information obtained from the 2^32 tests have a zero compression rate? If one of the occurrences should yield all heads and one occurrence yields all tails, there appears to be scope for compression.
This outcome is compressible because it has a short description.
If the output is random, then it will have no mathematical structure, so I shouldn't be able to compress it at all.
Our best current description of randomness is that something is random when it has no _shorter_ description than itself. (A point of view variously credited to Kolmogorov, Chaitin, and Solomonoff.)

But, as I said in my last post, before you try to understand algorithmic information theory, you need to learn the basics of probability. Without understanding things like combinations and permutations, binomial and Poisson distributions, the law of large numbers, standard deviations, etc., your philosophizing will be ungrounded. You can read articles some of us wrote here about 10-11 years ago by using Google on the obvious terms.

--Tim May

"We should not march into Baghdad. To occupy Iraq would instantly shatter our coalition, turning the whole Arab world against us and make a broken tyrant into a latter-day Arab hero. Assigning young soldiers to a fruitless hunt for a securely entrenched dictator and condemning them to fight in what would be an unwinnable urban guerrilla war, it could only plunge that part of the world into ever greater instability." --George H. W. Bush, "A World Transformed", 1998
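A rough empirical echo of that "no shorter description" idea (a sketch only - zlib is a crude stand-in for "shortest description," but the direction of the effect shows up):

# Structured data has a short description; OS randomness does not.
import os, zlib

structured = b"H" * 4096               # "4096 heads" - a very short description
random_ish = os.urandom(4096)          # no short description expected
print(len(zlib.compress(structured)))  # a few dozen bytes
print(len(zlib.compress(random_ish)))  # close to (or slightly above) 4096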
Tim May <timcmay@got.net> wrote:
But, as I said in my last post, before you try to understand algorithmic information theory, you need to learn the basics of probability. Without understanding things like combinations and permutations, binomial and Poisson distributions, the law of large numbers, standard deviations, etc., your philosophizing will be ungrounded.
A good place to learn the basics: http://web.jfet.org/6.041-text/

-- Riad Wahby rsw@jfet.org MIT VI-2 M.Eng
Sarad AV wrote:

Will the information obtained from the 2^32 tests have a zero compression rate? If one of the occurrences should yield all heads and one occurrence yields all tails, there appears to be scope for compression.

In that one case, yes. The compression will vary based on the deviation from an ideally compressible sample - for *any* 32-bit pattern 0x00000000 to 0xFFFFFFFF you would expect to see it once in 2^32 trials (by definition), therefore you will get a mix of compressible and incompressible patterns, with incompressible patterns being more likely (simply because there are more of them in the full range of available patterns).

If the output is random, then it will have no mathematical structure, so I shouldn't be able to compress it at all.

Randomness is a funny thing. A truly random file can be *anything* that is the right length - including an excerpt from William Shakespeare (provided you have enough monkeys). It is most likely to be random garbage, for a large sample - but for a very small sample the odds of it being an ordered sample are surprisingly good.

The obvious example here is a double coin test - two bits. An ordered sample would be both heads or both tails; a disordered sample would be one head and one tail. In practice, you would expect to see half the results from a larger trial (say 32 double-throws) with an ordered sample, and half disordered. As you reach three coins, the odds of a completely ordered result decrease (from 2 in 2^2 to 2 in 2^3). For four coins, you still have the same two compressible results. Consider:

HHHH HHHT HHTH HHTT HTHH HTHT HTTH HTTT THHH THHT THTH THTT TTHH TTHT TTTH TTTT

In a large trial, you would expect to see each of these once every 2^4 tests, on average. Obviously HHHH and TTTT are very compressible. Assuming a run-length encoding, I don't think any of the others are compressible at all...
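The four-coin observation can be checked directly with a toy run-length encoding (a sketch; each run becomes a count-plus-symbol pair, two characters): only HHHH and TTTT come out shorter than the original four symbols.

# Run-length encode every 4-coin pattern and flag the ones that shrink.
from itertools import groupby, product

def rle(s):
    return "".join(f"{len(list(g))}{c}" for c, g in groupby(s))

for p in product("HT", repeat=4):
    s = "".join(p)
    print(s, rle(s), len(rle(s)) < len(s))   # True only for HHHH and TTTT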
(Will whoever is prepending this "Re: *** GMX Spamverdacht ***" header please STOP.)

On Monday, August 18, 2003, at 09:01 AM, Dave Howe wrote:
randomness is a funny thing. A truly random file can be *anything* that is the right length - including an excerpt from William Shakespeare (provided you have enough monkeys). It is most likely to be random garbage, for a large sample - but for a very small sample the odds of it being an ordered sample are surprisingly good.
the obvious example here is a double coin test - two bits. An ordered sample would be both heads or both tails; a disordered sample would be one head and one tail. In practice, you would expect to see half the results from a larger trial (say 32 double-throws) with an ordered sample, and half disordered. As you reach three coins, the odds of a completely ordered result decrease (from 2 in 2^2 to 2 in 2^3). For four coins, you still have the same two compressible results. Consider: HHHH HHHT HHTH HHTT HTHH HTHT HTTH HTTT THHH THHT THTH THTT TTHH TTHT TTTH TTTT
in a large trial, you would expect to see each of these once every 2^4 tests, on average. Obviously HHHH and TTTT are very compressible. Assuming a run-length encoding, I don't think any of the others are compressible at all...

Quibble: only surprising if one misunderstands probability. (Not saying you do, just quibbling with any claim that readily calculated probabilities can be "surprising.")

For any sequence of n fair tosses, the "length after compression" of the outcome is roughly, modulo some constants and factors, (1 - 2^(-n)) times the uncompressed length. In other words, as n gets large nearly all strings have a "length after compression" that is close to the original length, i.e., little compression. As n gets arbitrarily large, an arbitrarily small fraction of strings have a short, concise description (are "compressed").

--Tim May

"We should not march into Baghdad. To occupy Iraq would instantly shatter our coalition, turning the whole Arab world against us and make a broken tyrant into a latter-day Arab hero. Assigning young soldiers to a fruitless hunt for a securely entrenched dictator and condemning them to fight in what would be an unwinnable urban guerrilla war, it could only plunge that part of the world into ever greater instability." --George H. W. Bush, "A World Transformed", 1998
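The counting argument behind that claim, as a sketch: descriptions shorter than n - c bits number at most 2^(n-c) - 1, so no matter how clever the compressor, the fraction of n-bit strings it can shrink by c or more bits is below 2^-c.

# Fraction of 32-bit strings that could have a description c bits shorter.
n = 32
for c in range(1, 9):
    compressible = 2 ** (n - c) - 1       # strings with a description that short
    print(c, compressible / 2 ** n)       # just under 2^-c: 0.5, 0.25, 0.125, ...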
Tim May wrote:

(Will whoever is prepending this "Re: *** GMX Spamverdacht ***" header please STOP.)

That would be my email provider - and hence me. Sorry. They suck in many ways, but they give an efficient free service with TLS support. One of the ways they suck is to either (a) hide some of your mail in a folder invisible to POP3 (so you have to use their German-only and non-fishable interface to go look), or (b) add that bloody stupid header to random emails that they think are spam (no idea what criteria are in use, as it is all in German).
Quibble: only surprising if one misunderstands probability. (Not saying you do, just quibbling with any claim that readily calculated probabilities can be "surprising.")

I meant surprising for Sarad - much of this discussion pre-assumes that he *does* misunderstand probability but is willing to substitute our collective insanity for his current ignorance :)
<snipping "We should not march into ...Transformed", 1998> Erm - what was that? a misplaced .sig?
hi,

--- Dave Howe <DaveHowe@cmn.sharp-uk.co.uk> wrote:

(Not saying you do, just quibbling with any claim that readily calculated probabilities can be "surprising.")

I meant surprising for Sarad - much of this discussion pre-assumes that he *does* misunderstand probability but is willing to substitute our collective insanity for his current ignorance :)
No more of that - I will have a good read. I am basically confused by this:
In a perfectly random experiment, how many tails and how many heads do we get?

We don't know - or it wouldn't be random :) For a sufficiently large sample you *should* see roughly equal numbers of heads and tails in the average case.
We say that we don't know, or it wouldn't be random. Then we say that we must see roughly equal numbers of heads and tails for large trials. That's what I fail to understand. The idea of a perfectly random experiment was taken just to understand the concept.

Thanks.
Regards Sarath.
Sarad AV wrote:
We say that we don't know, or it wouldn't be random. Then we say that we must see roughly equal numbers of heads and tails for large trials. That's what I fail to understand.

It's the difference between any one test (which will be completely unpredictable) and probabilities (where you know that, unless there is a weighting in force, the odds of any one of n options coming up will be 1 in n, so you would expect to see roughly equal numbers of each).
as an analogy - imagine a horse race where all five horses are roughly equal in fitness and rider skill. A bookie would give equal odds on each (not 1 in 5, as he has to make a profit, but no horse would be "worth" more than another). You would *expect* that, if the race was run enough times, each horse would win about a fifth of them - but that won't help you predict the result of any one race in particular, nor would it be impossible for one horse to win all the races, purely from luck.
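A quick simulation of the analogy (a sketch using Python's random module as the "luck"): over many races each of five equal horses wins about a fifth of the time, yet no single race is predictable.

# Run many five-horse races with equal odds and tally the winners.
import random
from collections import Counter

races = 100_000
wins = Counter(random.randrange(5) for _ in range(races))
for horse in sorted(wins):
    print(horse, wins[horse] / races)   # each fraction close to 0.20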
On Tuesday, August 19, 2003, at 03:13 AM, Sarad AV wrote:
In a perfectly random experiment, how many tails and how many heads do we get?

We don't know - or it wouldn't be random :) For a sufficiently large sample you *should* see roughly equal numbers of heads and tails in the average case.
We say that we don't know, or it wouldn't be random. Then we say that we must see roughly equal numbers of heads and tails for large trials. That's what I fail to understand.
Start small. Do some experiments _yourself_.

Take a coin out of your pocket. I assume your local coin has something that may be called a "head" and something that may be called a "tail." In any case, decide what you want to call each side. Flip the coin very high in the air and let it land on the ground without any interference by you. This is a "fair toss." (That subtle air currents may affect the landing is completely unimportant, as you will see even if you have doubts about it now.)

Now let's try a little piece of induction on this one, single toss. Remember when you said earlier that a "perfectly random coin toss" would have exactly equal numbers of heads and tails? Well, with a single toss there can ONLY be either a head or a tail. The outcome will be ONE of these, not some mixture of half and half. This shows, by the way, that the claim that a random coin toss must result in equal numbers of heads and tails cannot hold in any particular experiment.

Now toss the coin a second time and record the results. (I strongly urge you to actually do this experiment. Really. These are the experiments which teach probability theory. No amount of book learning substitutes.)

So the coin has been tossed twice in this particular experiment. There is now the possibility of equal numbers of heads and tails... but for the second coin toss to give the opposite result of the first toss, "every time, to balance the outcomes," the coin or the wind currents would have to "conspire" to make the outcome the opposite of what the first toss gave. (This is so absurd as to be not worth discussing, except that I know of no other way to convince you that your theory that equal numbers of heads and tails must be seen cannot be true in any particular experiment. The more mathematical way of saying this is that the "outcomes are independent." The result of one coin toss does not affect the next one, which may take place far away, in another room, and so on.)

In any case, by the time a third coin toss happens there again cannot be equal numbers of heads and tails, for obvious reasons. And so on.

Do this experiment. Do this experiment for at least 10 coin tosses. Write down the results. This will take you only a few minutes. Then repeat the experiment and write down the results. Repeat it as many times as you need to get a good feeling for what is going on. And then think of variations with dice, with cards, with other sources of randomness.

And don't "dry lab" the results by imagining what they must be in your head. Actually get your hands dirty by flipping the coins, or dealing the cards, or whatever. Don't cheat by telling yourself you already know what the results must be.

Only worry about the deep philosophical implications of randomness after you have grasped, or grokked, the essence. (Stuff about Kripke's possible-worlds semantics, Bayesian outlooks, Kolmogorov-Chaitin measures, etc., is very exciting, but it's all based on these foundations.)

--Tim May

"We should not march into Baghdad. To occupy Iraq would instantly shatter our coalition, turning the whole Arab world against us and make a broken tyrant into a latter-day Arab hero. Assigning young soldiers to a fruitless hunt for a securely entrenched dictator and condemning them to fight in what would be an unwinnable urban guerrilla war, it could only plunge that part of the world into ever greater instability." --George H. W. Bush, "A World Transformed", 1998
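The same exercise in simulated form (a sketch only - the advice to flip real coins still stands): repeat a 10-toss experiment many times and tally the number of heads. Exactly 5 heads turns up only about a quarter of the time.

# Tally head counts over many simulated 10-toss experiments.
import random
from collections import Counter

runs = 10_000
tally = Counter(sum(random.randint(0, 1) for _ in range(10)) for _ in range(runs))
for heads in range(11):
    print(heads, tally[heads] / runs)   # peaks near 5, spread from 0 to 10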
At 08:45 AM 8/19/03 -0700, Tim May wrote: ...
(I strongly urge you to actually do this experiment. Really. These are the experiments which teach probability theory. No amount of book learning substitutes.)
Yep. I've often thought that one benefit to playing RPGs when I was younger was directly observing lots and lots of rolls of various kinds of dice. That gives you an intuition for how unlikely things can happen sometimes, for the difference between "very unlikely" and "impossible," etc.
So the coin has been tossed twice in this particular experiment. There is now the possibility for equal numbers of heads and tails....but for the second coin toss to give the opposite result of the first toss, "every time, to balance the outcomes," the coin or the wind currents would have to "conspire" to make the outcome the opposite of what the first toss gave. (This is so absurd as to be not worth discussing, except that I know of no other way to convince you that your theory that equal numbers of heads and tails must be seen cannot be true in any particular experiment. The more mathematical way of saying this is that the "outcomes are independent." The result of one coin toss does not affect the next one, which may take place far away, in another room, and so on.)
In fact, I believe this is the trick that makes it very easy to distinguish between sequences of coin flips that really happen, and ones that are made up by a human. The human tends to try to make things even out over time.
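One way to see that effect (a sketch, not a specific published test): genuinely random flips contain longer runs than people expect. In 100 fair tosses the longest run of identical results is usually 6 or more, while sequences invented by hand tend to "even out" and rarely show runs past 3 or 4.

# Find the longest run of identical outcomes in 100 simulated fair tosses.
import random
from itertools import groupby

flips = [random.randint(0, 1) for _ in range(100)]
longest = max(len(list(g)) for _, g in groupby(flips))
print(longest)   # typically 6-8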
--John Kelsey, kelsey.j@ix.netcom.com PGP: FA48 3237 9AD5 30AC EEDD BBC8 2A80 6948 4CAA F259
On Monday, August 18, 2003, at 03:24 AM, Sarad AV wrote:
hi,
Hope you can help on this.
--- Tim May <timcmay@got.net> wrote:
I hope you are not saying that you think there will always be 16 heads and 16 tails!
In a perfectly random experiment, how many tails and how many heads do we get?
First, those who think there are "perfectly random" experiments or numbers are living in a state of sin, to paraphrase John von Neumann.

Second, despite the above, good approximations to random behavior are easy to obtain: radioactive decays, lava lamps, Johnson noise in diodes, etc. The important aspects of probability theory emerge even with "imperfect" sources of apparent randomness.

Third, the expected distribution of heads and tails in a series of 32 coin tosses is approximated closely by the binomial distribution. The key concepts are combinations and permutations. The expected probability of "all heads" is given by (0.5)^32. There are more chances of seeing 1 head and 31 tails, as the head can appear in any of the 32 positions ("the combination of 32 things taken 1 at a time"). And so on, up to the maximum at 16 heads, 16 tails... although this particular outcome is not very likely.

Fourth, my point was that there is relatively low probability that 32 tosses will result in exactly 16 heads and 16 tails. Given enough experiments, the distribution of outcomes will approximately follow the familiar bell-shaped curve, centered at 16H/16T, but with some chance of each of 0H/32T, 1H/31T, ..., 31H/1T, 32H/0T.

Fifth, not to sound harsh or snippy or sarcastic, but this is really basic stuff. There is a big gap in your education. Even if you were not taught this in 9th grade (or whatever the equivalent is in India), this is stuff that should be apparent from thinking about the way chance events occur. I urge you to suspend your "advanced math" questions until you have gone back over the more basic things. (It's crazy to try to understand entropy and the algorithmic information theory work of Greg Chaitin and others without knowing the 18th-century results on probability of people like Pascal, Poisson, etc.)

There are many Web pages with class notes on probability, encyclopedia entries, etc. And I suggest experiments with coin tosses. And cards. And dice.

--Tim May

"A complex system that works is invariably found to have evolved from a simple system that worked... A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system." -- Grady Booch
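The binomial arithmetic from points three and four, as a minimal sketch (Python, standard library only): P(k heads in 32 tosses) = C(32, k) * (0.5)^32.

# Probability of k heads in 32 fair tosses, for a few values of k.
import math

n = 32
print(0.5 ** n)                            # all heads: about 2.3e-10
for k in (0, 1, 2, 15, 16, 17, 30, 31, 32):
    print(k, math.comb(n, k) * 0.5 ** n)   # bell-shaped, peak ~0.14 at k = 16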
participants (6):
- Dave Howe
- Dave Howe
- John Kelsey
- Riad S. Wahby
- Sarad AV
- Tim May