Random Data Compressed 100:1 (Guffaw)

georgemw at speakeasy.net
Tue Jan 8 12:33:04 PST 2002


On 8 Jan 2002, at 9:51, Michael Motyka wrote:

> Eric Cordian <emc at artifact.psychedelic.net> wrote :
> >Someone else needs to read the comp.compression FAQ.
> >
> >http://www.reuters.com/news_article.jhtml?type=technologynews&StoryID=498720
> >
> >-----
> >
> >NEW YORK (Reuters) - A Florida research start-up working with a team of
> >renowned mathematicians said on Monday it had achieved a breakthrough that
> >overcomes the previously known limits of compression used to store and
> >transmit data.
> >...
> >ZeoSync said its scientific team had succeeded on a small scale in
> >compressing random information sequences in such a way as to allow the
> >same data to be compressed more than 100 times over -- with no data loss.
> >That would be at least an order of magnitude beyond current known
> >algorithms for compacting data.
> >...
> >-- 
> >Eric Michael Cordian 0+
> >
> There may be a way to derive & patch together pseudorandom sequence
> generators, who knows. If there is anything real about this I wonder how
> long it takes to compress large blocks of arbitrary input data?
> Geological time scales anyone?
> 
> Mike
> 

A meaningless question.  As Eric correctly points out, truly
random input simply cannot be compressed; it doesn't matter
how clever you are or how much time and computing power you
have access to.  Conversely, if you're just interested in
"compressing" the results of your own pseudo-random number
generator, it's trivial to do instantly: just store the seed
and the desired file size.
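
To make that concrete, here is a rough Python sketch of what such a
"compressor" amounts to (the function names are made up for
illustration): the entire "archive" is the seed plus the output
length, and "decompression" is nothing more than rerunning the
generator.

import random

def compress_prng_output(seed, length):
    # The "compressed file" is just the (seed, length) pair,
    # a handful of bytes no matter how large the output is.
    return (seed, length)

def decompress_prng_output(archive):
    # Rerun the same generator from the same seed to reproduce
    # the original byte stream exactly.
    seed, length = archive
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(length))

# Example: a megabyte of "random" data stored losslessly as two integers.
original = decompress_prng_output((12345, 1_000_000))
archive = compress_prng_output(12345, 1_000_000)
assert decompress_prng_output(archive) == original

The catch, of course, is that this only works because you already
know the generator; it says nothing about compressing data that is
actually random.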

George
 