Re: standard for steganography?
Has anyone done statistical studies of the low bits of pixels or sound samples? I suspect that they are often far from random. A flat 50% distribution in the low bits might stand out like a sore thumb. I can imagine that the low bit may be distributed in a way that depends on such things as the next-to-low bits, or on 60-cycle power at the recorder. Some AD converters are known to produce 60% ones, or some such. Like mechanical typewriters, AD systems probably have their own idiosyncrasies.

Given a flat stream of cipher data, there are techniques to reversibly introduce such variations, mimicking the biases of real AD converters without much data expansion. It is my wild guess and conjecture that with such statistical variation built in, there would be no effective statistical test for a given file containing hidden messages.
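A minimal sketch of the kind of test such a bias would enable. The "60% ones" converter here is hypothetical, modeling the idiosyncrasy conjectured above; embedding a flat cipher stream in the low bits erases the bias, which is exactly the sore thumb:

```python
import random

def lsb_ones_fraction(samples):
    """Fraction of ones among the least-significant bits of a sample stream."""
    return sum(s & 1 for s in samples) / len(samples)

random.seed(1)
n = 100_000

# Hypothetical AD converter whose low bit comes up one ~60% of the time.
natural = [(random.getrandbits(7) << 1) | (random.random() < 0.6)
           for _ in range(n)]

# The same samples after a flat cipher stream is embedded in the low bits:
# the 60% bias collapses to 50%.
stego = [(s & ~1) | random.getrandbits(1) for s in natural]

print(f"natural LSB ones: {lsb_ones_fraction(natural):.3f}")  # ~0.600
print(f"stego   LSB ones: {lsb_ones_fraction(stego):.3f}")    # ~0.500
```

With 100,000 samples the standard error of the ones-fraction is under 0.2%, so a 60%-vs-50% difference is unmistakable; this is why Norman suggests re-introducing the converter's bias into the cipher stream before embedding.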
On Mon, 28 Feb 1994, Norman Hardy wrote:

> Given a flat stream of cipher data, there are techniques to reversibly
> introduce such variations to mimic the biases of real AD converters
> without much data expansion. It is my wild guess and conjecture that
> with such statistical variation built in there would be no effective
> statistical test for a given file containing hidden messages.
Yes, pure white noise would be anomalous. I have suggested that one use a Mimic function with a "garbage grammar"; implemented correctly, it should withstand statistical analysis. What is an AD converter? And what are the techniques you speak of that mimic those AD converters?

Sergey
I have played with stego some, and at present image resolutions I don't find that images have enough complexity to really hide a message of usable length, unless you break it up across several images.

I use a function to measure the complexity of an image based on adjacent bit changes; the more complex an image, the more bit changes. I measure it thus:

    # of adjacent bit changes in image / # of bits in image = complexity

If the complexity is too low or too high (this is counter-intuitive), then you can't hide a message. Consider an image with only a few bit flippings: any message that is inserted will distort the visual image in a noticeable way (unless it is truly expressionistic). Now consider an image with every other bit flipped (maximum complexity), which is in effect a checkerboard. Any bits that get flipped change the pattern to a less complex one (i.e., the checkerboard is broken up).

You also have to consider the effects on edges, and the standard deviation inherent in using anti-aliasing. This will cause bits on an edge to be switched incorrectly for the algorithm in use. Since it is a trivial problem to measure the standard deviation for various graphics packages, this makes a nifty test bed for finding embedded images. Blank or monochromatic areas also show the same type of errors.

I am still working on it and hope to find an error in there somewhere, but so far no go.
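Jim's complexity measure can be sketched in a few lines. This is a minimal sketch that treats the image as a flat bit list (a real image would be scanned row by row); it shows both extremes he describes, and how a single flipped bit moves each extreme toward the middle:

```python
def complexity(bits):
    """Jim's metric: # of adjacent bit changes divided by # of bits."""
    changes = sum(a != b for a, b in zip(bits, bits[1:]))
    return changes / len(bits)

flat = [0] * 16       # monochromatic region: complexity 0.0
checker = [0, 1] * 8  # checkerboard: maximum complexity

print(complexity(flat))     # 0.0    -- any embedded bit is conspicuous
print(complexity(checker))  # 0.9375 -- any flip breaks the pattern

# Flipping one bit moves both extremes toward the middle:
flat[7] = 1
checker[7] = 0
print(complexity(flat), complexity(checker))  # 0.125 0.8125
```

This is why embedding is only safe at intermediate complexity: in the flat region one flipped bit creates two new adjacent changes, and in the checkerboard one flipped bit destroys two.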
participants (3)
- Jim Choate
- norm@netcom.com (Norman Hardy)
- Sergey Goldgaber