Just a thought I had the other day. Probably unconsciously plagiarized, but bear with me. What about using subliminal channels as a tool to signal software failure?

That is, suppose we define some condition under which the software could continue to work, but should not, and in which simple cessation of function is not possible, or not advisable. The only example that comes to mind off the top of my head is "stolen" software... though perhaps one might also use subliminal channels in diagnostic equipment, if competitors are assumed to be listening in. When such a condition is met, the software modifies its output (which should be signed with something that has a nice, big subliminal channel... DSA? SHA alone is just a hash, so the channel would live in the signature scheme) to signal the condition and its particulars. After modifying itself to produce the altered output, it deletes the code responsible for the modification. Unless caught in the act, or compared against a legitimate copy, the application appears no different than before.

I was thinking in terms of crypto (or other) software that attempts to personalize itself to a particular machine. If someone steals the hard drive, or grabs the keys and program, their output will be 'tainted', alerting legitimate users to the theft. Hardware disconnected from its normal environment might use such a channel to indicate its 'stolen' or 'temporarily down - come fix' status.

This is security through obscurity... the chances of it working are about the chances that no one notices the change or finds the code responsible. I suppose the software industry (and the pirates) will be only too happy to provide examples of many attempts at such schemes. For this reason, I would ask only whether it makes sense for limited distributions of software or hardware products.

Is this kind of system already in use? Any ideas on making it more applicable to general distribution, or has this already been tried and discarded?

David Molnar
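To make the signature idea concrete, here is a minimal sketch of the classic narrowband subliminal channel in DSA-style signatures: the signer rejection-samples the per-signature nonce k until the signature's r value carries the desired bit in its low-order bit. A verifier who doesn't know the trick sees ordinary valid signatures; anyone watching for the channel reads the status bits. The parameters, key, and the "stolen" status flag below are toy values of my own invention (p=23 is far too small for real use); this only illustrates the mechanism, not a deployable design.

```python
import hashlib
import random

# Toy DSA parameters -- FAR too small for real use, illustration only.
p, q, g = 23, 11, 4          # g = 4 has order q = 11 in Z_23*
x = 7                        # private key (hypothetical)
y = pow(g, x, p)             # public key

def H(msg: bytes) -> int:
    """Hash the message down to an element of Z_q."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def sign_with_subliminal_bit(msg: bytes, bit: int):
    """Standard DSA signing, except we rejection-sample the nonce k
    until r leaks `bit` in its low-order bit: a 1-bit-per-signature
    subliminal channel, invisible to anyone not looking for it."""
    while True:
        k = random.randrange(1, q)
        r = pow(g, k, p) % q
        if r == 0 or r % 2 != bit:
            continue                      # reject: wrong parity
        s = (pow(k, -1, q) * (H(msg) + x * r)) % q
        if s != 0:
            return r, s

def verify(msg: bytes, r: int, s: int) -> bool:
    """Textbook DSA verification; the channel does not affect it."""
    w = pow(s, -1, q)
    u1, u2 = (H(msg) * w) % q, (r * w) % q
    v = (pow(g, u1, p) * pow(y, u2, p) % p) % q
    return v == r

def read_subliminal_bit(r: int) -> int:
    return r % 2

# Signal a hypothetical "stolen" status flag, one bit per signed output.
status_bits = [1, 0, 1, 1]
msgs = [b"routine output %d" % i for i in range(len(status_bits))]
sigs = [sign_with_subliminal_bit(m, b) for m, b in zip(msgs, status_bits)]

assert all(verify(m, r, s) for m, (r, s) in zip(msgs, sigs))
assert [read_subliminal_bit(r) for r, _ in sigs] == status_bits
```

Note the cost: each embedded bit roughly doubles the expected number of nonce trials, which is why this is called the *narrowband* channel; wider channels exist but leak the private key to the channel's reader.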