In my experience, neural nets are good at generalizing across sparse data to recognize patterns not seen before. GAs are more useful for converging (at an exponential rate, per Holland's schema theorem) on a solution to a problem. A GA is also easier to train when the score is a continuous real number, whereas most neural network implementations expect actual examples of the things to be recognized.

For a GA cryptanalysis tool, a vector representing an experiment would be used as the genotype, and the fitness could be the output of a specialized message detector (1.0 if the text looks like plain English, 0.0001 if only a few words are seen, and so on; it would of course also need to detect file formats like that of compress). Given this, a GA could find a solution. However, in learning theory there are problems considered unlearnable, and the standard example is encrypted information! The solution space could be like a flat plane with a single "needle" in it (the solution) and no hills leading toward the needle. That kind of solution space, unfortunately, requires exhaustive search.

It is difficult to characterize a solution space, but it is the key part: the mapping from a gene vector to a fraction representing the completeness of the solution is critical. If the landscape is completely flat except for the needle, a GA is not worth it; if it is completely random, it is also not worth it. The solution space must lie somewhere between these two extremes for a GA to be useful.

Based on my limited experience with cbw (the Crypt Breaker's Workbench), it is possible to get partial results (e.g., ex*lo*e -> explore, after which other words get filled in) and zoom in on the full solution, so there a GA would be helpful. But cbw attacks the Unix crypt(1) cipher, an Enigma-type machine, and newer algorithms are much more sophisticated, so I don't know whether the same kind of partial knowledge applies to cracking RSA, triple DES, or IDEA.
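To make the genotype-to-fitness mapping concrete, here is a rough sketch of the kind of GA loop I mean. The genotype is a candidate key, fitness() is a crude stand-in for the specialized message detector, and the toy repeating-key XOR cipher is just a placeholder for a real target algorithm:

import random
import string

def decrypt(key, ciphertext):
    # Toy repeating-key XOR; a placeholder for the real cipher under attack.
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

def fitness(key, ciphertext):
    # Continuous score in [0, 1]: fraction of English-looking characters.
    # A real message detector would score words and digrams, and also
    # recognize file formats such as compress output.
    text = decrypt(key, ciphertext)
    ok = string.ascii_letters + " .,\n"
    return sum(chr(b) in ok for b in text) / len(text)

def evolve(ciphertext, key_len=4, pop_size=100, generations=200, mut=0.05):
    # Genotype: a list of key bytes.
    pop = [[random.randrange(256) for _ in range(key_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda k: fitness(k, ciphertext), reverse=True)
        if fitness(pop[0], ciphertext) > 0.99:
            break
        parents = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, key_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.randrange(256) if random.random() < mut else g
                     for g in child]            # point mutation
            children.append(child)
        pop = parents + children
    return pop[0]

if __name__ == "__main__":
    key = [17, 42, 99, 7]
    plain = b"the quick brown fox jumps over the lazy dog. " * 4
    ct = decrypt(key, plain)    # XOR is its own inverse
    best = evolve(ct)
    print(best, decrypt(best, ct)[:40])

Note that the toy cipher gives exactly the friendly kind of landscape described above: each key byte affects its own quarter of the characters, so partial solutions score partially and the GA can climb. A strong modern cipher would make fitness() look random for every wrong key, which is the needle-in-a-plane case where the GA buys you nothing.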
Paul E. Baclace
peb@procase.com