Language recognition is important in cryptanalysis because, among other applications, an exhaustive key search of any cryptosystem from ciphertext alone requires a test that recognizes valid plaintext.
For exhaustive key search on any reasonably good symmetric cipher (like DES), some simple entropy measure over n-bit-grams should suffice to distinguish random from non-random. The more elaborate approaches described in this talk seem like overkill in that context. But then again, maybe we're trying to break Enigma. :-)
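To make that "simple entropy measure" concrete, here's a sketch in Python (the function names and the 6-bits/byte threshold are my own illustration, not anything from the talk): score each decryption candidate by the Shannon entropy of its byte distribution. Wrong keys produce noise near 8 bits/byte; English plaintext sits around 4-5 bits/byte.

```python
# Illustrative entropy test for recognizing valid plaintext during an
# exhaustive key search.  Threshold and names are my own invention.
import os
from collections import Counter
from math import log2

def entropy(data: bytes) -> float:
    """Shannon entropy of the byte (1-gram) distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def looks_like_plaintext(candidate: bytes, threshold: float = 6.0) -> bool:
    # A mid-range threshold separates low-entropy language
    # from the near-8-bits/byte output of a wrong key.
    return entropy(candidate) < threshold

english = b"an exhaustive key search requires a test that recognizes valid plaintext"
noise = os.urandom(4096)
print(entropy(english), entropy(noise))
```

The same idea extends to n-bit-grams by counting overlapping n-byte windows instead of single bytes; for ciphers as good as DES, the 1-gram version is usually already enough.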
Modeling language as a finite stationary Markov process,
A "finite stationary Markov process" is just fancy math-speak for what a travesty generator does. "Finite" means the total number of states is finite, so you get to use matrices instead of kernel integrals, which means your averagely educated scientist can follow it. "Stationary" means the transition matrix is not a function of time; it's a constant matrix, so time appears only in an exponent. And a "Markov process" is a probabilistic transition from one state to another. (Approximately. All these definitions are meant to explain, not to define.)

The talk looks interesting, to be sure, but it looks more significant for making a better /etc/magic for file(1) than it does for cryptanalysis.

Eric
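P.S. Since I brought up travesty generators: here's a minimal one, just to show how little machinery "finite stationary Markov process" really implies (the sample text and function names are mine, for illustration only).

```python
# A minimal travesty generator: a finite stationary Markov process over
# character states.  "Finite": the states are the finitely many characters
# seen in the training text.  "Stationary": the transition table is built
# once and never changes as generation proceeds.
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    chain = defaultdict(list)
    for a, b in zip(text, text[1:]):
        chain[a].append(b)  # transition a -> b, weighted by frequency
    return chain

def travesty(chain: dict, start: str, length: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nexts = chain.get(out[-1])
        if not nexts:  # dead-end state: restart from a random known state
            nexts = list(chain)
        out.append(rng.choice(nexts))
    return "".join(out)

sample = "the cat sat on the mat and the rat ate the hat"
chain = build_chain(sample)
print(travesty(chain, "t", 40))
```

Stationarity is visible right in the code: the same `chain` table is consulted at every step, so running the process n steps is the "matrix to the nth power" view in miniature.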