Re: There's a hole in your crypto...
Phil Fraering writes:
Why are the arguments on either side so emotional?
I'm rather hesitant to jump into this thread, but I think that one reason is that Fred's concerns have been misunderstood a bit. (If I'm wrong, I'm sure he'll correct me.) It seems that there are many people who are ready to leap to the defense of the honor of the programmers behind PGP, when they feel said honor is being impugned. I get the impression (as much from what I know of his background as from what he's said) that Fred is at least as concerned about PGP being a correct implementation of the various algorithms it involves as he is about back doors inserted by nefarious individuals.

As I understand it, it is impossible to demonstrate the correctness of any program the size of PGP. And it would also not be possible to validate the compiler or the operating system. One thing I'm not sure of, though, is this: Would it be possible to verify a much smaller program, say, the RSA-in-3-lines-of-Perl? (Of course, you still would be left trying to verify the Perl interpreter, and the OS again.)

And is there any way to build a trusted system out of small, verifiable pieces? Since the way they're connected could also be questioned, I suspect that when you put enough of them together it's just as bad as the case of a single, monolithic program. But this isn't my area, so I don't know.

Would it be possible to formally verify at least some parts of a large program like PGP? And would that add to the trustworthiness of the overall program? (Keeping in mind Fred's earlier remark about a seemingly-unrelated portion of the code overwriting the key.)

--
David R. Conrad, ab411@detroit.freenet.org, http://web.grfn.org/~conrad/
Finger conrad@grfn.org for PGP 2.6 public key; it's also on my home page
Key fingerprint = 33 12 BC 77 48 81 99 A5 D8 9C 43 16 3C 37 0B 50
No, his mind is not for rent to any god or government.
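The "much smaller program" in question is, at its mathematical core, just modular exponentiation. For a sense of scale, here is a toy sketch in Python with deliberately tiny parameters (my own illustration, not the 3-line Perl program referred to above, and certainly not a secure implementation):

    # Textbook RSA core: encryption and decryption are each one modular exponentiation.
    # Toy parameters only -- small enough to read, far too small to be secure.
    p, q = 61, 53            # small example primes
    n = p * q                # public modulus
    phi = (p - 1) * (q - 1)  # Euler's totient of n
    e = 17                   # public exponent, coprime to phi
    d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi (Python 3.8+)

    def encrypt(m: int) -> int:
        return pow(m, e, n)  # c = m^e mod n

    def decrypt(c: int) -> int:
        return pow(c, d, n)  # m = c^d mod n

    assert decrypt(encrypt(42)) == 42

Even granting that a kernel this small can be inspected line by line, the question in the post stands: the interpreter, the operating system, and the hardware underneath it all remain unverified.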
On Wed, 2 Aug 1995, David R. Conrad wrote:
Phil Fraering writes:
Why are the arguments on either side so emotional?
I'm rather hesitant to jump into this thread, but I think that one reason is that Fred's concerns have been misunderstood a bit. (If I'm wrong, I'm sure he'll correct me.)
It seems that there are many people who are ready to leap to the defense of the honor of the programmers behind PGP, when they feel said honor is being impugned.
I get the impression (as much from what I know of his background as from what he's said) that Fred is at least as concerned about PGP being a correct implementation of the various algorithms it involves as he is about back doors inserted by nefarious individuals.
As I understand it, it is impossible to demonstrate the correctness of any program the size of PGP. And it would also not be possible to validate the compiler or the operating system. One thing I'm not sure of, though, is this: Would it be possible to verify a much smaller program, say, the RSA-in-3-lines-of-Perl? (Of course, you still would be left trying to verify the Perl interpreter, and the OS again.)
And is there any way to build a trusted system out of small, verifiable pieces? Since the way they're connected could also be questioned, I suspect that when you put enough of them together it's just as bad as the case of a single, monolithic program. But this isn't my area, so I don't know.
No. This was essentially proved during the first third of this century. But even if the program itself works, you have to check the OS, the motherboard & the processor. Did I say processor? Yes, I did. Anyone running on an 80586?

Nathan
Would it be possible to formally verify at least some parts of a large program like PGP? And would that add to the trustworthiness of the overall program? (Keeping in mind Fred's earlier remark about a seemingly-unrelated portion of the code overwriting the key.)
-- David R. Conrad, ab411@detroit.freenet.org, http://web.grfn.org/~conrad/ Finger conrad@grfn.org for PGP 2.6 public key; it's also on my home page Key fingerprint = 33 12 BC 77 48 81 99 A5 D8 9C 43 16 3C 37 0B 50 No, his mind is not for rent to any god or government.
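The 80586 remark is a nod to the Pentium FDIV flaw that had surfaced the previous fall: a hardware divide unit that silently returned wrong answers for certain operands. A sketch of the widely circulated check, using the operand pair from Thomas Nicely's reports (the result should be 0; the flawed chips famously produced 256):

    # Pentium FDIV check: on a correct FPU this remainder comes out as 0.0;
    # the flawed original Pentium ("80586") returned 256 for this operand pair.
    x, y = 4195835.0, 3145727.0
    print("FDIV check remainder:", x - (x / y) * y)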
Nathan Zook writes:
And is there any way to build a trusted system out of small, verifiable pieces? Since the way they're connected could also be questioned, I suspect that when you put enough of them together it's just as bad as the case of a single, monolithic program. But this isn't my area, so I don't know.
No. This was essentially proved during the first third of this century.
Well, I haven't gotten a reply from Nathan Zook on this assertion, so can anyone else back it up with some references? Perhaps we're discussing different contexts, but proving correct systems composed of correct components is still a subject of active research.

nathan
On Thu, 3 Aug 1995, Nathan Loofbourrow wrote:
Nathan Zook writes:
And is there any way to build a trusted system out of small, verifiable pieces? Since the way they're connected could also be questioned, I suspect that when you put enough of them together it's just as bad as the case of a single, monolithic program. But this isn't my area, so I don't know.
No. This was essentially proved during the first third of this century.
Well, I haven't gotten a reply from Nathan Zook on this assertion, so can anyone else back it up with some references? Perhaps we're discussing different contexts, but proving correct systems composed of correct components is still a subject of active research.
nathan
Sorry about that. Your message must have died when I splatted the dear "Professor" (bow, bow, bow).

There is "active research". Why is a mystery to me. Gödel's proof was the completion of several works. One of the earlier ones demonstrated that no axiom system can be shown to be consistent by a weaker one. Now the "research" in this area has consisted, in part, of translating algorithms into statements in axiomatic systems. The problem is that either we cannot prove that these systems are consistent or they are extremely limited in what they can do. (In particular, recursion seems to be anathema.) But the word "proof" in the previous sentence has to be taken with a grain of salt, because any axiom system that we use to prove things about another axiom system has to be at least as complicated.

This is why the "not a Turing machine" assertion that the "Professor" made is important. We know that a Turing machine is undecidable, so if we want to limit behavior, we can't have one. BUT---we don't know that being a Turing machine is equivalent to having "unpredictable" behavior. Furthermore, a "proof" of the "not a Turing machine" assertion is going to have to be done by--you guessed it--a computer. And this computer is running a program which definitely IS a Turing machine, if it is capable of "proving" that other (suitably non-trivial) programs are not Turing machines.

Why must this be done on a computer? Because the program under consideration is thousands of machine instructions long. And each instruction will be translated into dozens of statements in the axiom system. So any attempted proof will be beyond human ability.

Note that the above arguments do not require the physical existence of computers to make, which is why I referred to the "first third of this century", when these ideas were discovered. In reality, the fact that the program itself has been compiled (or was it written in machine code?), that it uses an operating system (or does it address all of the hardware independently of other programs?), and runs on a processor (maybe an 80586?) should be enough to convince serious critics of the futility of the exercise.

But the nagging question remains: why can't we build up big blocks from little ones? While there is a sort of "Turing horizon" beyond which programs are known to be unpredictable, let me attempt to address the problem another way, to bring our intuition more in touch with reality. The situation we are dealing with amounts to the phenomenon of "spontaneous complexity".

First, some physical examples. Take an object moving in a Newtonian space, with nothing else there. Give initial conditions, tell me what happens next. No problem. Take two objects. No problem. Take three objects. Big problem. Why? Perhaps we just haven't figured out the mathematics yet. Okay, take five objects. Why five? Because it is known that with a particular initial condition for five objects, all objects will "leave the universe" in a finite amount of time (!!!!!). Now what if you bump them a little bit? Certainly not all combinations of initial conditions lead to this situation. Which is which? Can this behavior be "built up" from two-object situations? It is important to note that this type of complexity was in fact discovered by Poincaré and others shortly after the turn of the century. Some of his sketches are clearly forerunners of the Mandelbrot set--he was considering these types of ideas.
(The complexity issues lost out first to relativity and then to quantum mechanics in the competition for the minds of researchers.)

Then there is the Mandelbrot set--which points are in and which are out? Are you sure? (Go ahead and limit yourself to rational points--we are talking computers.)

Take S^1, the unit circle in the complex plane. Define a sequence of functions f_1, f_2, ... on S^1 as follows: f_i(z) = z^i. Each point whose argument is a rational multiple of pi will limit to one, but no irrational point will. What is important to note is that there is a uniform set of measure 0 on S^1 such that the limiting behavior of this set is completely unpredicted by the behavior of the rest of the set. Perhaps you prefer to map S^1 to S^1 by repeated applications of f_2? Then only the binary rationals settle down.

So in each case, complex (in the technical sense) behavior is exhibited by outlandishly simple systems. Somehow the _interactions_ of these simple and predictable systems become unpredictable. That is why I consider this to be a closed subject.

Nathan
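To make the last example concrete: repeated application of f_2 is the angle-doubling map, and tracking the angle as a fraction of a full turn (my own choice of representation for this sketch) shows why only the binary (dyadic) rationals settle down at z = 1 while other points keep wandering:

    from fractions import Fraction

    def double_angle(theta: Fraction) -> Fraction:
        # One application of f_2 on S^1, with the angle kept as a fraction of a full turn.
        return (2 * theta) % 1

    def orbit(theta: Fraction, steps: int = 8):
        out = []
        for _ in range(steps):
            out.append(theta)
            theta = double_angle(theta)
        return out

    # A dyadic rational angle reaches 0 (the fixed point z = 1) and stays there...
    print(orbit(Fraction(3, 16)))   # 3/16, 3/8, 3/4, 1/2, 0, 0, ...
    # ...while a non-dyadic rational angle cycles forever without settling.
    print(orbit(Fraction(1, 3)))    # 1/3, 2/3, 1/3, 2/3, ...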
No crypto/privacy relevance, delete or flame now.... Nathan writes:
This is why the "not a Turing machine" assertion that the "Professor" made is important. We know that a Turing machine is undecidable, so if we want to limit behavior, we can't have one. BUT---we don't know that being a Turing machine is equivalent to having "unpredictable" behavior. Furthermore, a "proof" of the "not a Turing machine" assertion is going to have to be done by--you guessed it--a computer. And this computer is running a program which definitely IS a Turing machine, if it is capable of "proving" that other (suitably non-trivial) programs are not Turing machines.
I think this is a bit misguided. The Turing machine (TM) is an extremely general abstract model of computation. The gargantuan hunk of code that runs the Space Shuttle can be viewed as a Turing machine, as can a "Hello world" program written in Visual BASIC. So, there's not really a question about whether or not we're talking about Turing machines (unless perhaps you want to discuss quantum theorem provers and QTMs :)

Now, Rice's Theorem says that all non-trivial properties of TMs are undecidable. If I pick a "non-trivial" property, I can't conceivably build a TM ("write a program", if you like) that, upon input of the specification of an arbitrary TM, can tell whether or not that TM exhibits the property I picked. This does not mean that I can't decide whether some particular TMs have that property or not -- I can. I just can't write down a procedure that handles the general case.

Also, this theorem clearly hinges on the meaning of "trivial". From what I've seen, a very strict interpretation is largely appropriate; nearly everything except the least exciting of trivial low-level properties of TMs seems to come out to be "non-trivial" in this regard. The proof of the theorem is more precise about this, naturally, but I've found this useful as a working colloquial definition.

-Futplex
August 7, 1995
"Enola Gay, you should have stayed at home yesterday" -OMD
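For anyone who has not seen why the general case is hopeless, the standard argument is a diagonalization: if a total, correct decision procedure existed, you could build a program that does the opposite of whatever the procedure predicts about that program run on itself. A hedged sketch of the construction (the names are mine, and the stub claims_to_decide_halting stands in for a decider that cannot actually exist):

    # Sketch of the diagonal argument behind the halting problem / Rice's Theorem.
    # 'claims_to_decide_halting' is a stand-in for a hypothetical total, correct
    # decider: it is supposed to return True iff f(arg) halts.

    def claims_to_decide_halting(f, arg) -> bool:
        return True           # stub answer; no real decider can exist, as shown below

    def contrarian(f):
        # Do the opposite of whatever the "decider" predicts about f run on itself.
        if claims_to_decide_halting(f, f):
            while True:       # predicted to halt, so loop forever
                pass
        return "halted"       # predicted to loop, so halt immediately

    # Whatever claims_to_decide_halting answers about contrarian(contrarian), it is
    # wrong, so no single procedure covers the general case -- even though plenty of
    # particular programs can still be analysed by hand, which is exactly the point above.
    # (Don't actually call contrarian(contrarian); with the stub above it loops forever.)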
No crypto/privacy relevance, delete or flame now.... ... Now, Rice's Theorem says that all non-trivial properties of TMs are undecidable. If I pick a "non-trivial" property, I can't conceivably build a TM ("write a program", if you like) that, upon input of the specification of an arbitrary TM, can tell whether or not that TM exhibits the property I picked. This does not mean that I can't decide whether some particular TMs have that property or not -- I can. I just can't write down a procedure that handles the general case.
Also, this theorem clearly hinges on the meaning of "trivial". From what I've seen, a very strict interpretation is largely appropriate; nearly everything except the least exciting of trivial low-level properties of TMs seems to come out to be "non-trivial" in this regard. The proof of the theorem is more precise about this, naturally, but I've found this useful as a working colloquial definition.
Issue 1: Undecidable for arbitrary programs does not mean undecidable for every program. For all finite programs with finite input sequences, all properties are decidable. Complexity may make proofs for large programs infeasible at this time, but that is all. Now back to the point of the discussion. For certain classes of programs, we can prove many things that are relevant to information protection. Furthermore, as we attempt these proofs, we may find and fix the program anomalies (i.e., bugs) that would cause the program to fail in an undesirable way. Therefore, the proof techniques give us two benefits: they help us fix the programs, and they help increase the assurance that the programs do precisely what they are supposed to do and nothing else.

Issue 2: The notion that mathematics somehow excludes linguistic proofs (forwarded, I believe, by a user with "may" in their email address) is nonsense. Mathematics at its core is based on linguistic notions that are defined in plain language. These notions develop a system of rules which may be applied to decide the veracity of a proposition. The rules themselves form a language with syntax and semantics, just as the language that defines them has syntax and semantics. The notion of separating language from mathematics is a fine and interesting one, but it certainly does not apply to any mathematics currently in widespread use. A proof done without mathematical symbols is no less a proof.

Issue 3: Let's get back to the point of this discussion. What can we really prove about algorithms? I have made the assertion that an interesting property for the purposes of assessing integrity, availability, and confidentiality for servers like the W3 server and the gopher server is the limitation of information flow. I have backed up my assertion with a demonstration in the form of programs that do this, and English demonstrations that this is of real value. Does anyone disagree? Why? Is there a reason this same analytical technique cannot be used on PGP or other cryptosystems to demonstrate that there are no back doors (other than perhaps in the underlying inadequacy of the overall technique)? How hard is it to do this for such programs? What programming structures make this difficult? Will it reveal many programming errors and therefore be a useful general-purpose tool for writing better programs?

Just thought I would stir things up a bit.

--
-> See: Info-Sec Heaven at URL http://all.net
Management Analytics - 216-686-0090 - PO Box 1480, Hudson, OH 44236
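Issue 1 can be illustrated directly: when the input domain is finite, a property can be decided by sheer enumeration rather than by any general procedure. A toy sketch (the function and the property checked are my own examples):

    # Decidability by enumeration: for a finite input space, just try every input.
    # Toy example: verify that an 8-bit "sanitizer" never returns a value outside 0..127.

    def sanitize(byte_value: int) -> int:
        # Hypothetical example function: clear the high bit of an 8-bit input.
        return byte_value & 0x7F

    def property_holds_for_all_inputs() -> bool:
        return all(0 <= sanitize(b) <= 127 for b in range(256))   # all 256 possible inputs

    print(property_holds_for_all_inputs())   # True, checked exhaustively

The catch, as the post says, is only complexity: real programs have input spaces far too large to enumerate, which is where proof techniques rather than brute force have to carry the load.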
On Mon, 7 Aug 1995, Futplex wrote:
No crypto/privacy relevance, delete or flame now....
Nathan writes:
This is why the "not a Turing machine" assertion that the "Professor" made is important. We know that a Turing machine is undecidable, so if we want to limit behavior, we can't have one. BUT---we don't know that being a Turing machine is equivalent to having "unpredictable" behavior. Furthermore, a "proof" of the "not a Turing machine" assertion is going to have to be done by--you guessed it--a computer. And this computer is running a program which definitely IS a Turing machine, if it is capable of "proving" that other (suitably non-trivial) programs are not Turing machines.
I think this is a bit misguided. The Turing machine (TM) is an extremely general abstract model of computation. The gargantuan hunk of code that runs the Space Shuttle can be viewed as a Turing machine, as can a "Hello world" program written in Visual BASIC. So, there's not really a question about whether or not we're talking about Turing machines (unless perhaps you want to discuss quantum theorem provers and QTMs :)
If a statement is vacuous, it needs refining :-). If I were to state that "Program X is not a Turing Machine", I would be stating that program X does not model all Turing machines through its input. It is the ability of some Turing machines to model all Turing machines through their input that makes them undecidable.
Now, Rice's Theorem says that all non-trivial properties of TMs are undecidable. If I pick a "non-trivial" property, I can't conceivably build a TM ("write a program", if you like) that, upon input of the specification of an arbitrary TM, can tell whether or not that TM exhibits the property I picked. This does not mean that I can't decide whether some particular TMs have that property or not -- I can. I just can't write down a procedure that handles the general case.
The problem here is that it is the interesting cases with which we are concerned. If someone wants to write a computer program to "verify" my proof of the RSA algorithm, fine. But I have to be convinced that their program does what they claim before I care. And since their program takes mathematical theorems as input, it is already demonstrating near-Turing ( :-P) behavior.
Also, this theorem clearly hinges on the meaning of "trivial". From what I've seen, a very strict interpretation is largely appropriate; nearly everything except the least exciting of trivial low-level properties of TMs seems to come out to be "non-trivial" in this regard. The proof of the theorem is more precise about this, naturally, but I've found this useful as a working colloquial definition.
I'll buy that.
-Futplex
Nathan
On Thu, 3 Aug 1995, Nathan Loofbourrow wrote:
Nathan Zook writes:
And is there any way to build a trusted system out of small, verifiable pieces? Since the way they're connected could also be questioned, I suspect that when you put enough of them together it's just as bad as the case of a single, monolithic program. But this isn't my area, so I don't know.
No. This was essentially proved during the first third of this century.
...
There is "active research". Why is a mystry to me. Godel's proof was the completetion of several works. On of the earily demonstrated that no axiom system can be demonstrated to consistent by a weaker one. Now the "reasearch" in this area has consisted, in part, of translating algorithms into statements in axiomatic systems. The problem is that either we cannot prove that these systems are consistent or they are extremely limited in what they can do. (In particular, recursion seems to be anthema.) But the word proof in the previous sentence has to be taken with a grain of salt, because any axiom system that we use to prove things about another axiom system has to be at least as complicated.
You hit the nail right on the head when you said: "or they are extremely limited in what they can do". That's exactly the point. We cannot prove programs with general-purpose functionality to be secure, because they are not. But we may well be able to prove a lot of security properties about programs that are not general purpose. For example, a Web server that only does GET, a gopher server (not gopher plus), and a mail server may all fit the bill. And by coincidence, these are exactly the sorts of programs we want to be able to prove security properties about.
This is why the "not a Turing machine" assertion that the "Professor" made is important. We know that a Turing machine is undecidable, so if we want to limit behavior, we can't have one. BUT---we don't know that being a Turing machine is equivalent to having "unpredictable" behavior. Furthermore, a "proof" of the "not a Turing machine" assertion is going to have to be done by--you guessed it--a computer. And this computer is running a program which definitely IS a Turing machine, if it is capable of "proving" that other (suitably non-trivial) programs are not Turing machines.
I think in the case of simple (i.e., short and written for the purpose) programs these proofs could reasonably be done by hand. In fact, I think we could create a theorem verifier that we could prove to only verify true theorems as true. Some theorems would never be proven one way or the other, and others might be proven false, but some things, particularly the ones we need to bootstrap the theorem-proving technology and things like the properties of a secure W3 server, could fit into this schema.
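A verifier of that kind can be very small if the logic it accepts is kept minimal. As a hedged illustration (a toy design of my own, not the verifier the post has in mind), here is a checker that accepts a derivation only when every step is either a stated axiom or follows from two earlier steps by modus ponens, so it can only certify consequences of the axioms:

    # Minimal proof checker: a proposition is a string, and the implication "P -> Q"
    # is represented as the tuple (P, Q).  Each proof step must be an axiom or follow
    # from two already-proven steps by modus ponens.

    def check_proof(axioms, steps) -> bool:
        proven = set()
        for step in steps:
            if step in axioms:
                proven.add(step)
                continue
            if any(isinstance(prev, tuple) and prev[1] == step and prev[0] in proven
                   for prev in proven):       # modus ponens from earlier steps
                proven.add(step)
                continue
            return False                      # unjustified step: reject the whole proof
        return True

    axioms = {"it_rains", ("it_rains", "ground_is_wet")}
    print(check_proof(axioms, ["it_rains", ("it_rains", "ground_is_wet"), "ground_is_wet"]))  # True
    print(check_proof(axioms, ["ground_is_wet"]))  # False: asserted without justification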
Why must this be done on a computer? Because the program under consideration is thousands of machine instructions long. And each instruction will be translated into dozens of statements in the axiom system. So any attempted proof will be beyond human ability.
Not in the case of programs like the secure W3 and Gopher servers. They are under 100 lines long. They are also designed to allow easy proof of the desired properties. ...
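To give a feel for what such a deliberately limited server might look like, here is a rough sketch of my own (not Fred's actual code): one directory, an explicit whitelist of filenames, GET only, and the same response for everything else:

    # Sketch of a "GET only" server whose whole point is that its possible behaviors
    # are few enough to reason about.  Illustrative only; paths and names are assumptions.
    import socket

    SERVE_DIR = "/srv/public"                  # assumed directory of public files
    ALLOWED = {"index.html", "about.html"}     # explicit whitelist: no path parsing at all

    def handle(request_line: str) -> bytes:
        parts = request_line.split()
        if len(parts) >= 2 and parts[0] == "GET":
            name = parts[1].lstrip("/")
            if name in ALLOWED:                # information can flow out only from these files
                with open(f"{SERVE_DIR}/{name}", "rb") as f:
                    return b"HTTP/1.0 200 OK\r\n\r\n" + f.read()
        return b"HTTP/1.0 404 Not Found\r\n\r\n"

    def serve(port: int = 8080) -> None:
        with socket.socket() as s:
            s.bind(("", port))
            s.listen(1)
            while True:
                conn, _ = s.accept()
                with conn:
                    lines = conn.recv(1024).decode("latin-1").splitlines()
                    conn.sendall(handle(lines[0]) if lines else b"")

    # serve()   # uncomment to run

With no path interpretation beyond a fixed whitelist, the information-flow claim in Issue 3 becomes something one can argue about line by line.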
So in each case, complex (in the technical sense) behavior is exhibited by outlandishly simple systems. Sohow the _interactions_ of these simple and predictable systems become unpredictable.
But this is only true for certain classes of systems. By designing other classes of systems explicitly designed to not have those properties, we can build up substantial systems with demonstrable protection properties.
That is why I consider this to be a closed subject.
I think you should reopen your thinking.

--
-> See: Info-Sec Heaven at URL http://all.net
Management Analytics - 216-686-0090 - PO Box 1480, Hudson, OH 44236
participants (5)
- ab411@detroit.freenet.org
- fc@all.net
- futplex@pseudonym.com
- Nathan Loofbourrow
- Nathan Zook