RSA has been proved correct
Serendipity strikes.

I was reading the logic programming/theorem proving chapter of my new Russell and Norvig book on AI, and came across something I once knew about but had forgotten: the Boyer-Moore theorem prover was applied to the RSA algorithm and the correctness of it was verified. Correctness in the sense of showing that outputs match formal specs, for all inputs.

The paper is: Boyer, R.S. and Moore, J.S. (1984). "Proof Checking the RSA Public Key Encryption Algorithm," American Mathematical Monthly, 91(3):181-189.

Now this does not mean:

- that implementations cannot have flaws, backdoors, etc.
- that larger systems which use RSA cannot have flaws, backdoors, etc.

What it says is that there is hope that formal verification of critical modules is possible. I can't imagine too many areas of software engineering that are more critical to modularize and verify than crypto and digital money sorts of things. Huge monolithic programs are vastly more difficult--probably intractable--to verify.

The "crypto library" project(s)--I use the plural because there have been several such projects--are good ideas. Small modules that do one thing and one thing only are best for building larger modules robustly.

--Tim May

..........................................................................
Timothy C. May         | Crypto Anarchy: encryption, digital money,
tcmay@sensemedia.net   | anonymous networks, digital pseudonyms, zero
408-728-0152           | knowledge, reputations, information markets,
Corralitos, CA         | black markets, collapse of governments.
Higher Power: 2^756839 | Public Key: PGP and MailSafe available.
"National borders are just speed bumps on the information superhighway."
Tim quoth:

|> I was reading the logic programming/theorem proving chapter of my new
|> Russell and Norvig book on AI, and came across something I once knew about
|> but had forgotten: the Boyer-Moore theorem prover was applied to the RSA
|> algorithm and the correctness of it was verified. Correctness in the sense
|> of showing that outputs match formal specs, for all inputs.
|>
|> The paper is: Boyer, R.S. and Moore, J.S. (1984). "Proof Checking the RSA
|> Public Key Encryption Algorithm," American Mathematical Monthly,
|> 91(3):181-189.

Given the enormous difficulty of ensuring security in a world of ubiquitous distributed computing, I'm as big a fan as any of formal methods. But Tim's post hammers home the big fault of formal methods: the possibility that people will come to rely upon them. I have paranoid visions of people finally accepting formal methods in a decade or so, and then becoming dependent on them... forgetting the enormous potential for error that will always exist in such systems.

If somebody told me that intentionally letting a few violent criminals free each year is a good idea because it would keep me on my toes, I would think that person is an idiot. But I'm not entirely convinced that it is a bad idea to avoid formal methods because they could breed complacency.

Cheers,
JWS
...
Given the enormous difficulty of ensuring security in a world of ubiquitous distributed computing, I'm as big a fan as any of formal methods. But Tim's post hammers home the big fault of formal methods: the possibility that people will come to rely upon them. I have
"Logic is a system whereby one may go wrong with confidence." - Patterson ...
Cheers,
JWS
sdw
--
Stephen D. Williams  25Feb1965 VW,OH (FBI ID)
sdw@lig.net  http://www.lig.net/sdw  Consultant, Vienna,VA  Mar95-
703-918-1491W  43392 Wayside Cir., Ashburn, VA 22011
OO/Unix/Comm/NN  ICBM/GPS: 39 02 37N, 77 29 16W home, 38 54 04N, 77 15 56W
Pres.: Concinnous Consulting, Inc.; SDW Systems; Local Internet Gateway Co.; 28May95
On Sat, 5 Aug 1995 solman@MIT.EDU wrote:
Tim quoth:
|> I was reading the logic programming/theorem proving chapter of my new
|> Russell and Norvig book on AI, and came across something I once knew about
|> but had forgotten: the Boyer-Moore theorem prover was applied to the RSA
|> algorithm and the correctness of it was verified. Correctness in the sense
|> of showing that outputs match formal specs, for all inputs.
|>
|> The paper is: Boyer, R.S. and Moore, J.S. (1984). "Proof Checking the RSA
|> Public Key Encryption Algorithm," American Mathematical Monthly,
|> 91(3):181-189.
Given the enormous difficulty of ensuring security in a world of ubiquitous distributed computing, I'm as big a fan as any of formal methods. But Tim's post hammers home the big fault of formal methods: the possibility that people will come to rely upon them. I have paranoid visions of people finally accepting formal methods in a decade or so, and then becoming dependent on them... forgetting the enormous potential for error that will always exist in such systems.
If somebody told me that intentionally letting a few violent criminals free each year is a good idea because it would keep me on my toes, I would think that person is an idiot. But I'm not entirely convinced that it is a bad idea to avoid formal methods because they could breed complacency.
Cheers,
JWS
The problem is that these "formal methods" are themselves unproved and, in the general sense, unprovable. Using a computer program to verify RSA is like using number theory to verify some proof in set theory--you may succeed, but so what? The RSA algorithm works because of some basic (and not quite so basic) facts of number theory. Number theory is assumed in the design of computers, of processors, of operating systems, and of programs. To put the question succinctly, would you trust a theorem "prover" to verify its own accuracy?

The RSA algorithm: Select primes p and q and an exponent e such that gcd(e, p-1) = gcd(e, q-1) = 1. (In practice, we would want log_2(p) << e and log_2(q) << e.) Publish e and pq. Find d_1, d_2 such that d_1 and d_2 are inverses of e in Z_(p-1) and Z_(q-1) respectively.

A message Y (from 0 to pq-1) is transformed into X = Y^e mod pq. When you receive a message X, let X_1 = X mod p and X_2 = X mod q. Let Y_1 = X_1^d_1 mod p and Y_2 = X_2^d_2 mod q. Use the Chinese Remainder Theorem to find Z (from 0 to pq-1) such that Z = Y_1 mod p and Z = Y_2 mod q.

Theorem: Z = Y.

Pf: Let p_1 = p and p_2 = q.

a) Observe that in F_p_i, writing e*d_i = r*(p_i - 1) + 1 for some integer r, we have (Y^e)^d_i = Y^(e*d_i) = Y^(r*(p_i - 1) + 1) = (Y^(p_i - 1))^r * Y = 1^r * Y = Y for nonzero Y (and the case Y = 0 in F_p_i is immediate).

b) There exist p, q, e triples. If we let the order of our selection be p, e, q, then we observe that we are free in our selection of p, and that our selection of e is not very constrained ((p-1)/2 +- 1 being obvious examples). We then observe that any arithmetic progression of integers which does not obviously consist entirely of composites must contain an infinite number of primes, and observe that the condition gcd(q-1, e) = 1 defines just such a progression.

c) The d's can be found: see Euclid's Algorithm.

d) Z_pq is the (ring) direct sum of Z_p and Z_q.

QED (Observe that the Chinese Remainder Theorem works on arbitrary ring direct sums.)

Why this exercise? Because not _one_ of the cited theorems is modern. The only thing in this proof unknown to Fermat, Galois, Euclid, and the Middle Age Chinese is that bit about arithmetic progressions and primes, which may have been known to Fermat or Euclid. If I am informed that a theorem "prover" has "verified" this theorem, then I am led to believe that the "prover" is not obviously broken. My confidence (as an algorithm--this is a separate issue from decryption resistance) in RSA has _NOTHING_ to do with what some theorem "prover" may or may not have to say about it. Such statements serve only to inform me that these "provers" are broken (if they don't like it), or that they conceivably do "verify" proofs (if they do).

OTOH, the theorems and axiom systems corresponding to these theorem "provers" are very complex, and quite subtle at points. Plug in the latest attempt at Fermat's Last (as opposed to his Little) theorem, and tell me if it's good. Do the classification of finite groups. I know, there is a 125-page attempt at the Poincaré conjecture. Try it. If these "provers" find heretofore unobserved flaws, THEN I'll concede that they would be useful tools in mathematics--in uncovering flaws. But they _still_ don't "prove" that these theorems are correct. They only convince themselves. But convincing me that I should believe them involves convincing me that there has been no failure in the program--at any of the levels I've previously discussed.

And, by the way, this is why the general mathematical community is still suspicious of the 4-color theorem. In fact, the original "proof" contained a number of flaws. All those discovered were easily patched, but the fact that they existed in the first place means that we have no assurance that something subtle does not remain to be discovered.

Nathan
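For concreteness, here is a minimal Python sketch of the textbook RSA-with-CRT scheme laid out above. The specific primes, the exponent, and the helper names (egcd, inverse, encrypt, decrypt) are illustrative choices, not anything from the original posts, and the parameters are toy-sized rather than secure.

# A toy sketch, not a secure implementation: p, q, and e are tiny illustrative values.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse(a, m):
    # Multiplicative inverse of a in Z_m (assumes gcd(a, m) == 1).
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

p, q, e = 61, 53, 17          # gcd(e, p-1) = gcd(e, q-1) = 1
n = p * q
d1 = inverse(e, p - 1)        # d_1, the inverse of e in Z_(p-1)
d2 = inverse(e, q - 1)        # d_2, the inverse of e in Z_(q-1)

def encrypt(y):
    return pow(y, e, n)       # X = Y^e mod pq

def decrypt(x):
    y1 = pow(x % p, d1, p)    # Y_1 = X_1^d_1 mod p
    y2 = pow(x % q, d2, q)    # Y_2 = X_2^d_2 mod q
    # Chinese Remainder Theorem: the Z with Z = Y_1 mod p and Z = Y_2 mod q.
    return (y1 * q * inverse(q, p) + y2 * p * inverse(p, q)) % n

# Check the theorem Z = Y for every message Y from 0 to pq-1.
assert all(decrypt(encrypt(y)) == y for y in range(n))

The exhaustive check over one toy modulus is, of course, exactly the kind of finite spot check that a proof, mechanical or otherwise, is meant to replace.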
there is a 125-page attempt at the Poincaré conjecture. Try it. If these "provers" find heretofore unobserved flaws, THEN I'll concede that they would be useful tools in mathematics--in uncovering flaws. But they _still_ don't "prove" that these theorems are correct. They only convince themselves. But convincing me that I should believe them involves convincing me that there has been no failure in the program--at any of the levels I've previously discussed. And, by the way, this is why the general mathematical community is still suspicious of the 4-color theorem. In fact, the original "proof" contained a number of flaws. All those discovered were easily patched, but the fact that they existed in the first place means that we have no assurance that something subtle does not remain to be discovered.
All you need to do is verify yourself (i.e. formally prove the correctness of the theorem prover) to rely on the results of the theorem prover. It's the web of trust model. And before you jump up and say "but how can you prove the theorem prover, maybe Gödel...", there is a very simple theorem prover that is provable via mathematical induction. If you don't trust induction, then I don't know what to tell you. (It's like denying the Peano postulates.)

The theorem prover works like this: Start off with your axiom set and your rules of production. (For instance, a context-free grammar that might say "if x+y=z is a theorem, then x+(y+1)=z+1 is a theorem.") Then enumerate all possible theorems on the parse tree. If you reach theorems of length N exceeding the length of the theorem you're trying to prove, then either it is a nontheorem or it is undecidable. This procedure will never produce a "yes" answer for a false theorem, although it will fail to prove some theorems.

And it is news to me that "the general mathematical community is still suspicious of the four color theorem." Not only are they not suspicious of the theorem, they aren't suspicious of the proof. It's been verified and reproduced over and over again, and it has also been shortened down from the original (I believe 2000+ special graph cases) to just over 400. Physicists aren't suspicious of relativity either.

[Note above: the theorem prover fails if the production rules allow theorem shortening. The system must be primitive recursive, but there are many restricted domains of theorem proving which are.]

The classification of the simple groups was a 1000+ page written proof. Which one would you trust to have a mistake somewhere? The computer-checked one, or the human-checked one?

This general line of discussion is getting out of hand. You can't *prove* anything for sure. Even if it seems logical to you, how do you know your own mind isn't buggy? How do you know you're not hallucinating? Even something as simple as Euclid's proof of the infinitude of primes. You think it's been proved? Ha! You are merely delusional. The rest of us sane people saw the disproof years ago, but every time you start to read the disproof, your mind goes into its own little universe and starts substituting in screen memories, making you *think* you just read a proof, not a disproof.

Ultimately, you can't even trust yourself. The world is a risky place, and sometimes you just have to live with the fact that one day, something you chose to place faith in and rely on is going to be pulled from underneath you.

-Ray
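To make the enumerate-everything prover described above concrete, here is a small Python sketch. The axiom, the single production rule, and the length bound are made-up stand-ins for whatever formal system one actually cares about; none of these names come from the original post.

from collections import deque

def prove(target, axioms, produce, max_len):
    # Breadth-first enumeration of theorems from the axioms.
    # Returns True if target is generated within the length bound; returns
    # False once the bounded search space is exhausted (a nontheorem, or a
    # theorem that is simply out of reach at this bound).
    seen = set(axioms)
    queue = deque(axioms)
    while queue:
        theorem = queue.popleft()
        if theorem == target:
            return True
        for derived in produce(theorem):
            if len(derived) <= max_len and derived not in seen:
                seen.add(derived)
                queue.append(derived)
    return False

def produce(theorem):
    # Toy stand-in for "if x+y=z is a theorem, then x+(y+1)=z+1 is a theorem":
    # bump the second addend and the sum by one.
    x, rest = theorem.split("+", 1)
    y, z = rest.split("=")
    yield "%s+%d=%d" % (x, int(y) + 1, int(z) + 1)

print(prove("1+4=5", ["1+0=1"], produce, max_len=8))   # True: derivable
print(prove("1+4=6", ["1+0=1"], produce, max_len=8))   # False: never generated

Because the toy rule never shortens a theorem, the search space below the length bound is finite and the loop terminates; that is the restriction flagged in the bracketed note about theorem shortening.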
On Sun, 6 Aug 1995, Ray Cromwell wrote:
All you need to do is verify yourself (i.e. formally prove the correctness of the theorem prover) to rely on the results of the theorem prover.
Oh, is THAT all? (more on this later)
denying the Peano postulates) (Perish the thought!)
The theorem prover works like this:
<elided brute-force prover> Do you advise using the unabridged Archimedean Sieve to determine if a number is prime?
a false theorem, although it will fail to prove some theorems.
Just feed it ~X as well.
And it is news to me that "the general mathematical community is still suspicious of the four color theorem." Not only are they not suspicious of the theorem, they aren't suspicious of the proof. It's been verified and reproduced over and over again, and it has also been shortened down from the original (I believe 2000+ special graph cases) to just over 400.
Now we hit an impasse. I thought the 4-color theorem was considered done as well, until a professor of mine contradicted me on this point. It sounds like you might have more current info. (Mine is 2-3 years old.)
[Note above: the theorem prover fails if the production rules allow theorem shortening. The system must be primitive recursive, but there are many restricted domains of theorem proving which are.]
Very restricted, I would think. Most cases of universalizing would be, I'd guess.
The classification of the simple groups was a 1000+ page written proof. Which one would you trust to have a mistake somewhere? The computer-checked one, or the human-checked one?
That depends on my trust of the human and of the checker. Frankly, I'd be leery of either. (And I was told in Algebra that it was 10k+, but the point is basically the same.)
This general line of discussion is getting out of hand. You can't *prove* anything for sure. Even if it seems logical to you, how do you know your own mind isn't buggy?
This is one of the points that I have pushed on this issue. Our intuition on many of these matters is _way_ off until it is trained. And still, there are famous cases of failure by trained people.
Ultimately, you can't even trust yourself.
That's why we go to school.

Okay, on the subject of verifying theorem checkers: First, you mention, some distance into your post, that the theorems being checked must be "primitive recursive". That rather limits you away from interesting theorems, wouldn't you say? In particular, you couldn't dream of touching those earlier-mentioned biggies. But that's not all. Try the Galois theorems. Fermat's little theorem? The Baire category theorem? The Chinese Remainder Theorem? The Fundamental Theorem of Algebra? (In topology, in complex analysis, in algebra...)

So I assume that the theorem checkers being deployed aren't limited to handling primitive recursive systems. You now are dealing with a rather extensive program. And proving that a general theorem prover works is _not_ something to just sit & do.

.......
Nathan
participants (5)
- Nathan Zook
- Ray Cromwell
- sdw@lig.net
- solman@MIT.EDU
- tcmay@sensemedia.net