Perhaps it is you who should do so. Showing a problem to be asymptotically Omega(N!) or Omega(N^N) (or Theta of either) is effectively proving it to be NP-hard--that is, it grows in non-polynomial time.
Problems are not asymptotically Omega(N!) or Omega(N^N). Solutions are. I don't believe anyone has ever proved that an NP-hard problem cannot be solved in polynomial time. Not being able to solve a problem in polynomial time with current techniques does not make it unsolvable. MIT people always surprise me.
Jordan Dimov <jdimov@cis.clarion.edu> wrote:
Problems are not asymptotically Omega(N!) or Omega(N^N). Solutions are.
Nitpick if you will; you and I both know what the statement means, hence your ability to respond to it.
I don't believe anyone has ever proved that an NP-hard problem cannot be solved in polynomial time.
"NP-hard ... a complexity class of problems that are intrinsically harder than those that can be solved by a nondeterministic Turing machine in polynomial time" (Algorithms and Theory of Computation Handbook, 19--20).
Not being able to solve a problem in polynomial time with current techniques does not make it unsolvable.
While I agree that the limitations of current techniques do not dictate what is possible, it _is_ possible to show that a certain problem has a best-case order of growth (for something simple, think of gate-level addition; its best case is provably Theta(log(N)) ). In this case, what Tim means is that work is being done towards showing that the best-case order of growth for factorization is faster than polynomial, hence it is NP-hard.
MIT people always surprise me.
Please, expound. -- Riad Wahby rsw@mit.edu MIT VI-2/A 2002 5105
On Tue, 17 Oct 2000, Riad S. Wahby wrote:
Jordan Dimov <jdimov@cis.clarion.edu> wrote:
I don't believe anyone has ever proved that an NP-hard problem cannot be solved in polynomial time.
"NP-hard ... a complexity class of problems that are intrinsically harder than those that can be solved by a nondeterministic Turing machine in polynomial time" (Algorithms and Theory of Computation Handbook, 19--20).
Strictly speaking, this is a conjecture, which is what Jordan was pointing out in the original post. We do not have any proof for the above statement, just our long experience trying and failing to solve these problems. Different people give different weight to this experience. There really are people out there who think P = NP. I am NOT one of these.

In any case, factoring is not known to be NP-hard, in the technical sense which I'll mention below. In fact, the following "evidence" indicates that factoring is in some sense easier than general NP-hard problems: the running time for GNFS is roughly O(e^(1.92 N^(1/3) (log N)^(2/3))) for a number with N bits, while the running time for the best generic SAT solving algorithm I know of is in the neighborhood of O(c^N) for some constant c > 1 -- there's a better algorithm known for factoring than for generic SAT. But that's nothing more than suggestive, especially since it doesn't involve algorithms for other NP-hard problems.

Another example of a problem which is believed to be in NP - P and yet not NP-hard is graph isomorphism. There is an O(n^ln n) algorithm known, yet the problem is not known to be NP-hard. At the same time, that bound is just about polynomial...
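To get a feel for the gap between those two growth rates, here is a rough back-of-the-envelope sketch in Python. The 1.92 constant is the heuristic GNFS exponent quoted above, and the SAT base c = 1.3 is purely an illustrative assumption; this is a sanity check, not a precise cost model:

    import math

    def gnfs_log2_cost(bits):
        # Heuristic GNFS work factor exp(1.92 * (ln N)^(1/3) * (ln ln N)^(2/3))
        # for an N ~ 2^bits modulus, returned as a base-2 logarithm.
        ln_n = bits * math.log(2)
        return 1.92 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

    def sat_log2_cost(num_vars, base=1.3):
        # Generic exponential-time SAT solver, c^N with an assumed c = 1.3.
        return num_vars * math.log2(base)

    for n in (512, 1024, 2048):
        print(f"{n:4d} bits: GNFS ~ 2^{gnfs_log2_cost(n):.0f}, "
              f"generic SAT ~ 2^{sat_log2_cost(n):.0f}")

Under these (very rough) assumptions a 1024-bit factoring instance costs on the order of 2^87 operations while a generic attack on a 1024-variable SAT instance sits near 2^388 -- which is the sense in which factoring looks "easier" than generic NP-hard problems.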
Not being able to solve a problem in polynomial time with current techniques does not make it unsolvable.
While I agree that the limitations of current techniques do not dictate what is possible, it _is_ possible to show that a certain problem has a best-case order of growth (for something simple, think of gate-level addition; its best case is provably Theta(log(N)) ).
Yes, that's right, lower bounds *can* be proved. Unfortunately, they tend to be very *hard* to prove, especially for general computations. A superpolynomial lower bound on the work required to solve a problem in NP would separate P from NP, so I don't think any such bound is known right now. So AFAIK the question is open. Different people have different attitudes towards its resolution. This is all very nice, but it runs a high risk of becoming based solely on feeling and assertion very quickly.

Often you can get a very nice lower bound in a so-called "restricted model" in which only a few operations are allowed -- the n log n lower bound for sorting given in many CS algorithm classes is of this type. The bound is correct, but it says literally nothing about radix sort, bucket sort, etc., because those sorts use operations not in the model spoken about by the bound.

The cryptographic analogue might be the so-called "generic group model" -- a model in which all you have is a group's generators, the ability to compose elements, and the ability to test for identity. You *can* prove that there is no algorithm in this model which solves discrete log in time better than about O(2^(n/2)) (I may be off by a bit). But for the particular group Z_p^*, there is a much, much better algorithm for finding discrete logs which takes advantage of that group's special structure (i.e. GNFS). This should suggest some of the difficulty in achieving lower bounds which we as cryptographers might care about. For more suggestion of such difficulty, a book on complexity theory like Papadimitriou might be worth looking at.
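To make the "restricted model" point concrete, here is a small Python sketch (names mine) of counting sort, which runs in O(n + k) time on integer keys below k precisely because it indexes arrays with key values instead of restricting itself to pairwise comparisons -- so the n log n comparison-sort bound simply does not apply to it:

    def counting_sort(keys, max_key):
        # Sort non-negative integers < max_key in O(n + max_key) time.
        # No key-to-key comparisons are ever made; the key itself is used
        # as an array index, an operation outside the comparison model.
        counts = [0] * max_key
        for k in keys:
            counts[k] += 1
        out = []
        for value, count in enumerate(counts):
            out.extend([value] * count)
        return out

    print(counting_sort([5, 3, 9, 3, 0, 7], 10))   # [0, 3, 3, 5, 7, 9]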
In this case, what Tim means is that work is being done towards showing that the best-case order of growth for factorization is faster than polynomial, hence it is NP-hard. ^^^^^^^^^^^^^^^^^^^
You want to say "Hence it is in NP - P". The term "NP-hard" has a technical meaning, namely that every problem in NP can be "reduced" to it. Here "reduced" is another technical term which in turn needs to be defined carefully. I've screwed up that definition before and it's too late to look it up, so I regret that I'll leave it as is unless someone really wants it.

It is known that if NP != P, then there is a hierarchy of decision problems which are neither in P nor NP-hard; the proof unfortunately takes the form of a diagonalization-style construction on Turing machines and so doesn't tell us anything about what the natural problems might be. The result *does* tell us, however, that it is *not* enough for a problem to have a superpolynomial lower bound on decision for it to be NP-hard -- there's more work which must be done to show NP-hardness. It's possible that factoring is neither NP-hard nor in P, but still hard enough to be useful for cryptography. Similarly, it is possible that graph isomorphism is not NP-hard, yet is so close to P as to be practically efficient (and not at all useful for cryptography). (BTW - as an aside, "NP-complete" means that a problem is both in NP and NP-hard ... a problem may actually be harder than NP, in EXP or something, and still be NP-hard. Or we may not actually _know_ whether the problem is in NP or not.)

In any case, whether or not a problem is NP-hard is sort of irrelevant. As a general notion, NP-completeness seems disappointing for cryptography. There are a lot of known NP-complete problems, but few of these seem to be hard enough on average to build cryptosystems with. Even fewer can be used for public-key cryptography. Some more discussion of these points may be found in Russell Impagliazzo's paper "A Personal View of Average Case Complexity," which is on his UCSD page.

If you look at the major problems used for public key cryptosystems, you see the Diffie-Hellman assumption, factoring, and RSA. None of these AFAIK is known to be NP-hard and no one expects them to be. There was even a theorem due to Brassard in 1979 which suggested that the problem of "breaking" public key cryptosystems cannot be NP-hard... but it only applies to deterministic cryptosystems, so it doesn't say that much. For more on that, see Oded Goldreich and Shafi Goldwasser, "On the Possibility of Basing Cryptography on the Assumption P \neq NP", in the theory of cryptography library at UCSD, now eprint.iacr.org. Note that when they say "cryptography" they mostly mean "public key cryptography."

On the other hand, this notion of _reduction_, of showing that one problem is "as hard as" another, has been VERY useful. This is how you can prove that OAEP is a good padding scheme - you give a reduction between breaking an OAEP-padded RSA message and breaking RSA directly. You can prove that there aren't any stupid subtle padding mistakes...

Anyway, that was a lot of silly detail. From what I can tell, the point is just this: as Tim pointed out, there are problems for which the best known algorithms have running times which dwarf any sort of cracking farm we can imagine. If the adversary builds a 1000x bigger machine, we add 200 bits to the key and he's back at spending 10 million years. But as Jordan pointed out, no one knows that these are the best algorithms or useful lower bounds on solving the problems. Now it becomes a discussion on what you believe the NSA can do or can't do, the relative smartness of mathematicians inside and outside the NSA, and all that other stuff.
Fine, but it's a discussion which runs the risk of becoming quasi-theological very quickly... and frankly it's one which just isn't that interesting the way it is usually run. I think it is more interesting to figure out what kinds of expertise the NSA might have that the academic sector doesn't, and especially interesting to find some way of confirming or denying it -- a way which doesn't involve "a friend of a friend of a friend with a .mil address stationed in Saudi Arabia for 4 months during Desert Storm."

Tamper-resistant hardware expertise has been mentioned here; we had that patent notice a few months ago about speech transcription for ECHELON; we have SKIPJACK, KEA, and now SHA-2 to play with; TEMPEST research is now quasi-legendary; there are likely more fun things to figure out about these guys. -David
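To put a rough number on the "1000x bigger machine, add 200 bits" remark above, here is a small Python sketch using the same heuristic GNFS cost formula as before; the constants and the 8-bit step size are assumptions, so treat the output as an illustration rather than a recommendation:

    import math

    def gnfs_log2_cost(bits):
        # log2 of the heuristic GNFS work factor for a bits-bit modulus.
        ln_n = bits * math.log(2)
        return 1.92 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

    base_bits = 1024
    speedup = math.log2(1000)                 # ~10 bits of work bought by a 1000x machine
    target = gnfs_log2_cost(base_bits) + speedup

    bits = base_bits
    while gnfs_log2_cost(bits) < target:      # grow the key until the old margin is back
        bits += 8
    print(f"a 1000x faster attacker on a {base_bits}-bit key is offset at ~{bits} bits")

Under these assumptions the answer lands near 300 extra bits, the same ballpark as the figure quoted above; subexponential attacks force key sizes to grow much faster than attacks on "pure" exponential problems would.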
On Tue, 17 Oct 2000, Riad S. Wahby wrote:
While I agree that the limitations of current techniques do not dictate what is possible, it _is_ possible to show that a certain problem has a best-case order of growth (for something simple, think of gate-level addition; its best case is provably Theta(log(N)) ).
I think addition is bounded by something more like o(n) - carry propagation limits it. Anyway, your point is accurate. There are proven NP-hard problems and there have even been attempts to find ones with easy inverses for use in crypto. I don't think any of them have succeeded.
In this case, what Tim means is that work is being done towards showing that the best-case order of growth for factorization is faster than polynomial, hence it is NP-hard.
Quite. However, there are some things that must be considered if we're really paranoid. For example, it is well known that the usual model of a deterministic Turing machine does not always bound the complexity of a problem 'nicely', even for current serial machines. Certain string matching algorithms, for instance, can be proved to be in Theta(n log n) if only binary comparison is used, even while solutions exist in Theta(n (log n)^a) with a less than 1 when the full ordering properties of the problem can be exploited. That is the sort of thing which makes one wonder whether using radically different basic operations (like the ones based on holography, which I do not think have been proven to be strictly equivalent to a serial, polynomial-time computation) could make a difference more pronounced than simply shaving off a fixed number of orders of magnitude in complexity. But I stray.

Currently the extraordinary evidence simply isn't there. Even if one has to be careful about drawing overbroad conclusions from today's theory of computation, it is highly unlikely that, e.g., dramatically accelerated factorization exists now or will in the future.

Sampo Syreeni <decoy@iki.fi>, aka decoy, student/math/Helsinki university
Sampo A Syreeni <ssyreeni@cc.helsinki.fi> wrote:
I think addition is bounded by something more like o(n) - carry propagation limits it. Anyway, your point is accurate. There are proven NP-hard problems and there have even been attempts to find ones with easy inverses for use in crypto. I don't think any of them have succeeded.
If you're talking about the carries actually falling through full adders, there is a method called carry lookahead that computes carries in a tree (with branching factor 2). Computing the sum bits from the carries takes constant time independent of the number of bits in the adder, so the process is overall Theta(log(N)). On the other hand, if you're talking about speed of propagation through traces, you're right for large gate sizes. However, as gates get smaller, the speed of propagation becomes less important, and the adder speed asymptotically approaches Theta(log(N)). -- Riad Wahby rsw@mit.edu MIT VI-2/A 2002 5105
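As a concrete illustration of the tree structure Riad describes, here is a small Python sketch of a carry-lookahead adder in the parallel-prefix (Kogge-Stone) style; the serial loops are just for simulation, but the number of combining rounds is log2 of the width, which is where the Theta(log(N)) depth comes from:

    def add_with_lookahead(a, b, width):
        # Per-bit generate and propagate signals.
        g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]
        p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]
        G, P = g[:], p[:]
        d = 1
        while d < width:                      # log2(width) rounds; each round is
            newG, newP = G[:], P[:]           # fully parallel in real hardware
            for i in range(d, width):
                newG[i] = G[i] | (P[i] & G[i - d])
                newP[i] = P[i] & P[i - d]
            G, P, d = newG, newP, 2 * d
        carries = [0] + G[:-1]                # carry into bit i (carry-in is 0)
        return sum((p[i] ^ carries[i]) << i for i in range(width))

    assert add_with_lookahead(11, 6, 8) == 17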
On Wed, 18 Oct 2000, Riad S. Wahby wrote:
Sampo A Syreeni <ssyreeni@cc.helsinki.fi> wrote:
I think addition is bounded by something more like o(n) - carry propagation limits it. Anyway, your point is accurate. There are proven NP-hard problems and there have even been attempts to find ones with easy inverses for use in crypto. I don't think any of them have succeeded.
If you're talking about the carries actually falling through full adders, there is a method called carry lookahead that computes carries in a tree (with branching factor 2). Computing the sum bits from the carries takes constant time independent of the number of bits in the adder, so the process is overall Theta(log(N)).
Actually, even that's not quite true. Carry lookahead guarantees worst-case performance on addition of O(log2 N), where N is the size of the numbers being added. However, there is a "shortcut" available, on average, once per 2 bits in the addition, where the carry can be determined fully even before carry lookahead gets there. These are the "base cases" of carry lookahead, where both operand bits are equal to one, or both operand bits are equal to zero. You don't need to determine the carry into those places to determine the carry bits out of them.

This means that you can build hardware, in practice, that does addition in an average case proportional to the log of the distance between (1,1) pairs in the operands. In the worst case, that's O(log2 N), but in the average case it's O(log2 (log2 N)). The downside of this is that it involves O((2^N)/N) gates, where conventional carry lookahead involves only O(N^2) gates, and that it makes the amount of time required to complete an addition depend on the arguments. Chip designers generally don't do this, more because of the second reason than the first. :-) Math, anyone? Bear
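A quick way to see where the averages come from is to simulate the longest run of carry-propagating bit positions on random operands; for random inputs that run grows roughly like log2 of the word size (the trial count and seed below are arbitrary):

    import random

    def longest_propagate_run(a, b, width):
        # Longest run of positions where a carry merely propagates
        # (a_i XOR b_i == 1); runs end at the (1,1) / (0,0) "base cases".
        longest = run = 0
        for i in range(width):
            if ((a >> i) & 1) ^ ((b >> i) & 1):
                run += 1
                longest = max(longest, run)
            else:
                run = 0
        return longest

    random.seed(0)
    for width in (16, 64, 256, 1024):
        runs = [longest_propagate_run(random.getrandbits(width),
                                      random.getrandbits(width), width)
                for _ in range(2000)]
        print(width, sum(runs) / len(runs))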
participants (5)
- dmolnar
- Jordan Dimov
- Ray Dillinger
- Riad S. Wahby
- Sampo A Syreeni