jamesd@netcom.com (James A. Donald) writes:
n is the number of bits, and factoring can be done in considerably less than 2^(n/2)
When discussing complexity it is usual to use a measure of problem size that corresponds to the physical size of the answer or the question.
Thus if you are factoring a 1024-bit number, n is 1024, not 2^1024.
Ah. Thank you -- it's amazing the number of obviously wrong answers I received to my question, all of them taking an authoritative tone (from "your algorithm doesn't work" (it does) to "your algorithm takes enormous amounts of memory" (in fact, it takes 3n)).

Makes one realize (again) how sceptical one must be towards answers received on the 'net, "even" from cypherpunks.

[This isn't to slam anyone, just to suggest that people take a little more time to think before hitting the 'r' key.]
--
L. Todd Masco  | Bibliobytes books on computer, on any UNIX host with e-mail
cactus@bb.com  | "Information wants to be free, but authors want to be paid."
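[A brief sketch in Python, added for illustration; the variable names and the example value are hypothetical, not from the thread. It shows the size measure being discussed above: for a 1024-bit number N the complexity parameter n is its bit length, 1024, while naive trial division up to sqrt(N) examines on the order of 2^(n/2) candidates, which is why doing "considerably less than 2^(n/2)" work is the interesting claim.]

```python
# Illustrative sketch only: for factoring, the input size n is the bit
# length of N, not the value of N itself.

N = (1 << 1023) | 1             # a 1024-bit number (hypothetical example value)
n = N.bit_length()              # n == 1024: the problem size in bits
trial_divisors = 2 ** (n // 2)  # ~sqrt(N) candidates for naive trial division

print("n (bits):", n)
print("2^(n/2) is about 10^%d" % (len(str(trial_divisors)) - 1))
```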
When discussing complexity it is usual to use a measure of problem size that corresponds to the physical size of the answer or the question.
Not quite. The length of the answer is not typically used in measures of complexity. The 'n' in O(n^2), et al., is the length of the input. Exactly that, and nothing more. The length used is the number of symbols used to encode the input from some finite alphabet of symbols. Thus, the lengths are determined up to a constant factor related to the logarithm of the size of the alphabet.
Thus if you are factoring a 1024-bit number, n is 1024, not 2^1024.
Yes. Getting the wrong 'n' will make complexity theory meaningless and impenetrable.

Eric
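[A hedged sketch in Python, purely illustrative and not part of the original exchange, of the encoding point above: the same input written over binary, decimal, or hexadecimal alphabets differs in length only by a constant factor, roughly the ratio of the logarithms of the alphabet sizes, so n is well defined up to that constant.]

```python
import math

# Illustrative only: the length of the same number encoded over different
# finite alphabets differs by a constant factor ~ log(alphabet size).

N = (1 << 1023) | 1                 # a 1024-bit number (hypothetical example)

binary_len  = len(format(N, "b"))   # alphabet {0, 1}
decimal_len = len(str(N))           # alphabet {0..9}
hex_len     = len(format(N, "x"))   # alphabet {0..f}

print(binary_len, decimal_len, hex_len)
# The length ratios approach log2(10) and log2(16):
print(binary_len / decimal_len, math.log2(10))   # ~3.32
print(binary_len / hex_len, math.log2(16))       # 4.0
```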