
> it's not "constant time" in the sense of having O(1) time complexity with regards to the size of the inputs

Yes it is. The presented algorithm is constant time in the exponent, i.e. 2 + O(1), where this exponent is not impacted by the size of the inputs n and c. Much like any other complexity analysis, an algorithm is O(1) as long as O(1) is asymptotically the "largest part" of the running time. As the size of n increases, the exponent 2+O(1) increasingly dominates execution time.

"Constant time in the exponent" is nonsense, I'm afraid.

A bounded exponent of n would be the same thing as "polynomial time", but the thing that's bounded (and indeed arbitrarily close to 2 for large n) is the exponent of log n.

The running time of the algorithm presented in this paper is n (log n)^(2+o(1)). This ...

... is not constant; it increases with n, a bit faster than linearly.

... has o(1), not O(1), in the exponent; the two mean different things. O(1) means "bounded", o(1) means "tends to zero". The claim isn't that the running time is <= n times polynomial(log n) but that it's <= n times "at most approximately a quadratic polynomial in log n".

... doesn't in fact depend mostly on that exponent; the most important factor is the n, not the (log n)^(2+o(1)). If that 2 were a 100, the n factor would still (asymptotically) matter more.
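The O(1)-vs-o(1) distinction above is easy to see with toy functions. A small Python sketch (the two functions are illustrative stand-ins, not terms from the paper): 1/log n tends to zero, so it is o(1) (and therefore also O(1)); 2 + (-1)^n is bounded but does not tend to zero, so it is O(1) but not o(1).

```python
import math

def f(n):
    # 1/log(n) -> 0 as n -> infinity: o(1), hence also O(1)
    return 1 / math.log(n)

def g(n):
    # oscillates between 1 and 3: bounded, so O(1), but not o(1)
    return 2 + (-1) ** n

for n in (10, 10**3, 10**6, 10**9):
    print(n, f(n), g(n))
```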

For instance, suppose n=2^100 and our logs are to base 2. Then the running time of this algorithm is approximately some constant times 2^100 (that's the n factor) times 100^2 (that's the factor with log n in it). 2^100 is much, much, much bigger than 100^2.
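Plugging the numbers from that example into a quick Python sketch (Python's arbitrary-precision integers make the comparison exact):

```python
import math

n = 2 ** 100
log_n = math.log2(n)  # exactly 100.0 for this n

n_factor = n                 # the linear part of n * (log n)^2, about 1.27e30
polylog_factor = log_n ** 2  # the (log n)^2 part: 100^2 = 10,000

# The n factor exceeds the polylog factor by roughly 26 orders of magnitude.
print(f"n factor:  {n_factor:.3e}")
print(f"(log n)^2: {polylog_factor:.3e}")
print(f"ratio:     {n_factor / polylog_factor:.3e}")
```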
