So in integer factorization there are two different quantities that one might naturally want to denote by N: the number you want to factor and the number of bits required to specify that number (i.e., the input length).
The convention is to let N be the number to be factored, in which case the input length is roughly log_2(N) bits. Hence, an algorithm that runs in time "polynomial in the input length" must run in time O((log N)^k) for some constant k (the base of the logarithm only changes the constant).
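To make the distinction concrete, here is a minimal Python sketch (the function names and test values are my own, chosen for illustration). It shows why naive trial division is *not* polynomial in the input length: it performs on the order of sqrt(N) divisions, which is roughly 2^(b/2) for a b-bit input, i.e., exponential in the number of bits.

```python
def bit_length(n: int) -> int:
    """Input length: number of bits needed to specify n."""
    return n.bit_length()

def trial_division_steps(n: int) -> int:
    """Count divisions performed by naive trial division up to sqrt(n)."""
    steps = 0
    d = 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            n //= d  # strip off a factor
        else:
            d += 1
    return steps

if __name__ == "__main__":
    # Primes, so trial division does the maximum amount of work.
    for n in [10**4 + 7, 10**6 + 3, 10**9 + 7]:
        print(f"N = {n}: input length {bit_length(n)} bits, "
              f"~{trial_division_steps(n)} trial divisions")
```

Running this, the input lengths grow linearly (14, 20, 30 bits) while the number of trial divisions grows from about 10^2 to about 3*10^4: each doubling of the bit count roughly squares the work, which is exactly the exponential behavior that "polynomial in log N" rules out.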