I remember from assembly that integer division instructions yield both the quotient and the remainder. So, in Python, will the built-in divmod() function perform better than using the % and // operators (supposing, of course, that one needs both the quotient and the remainder)?
q, r = divmod(n, d)
q, r = (n // d, n % d)
To measure is to know (all timings on a MacBook Pro 2.8 GHz i7):
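The original timing transcripts are not reproduced here, but the comparison ran along these lines (a sketch; the operand values 42 and 7 are arbitrary, only the relative timings matter):

```python
import timeit

# Time divmod() against the // and % operators on small integers.
print(timeit.timeit('divmod(n, d)', setup='n, d = 42, 7'))
print(timeit.timeit('n // d, n % d', setup='n, d = 42, 7'))
```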
The divmod() function is at a disadvantage here because you need to look up the global each time. Binding it to a local (all variables in a timeit time trial are local) improves performance a little:
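A sketch of that trial, with divmod bound to a name assigned in the setup (the name dm is arbitrary):

```python
import timeit

# Assigning divmod to a name in the setup makes it a local of the timed
# code, replacing the per-call global lookup with a fast local load.
print(timeit.timeit('dm(n, d)', setup='n, d = 42, 7; dm = divmod'))
```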
but the operators still win because they don't have to preserve the current frame while a function call to divmod() is executed:
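One way to see the difference is to disassemble both expressions with the dis module (a sketch; on recent CPython releases the call opcode is named CALL rather than CALL_FUNCTION):

```python
import dis

# The divmod() form needs a name lookup plus a function-call opcode.
dis.dis(compile('divmod(n, d)', '<string>', 'eval'))

# The operator form uses more opcodes, but they are all cheap binary ops.
dis.dis(compile('(n // d, n % d)', '<string>', 'eval'))
```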
The // and % variant uses more opcodes, but the CALL_FUNCTION bytecode is a bear, performance-wise.

In PyPy, for small integers there isn't really much of a difference; the small speed advantage the opcodes have melts away under the sheer speed of C integer arithmetic:
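A sketch of that measurement run under PyPy (exact figures not reproduced; the repetition count is raised to one billion, as noted below):

```python
import timeit

# With small operands PyPy unboxes the integers and compiles both variants
# down to machine arithmetic, so an enormous repetition count is needed
# before any gap between them becomes visible.
print(timeit.timeit('divmod(n, d)', setup='n, d = 42, 7', number=10**9))
print(timeit.timeit('n // d, n % d', setup='n, d = 42, 7', number=10**9))
```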
(I had to crank the number of repetitions up to 1 billion to show how small the difference really is; PyPy is blazingly fast here.)
However, when the numbers get large, divmod() wins by a country mile:
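A sketch of the large-number trial under PyPy (the operands below are arbitrary big integers, not the exact values from hobbs' answer, and the repetition count is only illustrative):

```python
import timeit

# Any operands too large to fit in a machine word will do here.
setup = 'n, d = 2**1024 - 1, 2**511 + 7'
print(timeit.timeit('divmod(n, d)', setup=setup, number=10**6))
print(timeit.timeit('n // d, n % d', setup=setup, number=10**6))
```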
(I now had to tune down the number of repetitions by a factor of 10 compared to hobbs' numbers, just to get a result in a reasonable amount of time.)

This is because PyPy can no longer unbox those integers as C integers; you can see the striking difference in timings between using sys.maxint and sys.maxint + 1:
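A sketch of that boundary comparison (sys.maxint only exists on Python 2, which is what PyPy 2 targets):

```python
import timeit

# sys.maxint still fits in a machine word, so PyPy can unbox it;
# sys.maxint + 1 forces arbitrary-precision arithmetic instead.
for setup in ('import sys; n, d = sys.maxint, 7',
              'import sys; n, d = sys.maxint + 1, 7'):
    print(timeit.timeit('divmod(n, d)', setup=setup))
    print(timeit.timeit('n // d, n % d', setup=setup))
```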