I'm trying to time a Python 3.6 program from a Jupyter notebook, but it seems like the magic command %timeit adds a lot of extra overhead, giving me wrong stats.
From a Jupyter notebook:
%timeit a=1
10000000 loops, best of 3: 84.1 ns per loop
From the command line:
python -m timeit 'a=1'
100000000 loops, best of 3: 0.0163 usec per loop
So in this case the command-line timeit runs millions of times faster than the Jupyter notebook timeit. What is the reason for this, and is there a way to fix it so timeit from a Jupyter notebook can give correct measurements?
You are not reading those numbers correctly. IPython is reporting timings in nanoseconds (note the ns abbreviation). Python is reporting the timings in microseconds (usec). 1 microsecond is 1000 nanoseconds; normalising to nanoseconds, Python reported 16.3 nanoseconds per loop, so it was only about 5 times as fast.
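To spell out that arithmetic in plain Python, using the figures from the question:

jupyter_ns = 84.1               # IPython %timeit: 84.1 ns per loop
cmdline_ns = 0.0163 * 1000      # python -m timeit: 0.0163 usec per loop -> 16.3 ns
print(jupyter_ns / cmdline_ns)  # ~5.2: roughly 5x, not millions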
However, I can't reproduce your findings. I used the same Python binary in a virtualenv, both via IPython and directly.
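Sketching those two invocations (exact loop counts and timings will vary per machine; IPython's -c option runs a single command through the IPython shell):

$ python -m timeit 'a=1'
$ python -m IPython -c '%timeit a=1'

Both reported per-loop times of around 12 nanoseconds.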
The same holds in a Jupyter notebook, again with the same virtualenv; the notebook essentially drives IPython, so as expected there is no real difference:
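The cell is just the statement from the question:

%timeit a=1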
So that's 11.9 vs 12.1 vs 11.8 nanoseconds; too close to call a difference.
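As for getting trustworthy numbers out of the notebook: %timeit already measures correctly. If you want the raw figures rather than the formatted summary, pass -o to capture a TimeitResult object (a quick illustration; the exact set of attributes varies a little between IPython versions):

result = %timeit -o a=1
print(result.best)      # fastest per-loop time, in seconds
print(result.all_runs)  # total time of each repeat run, in seconds
print(result.loops)     # number of loops per repeat

The stdlib equivalent, if you want the same raw access outside IPython, is timeit.repeat.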