I ran into trouble understanding the Python timeit function. When I use timeit.timeit("code snippet", number=n), the result grows proportionally with n, whereas I thought it should give me the best time out of n trials of a single execution of the code snippet.
The following code demonstrates what I mean:
import timeit
import matplotlib.pyplot as plt
n_values = [10**i for i in range(8)]
t_values = [[timeit.timeit("pass", number=n) for n in n_values] for _ in range(10)]
for t in t_values:
    plt.plot(n_values, t)
plt.xlabel('number')
plt.ylabel('time (seconds)')
plt.title('timeit.timeit("pass", number=n)')
plt.xscale('log')
plt.yscale('log')
plt.grid(True, which='both', linestyle='--', linewidth=0.5)
plt.show()
In addition to the time growing with n, I find that the first series of results (the blue line) is an order of magnitude slower than the rest.
Clearly I am not using the timeit function properly.
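If it helps clarify what I was expecting: I assumed timeit.timeit would behave like taking the minimum over n trials, whereas (if I read the docs correctly) it actually returns the total time for number executions. Something like timeit.repeat with min() seems closer to what I had in mind; here is a sketch of the comparison:

```python
import timeit

# timeit.timeit returns the TOTAL time for `number` executions,
# so the result grows roughly in proportion to number:
total = timeit.timeit("pass", number=1_000_000)

# What I expected (best time out of several trials) seems closer to
# timeit.repeat, which runs `repeat` independent trials and returns
# a list of total times, one per trial:
trials = timeit.repeat("pass", number=1_000_000, repeat=5)
best = min(trials)  # conventional "best of repeat" measurement
```

Is that the intended usage, or am I still missing something?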
