How does numba code supposedly count to 1 billion in far less time than 1 billion clock cycles?

I tried counting up by 1 billion from an arbitrary input number using numba, to see how much slower it would be than C code. I accidentally found that it takes around 80 microseconds (on my setup with Python 3.8), not seconds. As you can see below, I tried to slow the loop down with a conditional and an extra step, but that had no effect. I also did not provide a signature for the datatype of the argument or the return value.

As I understand it, the decorator @njit(cache=True) compiles the Python code through LLVM down to native machine code, which is then cached on disk and run directly whenever the decorated function is called.

The following function finishes in far less time than 1 billion clock cycles would take. Is the function analyzed in depth and then safely transformed into an equivalent, faster function?

from numba import njit

@njit(cache=True)
def countit(count):
    end = count + 1000000000
    while count < end:
        if count % 2:   # odd: step up by 1
            count += 1
        else:           # even: step up by 2, then back down by 1
            count += 2
            count -= 1
    return count
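
For what it's worth, numba lets you look at the machine code it generated. The sketch below assumes countit is defined as above; signatures and inspect_asm are part of numba's dispatcher API, and the argument 0 is just an arbitrary value to force a compilation:

countit(0)  # first call triggers compilation (or loads the cached version)

sig = countit.signatures[0]     # e.g. (int64,)
asm = countit.inspect_asm(sig)  # native assembly for that signature, as a string
print(asm)

If LLVM did collapse the loop into a closed-form addition, the printed assembly should contain no backward branch corresponding to the billion iterations.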

I did make sure that a previously computed result wasn't being returned from a cache when doing the timings. If you compile the function first and then run it just once with a brand-new argument, it finishes in far less than a millisecond.
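
A minimal sketch of that timing, assuming countit is defined as above (time.perf_counter and the argument values here are my choice; any fresh argument behaves the same):

import time

countit(0)  # warm-up call: compilation happens here, not in the timed call

start = time.perf_counter()
result = countit(7)  # brand-new argument, so no previous result can be reused
elapsed = time.perf_counter() - start

print(result)                     # 1000000007
print(f"{elapsed * 1e6:.1f} µs")  # around 80 microseconds on my setup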
