Running Python on GPU using numba


I am trying to run Python code on my NVIDIA GPU, and googling suggested that numbapro was the module I was looking for. However, according to this, numbapro is no longer maintained and its functionality has been moved into the numba library. I tried out numba, and its @jit decorator does seem to speed up some of my code considerably. However, as I read up on it more, it seems to me that jit simply compiles your code at run-time, doing some heavy optimization in the process, hence the speed-up.
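For reference, this is roughly the kind of usage I mean (a minimal sketch; the function and loop here are just an illustration):

```python
import numpy as np
from numba import jit

@jit(nopython=True)   # compiled to machine code on the first call
def sum_of_squares(arr):
    total = 0.0
    for x in arr:
        total += x * x
    return total

data = np.arange(1_000_000, dtype=np.float64)
sum_of_squares(data)  # first call triggers compilation; later calls run the compiled code
```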

This is further reinforced by the fact that jit does not seem to speed up already-optimized numpy operations such as numpy.dot.

Am I getting confused and way off track here? What exactly does jit do? And if it does not make my code run on the GPU, how else do I do that?


There is 1 best solution below


You have to specifically tell Numba to target the GPU, either via a ufunc:

http://numba.pydata.org/numba-doc/latest/cuda/ufunc.html

or by programming your functions in a way that explicitly takes the GPU into account:

http://numba.pydata.org/numba-doc/latest/cuda/examples.html
http://numba.pydata.org/numba-doc/latest/cuda/index.html
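As a rough sketch of both approaches (this assumes a CUDA-capable GPU and the CUDA toolkit are installed; the array sizes, function names, and launch configuration are just placeholders):

```python
import numpy as np
from numba import vectorize, cuda

# 1) A CUDA ufunc: Numba builds a GPU ufunc from a scalar function.
@vectorize(['float32(float32, float32)'], target='cuda')
def gpu_add(a, b):
    return a + b

# 2) An explicit CUDA kernel: you handle the thread indexing yourself.
@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)          # absolute index of this thread
    if i < out.size:          # guard against threads beyond the array
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.arange(n, dtype=np.float32)
y = 2 * x

# The ufunc is called like any numpy ufunc.
res = gpu_add(x, y)

# The kernel needs an explicit grid/block configuration.
out = np.zeros_like(x)
threads_per_block = 128
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)
```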

The plain jit decorator does not target the GPU and will typically not speed up calls to already-optimized routines like np.dot. Numba typically excels where you can avoid creating intermediate temporary numpy arrays, or where the code you are writing is hard to vectorize in the first place.
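For example, a minimal sketch of the kind of code where plain jit/njit pays off (the moving-average function is just an illustration):

```python
import numpy as np
from numba import njit

@njit  # nopython mode; the explicit loop is compiled to machine code
def moving_average(a, w):
    # Running-sum loop: no temporary arrays are allocated per step,
    # unlike a vectorized version built from slicing or cumsum.
    out = np.empty(a.size - w + 1)
    s = 0.0
    for i in range(w):
        s += a[i]
    out[0] = s / w
    for i in range(w, a.size):
        s += a[i] - a[i - w]
        out[i - w + 1] = s / w
    return out

a = np.random.rand(1_000_000)
result = moving_average(a, 50)  # first call compiles, later calls are fast
```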