Custom kernel hangs when running in ipyparallel

I am trying to use ipyparallel with a custom kernel I have installed in a conda env. My tools are built against matplotlib 2.0.2, but I am running on a JupyterHub whose default Python 3 kernel points to matplotlib 1.5.3. I can see the matplotlib version from each of the engines with this example:

import ipyparallel
import matplotlib


def myFunc(n):
    # Runs on the engine: report the engine's matplotlib version.
    import matplotlib
    status = "mpl version=%s, and num=%d" % (matplotlib.__version__, n * 10)
    return status


# Connect to the cluster started from the MJBtest profile.
rc = ipyparallel.Client(profile='MJBtest')
all_proc = rc[:]          # DirectView on all engines
all_proc.block = True     # make calls synchronous

print("Local: ", matplotlib.__version__)

inlist = list(range(3))
print("Now calling map_sync")
result = all_proc.map_sync(myFunc, inlist)
print("Parallel result : ", result)

which returns

Local:  1.5.3
Now calling map_sync
Parallel result :  ['mpl version=1.5.3, and num=0', 'mpl version=1.5.3, and num=10', 'mpl version=1.5.3, and num=20']

as I expect, because I am running in the default Python 3 kernel. I have built a custom kernel called "cetb3" by creating a conda env with the tools I want, activating it, and installing a kernelspec with this command:

ipython kernel install --user --name cetb3
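
For reference, this writes a kernelspec to ~/.local/share/jupyter/kernels/cetb3/kernel.json that looks something like the following (the interpreter path below is a placeholder for the python in my conda env):

{
 "argv": [
  "/path/to/conda/envs/cetb3/bin/python",
  "-m", "ipykernel_launcher",
  "-f", "{connection_file}"
 ],
 "display_name": "cetb3",
 "language": "python"
}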

In the cetb3 environment, I can run python, import matplotlib, and see that the version is 2.0.2. From this same cetb3 env, I also created a test profile with:

ipython profile create --parallel --profile=MJBtest
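
That command creates a profile directory containing the parallel config files, something like:

~/.ipython/profile_MJBtest/
    ipython_config.py
    ipcluster_config.py
    ipcontroller_config.py
    ipengine_config.py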

In the JupyterHub, I can switch the kernel to cetb3, import matplotlib, and see that it is at v2.0.2. However, when I start a cluster from MJBtest and try to run the same code as above with the cetb3 kernel, the cell hangs after the "Now calling map_sync" line and never returns:

Local:  2.0.2
Now calling map_sync

I thought that I might have to create an IPython profile that uses my custom kernel, so I tried adding the name of my profile to the cetb3 kernelspec file ("--profile=MJBtest"), but with that change the kernel wouldn't even start. I am unclear whether I have to tell my kernel about my profile or vice versa (and how I might do that), or whether there is some other mechanism altogether for pushing my custom environment out to my ipyparallel engines.
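
Concretely, what I tried was appending the flag to the argv list in kernel.json, roughly like this (again, the interpreter path is a placeholder):

 "argv": [
  "/path/to/conda/envs/cetb3/bin/python",
  "-m", "ipykernel_launcher",
  "-f", "{connection_file}",
  "--profile=MJBtest"
 ]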

So I worked with the sysadmin on our supercomputer, and it turns out they had configured some customized IPython profiles that start the engine cluster using the ipengine command. In the ipcluster_config.py file, just before the ipengine command, I was able to specify my custom environment by prepending my conda env's bin path to the PATH environment variable and then calling source activate for the conda env I wanted available on each engine.
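
For anyone who hits the same thing: a sketch of what the relevant section of ipcluster_config.py ended up looking like, assuming a Slurm-style batch launcher (the launcher class, batch directives, and conda path below are placeholders; our site's actual setup differs):

# Sketch of ~/.ipython/profile_MJBtest/ipcluster_config.py
c = get_config()

c.IPClusterEngines.engine_launcher_class = 'SlurmEngineSetLauncher'
c.SlurmEngineSetLauncher.batch_template = """#!/bin/bash
#SBATCH --ntasks={n}
# Prepend the conda env's bin dir and activate it, so every engine
# starts with the custom environment (matplotlib 2.0.2) on its PATH.
export PATH=/path/to/conda/envs/cetb3/bin:$PATH
source activate cetb3
srun ipengine --profile-dir={profile_dir} --cluster-id={cluster_id}
"""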
