How to run scikit-optimize's gp_minimize in parallel?


I am unable to make skopt.gp_minimize run on multiple cores. According to the documentation, the n_jobs parameter should set the number of cores used. However, setting n_jobs > 1 seems to have no effect. Here is a minimal example that reproduces the problem:

from skopt import gp_minimize
import time
import datetime


def J(paramlist):
    # Objective with an artificial 5-second delay so that
    # parallel execution would be visible in the total runtime.
    x = paramlist[0]
    time.sleep(5)
    return x**2


print "starting at "+str(datetime.datetime.now())

res = gp_minimize(J,                  # the function to minimize
                  [(-1.0, 1.0)],
                  acq_func="EI",      # the acquisition function
                  n_calls=10,         # the number of evaluations of f
                  n_random_starts=1,  # the number of random initialization points
                  random_state=1234,
                  acq_optimizer="lbfgs",
                  n_jobs=5,           # expected to run evaluations on 5 cores
                  )

print "ending at "+str(datetime.datetime.now())

I am trying to optimize J. To verify whether calls to J happen in parallel, I put a 5-second delay in J. The optimizer is set up for 10 function calls, so I'd expect a runtime of ~50 seconds in series and ~10 seconds if the calls were spread across 5 cores as specified.

The output is:

starting at 2022-11-28 12:32:30.954389
ending at 2022-11-28 12:33:23.403255

meaning that the runtime was 53 seconds and it did not run in parallel. I was wondering whether I'm missing something in the optimizer. I use Anaconda with the following scikit versions:

conda list | grep scikit
scikit-learn              0.19.2          py27_blas_openblasha84fab4_201  [blas_openblas]  conda-forge
scikit-optimize           0.3                      py27_0    conda-forge