Kriging (Gaussian Process Regression) in SciKit-Learn - access to optimized theta values


How can I access the 'optimized' thetas from the GPR (Kriging-type) model in scikit-learn? I want to get the theta for each variable/parameter, to verify the influence of those variables/parameters on the model's output.

I have tried the following code, but what I found (for a simple RBF kernel) is not what I seek. By theta I mean the "width" scale factor of the Gaussian bell curve (RBF).

print('model kernel: ', model.kernel_)
# model kernel:  0.188**2 * RBF(length_scale=3.19)

print('model parameters: ', model.get_params())
# model parameters:  {'alpha': 1e-10, 'copy_X_train': True, 'kernel__k1': 1**2, 'kernel__k2': RBF(length_scale=1), 'kernel__k1__constant_value': 1, 'kernel__k1__constant_value_bounds': (1e-05, 100000.0), 'kernel__k2__length_scale': 1.0, 'kernel__k2__length_scale_bounds': (0.001, 1000.0), 'kernel': 1**2 * RBF(length_scale=1), 'n_restarts_optimizer': 20, 'normalize_y': False, 'optimizer': 'fmin_l_bfgs_b', 'random_state': None}

print('kernel parameters: ', model.kernel_.get_params())
# kernel parameters:  {'k1': 0.188**2, 'k2': RBF(length_scale=3.19), 'k1__constant_value': 0.0354057439588758, 'k1__constant_value_bounds': (1e-05, 100000.0), 'k2__length_scale': 3.1913782768411574, 'k2__length_scale_bounds': (0.001, 1000.0)}
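For context, a minimal sketch of how the optimized hyperparameters can be read out of a fitted `GaussianProcessRegressor`. Note that `model.get_params()` reports the *initial* kernel, while the fitted kernel lives in `model.kernel_`, whose `theta` attribute holds the optimized hyperparameters in log space. The toy data, the anisotropic `length_scale=[1.0, 1.0]` (one width per input variable), and the attribute path `model.kernel_.k2.length_scale` are assumptions for illustration, matching the `ConstantKernel * RBF` structure shown in the printouts above:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# Toy data (assumption for illustration): 30 samples, 2 input variables.
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(30, 2))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]

# Anisotropic RBF: one length scale per input dimension, so each
# variable gets its own "width" theta after optimization.
kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0])
model = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
model.fit(X, y)

# kernel_.theta stores the optimized hyperparameters in log space;
# exponentiate to get the natural values (constant first, then the
# length scales, following the Product kernel's k1/k2 order).
print('optimized theta (natural scale):', np.exp(model.kernel_.theta))

# The per-dimension RBF length scales directly:
print('length scales:', model.kernel_.k2.length_scale)
```

A larger fitted length scale for a variable means the output varies slowly along that dimension, i.e. that variable has less influence (this is the usual automatic-relevance-determination reading of anisotropic RBF length scales).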
