How does mlxtend StackingRegressor use multiple CPUs?


I would like to use mlxtend's StackingRegressor to ensemble XGBoost, LGBM, and CatBoost, but I am not sure how many CPU cores this method will use.

For example:

In XGboost:

from xgboost import XGBRegressor

xgb_pars = {'nthread': -1}  # -1 means use all available cores
xgb1 = XGBRegressor(**xgb_pars)

Then I know this algorithm will use all my CPU cores.

But what if I try it with mlxtend StackingRegressor?

I guess this method will use the number of cores that I assign to each algorithm.

Example: XGBoost: 2, LGBM: 2, CatBoost: 2, meta-regressor: 1.

So in total I would be using 7 cores.
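For reference, the core plan above could be written out as below. This is only a sketch: each parameter name is the respective library's own threading option (`nthread` for XGBoost, `n_jobs` for LightGBM, `thread_count` for CatBoost), and the dict layout plus the `peak_if_parallel` sum are made up here just to restate the arithmetic.

```python
# Per-model thread settings (parameter names are each library's own;
# the dict form is purely illustrative, not an mlxtend API).
core_plan = {
    "XGBRegressor":      {"nthread": 2},       # xgboost
    "LGBMRegressor":     {"n_jobs": 2},        # lightgbm
    "CatBoostRegressor": {"thread_count": 2},  # catboost
    "meta_regressor":    {"n_jobs": 1},        # e.g. a sklearn estimator
}

# If all four models fitted concurrently, peak core usage would be:
peak_if_parallel = sum(next(iter(p.values())) for p in core_plan.values())
print(peak_if_parallel)  # 7
```

Whether all 7 are actually busy at the same time depends on whether the stacker fits the base models in parallel, which is exactly the question.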


There is 1 answer below.


Nope, the code fits the models one after another, see here. So first you will use 2 cores to train XGBoost; when it finishes, 2 cores for LGBM, and so on.
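The consequence of that sequential loop is that the per-model core counts never add up; at most one model's worth of cores is busy at a time. A minimal, self-contained illustration (the `MockRegressor` class and `timeline` list are invented for this sketch, not part of mlxtend):

```python
# Sketch of sequential base-model fitting, mirroring the loop in
# StackingRegressor.fit. MockRegressor is a stand-in, not mlxtend code.
class MockRegressor:
    """Stand-in estimator that records the order it is fitted in."""
    def __init__(self, name, timeline):
        self.name = name
        self.timeline = timeline

    def fit(self, X, y):
        # While this runs, no other base model is fitting.
        self.timeline.append(self.name)
        return self

timeline = []
base_models = [MockRegressor(n, timeline) for n in ("xgb", "lgbm", "cat")]

X, y = [[1.0, 2.0]] * 4, [1.0] * 4
for model in base_models:  # sequential loop: no cross-model parallelism
    model.fit(X, y)

print(timeline)  # ['xgb', 'lgbm', 'cat'], one model at a time
```

So with 2 cores assigned to each booster, peak usage is 2 cores, not 7; only the currently fitting model's threads are active.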

BTW, thanks for sharing mlxtend; I was not aware of it. It seems to have many useful tools that I previously had to develop myself, thus re-inventing the wheel :) The only unfortunate thing is the sparse docs, but there are inline docstrings and a very good set of examples.