XGBRegressor: the results of GPU and CPU calculation are different


My code is here:

from xgboost import XGBRegressor
from sklearn.metrics import r2_score

xgb = XGBRegressor(
    max_depth=int(3.1847420232679196),
    n_estimators=int(27.03977712011383),
    subsample=0.9130850193972424,
    tree_method='gpu_hist',
    gpu_id=0
)
xgb.fit(x_train, y_train)
r2_score(xgb.predict(x_test), y_test), r2_score(xgb.predict(x_train), y_train)

The result is (0.9322279800331514, 0.9838467922872913), but when I don't use the GPU the result is different:

xgb = XGBRegressor(
    max_depth=int(3.1847420232679196),
    n_estimators=int(27.03977712011383),
    subsample=0.9130850193972424,
)
xgb.fit(x_train, y_train)
r2_score(xgb.predict(x_test), y_test), r2_score(xgb.predict(x_train), y_train)

The result is (0.6763052034789518, 0.9805904489567225). My GPU is an NVIDIA GeForce MX250.

When I run this code on another computer (with an RTX 2080 Ti GPU), the result is different as well.


There is 1 best solution below


This difference is expected, given the nature of the calculations performed on the CPU and the GPU.

GPU calculations can sometimes lead to small differences in floating-point precision compared to CPU calculations. In most cases, these differences are negligible, but they can accumulate and affect the final results.
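
As a generic illustration (this is a plain NumPy float32 demo, not XGBoost internals): floating-point addition is not associative, so accumulating the same numbers in a different order, as CPU and GPU histogram builds do, can yield slightly different sums.

import numpy as np

# Floating-point addition is not associative: grouping the same three
# numbers differently changes the float32 result.
a = np.float32(1e8)
b = np.float32(-1e8)
c = np.float32(1.0)

print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0 -- c is lost when it is first added to -1e8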

Additionally, XGBoost training has inherent randomness (here, the row subsampling triggered by subsample < 1), and your code does not set random_state, so even repeated runs on the same device can produce different results.
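
As a minimal sketch (the seed value 42 is arbitrary, and x_train/y_train are the variables from your question), fixing random_state pins the row-subsampling RNG, so repeated runs on the same device become reproducible; CPU and GPU results may still differ slightly because of the floating-point effects above.

from xgboost import XGBRegressor

xgb = XGBRegressor(
    max_depth=3,
    n_estimators=27,
    subsample=0.9130850193972424,
    tree_method='gpu_hist',   # drop this (or use 'hist') for the CPU run
    gpu_id=0,
    random_state=42,          # arbitrary seed; makes the subsampling reproducible
)
xgb.fit(x_train, y_train)     # x_train, y_train as in the question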