In my regression problem, when using LGBM's built-in Poisson loss, I get very good model performance. Now I'm trying to reproduce LGBM's Poisson loss in my own custom objective function.
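For context, the well-performing baseline looks roughly like this (a minimal sketch using the scikit-learn interface; the data and hyper-parameters below are placeholders, not my exact setup):

```python
import lightgbm as lgb
import numpy as np

# Placeholder count data standing in for my real dataset.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 10))
y_train = rng.poisson(lam=np.exp(X_train[:, 0]))

# Built-in Poisson objective: this configuration performs very well for me.
model = lgb.LGBMRegressor(objective="poisson", n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)
```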
Here's LGBM's source code for its Poisson objective: https://github.com/microsoft/LightGBM/blob/7076cb8a3ac3a7b32dcf37be5593dddf27bf7f16/src/objective/regression_objective.hpp#L446
The default max_delta_step = 0.7 is configured here: https://github.com/microsoft/LightGBM/blob/7076cb8a3ac3a7b32dcf37be5593dddf27bf7f16/include/LightGBM/config.h#L792
Therefore, when creating a custom objective function, my understanding is that the Poisson loss gradient and Hessian can be written as:
```python
import numpy as np

max_delta_step = 0.7
grad_poisson = np.exp(y_pred) - y_true          # first derivative of the Poisson NLL w.r.t. the raw score
hess_poisson = np.exp(y_pred + max_delta_step)  # second derivative, shifted by the safeguard term
```
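Wrapped up as a complete callable (a sketch assuming the scikit-learn interface, where, as I understand it, a custom objective receives `(y_true, y_pred)` with `y_pred` on the raw score scale and must return the per-sample gradient and Hessian arrays):

```python
import numpy as np

def custom_poisson_objective(y_true, y_pred):
    """My attempt to reproduce LGBM's built-in Poisson loss.

    y_pred is the raw (log-scale) score, so exp(y_pred) is the predicted mean.
    Returns the per-sample gradient and Hessian of the Poisson negative log-likelihood.
    """
    max_delta_step = 0.7                    # same safeguard value as the built-in default
    grad = np.exp(y_pred) - y_true          # d/df [exp(f) - y * f]
    hess = np.exp(y_pred + max_delta_step)  # exp(f), shifted by the safeguard term
    return grad, hess
```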
But when I use this code without changing any other LGBM settings, I get much worse model performance.
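For reference, this is roughly how I plug it in (again a sketch reusing the placeholder data and the `custom_poisson_objective` function from the snippets above; swapping the callable in for the `"poisson"` string is the only change):

```python
# Same placeholder data and hyper-parameters as above; only the objective differs.
custom_model = lgb.LGBMRegressor(
    objective=custom_poisson_objective,   # callable instead of the built-in "poisson" string
    n_estimators=200,
    learning_rate=0.05,
)
custom_model.fit(X_train, y_train)
```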
Does anyone know how to reproduce LGBM's Poisson loss in a custom objective function?