Other answers have mentioned that supplying the exact gradient and Hessian of the objective speeds things up, because the solver no longer has to approximate them numerically. If this is my objective function, how do I work out the gradient and Hessian?
    def er(x, df=df):
        return -((x * df['pred']).sum())
Isn't the gradient just the constant vector -df['pred'] and the Hessian just the zero matrix, since the objective is linear in x and the entries of df['pred'] are constants? x is the vector of variables I'm trying to optimize.
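If that reasoning is right, here is a minimal sketch of how I would pass those callables to scipy.optimize.minimize. The DataFrame, the bounds, and the sum-to-one constraint are made-up placeholders just to keep the toy problem bounded; my real data and constraints differ.

    import numpy as np
    import pandas as pd
    from scipy.optimize import minimize, LinearConstraint

    # hypothetical stand-in for my real df
    df = pd.DataFrame({'pred': [0.02, -0.01, 0.03, 0.015]})
    n = len(df)

    def er(x, df=df):
        # objective: negative of the linear combination x . pred
        return -((x * df['pred']).sum())

    def er_grad(x, df=df):
        # gradient of a linear objective is its constant coefficient vector
        return -df['pred'].values

    def er_hess(x, df=df):
        # all second derivatives of a linear objective are zero
        return np.zeros((n, n))

    # placeholder constraints: weights in [0, 1] summing to 1
    cons = LinearConstraint(np.ones(n), 1, 1)
    bounds = [(0, 1)] * n

    x0 = np.full(n, 1.0 / n)
    res = minimize(er, x0, jac=er_grad, hess=er_hess,
                   method='trust-constr', constraints=cons, bounds=bounds)
    print(res.x, res.fun)

I used 'trust-constr' here only because it accepts a hess callable; is that the right way to wire these in?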