Understanding Lasso and Ridge Regression


I'm currently reading up on Ridge and Lasso regression and have some questions to clarify.

I understand that they both impose penalties on the coefficients (albeit different types). Should Lasso and Ridge be applied to all regressors, or can they be applied to only a subset (e.g. after some variables are dropped during EDA)?

I also understand that Lasso can drive coefficients to exactly 0, which is arguably a form of variable selection in itself. In that case, should Lasso be run on all variables?
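To illustrate the variable-selection behaviour I'm asking about, here's a minimal sketch using scikit-learn's `Lasso` on toy data I made up, where only the first two of five features actually influence the response:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Assumed toy data: only features 0 and 1 matter; the rest are pure noise
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

model = Lasso(alpha=0.5).fit(X, y)
print(model.coef_)
```

With a sufficiently large `alpha`, the L1 penalty sets the coefficients of the irrelevant features to exactly 0 while only shrinking the informative ones, which is why it is often described as performing variable selection.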

Additionally, the same question applies to elastic net regularisation: since it combines both L1 and L2 penalties, can it also be applied to only a subset of variables?

Apologies if there are any errors in my understanding; I'm looking to learn how Ridge and Lasso should be implemented. Thanks.
