I want to use L1 regularization in sklearn's MLPClassifier. In my code below, alpha=0.0001 (the default) controls the strength of L2 regularization. How can I use L1 regularization instead of L2?
# evaluate a neural network with ReLU activations and norm regularization
from numpy import mean
from numpy import std
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
# X, y: training features and labels (assumed to be defined earlier)
# prepare the 10-fold cross-validation procedure
cv = KFold(n_splits=10, random_state=1, shuffle=True)
# create the model; "alpha" here is the hyperparameter for L2 regularization
model = MLPClassifier(alpha=0.0001, hidden_layer_sizes=(100,),
                      activation='relu', solver='adam')
# evaluate the model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
It is not possible. The scikit-learn developers have had many long discussions about extending support for neural networks and decided against it; MLPClassifier's alpha parameter applies only an L2 penalty, and there is no L1 option. They provide a deliberately basic, rigid implementation and that is it. For customisation you need to look at Keras, TensorFlow, PyTorch, JAX, etc.
Even scikit-learn itself recommends other libraries for this: https://scikit-learn.org/stable/related_projects.html#related-projects
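For example, here is a minimal sketch of a roughly equivalent model in Keras, where kernel_regularizer=regularizers.l1(...) adds an L1 penalty on the weights in place of sklearn's L2 alpha. The hidden layer mirrors hidden_layer_sizes=(100,); the binary sigmoid output and the 0.0001 strength are assumptions for illustration:

from tensorflow import keras
from tensorflow.keras import layers, regularizers

# one hidden layer of 100 ReLU units, mirroring hidden_layer_sizes=(100,);
# the l1 regularizer adds an L1 penalty on this layer's weights to the loss
model = keras.Sequential([
    layers.Dense(100, activation='relu',
                 kernel_regularizer=regularizers.l1(0.0001)),
    layers.Dense(1, activation='sigmoid'),  # assumes binary classification
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
# model.fit(X, y, epochs=10)  # X, y as in your sklearn code

In PyTorch you would get the same effect manually, by adding an L1 term such as l1_lambda * sum(p.abs().sum() for p in model.parameters()) to the loss before calling backward().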