Minor suggestion:
Using all the weights (including the bias) in regularization may end up constraining the aforementioned bias for non-normalized training data.
e.g.:
import numpy as np

class l1_regularization():
    """ Regularization for Lasso Regression """
    def __init__(self, alpha):
        self.alpha = alpha
    def __call__(self, w):
        return self.alpha * np.linalg.norm(w)  # this will constrain the bias too
It's extremely easy to fix, since you add the bias as the zeroth column in the data: the new regularization should simply exclude the zeroth weight from the norm (and it's a less-than-one-line fix :). The same goes for l2_regularization and l1_l2_regularization.
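A minimal sketch of the suggested fix, assuming the bias is stored as the zeroth weight w[0] (it mirrors the quoted class; only the argument to the norm changes):

import numpy as np

class l1_regularization():
    """ Regularization for Lasso Regression, with the bias excluded """
    def __init__(self, alpha):
        self.alpha = alpha
    def __call__(self, w):
        # w[1:] skips the bias at index 0, so only the feature weights are penalized
        return self.alpha * np.linalg.norm(w[1:])

The same change (w -> w[1:] inside the norm) would apply to the l2_regularization and l1_l2_regularization classes.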
Your observation about the impact of including the bias in the L1 regularization for Lasso Regression is correct. The bias term (here, the zeroth weight) should be left out of the penalty so that it is not inadvertently constrained, particularly on non-normalized data.
Your suggested modification, excluding the bias term from the norm calculation in the L1 regularization, is a sound way to address this: it penalizes the feature weights while leaving the bias free.