Regularized Regression
When a linear regression model contains many independent variables, their coefficients will be poorly determined, and the model will tend to fit the training data (the data used to build the model) extremely well while fitting the testing data (the data used to evaluate how good the model is) poorly. This is known as overfitting or high variance.
One popular technique to control overfitting is regularization, which involves adding a penalty term to the error or loss function to discourage the coefficients from reaching large values. Regularization, in simple terms, is a penalty mechanism that applies shrinkage to model parameters (driving them closer to zero) in order to build a model with higher prediction accuracy and better interpretability. Regularized regression has two advantages over linear regression:
Prediction accuracy
A model that performs better on the testing data is generalizing from the training data rather than memorizing it. A model with too many parameters might fit noise specific to the training data. By shrinking coefficients or setting some of them to zero, we trade off the ability to fit complex models (accepting higher bias) for a more generalizable model (lower variance).
Interpretation
A large number of predictors can complicate the interpretation or communication of the big picture of the results. It may be preferable to sacrifice some detail and limit the model to a smaller subset of parameters with the strongest effects.
The common ways to regularize a linear regression model are as follows:
L1 regularization or Lasso regression
Lasso regression performs L1 regularization by adding a penalty equal to the sum of the absolute values of the coefficients to the cost function (RSS) for linear regression, as mentioned in Equation 4-1. The equation for lasso regularization can be represented as follows:
$$\text{Cost Function} = \text{RSS} + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert$$
L1 regularization can lead to zero coefficients (i.e., some of the features are completely ignored when computing the output). The larger the value of λ, the more features are shrunk to zero. This can eliminate some features entirely and give us a subset of predictors, reducing model complexity. So lasso regression not only helps in reducing overfitting, but can also help in feature selection. Predictors not shrunk toward zero signify that they are important, and thus L1 regularization allows for feature selection (sparse selection). The regularization parameter (λ) can be controlled, and a lambda value of zero reproduces the basic linear regression equation.
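To make the effect of λ concrete, the following sketch fits lasso models at several regularization strengths and counts how many coefficients survive. The synthetic data, the seed, and the alpha values are illustrative assumptions; scikit-learn exposes λ as the alpha argument of its Lasso class:

import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: 100 observations, 10 candidate predictors,
# but only the first two actually drive the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=100)

# As alpha (lambda) grows, more coefficients are shrunk exactly to zero.
for alpha in [0.01, 0.1, 1.0]:
    model = Lasso(alpha=alpha).fit(X, y)
    print(f"alpha={alpha}: {np.sum(model.coef_ != 0)} nonzero coefficients")

With the largest alpha, only the strongest predictors typically keep nonzero coefficients, which is the sparse selection behavior described above.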
A lasso regression model can be constructed using the Lasso class of the sklearn package of Python, as shown in the code snippet that follows:
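A minimal sketch, where the toy feature matrix, target vector, and alpha value are illustrative assumptions:

import numpy as np
from sklearn.linear_model import Lasso

# Illustrative toy data: 5 observations, 2 predictors.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y = np.array([3.0, 4.0, 10.0, 11.0, 15.0])

model = Lasso(alpha=0.1)   # alpha is scikit-learn's name for lambda
model.fit(X, y)            # estimate the shrunken coefficients
print(model.coef_, model.intercept_)

Setting alpha=0 would reduce this to ordinary least squares, although scikit-learn recommends using LinearRegression for that case.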
