This interactive demo illustrates how regularization helps to control model complexity in polynomial regression. Adjust the sliders to see how the polynomial degree, regularization strength (λ), and data noise affect the model's fit.
The demo also displays the fitted coefficients in a table (columns: Coefficient, Value, Term).
In polynomial regression, we fit a model of the form:
$$f(x) = w_0 + w_1x + w_2x^2 + \ldots + w_nx^n$$
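As a concrete illustration (separate from the interactive demo), here is a minimal NumPy sketch that builds the polynomial design matrix and fits the weights by ordinary least squares; the sine-shaped synthetic data, noise level, and degree are assumptions chosen for illustration:

```python
# Minimal sketch: unregularized polynomial regression with NumPy.
# The synthetic data, noise level, and degree are illustrative assumptions,
# not values taken from the demo.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)  # noisy targets

degree = 9
# Design matrix with columns [1, x, x^2, ..., x^degree]
X = np.vander(x, degree + 1, increasing=True)

# Ordinary least squares: minimizes the sum of squared residuals
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("unregularized coefficients:", w)
```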
Loss Functions:
Unregularized: $$\mathcal{L} = \sum_{i=1}^{m} (y_i - f(x_i))^2 $$
Ridge Regularization (L2): $$\mathcal{L} = \sum_{i=1}^{m} (y_i - f(x_i))^2 + \lambda \sum_{j=1}^{n} w_j^2 $$
Lasso Regularization (L1): $$\mathcal{L} = \sum_{i=1}^{m} (y_i - f(x_i))^2 + \lambda \sum_{j=1}^{n} |w_j| $$
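For the Ridge loss, the minimizer has a well-known closed form, $w = (X^\top X + \lambda I)^{-1} X^\top y$. A short sketch, reusing the `X` and `y` from the snippet above (for simplicity it also penalizes the intercept $w_0$, whereas the formula above starts the penalty at $j = 1$):

```python
# Ridge closed-form solution: w = (X^T X + lambda*I)^(-1) X^T y.
# Reuses X and y from the previous snippet; this version also penalizes
# the intercept w_0 for simplicity.
lam = 1e-3  # regularization strength lambda (illustrative value)
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
print("ridge coefficients:", w_ridge)
```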
The regularization parameter λ controls the strength of the penalty. Higher values enforce stronger regularization, resulting in smaller coefficients.
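One quick way to see this effect in code is to sweep λ and watch the overall coefficient magnitude shrink (the λ values below are illustrative, and the data is the same synthetic set as above):

```python
# Sweep lambda and watch the coefficient norm shrink
# (lambda values are illustrative; X and y are the synthetic data above).
for lam in [0.0, 1e-4, 1e-2, 1.0]:
    w_l = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    print(f"lambda = {lam:g}  ->  ||w|| = {np.linalg.norm(w_l):.2f}")
```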
Why Regularization Matters: As the polynomial degree grows, an unregularized model can fit the noise in the training data, producing large, unstable coefficients and wild oscillations between data points. The penalty term discourages large weights, trading a small increase in training error for a smoother fit that generalizes better to new data.
Comparing Regularization Types: Ridge (L2) shrinks all coefficients toward zero but rarely makes them exactly zero, while Lasso (L1) can drive some coefficients to exactly zero, effectively selecting a sparser, simpler model.
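A hedged sketch of this comparison using scikit-learn's Ridge and Lasso estimators, assuming scikit-learn is available and reusing the synthetic `x`, `y` above; `alpha` plays the role of λ and the values here are illustrative:

```python
# Sketch: Ridge (L2) vs. Lasso (L1) via scikit-learn (illustrative alphas).
# Lasso typically drives some coefficients to exactly zero, while Ridge
# only shrinks them.
from sklearn.linear_model import Lasso, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

for name, model in [("ridge", Ridge(alpha=0.01)), ("lasso", Lasso(alpha=0.01, max_iter=50_000))]:
    pipe = make_pipeline(PolynomialFeatures(degree=9), model)
    pipe.fit(x.reshape(-1, 1), y)
    coefs = pipe.named_steps[name].coef_
    nonzero = np.count_nonzero(np.abs(coefs) > 1e-6)
    print(f"{name}: {nonzero} of {coefs.size} coefficients are non-negligible")
```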
Try it: Increase the polynomial degree and observe how unregularized models start to overfit. Then increase λ to see how regularization smooths the curve and stabilizes predictions.