Linear Regression Target Practice
TriWei AI Lab
Fit a line with gradient descent, inspect the loss surface, and verify gradients numerically.
How to play + what to look for
- Goal: match the hidden line by learning w (slope) and b (intercept).
- Click Train to run gradient descent; watch train vs test MSE diverge if you overfit noisy data.
- The right plot is a loss landscape over (w,b). The red dot is your current parameters.
- Keyboard: T=Train, R=Reset, G=Gradient check, N=Normal equation.
This is a toy. Real models have many parameters, non-convex loss, and regularization.
Learning objectives
- Concept focus: understand linear regression as fitting a line using mean squared error.
- Core definition: the gradient of the MSE with respect to parameters is proportional to prediction error times input.
- Common mistake: using too large a learning rate, which causes divergence, or confusing test error with training error.
- Why it matters: linear regression underpins many ML models and is the simplest case of convex optimization.
- Toy disclaimer: this demo uses a single feature and noise; real datasets may require multiple variables and regularization.
You’re trying to “hit the target” by fitting a line to data. You can drag parameters manually, or let gradient descent do the work (batch or SGD). This is deliberately small and visual: real ML stacks have many more moving parts.
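The batch gradient descent the Train button runs can be sketched in a few lines of NumPy. This is a minimal illustration, not the lab's actual code; the hidden line y = 2x + 1, the noise level, and the learning rate are invented values for the example (SGD would simply replace the full-data means with means over a sampled minibatch).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a hidden line y = 2x + 1 plus Gaussian noise (assumed for illustration).
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)

def mse(w, b):
    return np.mean((w * x + b - y) ** 2)

# Batch gradient descent on the MSE.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):
    err = w * x + b - y               # prediction error
    w -= lr * 2 * np.mean(err * x)    # dL/dw = (2/n) * sum(err * x)
    b -= lr * 2 * np.mean(err)        # dL/db = (2/n) * sum(err)
```

After 200 steps (w, b) should sit close to the hidden (2, 1); raise `lr` above 1 and the same loop diverges, which is exactly the failure mode the lab warns about.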
Data + Fit
Loss Landscape (w,b)
Contours show MSE over a grid in parameter space. The dot is your current (w,b).
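The contour plot is built by evaluating the MSE at every point of a (w, b) grid. A minimal sketch of that computation, with an invented data set and grid ranges (the lab's actual grid may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)  # hidden line (assumed for illustration)

# Evaluate MSE over an 81x81 grid of (w, b) using broadcasting:
# intermediate shape (81, 81, 100), then mean over the data axis.
ws = np.linspace(0, 4, 81)
bs = np.linspace(-1, 3, 81)
W, B = np.meshgrid(ws, bs)
loss = np.mean((W[..., None] * x + B[..., None] - y) ** 2, axis=-1)

# The grid cell with the smallest loss approximates the true minimum.
i, j = np.unravel_index(np.argmin(loss), loss.shape)
w_min, b_min = W[i, j], B[i, j]
```

The `loss` array is what a contour routine would render; its argmin lands next to the hidden (2, 1), which is where the red dot ends up after successful training.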
Math + Sources
Model: \(\hat y = wx + b\). MSE loss: \(L = \frac{1}{n}\sum_i(\hat y_i - y_i)^2\). Gradients: \(\partial L/\partial w = \frac{2}{n}\sum_i(\hat y_i-y_i)x_i\), \(\partial L/\partial b = \frac{2}{n}\sum_i(\hat y_i-y_i)\).
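The G = Gradient check shortcut compares these analytic gradients against central finite differences. A minimal sketch of that comparison, with an invented data set and check point:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)  # hidden line (assumed for illustration)

def mse(w, b):
    return np.mean((w * x + b - y) ** 2)

def grads(w, b):
    err = w * x + b - y
    return 2 * np.mean(err * x), 2 * np.mean(err)  # dL/dw, dL/db from the formulas above

# Central-difference check at an arbitrary point (0.7, -0.3).
eps = 1e-6
w0, b0 = 0.7, -0.3
num_dw = (mse(w0 + eps, b0) - mse(w0 - eps, b0)) / (2 * eps)
num_db = (mse(w0, b0 + eps) - mse(w0, b0 - eps)) / (2 * eps)
an_dw, an_db = grads(w0, b0)
```

Because the MSE is quadratic in (w, b), the central difference is exact up to floating-point rounding, so the two estimates agree to many digits.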
The normal equation for linear regression (in matrix form) is \(\theta=(X^TX)^{-1}X^Ty\) for \(\theta=[w,b]^T\). See Stanford CS229 notes: cs229.stanford.edu/main_notes.pdf.
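A minimal sketch of the normal-equation solve (N in the lab), again on invented data. Building the design matrix with a column of ones folds the intercept into \(\theta=[w,b]^T\):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)  # hidden line (assumed for illustration)

# Design matrix with a bias column so that theta = [w, b].
X = np.column_stack([x, np.ones_like(x)])

# Solve the normal equations X^T X theta = X^T y.
# np.linalg.solve avoids forming the explicit inverse, which is better conditioned.
theta = np.linalg.solve(X.T @ X, X.T @ y)
w, b = theta
```

Unlike gradient descent, this lands on the loss minimum in one step; it is the closed-form answer the iterative methods converge toward.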
For intuition on computational graphs / chain rule (used in later labs), see CS231n: cs231n.github.io/optimization-2/.
Collaboration Credits
These interactive labs are the result of a close collaboration between a human author and an AI assistant (ChatGPT). The AI contributed algorithmic refinements, numerical safeguards, and visual improvements, while the human designed the pedagogical structure, reviewed all code, and ensured educational accuracy. Mathematical formulas and derivations are referenced to reputable course notes and textbooks. All code runs entirely in the browser; no data is sent to any server.