Linear Regression
Definition
The equation of linear regression is as follows:

\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}
where \mathbf{y} is the prediction target vector, \mathbf{X} is the feature matrix whose rows are feature vectors and whose first column contains only 1s (so that the first entry of the parameter vector acts as the intercept), \boldsymbol{\beta} is the parameter vector to learn, and \boldsymbol{\varepsilon} is the error term.
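To make the notation concrete, here is a minimal NumPy sketch of the model; the sample size, features, and parameter values are illustrative assumptions, not taken from the text above.

import numpy as np

rng = np.random.default_rng(0)

n, d = 100, 2                                  # illustrative sample size and number of raw features
features = rng.normal(size=(n, d))             # one feature vector per row
X = np.column_stack([np.ones(n), features])    # design matrix: first column is all 1s (intercept)

beta_true = np.array([1.0, 2.0, -3.0])         # hypothetical parameter vector
eps = rng.normal(scale=0.5, size=n)            # error term
y = X @ beta_true + eps                        # y = X beta + epsilon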
Solution
Finding the solution is also called estimating the parameters. The most straightforward approach is ordinary least squares (OLS) estimation. We will also introduce maximum likelihood estimation (MLE) and maximum a posteriori (MAP) estimation, which are among the most widely used estimation methods in machine learning.
Ordinary Least Squares
This method directly minimizes the square of the error term \boldsymbol{\varepsilon}. Viewing the error term as a vector, we want to minimize its length, which is equivalent to minimizing the squared length. The notation for the length of a vector is \|\boldsymbol{\varepsilon}\|, and its square is \|\boldsymbol{\varepsilon}\|^2. The loss function is:

L(\boldsymbol{\beta}) = \|\boldsymbol{\varepsilon}\|^2 = \|\mathbf{Y} - \mathbf{X}\boldsymbol{\beta}\|^2 = (\mathbf{Y} - \mathbf{X}\boldsymbol{\beta})^{\top}(\mathbf{Y} - \mathbf{X}\boldsymbol{\beta})
Here we use capital \mathbf{Y} since it can also be a matrix in multivariate regression. We need to compute the partial derivative of the loss function with respect to \boldsymbol{\beta}; since \boldsymbol{\beta} is a vector, this derivative is also called the gradient. We then set the gradient to zero to minimize the loss, which is valid because the loss is convex.
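As an intermediate step (not written out above, but standard), the loss for a vector target \mathbf{Y} can be expanded before differentiating; for a matrix-valued \mathbf{Y} the squared norm is replaced by a trace, but the result is analogous.

L(\boldsymbol{\beta}) = \mathbf{Y}^{\top}\mathbf{Y} - 2\boldsymbol{\beta}^{\top}\mathbf{X}^{\top}\mathbf{Y} + \boldsymbol{\beta}^{\top}\mathbf{X}^{\top}\mathbf{X}\boldsymbol{\beta}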
Setting the first-order derivative with respect to \boldsymbol{\beta} to zero requires matrix differentiation rules. Using \frac{\partial}{\partial \boldsymbol{\beta}} \mathbf{a}^{\top}\boldsymbol{\beta} = \mathbf{a} and \frac{\partial}{\partial \boldsymbol{\beta}} \boldsymbol{\beta}^{\top}\mathbf{A}\boldsymbol{\beta} = 2\mathbf{A}\boldsymbol{\beta} (for symmetric \mathbf{A}), the gradient is

\nabla_{\boldsymbol{\beta}} L = -2\mathbf{X}^{\top}\mathbf{Y} + 2\mathbf{X}^{\top}\mathbf{X}\boldsymbol{\beta}

Setting it to zero gives the normal equations \mathbf{X}^{\top}\mathbf{X}\boldsymbol{\beta} = \mathbf{X}^{\top}\mathbf{Y}, whose solution (assuming \mathbf{X}^{\top}\mathbf{X} is invertible) is the OLS estimate

\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{Y}
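As a sanity check on the closed-form estimate, here is a small self-contained NumPy sketch; the simulated data and names are illustrative, and in practice a least-squares solver is preferred over forming the inverse explicitly.

import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # design matrix with intercept column
beta_true = np.array([1.0, 2.0, -3.0])                       # hypothetical true parameters
y = X @ beta_true + rng.normal(scale=0.5, size=n)            # y = X beta + epsilon

# OLS via the normal equations: solve (X^T X) beta = X^T Y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically more stable equivalent using a least-squares solver.
beta_hat_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_hat)        # close to beta_true when the noise is small
print(beta_hat_lstsq)  # agrees with beta_hat up to floating-point error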