Multiple Linear Regression
Multiple linear regression is a statistical technique used to model the relationship between a dependent variable and multiple independent variables by fitting a linear equation to the observed data.
Goals
By the end of this lesson, you should be able to:
- transform data for polynomial model

Keywords: hypothesis, polynomial, update function, matrix form
Introduction
In the previous notes, we had only one independent variable, or one feature. In most cases of machine learning, we want to include more than one feature, or we want a hypothesis that is not simply a straight line. For the first example, we may want to consider not only the floor area but also the storey level when predicting the resale price of HDB houses. For the second example, we may want to model the relationship not as a straight line but as a quadratic curve. Can we still use linear regression to do these?
This section discusses how we can include more than one feature and how to model our equation beyond a simple straight line using multiple linear regression.
Hypothesis
Recall that in simple linear regression, our hypothesis is written as follows.

$$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$$

where $x$ is the only independent variable or feature. In multiple linear regression, we have more than one feature. We will write our hypothesis as follows.

$$\hat{y} = \hat{\beta}_0 x_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2 + \ldots + \hat{\beta}_n x_n$$

In the above hypothesis, we have $n$ features. Note also that we can assume to have $x_0 = 1$ with $\hat{\beta}_0$ as its coefficient.
We can write this in terms of a row vector, where the features are written as

$$\mathbf{x}^T = \begin{bmatrix} 1 & x_1 & x_2 & \ldots & x_n \end{bmatrix}$$

Note that the dimension of this feature vector is $1 \times (n+1)$ because we have $x_0$, which is a constant of 1.

The parameters can be written as a column vector as follows.

$$\hat{\boldsymbol{\beta}} = \begin{bmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \\ \vdots \\ \hat{\beta}_n \end{bmatrix}$$
Our system of equations for all the data points can now be written as follows.

$$\begin{aligned}
\hat{y}^1 &= \hat{\beta}_0 + \hat{\beta}_1 x_1^1 + \hat{\beta}_2 x_2^1 + \ldots + \hat{\beta}_n x_n^1 \\
\hat{y}^2 &= \hat{\beta}_0 + \hat{\beta}_1 x_1^2 + \hat{\beta}_2 x_2^2 + \ldots + \hat{\beta}_n x_n^2 \\
&\;\;\vdots \\
\hat{y}^m &= \hat{\beta}_0 + \hat{\beta}_1 x_1^m + \hat{\beta}_2 x_2^m + \ldots + \hat{\beta}_n x_n^m
\end{aligned}$$

In the above equations, the superscript indicates the index of the data point, running from 1 to $m$, assuming there are $m$ data points.
To write the hypothesis as a matrix equation, we first need to write the features as a matrix for all the data points.

$$\mathbf{X} = \begin{bmatrix} 1 & x_1^1 & x_2^1 & \ldots & x_n^1 \\ 1 & x_1^2 & x_2^2 & \ldots & x_n^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_1^m & x_2^m & \ldots & x_n^m \end{bmatrix}$$

With this, we can now write the hypothesis as a matrix multiplication.

$$\hat{\mathbf{y}} = \mathbf{X} \hat{\boldsymbol{\beta}}$$

Notice that this is the same matrix equation as in simple linear regression. What differs is that $\hat{\boldsymbol{\beta}}$ now contains more than two parameters. Similarly, the matrix $\mathbf{X}$ is now of dimension $m \times (n+1)$, where $m$ is the number of data points and $n+1$ is the number of parameters. Next, let's see how we can calculate the cost function.
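Below is a minimal sketch, in NumPy, of how the feature matrix and the matrix-form hypothesis can be computed in code. The feature values and parameter values are made up purely for illustration and are not from the lesson.

```python
import numpy as np

# Two features (e.g. floor area and storey level) for four data points.
features = np.array([[60.0, 3.0],
                     [75.0, 10.0],
                     [90.0, 7.0],
                     [105.0, 15.0]])
m = features.shape[0]

# Prepend a column of ones so that x_0 = 1 for every data point.
X = np.concatenate([np.ones((m, 1)), features], axis=1)   # shape (m, n + 1)

# One parameter per column of X: beta_0, beta_1, beta_2 (illustrative values).
beta = np.array([[10.0], [4.0], [2.0]])                    # shape (n + 1, 1)

# Hypothesis in matrix form: y_hat = X beta.
y_hat = X @ beta                                           # shape (m, 1)
print(y_hat)
```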
Cost Function
Recall that the cost function is written as follows.

$$J(\hat{\boldsymbol{\beta}}) = \frac{1}{2m} \sum_{i=1}^{m} \left( \hat{y}(x^i) - y^i \right)^2$$

We can rewrite the square as a multiplication instead and make use of matrix multiplication to express it. Writing it as a matrix multiplication gives us the following.

$$J(\hat{\boldsymbol{\beta}}) = \frac{1}{2m} \left( \mathbf{X}\hat{\boldsymbol{\beta}} - \mathbf{y} \right)^T \left( \mathbf{X}\hat{\boldsymbol{\beta}} - \mathbf{y} \right)$$

This equation is exactly the same as the one for simple linear regression.
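As a sketch of how this might look in code, the function below evaluates the matrix form of the cost function with NumPy. The name compute_cost and the argument shapes are assumptions for illustration, not part of the lesson.

```python
import numpy as np

def compute_cost(X, y, beta):
    """Return J(beta) = 1/(2m) * (X beta - y)^T (X beta - y) as a scalar.

    X has shape (m, n + 1), y has shape (m, 1), beta has shape (n + 1, 1).
    """
    m = X.shape[0]
    error = X @ beta - y              # residuals y_hat - y, shape (m, 1)
    return float((error.T @ error) / (2 * m))
```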
Gradient Descent
Recall that the update function in the gradient descent algorithm for linear regression is given as follows.

$$\hat{\beta}_j = \hat{\beta}_j - \alpha \frac{\partial}{\partial \hat{\beta}_j} J(\hat{\boldsymbol{\beta}})$$

In the case of multiple linear regression, we have more than one feature, and so we need to differentiate with respect to each $\hat{\beta}_j$. Doing this results in a system of equations as follows.

$$\begin{aligned}
\hat{\beta}_0 &= \hat{\beta}_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}(x^i) - y^i \right) x_0^i \\
\hat{\beta}_1 &= \hat{\beta}_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}(x^i) - y^i \right) x_1^i \\
&\;\;\vdots \\
\hat{\beta}_n &= \hat{\beta}_n - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}(x^i) - y^i \right) x_n^i
\end{aligned}$$

Note that $x_0^i = 1$ for all $i$.
We can now write the gradient descent update function using matrix operations.

$$\hat{\boldsymbol{\beta}} = \hat{\boldsymbol{\beta}} - \alpha \frac{1}{m} \mathbf{X}^T \left( \hat{\mathbf{y}} - \mathbf{y} \right)$$

Substituting the equation for $\hat{\mathbf{y}}$ gives us the following.

$$\hat{\boldsymbol{\beta}} = \hat{\boldsymbol{\beta}} - \alpha \frac{1}{m} \mathbf{X}^T \left( \mathbf{X}\hat{\boldsymbol{\beta}} - \mathbf{y} \right)$$

Again, this is exactly the same as for simple linear regression.

This means that none of our equations have changed; what we need to do is construct the right parameter vector $\hat{\boldsymbol{\beta}}$ and the feature matrix $\mathbf{X}$. Once we have constructed this vector and matrix, all the other equations remain the same.
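The sketch below shows one way the matrix-form update could be implemented with NumPy. The function name, the learning rate alpha, and the number of iterations are illustrative choices, not values from the lesson.

```python
import numpy as np

def gradient_descent(X, y, beta, alpha=0.01, num_iters=1000):
    """Repeatedly apply beta <- beta - (alpha / m) * X^T (X beta - y)."""
    m = X.shape[0]
    for _ in range(num_iters):
        error = X @ beta - y                  # y_hat - y, shape (m, 1)
        beta = beta - (alpha / m) * (X.T @ error)
    return beta
```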
Polynomial Model
There are times when, even with only one feature, we may want a hypothesis that is not a straight line. An example would be when our model is a quadratic equation. We can use multiple linear regression to create a hypothesis beyond a straight line.
Recall that in multiple linear regression, the hypothesis is written as follows.

$$\hat{y} = \hat{\beta}_0 x_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2 + \ldots + \hat{\beta}_n x_n$$

To have a quadratic hypothesis, we can set the following:

$$x_0 = 1, \quad x_1 = x, \quad x_2 = x^2$$

And so, the whole equation can be written as

$$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x + \hat{\beta}_2 x^2$$

In this case, the matrix for the features becomes as follows.

$$\mathbf{X} = \begin{bmatrix} 1 & x^{(1)} & \left( x^{(1)} \right)^2 \\ 1 & x^{(2)} & \left( x^{(2)} \right)^2 \\ \vdots & \vdots & \vdots \\ 1 & x^{(m)} & \left( x^{(m)} \right)^2 \end{bmatrix}$$
In the notation above, we have put the index for the data point inside a bracket to avoid confusion with the power.
We can generalize this to any polynomial degree, where each power is treated as a separate feature in the matrix. This means that if we want to model the data using any other polynomial equation, what we need to do is transform the matrix $\mathbf{X}$ in such a way that each column in $\mathbf{X}$ represents the right degree of the polynomial. Column zero is for $x^0 = 1$, column one is for $x^1$, column two is for $x^2$, and similarly for all the other columns until we have column $n$ for $x^n$.
The parameters $\hat{\boldsymbol{\beta}}$ can then be found using the same gradient descent that minimizes the cost function.
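As an illustration, the sketch below transforms a single feature into a polynomial feature matrix whose column $j$ holds $x^j$. The function name polynomial_features and the example values are assumptions for illustration, not part of the lesson.

```python
import numpy as np

def polynomial_features(x, degree):
    """Return an (m, degree + 1) matrix whose column j is x raised to the power j."""
    x = np.asarray(x, dtype=float).reshape(-1)               # flatten to shape (m,)
    return np.column_stack([x ** j for j in range(degree + 1)])

# Quadratic model: the columns are 1, x and x^2.
X = polynomial_features([1.0, 2.0, 3.0, 4.0], degree=2)
print(X)
```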