Approach 2: Maximizing Likelihood
1. Simple Linear Regression
Model Structure
Using the maximum likelihood approach, we set up the regression model probabilistically. Since we are treating the target as a random variable, we will capitalize it. As before, we assume

$$Y_n = \beta_0 + \beta_1 x_n + \epsilon_n,$$
only now we give \(\epsilon_n\) a distribution (we don’t do the same for \(x_n\) since its value is known). Typically, we assume the \(\epsilon_n\) are independently Normally distributed with mean 0 and an unknown variance \(\sigma^2\). That is,

$$\epsilon_n \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2).$$
The assumption that the variance is identical across observations is called homoskedasticity. This is required for the following derivations, though there are heteroskedasticity-robust estimates that do not make this assumption.
Since \(\beta_0\) and \(\beta_1\) are fixed parameters and \(x_n\) is known, the only source of randomness in \(Y_n\) is \(\epsilon_n\). Therefore,

$$Y_n \sim \mathcal{N}(\beta_0 + \beta_1 x_n, \sigma^2),$$
since a Normal random variable plus a constant is another Normal random variable with a shifted mean.
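To make the model concrete, here is a minimal simulation sketch (the parameter values below are made up for illustration, not taken from the text): each \(Y_n\) is drawn from a Normal distribution centered on the line \(\beta_0 + \beta_1 x_n\).

```python
import numpy as np

# Hypothetical parameter values, chosen only for illustration
beta_0, beta_1, sigma = 1.0, 2.0, 0.5

rng = np.random.default_rng(0)
N = 1_000
x = rng.uniform(0, 10, size=N)          # the x_n are known, not random
epsilon = rng.normal(0, sigma, size=N)  # eps_n ~ N(0, sigma^2), i.i.d.
y = beta_0 + beta_1 * x + epsilon       # Y_n ~ N(beta_0 + beta_1 * x_n, sigma^2)

# The residuals around the true line recover the Normal noise
residuals = y - (beta_0 + beta_1 * x)
print(residuals.mean(), residuals.std())  # approximately 0 and sigma
```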
Parameter Estimation
The task of fitting the linear regression model then consists of estimating the parameters with maximum likelihood. The joint likelihood and log-likelihood across observations are as follows.

$$L(\beta_0, \beta_1; Y_1, \dots, Y_N) = \prod_{n=1}^N \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{\left(Y_n - (\beta_0 + \beta_1 x_n)\right)^2}{2\sigma^2} \right)$$

$$\log L(\beta_0, \beta_1; Y_1, \dots, Y_N) = -\frac{N}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2} \sum_{n=1}^N \left(Y_n - (\beta_0 + \beta_1 x_n)\right)^2$$
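As a sketch of how this quantity might be computed (the function below and its use of scipy.stats are my own illustration, not from the text), the log-likelihood is just a sum of Normal log-densities evaluated at the observed targets:

```python
import numpy as np
from scipy import stats

def log_likelihood(beta_0, beta_1, sigma, x, y):
    """Sum of Normal log-densities: log L(beta_0, beta_1; y_1, ..., y_N)."""
    mu = beta_0 + beta_1 * x  # each Y_n has mean beta_0 + beta_1 * x_n
    return stats.norm.logpdf(y, loc=mu, scale=sigma).sum()
```

Evaluated on data simulated as above, this function is larger near the true parameter values than far from them.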
Our \(\hat{\beta}_0\) and \(\hat{\beta}_1\) estimates are the values that maximize the log-likelihood given above. Since the first term does not depend on \(\beta_0\) or \(\beta_1\) and the second term is \(-\frac{1}{2\sigma^2}\) times a sum of squared errors, this is equivalent to finding the \(\hat{\beta}_0\) and \(\hat{\beta}_1\) that minimize the RSS, our loss function from the previous section:

$$\text{RSS} = \sum_{n=1}^N \left(y_n - (\hat{\beta}_0 + \hat{\beta}_1 x_n)\right)^2.$$
In other words, we are solving the same optimization problem as in the last section. Since it’s the same problem, it has the same solution! (This can, of course, also be checked by differentiating the log-likelihood and solving for \(\hat{\beta}_0\) and \(\hat{\beta}_1\).) Therefore, as with the loss minimization approach, the parameter estimates from the likelihood maximization approach are

$$\hat{\beta}_1 = \frac{\sum_{n=1}^N (x_n - \bar{x})(y_n - \bar{y})}{\sum_{n=1}^N (x_n - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}.$$
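We can check this equivalence numerically. The sketch below (simulated data and starting values are arbitrary) maximizes the log-likelihood with scipy.optimize and compares the result to the closed-form estimates. Note that \(\sigma\) is held fixed, since it rescales the log-likelihood without changing which \(\beta_0, \beta_1\) maximize it.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, size=200)  # simulated data

# Closed-form (loss minimization) estimates
beta_1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta_0_hat = y.mean() - beta_1_hat * x.mean()

# Numerically maximize the log-likelihood (i.e., minimize its negative);
# sigma is fixed at an arbitrary positive value
def neg_log_likelihood(beta):
    return -stats.norm.logpdf(y, loc=beta[0] + beta[1] * x, scale=1.0).sum()

result = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0])
print(beta_0_hat, beta_1_hat)  # closed-form estimates
print(result.x)                # matches, up to numerical precision
```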
2. Multiple Regression
Still assuming Normally-distributed errors but adding more than one predictor, we have

$$Y_n = \boldsymbol{\beta}^\top \mathbf{x}_n + \epsilon_n, \qquad \epsilon_n \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2),$$

so that \(Y_n \sim \mathcal{N}(\boldsymbol{\beta}^\top \mathbf{x}_n, \sigma^2)\), where \(\mathbf{x}_n\) is the vector of predictors for the \(n^\text{th}\) observation (with a leading 1 for the intercept).
We can then solve the same maximum likelihood problem. Calculating the log-likelihood as we did above for simple linear regression, we have

$$\log L(\boldsymbol{\beta}; Y_1, \dots, Y_N) = -\frac{N}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2} \sum_{n=1}^N \left(Y_n - \boldsymbol{\beta}^\top \mathbf{x}_n\right)^2.$$
Again, maximizing this quantity is the same as minimizing the RSS, as we did under the loss minimization approach. We therefore obtain the same solution:

$$\hat{\boldsymbol{\beta}} = \left(\mathbf{X}^\top \mathbf{X}\right)^{-1} \mathbf{X}^\top \mathbf{y},$$

where \(\mathbf{X}\) is the matrix whose \(n^\text{th}\) row is \(\mathbf{x}_n^\top\).
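A minimal numpy sketch of this closed-form solution (the simulated data and coefficient values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 500, 2

# Simulated design matrix with a leading column of 1s for the intercept
X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, D))])
beta_true = np.array([0.5, 3.0, -1.0])  # hypothetical coefficients
y = X @ beta_true + rng.normal(0, 0.5, size=N)

# Maximum likelihood / least squares solution: (X^T X)^{-1} X^T y.
# np.linalg.solve is preferred over explicitly inverting X^T X.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)  # close to beta_true
```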