GLMs¶
Ordinary linear regression comes with several assumptions that can be relaxed with a more flexible model class: generalized linear models (GLMs). Specifically, OLS assumes
The expected value of the target variable is a linear function of the input variables
The errors are Normally distributed
The variance of the errors is constant
When these assumptions are violated, GLMs might be the answer.
GLM Structure¶
A GLM consists of a link function and a random component. The random component identifies the distribution of the target variable \(y_n\) conditional on the input variables \(\bx_n\). For instance, we might model \(Y_n\) as a Poisson random variable where the rate parameter \(\lambda_n\) depends on \(\bx_n\).
The link function specifies how \(\bx_n\) relates to the expected value of the target variable, \(\mu_n = E(Y_n)\). Let \(\eta\) be a linear function of the input variables, i.e. \(\eta_n = \bbeta^\top \bx_n\) for some coefficients \(\bbeta\). We then choose a (typically nonlinear) link function to relate \(\mu\) to \(\eta\). For link function \(g\) we have

$$
\eta_n = g(\mu_n).
$$

In a GLM, we calculate \(\eta\) before calculating \(\mu\), so we often work with the inverse of \(g\):

$$
\mu_n = g^{-1}(\eta_n).
$$
Note
Note that because \(\eta_n\) is a function of the data, it will vary for each observation (though the \(\beta\)s will not).
In total then, a GLM assumes

$$
Y_n \mid \bx_n \sim F(\mu_n), \qquad \mu_n = g^{-1}(\eta_n) = g^{-1}(\bbeta^\top \bx_n),
$$

where \(F\) is some distribution with mean parameter \(\mu_n\).
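To make this structure concrete, here is a minimal Python sketch (the names `glm_mean` and `inv_link`, and the numbers below, are illustrative assumptions rather than this book's code) that computes \(\mu_n\) by forming the linear predictor and then applying the inverse link.

```python
import numpy as np

def glm_mean(x, beta, inv_link):
    """Compute mu_n = g^{-1}(beta^T x_n) for a single observation."""
    eta = beta @ x        # linear predictor, eta_n = beta^T x_n
    return inv_link(eta)  # mean parameter, mu_n = g^{-1}(eta_n)

# Example: a Poisson GLM uses the exponential as the inverse link.
x = np.array([1.0, 2.5, -0.3])    # one observation (first entry is an intercept term)
beta = np.array([0.1, 0.4, 0.8])  # hypothetical coefficients
mu = glm_mean(x, beta, np.exp)    # expected count for this observation
```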
Fitting a GLM¶
“Fitting” a GLM, like fitting ordinary linear regression, really consists of estimating the coefficients, \(\bbeta\). Once we know \(\bbeta\), we have \(\eta\). Once we have a link function, \(\eta\) gives us \(\mu\) through \(g^{-1}\). A GLM can be fit in these four steps:
Specify the distribution of \(Y_n\), indexed by its mean parameter \(\mu_n\).
Specify the link function \(\eta_n = g(\mu_n)\).
Identify a loss function. This is typically the negative log-likelihood.
Find the \(\bbetahat\) that minimizes that loss function.
In general, we can write the log-likelihood across our observations for a GLM as follows:

$$
\log L(\bbeta) = \sum_{n=1}^N \log f\left(y_n; \mu_n\right), \qquad \mu_n = g^{-1}(\bbeta^\top \bx_n),
$$

where \(f\) is the density or mass function of \(F\).
This shows how the log-likelihood depends on \(\bbeta\), the parameters we want to estimate. To fit the GLM, we find the \(\bbetahat\) that maximizes this log-likelihood (equivalently, minimizes the negative log-likelihood).
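As a rough numerical sketch of these four steps (not the construction section's implementation), one can hand the negative log-likelihood to a generic optimizer. The helper below assumes the caller supplies a distribution-specific loss, such as the Poisson one derived in the next section; `fit_glm` is a name chosen here for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def fit_glm(X, y, neg_log_lik):
    """Estimate GLM coefficients by minimizing a negative log-likelihood.

    X: (N, D) design matrix, y: (N,) targets,
    neg_log_lik: callable(beta, X, y) -> scalar loss (steps 1-3).
    """
    beta0 = np.zeros(X.shape[1])                        # initial coefficients
    result = minimize(neg_log_lik, beta0, args=(X, y))  # step 4: minimize the loss
    return result.x                                     # beta-hat
```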
Example: Poisson Regression¶
Step 1
Suppose we choose to model \(Y_n\) conditional on \(\bx_n\) as a Poisson random variable with rate parameter \(\lambda_n\):

$$
Y_n \mid \bx_n \sim \text{Pois}(\lambda_n).
$$
Since the expected value of a Poisson random variable is its rate parameter, \(E(Y_n) = \mu_n = \lambda_n\).
Step 2
To determine the link function, let’s think in terms of its inverse, \(\lambda_n = g^{-1}(\eta_n)\). We know that \(\lambda_n\) must be non-negative and \(\eta_n\) could be anywhere in the reals since it is a linear function of \(\bx_n\). One function that works is

$$
\lambda_n = g^{-1}(\eta_n) = \exp(\eta_n),
$$

meaning

$$
\eta_n = g(\lambda_n) = \log(\lambda_n).
$$
This is the “canonical link” function for Poisson regression. More on that here.
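To see why the exponential is convenient here, note that it maps any real-valued linear predictor to a strictly positive rate. A quick illustrative check in Python (the numbers are arbitrary):

```python
import numpy as np

beta = np.array([-1.0, 0.2])
x = np.array([1.0, 3.0])  # first entry is an intercept term

eta = beta @ x            # linear predictor: -0.4, can be any real number
lam = np.exp(eta)         # rate parameter: about 0.67, always positive
assert lam > 0
```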
Step 3
Let’s derive the negative log-likelihood for the Poisson. Let \(\blambda = \begin{bmatrix} \lambda_1, \dots, \lambda_N\end{bmatrix}^\top\).
Math Note
The PMF for \(Z \sim \text{Pois}(\lambda)\) is

$$
p(z; \lambda) = \frac{\lambda^z e^{-\lambda}}{z!}, \qquad z = 0, 1, 2, \dots
$$
Now let’s get our loss function, the negative log-likelihood. Recall that this should be in terms of \(\bbeta\) rather than \(\blambda\) since \(\bbeta\) is what we control. Using the PMF above and \(\lambda_n = \exp(\bbeta^\top \bx_n)\),

$$
\begin{aligned}
\mathcal{L}(\bbeta) &= -\log \prod_{n=1}^N \frac{\lambda_n^{y_n} e^{-\lambda_n}}{y_n!} \\
&= \sum_{n=1}^N \left( \lambda_n - y_n \log \lambda_n + \log(y_n!) \right) \\
&= \sum_{n=1}^N \left( \exp(\bbeta^\top \bx_n) - y_n \bbeta^\top \bx_n \right) + \text{const},
\end{aligned}
$$

where the constant does not depend on \(\bbeta\).
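A direct translation of this loss into Python might look as follows (a minimal sketch; the \(\log(y_n!)\) constant is dropped since it does not affect the minimizer, and `poisson_nll` is a name chosen here for illustration):

```python
import numpy as np

def poisson_nll(beta, X, y):
    """Poisson negative log-likelihood, up to an additive constant.

    beta: (D,) coefficients, X: (N, D) design matrix, y: (N,) observed counts.
    """
    eta = X @ beta                        # linear predictors, eta_n = beta^T x_n
    return np.sum(np.exp(eta) - y * eta)  # sum_n (lambda_n - y_n * eta_n)
```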
Step 4
We obtain \(\bbetahat\) by minimizing this loss function. Taking the derivative of the loss function with respect to \(\bbeta\) gives

$$
\nabla_{\bbeta} \mathcal{L}(\bbeta) = \sum_{n=1}^N \left( \exp(\bbeta^\top \bx_n) - y_n \right) \bx_n.
$$
Ideally, we would solve for \(\bbetahat\) by setting this gradient equal to 0. Unfortunately, there is no closed-form solution. Instead, we can approximate \(\bbetahat\) through gradient descent. This is done in the construction section.
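As a rough sketch of that approach (this is not the construction section's code; the learning rate, iteration count, and function name are arbitrary choices for illustration), gradient descent repeatedly steps the estimate in the direction of the negative gradient derived above:

```python
import numpy as np

def fit_poisson_gd(X, y, lr=0.001, n_iters=10_000):
    """Approximate the Poisson regression coefficients by gradient descent."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        lam = np.exp(X @ beta)                         # current rates, lambda_n
        grad = np.sum((lam - y)[:, None] * X, axis=0)  # sum_n (lambda_n - y_n) x_n
        beta -= lr * grad                              # step along the negative gradient
    return beta
```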
Since gradient descent calculates this gradient a large number of times, it’s important to calculate it efficiently. Let’s see if we can clean this expression up. First recall that \(\hat{y}_n = \hat{\lambda}_n = \exp(\bbetahat^\top \bx_n)\).
The gradient of the loss function can then be written as

$$
\nabla_{\bbetahat} \mathcal{L}(\bbetahat) = \sum_{n=1}^N \left( \hat{y}_n - y_n \right) \bx_n.
$$

Further, this can be written in matrix form as

$$
\nabla_{\bbetahat} \mathcal{L}(\bbetahat) = \bX^\top \left( \hat{\by} - \by \right),
$$

where \(\hat{\by}\) is the vector of fitted values. Finally, note that this vector can be calculated as

$$
\hat{\by} = \exp(\bX \bbetahat),
$$

where the exponential function is applied element-wise to each observation.
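In code, the matrix form gives a compact and efficient gradient computation. The sketch below (with hypothetical data shapes and randomly generated values) checks that it matches the observation-by-observation sum:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))     # design matrix (N x D)
y = rng.poisson(1.0, size=100)    # observed counts
beta_hat = rng.normal(size=3)     # some current coefficient estimate

y_hat = np.exp(X @ beta_hat)                         # fitted values, element-wise exp
grad_matrix = X.T @ (y_hat - y)                      # matrix form of the gradient
grad_sum = np.sum((y_hat - y)[:, None] * X, axis=0)  # per-observation sum

assert np.allclose(grad_matrix, grad_sum)
```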
Many other GLMs exist. One important example is logistic regression, the topic of the next chapter.