Concept¶
Discriminative classifiers, as we saw in the previous chapter, model a target variable as a direct function of one or more predictors. Generative classifiers, the subject of this chapter, instead view the predictors as being generated according to their class—i.e., they see the predictors as a function of the target, rather than the other way around. They then use Bayes’ rule to turn \(P(\bx_n|Y_n = k)\) into \(P(Y_n = k|\bx_n)\).
In generative classifiers, we view both the target and the predictors as random variables. We therefore write the target variable as \(Y_n\); to avoid confusion with a matrix, however, we write the predictor vector in lowercase as \(\bx_n\).
Generative models can be broken down into the following three steps. Suppose we have a classification task with \(K\) unordered classes, represented by \(k = 1, \dots, K\).
Estimate the density of the predictors conditional on the target belonging to each class. I.e., estimate \(p(\bx_n|Y_n = k)\) for \(k = 1, \dots, K\).
Estimate the prior probability that a target belongs to any given class. I.e., estimate \(P(Y_n = k)\) for \(k = 1, \dots, K\). This is also written as \(p(Y_n)\).
Using Bayes’ rule, calculate the posterior probability that the target belongs to any given class. I.e., calculate \(p(Y_n = k|\bx_n) \propto p(\bx_n|Y_n = k)p(Y_n = k)\) for \(k = 1, \dots, K\).
We then classify observation \(n\) as being from the class for which \(P(Y_n = k|\bx_n)\) is greatest. In math, \(\hat{Y}_n = \underset{k}{\text{arg max}}\; P(Y_n = k|\bx_n)\).
Note that we do not need \(p(\bx_n)\), which would be the denominator in the Bayes’ rule formula, since it would be equal across classes.
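To make these three steps concrete, here is a tiny numerical sketch in numpy with made-up prior and likelihood values (not from any fitted model):

```python
import numpy as np

# Hypothetical values for a three-class problem (illustrative only).
prior = np.array([0.5, 0.3, 0.2])          # step 2: P(Y_n = k)
likelihood = np.array([0.05, 0.40, 0.10])  # step 1: p(x_n | Y_n = k), evaluated at one observation

# Step 3: Bayes' rule. The posterior is proportional to likelihood * prior;
# dividing by p(x_n) only rescales the values and does not change their ranking.
unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()

print(posterior)           # approximately [0.152 0.727 0.121]
print(posterior.argmax())  # 1 -- the predicted class (0-indexed)
```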
Note
This chapter is oriented differently from the others. The main methods discussed—Linear Discriminant Analysis, Quadratic Discriminant Analysis, and Naive Bayes—share much of the same structure. Rather than introducing each individually, we describe them together and note (in section 2.2) how they differ.
1. Model Structure¶
A generative classifier models two sources of randomness. First, we assume that out of the \(K\) possible classes, each observation belongs to class \(k\) independently with probability \(\pi_k\). In other words, letting \(\bpi =\begin{bmatrix} \pi_1 & \dots & \pi_K\end{bmatrix}^\top \in \mathbb{R}^{K}\), we assume the prior \(Y_n \stackrel{\text{iid}}{\sim} \text{Cat}(\bpi)\).
See the math note below on the Categorical distribution.
Math Note
A random variable which takes on one of \(K\) discrete and unordered outcomes with probabilities \(\pi_1, \dots, \pi_K\) follows the Categorical distribution with parameter \(\bpi = \begin{bmatrix} \pi_1 & \dots & \pi_K \end{bmatrix}^\top\), written \(\text{Cat}(\bpi)\). For instance, a single die roll is distributed \(\text{Cat}(\bpi)\) for \(\bpi = \begin{bmatrix} 1/6 & \dots & 1/6 \end{bmatrix}^\top\).
The density for \(Y \sim \text{Cat}(\bpi)\) is defined as \(P(Y = k) = \pi_k\) for \(k = 1, \dots, K\). This can be written more compactly as \(p(y) = \prod_{k = 1}^K \pi_k^{I_k},\)
where \(I_k\) is an indicator that equals 1 if \(y = k\) and 0 otherwise.
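To make the compact form concrete, consider the fair die from above and the outcome \(y = 3\):

$$
p(3) = \prod_{k=1}^{6}\pi_k^{I_k} = \left(\tfrac{1}{6}\right)^{0}\left(\tfrac{1}{6}\right)^{0}\left(\tfrac{1}{6}\right)^{1}\left(\tfrac{1}{6}\right)^{0}\left(\tfrac{1}{6}\right)^{0}\left(\tfrac{1}{6}\right)^{0} = \tfrac{1}{6},
$$

since only \(I_3\) equals 1.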
We then assume some distribution for \(\mathbf{x}_n\) conditional on observation \(n\)’s class, \(Y_n\). We typically assume all the \(\bx_n\) come from the same family of distributions, though the parameters depend on their class. For instance, we might have
though we wouldn’t let one conditional distribution be Multivariate Normal and another be Multivariate \(t\). Note that it is possible, however, for the individual variables within the random vector \(\bx_n\) to follow different distributions. For instance, if \(\bx_n = \begin{bmatrix} x_{n1} & x_{n2} \end{bmatrix}^\top\), we might have
The machine learning task is to estimate the parameters of these models—\(\bpi\) for \(Y_n\) and whatever parameters might index the possible distributions of \(\bx_n|Y_n\), in this case \(\bmu_k\) and \(\bSigma_k\) for \(k = 1, \dots, K\). Once that’s done, we can estimate \(p(Y_n = k)\) and \(p(\bx_n|Y_n = k)\) for each class and, through Bayes’ rule, choose the class that maximizes \(p(Y_n = k|\bx_n)\).
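To make the two sources of randomness concrete, here is a minimal simulation sketch of this generative story, assuming (purely for illustration) two classes with Multivariate Normal conditional distributions and made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustrative only): K = 2 classes, D = 2 predictors.
pi = np.array([0.6, 0.4])                                # class priors, sum to 1
mu = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]        # one mean vector per class
Sigma = [np.eye(2), np.array([[1.0, 0.5], [0.5, 1.0]])]  # one covariance matrix per class

# Each observation: first draw its class from Cat(pi), then draw x_n conditional on that class.
N = 5
y = rng.choice(len(pi), size=N, p=pi)
X = np.stack([rng.multivariate_normal(mu[k], Sigma[k]) for k in y])
print(y, X.shape)  # class labels and a (5, 2) array of predictors
```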
2. Parameter Estimation¶
2.1 Class Priors¶
Let’s start by deriving the estimates for \(\bpi\), the class priors. Let \(I_{nk}\) be an indicator which equals 1 if \(Y_n = k\) and 0 otherwise. Then the joint likelihood and log-likelihood are given by \(L(\bpi) = \prod_{n = 1}^N \prod_{k = 1}^K \pi_k^{I_{nk}}\) and \(\log L(\bpi) = \sumN\sumK I_{nk}\log\pi_k = \sum_{k = 1}^K N_k \log \pi_k,\)
where \(N_k = \sumN I_{nk}\) gives the number of observations in class \(k\) for \(k = 1, \dots, K\).
Math Note
The Lagrangian function provides a method for optimizing a function \(f(\bx)\) subject to the constraint \(g(\bx) = 0\). The Lagrangian is given by
\(\lambda\) is known as the Lagrange multiplier. The critical points of \(f(\bx)\) (subject to the equality constraint) are found by setting the gradients of \(\mathcal{L}(\lambda, \bx)\) with respect to \(\lambda\) and \(\bx\) equal to 0.
Noting the constraint \(\sum_{k = 1}^K \pi_k = 1\) (or equivalently \(\sum_{k = 1}^K\pi_k - 1 = 0\)), we can maximize the log-likelihood with the following Lagrangian.
This system of equations gives an intuitive solution, \(\hat{\pi}_k = \frac{N_k}{N}\),
which says that our estimate of \(p(Y_n = k)\) is just the sample fraction of observations from class \(k\).
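To spell out the algebra (not necessarily the exact steps intended above, and using the convention \(\mathcal{L} = f + \lambda g\); the opposite sign convention leads to the same estimate), the Lagrangian and its stationarity conditions are

$$
\mathcal{L}(\lambda, \bpi) = \sum_{k = 1}^K N_k \log \pi_k + \lambda\left(\sum_{k = 1}^K \pi_k - 1\right),
\qquad
\frac{\partial \mathcal{L}}{\partial \pi_k} = \frac{N_k}{\pi_k} + \lambda = 0
\;\;\Rightarrow\;\;
\pi_k = -\frac{N_k}{\lambda}.
$$

Substituting into the constraint \(\sum_{k = 1}^K \pi_k = 1\) gives \(\lambda = -\sum_{k = 1}^K N_k = -N\), and therefore \(\hat{\pi}_k = N_k/N\).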
2.2 Data Likelihood¶
The next step is to model the conditional distribution of \(\bx_n\) given \(Y_n\) so that we can estimate this distribution’s parameters. This of course depends on the family of distributions we choose to model \(\bx_n\). Three common approaches are detailed below.
2.2.1 Linear Discriminant Analysis (LDA)¶
In LDA, we assume \(\bx_n | (Y_n = k) \sim \mathcal{N}(\bmu_k, \bSigma)\) for \(k = 1, \dots, K\). Note that each class has the same covariance matrix but a unique mean vector.
Let’s derive the parameters in this case. First, let’s find the likelihood and log-likelihood. Note that we can write the joint likelihood as follows,
since \(\left(p(\bx_{n}|\bmu_k, \bSigma)\right)^{I_{nk}}\) equals 1 if \(Y_n \neq k\) and \(p(\bx_n|\bmu_k, \bSigma)\) otherwise. Then we plug in the Multivariate Normal PDF (dropping multiplicative constants) and take the log, as follows.
Math Note
The following matrix derivatives will be of use for maximizing the above log-likelihood.
For any invertible matrix \(\mathbf{W}\),
where \(\mathbf{W}^{-\top} = (\mathbf{W}^{-1})^\top\). It follows that
We also have
For any symmetric matrix \(\mathbf{A}\),
These results come from the Matrix Cookbook.
Let’s start by estimating \(\bSigma\). First, simplify the log-likelihood to make the gradient with respect to \(\bSigma\) more apparent.
Then, using equations (2) and (3) from the Math Note, we get
Finally, we set this equal to 0 and multiply by \(\bSigma^{-1}\) on the left to solve for \(\hat{\bSigma}\), which gives \(\hat{\bSigma} = \frac{1}{N}\mathbf{S}_T,\)
where \(\mathbf{S}_T = \sumN\sumK I_{nk}(\bx_n - \bmu_k)(\bx_n - \bmu_k)^\top\).
Now, to estimate the \(\bmu_k\), let’s look at each class individually. Let \(N_k\) be the number of observations in class \(k\) and \(C_k\) be the set of observations in class \(k\). Looking only at terms involving \(\bmu_k\), we get
Using equation (4) from the Math Note, we calculate the gradient as
Setting this gradient equal to 0 and solving, we obtain our \(\bmu_k\) estimate, \(\hat{\bmu}_k = \frac{1}{N_k}\sum_{n \in C_k} \bx_n = \bar{\bx}_k,\)
where \(\bar{\bx}_k\) is the element-wise sample mean of all \(\bx_n\) in class \(k\).
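These estimates translate directly into code. Below is a minimal numpy sketch (the lda_mle helper and its interface are ours for illustration, not from a library): it computes the class priors from section 2.1, the class means \(\bar{\bx}_k\), and the pooled covariance \(\mathbf{S}_T/N\).

```python
import numpy as np

def lda_mle(X, y, K):
    """Maximum likelihood estimates for LDA.

    X: (N, D) array of predictors; y: length-N integer array of class labels in {0, ..., K-1}.
    Returns class priors, class means, and the single shared covariance matrix.
    """
    N, D = X.shape
    pi_hat = np.bincount(y, minlength=K) / N                       # pi_k = N_k / N
    mu_hat = np.stack([X[y == k].mean(axis=0) for k in range(K)])  # mu_k = class sample mean
    resid = X - mu_hat[y]                                          # x_n - mu_{y_n} for every n
    Sigma_hat = resid.T @ resid / N                                # S_T / N, shared across classes
    return pi_hat, mu_hat, Sigma_hat
```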
2.2.2 Quadratic Discriminant Analysis (QDA)¶
QDA looks very similar to LDA but assumes each class has its own covariance matrix: \(\bx_n | (Y_n = k) \sim \mathcal{N}(\bmu_k, \bSigma_k)\) for \(k = 1, \dots, K\). The log-likelihood is the same as in LDA except we replace \(\bSigma\) with \(\bSigma_k\):
Again, let’s look at the parameters for each class individually. The log-likelihood for class \(k\) is given by
We could take the gradient of this log-likelihood with respect to \(\bmu_k\) and set it equal to 0 to solve for \(\hat{\bmu}_k\). However, we can also note that our \(\hat{\bmu}_k\) estimate from the LDA approach still holds, since that expression didn’t depend on the covariance term (which is the only thing we’ve changed). Therefore, we again get \(\hat{\bmu}_k = \bar{\bx}_k\).
To estimate the \(\bSigma_k\), we take the gradient of the log-likelihood for class \(k\).
Then we set this equal to 0 to solve for \(\hat{\bSigma}_k\), which gives \(\hat{\bSigma}_k = \frac{1}{N_k}\mathbf{S}_k,\)
where \(\mathbf{S}_k = \sum_{n \in C_k} (\bx_n - \bmu_k)(\bx_n - \bmu_k)^\top\).
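In code, the only change from the LDA sketch above is that the covariance is computed class by class rather than pooled (again a hypothetical helper, not a library function):

```python
import numpy as np

def qda_mle(X, y, K):
    """Maximum likelihood estimates for QDA: priors, class means, and one covariance per class."""
    N, D = X.shape
    pi_hat = np.bincount(y, minlength=K) / N
    mu_hat = np.stack([X[y == k].mean(axis=0) for k in range(K)])
    Sigma_hat = []
    for k in range(K):
        resid_k = X[y == k] - mu_hat[k]                       # deviations within class k
        Sigma_hat.append(resid_k.T @ resid_k / len(resid_k))  # S_k / N_k
    return pi_hat, mu_hat, np.stack(Sigma_hat)
```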
2.2.3 Naive Bayes¶
Naive Bayes assumes the random variables within \(\bx_n\) are independent conditional on the class of observation \(n\). I.e., if \(\bx_n \in \mathbb{R}^D\), Naive Bayes assumes \(p(\bx_n|Y_n = k) = \prod_{d = 1}^D p(x_{nd}|Y_n = k)\) for each class \(k\).
This makes estimating \(p(\bx_n|Y_n)\) very easy—to estimate the parameters of \(p(x_{nd}|Y_n)\), we can ignore all the variables in \(\bx_{n}\) other than \(x_{nd}\)!
As an example, assume \(\bx_n \in \mathbb{R}^2\) and we use the following model (where for simplicity \(n\) and \(\sigma^2_k\) are known).
Let \(\btheta_k = (\mu_k, \sigma_k^2, p_k)\) contain all the parameters for class \(k\). The joint likelihood function would become
where the two are equal because of the Naive Bayes conditional independence assumption. This allows us to easily find maximum likelihood estimates. The rest of this sub-section demonstrates how those estimates would be found, though it is nothing beyond ordinary maximum likelihood estimation.
The log-likelihood is given by
As before, we estimate the parameters in each class by looking only at the terms in that class. Let’s look at the log-likelihood for class \(k\):
Taking the derivative with respect to \(p_k\), we’re left with
which will give us \(\hat{p}_k = \frac{1}{N_k}\sum_{n \in C_k} x_{n2}\) as usual. The same process would again give typical results for \(\mu_k\) and \(\sigma^2_k\).
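As a sketch of how simple this estimation becomes, the snippet below computes the per-class estimates under one reading of the example above (first feature Normal, second feature Bernoulli); the helper name and the exact model are assumptions for illustration.

```python
import numpy as np

def naive_bayes_mle(X, y, K):
    """Per-class MLEs assuming (hypothetically) x_{n1}|Y_n=k ~ Normal(mu_k, sigma2_k)
    and x_{n2}|Y_n=k ~ Bernoulli(p_k). Conditional independence means each feature's
    parameters are estimated from that feature's column alone.
    """
    params = []
    for k in range(K):
        Xk = X[y == k]
        mu_k = Xk[:, 0].mean()      # sample mean of the first feature in class k
        sigma2_k = Xk[:, 0].var()   # MLE variance (divides by N_k, not N_k - 1)
        p_k = Xk[:, 1].mean()       # fraction of ones in the second feature in class k
        params.append({"mu": mu_k, "sigma2": sigma2_k, "p": p_k})
    return params
```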
3. Making Classifications¶
Regardless of our modeling choices for \(p(\bx_n|Y_n)\), classifying new observations is easy. Consider a test observation \(\bx_0\). For \(k = 1, \dots, K\), we use Bayes’ rule to calculate \(\hat{P}(Y_0 = k|\bx_0) \propto \hat{\pi}_k\,\hat{p}(\bx_0|Y_0 = k),\)
where \(\hat{p}\) gives the estimated density of \(\bx_0\) conditional on \(Y_0\). We then predict \(Y_0 = k\) for whichever value \(k\) maximizes the above expression.
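As a minimal sketch of this final step for the LDA case (reusing the hypothetical lda_mle helper sketched in section 2.2.1), we evaluate each estimated class density with scipy and take the class with the largest prior-times-density product:

```python
import numpy as np
from scipy.stats import multivariate_normal

def predict(x0, pi_hat, mu_hat, Sigma_hat):
    """Classify x0 by maximizing the unnormalized posterior pi_k * p(x0 | Y_0 = k)."""
    scores = np.array([
        pi_hat[k] * multivariate_normal.pdf(x0, mean=mu_hat[k], cov=Sigma_hat)
        for k in range(len(pi_hat))
    ])
    return int(scores.argmax())  # the denominator p(x0) is constant across classes, so we skip it
```

For QDA, we would pass the class-specific covariance Sigma_hat[k] instead of the shared one; for Naive Bayes, the estimated density would be a product of the per-feature densities.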