The Perceptron Algorithm¶
The perceptron algorithm is a simple classification method that plays an important historical role in the development of the much more flexible neural network. The perceptron is a linear binary classifier—linear because it separates the input variable space with a linear boundary (a hyperplane), and binary because it places observations into one of two classes.
Model Structure¶
It is most convenient to represent our binary target variable as \(y_n \in \{-1, +1\}\). For example, an email might be marked as \(+1\) if it is spam and \(-1\) otherwise. As usual, suppose we have one or more predictors per observation. We obtain our feature vector \(\bx_n\) by prepending a leading 1 (for the intercept) to this collection of predictors.
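As a small illustration, a minimal sketch of this construction is below; the array `X_raw` is a made-up stand-in for the predictors, not data from the text.

```python
import numpy as np

# Hypothetical predictors for N = 3 observations with D = 2 predictors each.
X_raw = np.array([[ 2.0, -1.0],
                  [ 0.5,  3.0],
                  [-1.5,  0.7]])

# Prepend a leading 1 to each row; the first coefficient then acts as an intercept.
X = np.hstack([np.ones((X_raw.shape[0], 1)), X_raw])
```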
Consider the following function, which is an example of an activation function:

\[
f(x) = \begin{cases} +1, & x \geq 0 \\ -1, & x < 0. \end{cases}
\]
The perceptron applies this activation function to a linear combination of \(\bx_n\) in order to return a fitted value. That is,

\[
\hat{y}_n = f(\bbetahat^\top \bx_n).
\]
In words, the perceptron predicts \(+1\) if \(\bbetahat^\top\bx_n \geq 0\) and \(-1\) otherwise. Simple enough!
Note that an observation is correctly classified if \(y_n\hat{y}_n = 1\) and misclassified if \(y_n \hat{y}_n = -1\). Then let \(\mathcal{M}\) be the set of misclassified observations, i.e. all \(n \in \{1, \dots, N\}\) for which \(y_n\hat{y}_n = -1\).
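To make this concrete, here is a minimal sketch of the prediction rule and of identifying the misclassified set \(\mathcal{M}\). The names `predict`, `beta_hat`, `X`, and `y`, and the values they hold, are hypothetical stand-ins, not part of the text above.

```python
import numpy as np

def predict(beta_hat, X):
    """Apply the sign activation to beta_hat^T x_n for each row of X."""
    scores = X @ beta_hat                # beta_hat^T x_n for every observation
    return np.where(scores >= 0, 1, -1)  # +1 if the score is nonnegative, -1 otherwise

# Hypothetical feature matrix (leading 1 already prepended), labels, and coefficients.
X = np.array([[1.0,  2.0, -1.0],
              [1.0,  0.5,  3.0],
              [1.0, -1.5,  0.7]])
y = np.array([1, -1, 1])
beta_hat = np.array([0.1, -0.2, 0.3])

y_hat = predict(beta_hat, X)
misclassified = np.where(y * y_hat == -1)[0]  # indices belonging to the set M
```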
Parameter Estimation¶
As usual, we calculate \(\bbetahat\) as the set of coefficients that minimizes some loss function. Specifically, the perceptron attempts to minimize the perceptron criterion, defined as

\[
\mathcal{L}(\bbetahat) = -\sum_{n \in \mathcal{M}} y_n \bbetahat^\top \bx_n.
\]
The perceptron criterion does not penalize correctly classified observations but penalizes misclassified observations based on the magnitude of \(\bbetahat^\top \bx_n\)—that is, how wrong they were.
The gradient of the perceptron criterion is

\[
\frac{\partial \mathcal{L}(\bbetahat)}{\partial \bbetahat} = -\sum_{n \in \mathcal{M}} y_n \bx_n.
\]
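As a sketch, the criterion and its gradient could be computed as below, reusing the hypothetical NumPy setup from earlier; the function names are assumptions made here for illustration. Both functions sum only over the misclassified observations by masking.

```python
import numpy as np

def perceptron_criterion(beta_hat, X, y):
    """Loss: -sum of y_n * beta_hat^T x_n over the misclassified observations."""
    scores = X @ beta_hat
    y_hat = np.where(scores >= 0, 1, -1)
    M = y * y_hat == -1                  # boolean mask for the misclassified set
    return -np.sum(y[M] * scores[M])

def perceptron_gradient(beta_hat, X, y):
    """Gradient: -sum of y_n * x_n over the misclassified observations."""
    scores = X @ beta_hat
    y_hat = np.where(scores >= 0, 1, -1)
    M = y * y_hat == -1
    return -(X[M].T @ y[M])
```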
We can’t set this gradient equal to 0 and solve for \(\bbetahat\) in closed form (the set \(\mathcal{M}\) itself depends on \(\bbetahat\)), so we have to estimate \(\bbetahat\) through gradient descent. Specifically, we could use the following procedure, sketched in code below, where \(\eta\) is the learning rate.
Randomly instantiate \(\bbetahat\)
Until convergence or some stopping rule is reached:
For \(n \in \{1, \dots, N\}\):
\(\hat{y}_n = f(\bbetahat^\top \bx_n)\)
If \(y_n\hat{y}_n = -1\), update \(\bbetahat \leftarrow \bbetahat + \eta\cdot y_n\bx_n\).
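A minimal end-to-end sketch of this procedure follows. The function name `fit_perceptron`, the default learning rate, and the `max_epochs` cap (which serves as a simple stopping rule) are assumptions made here, not part of the text above.

```python
import numpy as np

def fit_perceptron(X, y, eta=1.0, max_epochs=100, seed=0):
    """Estimate beta_hat with the per-observation update from the procedure above.

    X is assumed to already contain the leading column of 1s, and y takes values in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    beta_hat = rng.normal(size=X.shape[1])       # randomly instantiate beta_hat

    for _ in range(max_epochs):                  # stopping rule: cap the number of passes
        n_mistakes = 0
        for x_n, y_n in zip(X, y):
            y_hat_n = 1 if x_n @ beta_hat >= 0 else -1
            if y_n * y_hat_n == -1:              # misclassified: take a step in the direction y_n * x_n
                beta_hat = beta_hat + eta * y_n * x_n
                n_mistakes += 1
        if n_mistakes == 0:                      # every observation classified correctly
            break
    return beta_hat
```

For example, `beta_hat = fit_perceptron(X, y, eta=0.1)` would return estimated coefficients for the hypothetical `X` and `y` defined earlier.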
It can be shown that convergence is guaranteed in the linearly separable case but not otherwise. If the classes are not linearly separable, some stopping rule (such as a maximum number of passes through the data) will have to be determined.