Concept

Neural networks are a highly powerful and versatile class of models that have become quite a hot topic in machine learning. While their ability to often outperform other popular model classes has earned them a reputation as near-magical black-box algorithms, neural networks are not terribly complex or mysterious. Rather, by optimizing a highly parametric and nonlinear structure, they are flexible enough to model subtle relationships that other models may struggle to detect.

Note

Neural networks come in a variety of forms intended to accomplish a variety of tasks. Recurrent neural networks, for instance, are designed to model time series data, and convolutional neural networks are designed to model image data. In this chapter, we only cover feed-forward neural networks (FFNNs). FFNNs can be used for regression or classification tasks and serve as a natural introduction to other forms of neural networks.

This section is organized as follows.

  1. Model Structure

    1. An Overview

    2. Communication between Layers

    3. Activation Functions

  2. Optimization

    1. Back Propagation

    2. Calculating Gradients

    3. Combining Results with the Chain Rule

  3. Combining Observations

    1. A New Representation

    2. Gradients

1. Model Structure

Throughout this chapter, suppose we have training data $\{\mathbf{x}_n, \mathbf{y}_n\}_{n=1}^N$ with $\mathbf{x}_n \in \mathbb{R}^{D_x}$—which does not include an intercept term—and $\mathbf{y}_n \in \mathbb{R}^{D_y}$ for $n = 1, 2, \dots, N$. In other words, for each observation we have $D_x$ predictors and $D_y$ target variables. In this chapter, these will primarily be referred to as the input and output variables, respectively. Note that unlike in previous chapters, we might now have a vector of target variables rather than a single value. If there is only one target variable per observation (i.e. $D_y = 1$), we will write it as $y_n$ rather than $\mathbf{y}_n$.

1.1 An Overview

The diagram below is a helpful representation of a basic neural network. Neural networks operate in layers. The network starts off with an input layer, consisting of the vector of predictors for a single observation. This is shown by $x_0, \dots, x_3$ in the diagram (indicating that $D_x = 4$ here). The network then passes through one or more hidden layers. The first hidden layer is a function of the input layer, and each following hidden layer is a function of the last. (We will discuss these functions in more detail later.) The network below has two hidden layers. Finally, the network passes from the last hidden layer into an output layer, representing the target variable or variables. In the network below, the target variable is two-dimensional (i.e. $D_y = 2$), so the layer is represented by the values $y_0$ and $y_1$.

Note

Diagrams like the one below are commonly used to represent neural networks. Note that these diagrams show only a single observation at a time. For instance, $x_0, \dots, x_3$ represent four predictors within one observation, rather than four different observations.

Each layer in a neural network consists of neurons, represented by the circles in the diagram above. Neurons are simply scalar values. In the input layer, each neuron represents a single predictor. In the above diagram, the input layer has four neurons, labeled $x_0$ through $x_3$, each representing a single predictor. The neurons in the input layer then determine the neurons in the first hidden layer, labeled $z_0^{(1)}$ through $z_2^{(1)}$. We will discuss how shortly, but for now simply note the lines running from the input layer’s neurons to the first hidden layer’s neurons in the diagram above. Once the neurons in the first hidden layer are set, they become predictors for the next layer, acting just as the input layer did. When the neurons in the final hidden layer are fixed, they act as predictors for the output layer.

One natural question is how many layers our neural network should contain. There is no single answer to this question, as the number of layers is chosen by the modeler. Any true neural network will have an input layer, an output layer, and at least one hidden layer. The network above has two hidden layers. Note that the superscript indicates the hidden layer number, e.g. $z_0^{(1)}$ through $z_2^{(1)}$ are in the first hidden layer and $z_0^{(2)}$ through $z_2^{(2)}$ are in the second hidden layer. We could also consider the input layer as an exogenous "hidden layer" and represent it with $z_0^{(0)}$ through $z_3^{(0)}$.

Another natural question is how many neurons each layer should contain. This is in part chosen by the modeler and in part predetermined. If our predictor vectors are of length $D_x$, the input layer must have $D_x$ neurons. Similarly, the output layer must have as many neurons as there are target variables. If, for instance, our model attempts to predict a store’s revenue and its costs (two targets) in a given month, our output layer must have two neurons. The sizes of the hidden layers, however, are chosen by the modeler. Too few neurons may cause underfitting by preventing the network from picking up on important patterns in the data, while too many neurons may cause overfitting by giving the network enough parameters to match the training data exactly.

1.2 Communication between Layers

Let’s now turn to the process through which one layer communicates with the next. In this section, let $\mathbf{z}^{(a)}$ and $\mathbf{z}^{(b)}$ represent the vectors of neurons in any two consecutive layers. For instance, $\mathbf{z}^{(a)}$ might be an input layer and $\mathbf{z}^{(b)}$ the first hidden layer, or $\mathbf{z}^{(a)}$ might be a hidden layer and $\mathbf{z}^{(b)}$ the following hidden layer. Suppose $\mathbf{z}^{(a)} \in \mathbb{R}^{D_a}$ and $\mathbf{z}^{(b)} \in \mathbb{R}^{D_b}$.

In a feed-forward neural network, each neuron in $\mathbf{z}^{(b)}$ is a function of every neuron in $\mathbf{z}^{(a)}$. This function occurs in two stages: first a linear mapping of $\mathbf{z}^{(a)}$ onto one dimension, then a nonlinear function called an activation function. Let’s look at a single neuron within $\mathbf{z}^{(b)}$, $z_i^{(b)}$. The transformation from $\mathbf{z}^{(a)}$ to $z_i^{(b)}$ takes the form

$$
\begin{aligned}
h_i^{(b)} &= \mathbf{w}_i^\top \mathbf{z}^{(a)} + c_i \\
z_i^{(b)} &= f\left(h_i^{(b)}\right),
\end{aligned}
$$

where $\mathbf{w}_i \in \mathbb{R}^{D_a}$ is a vector of weights, $c_i$ is a constant intercept term, and $f(\cdot)$ is an activation function. Note that $\mathbf{w}_i$ and $c_i$ are specific to the $i^\text{th}$ neuron in $\mathbf{z}^{(b)}$, while $f(\cdot)$ is typically common among all neurons in $\mathbf{z}^{(b)}$. We can also write the function relating the two layers in matrix form, as below.

$$
\begin{aligned}
\mathbf{h}^{(b)} &= \mathbf{W}\mathbf{z}^{(a)} + \mathbf{c} \\
\mathbf{z}^{(b)} &= f\left(\mathbf{h}^{(b)}\right),
\end{aligned}
$$

where $\mathbf{W} \in \mathbb{R}^{D_b \times D_a}$, $\mathbf{c} \in \mathbb{R}^{D_b}$, and $f(\cdot)$ is applied element-wise.

Note

Note that we haven’t yet discussed how $\mathbf{W}$, $\mathbf{c}$, or $f(\cdot)$ are determined. For now, consider these all to be fixed and focus on the structure of a network. How we determine these values is discussed in the optimization section below.

Once $\mathbf{z}^{(b)}$ is fixed, we use the same process to create the next layer, $\mathbf{z}^{(c)}$. When discussing many layers at a time, it is helpful to add superscripts to $\mathbf{W}$, $\mathbf{c}$, and $f(\cdot)$ to indicate the layer. We can write the transmission of $\mathbf{z}^{(a)}$ to $\mathbf{z}^{(b)}$ followed by $\mathbf{z}^{(b)}$ to $\mathbf{z}^{(c)}$ as

$$
\begin{aligned}
\mathbf{z}^{(b)} &= f^{(b)}\left(\mathbf{W}^{(b)}\mathbf{z}^{(a)} + \mathbf{c}^{(b)}\right) \\
\mathbf{z}^{(c)} &= f^{(c)}\left(\mathbf{W}^{(c)}\mathbf{z}^{(b)} + \mathbf{c}^{(c)}\right).
\end{aligned}
$$

A more mathematical representation of a neural network is given below. The network starts with a vector of predictors $\mathbf{x}$. This vector is multiplied by $\mathbf{W}^{(1)}$ and added to $\mathbf{c}^{(1)}$, giving $\mathbf{h}^{(1)}$. We then apply an activation function $f^{(1)}$ to $\mathbf{h}^{(1)}$, which results in our single hidden layer, $\mathbf{z}^{(1)}$. The same process is then applied to $\mathbf{z}^{(1)}$, which results in our output vector, $\mathbf{y}$.
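To make these mechanics concrete, here is a minimal NumPy sketch of the forward pass just described. The helper names (`forward`, `relu`) and the random parameters are illustrative assumptions rather than anything from the text; ReLU itself is introduced in the next section.

```python
import numpy as np

def relu(h):
    """ReLU activation, applied element-wise (see section 1.3)."""
    return np.maximum(h, 0)

def forward(x, weights, biases, activations):
    """Pass one observation through a feed-forward network.

    Each layer computes z = f(W @ z_prev + c), exactly as in the
    equations above.
    """
    z = x
    for W, c, f in zip(weights, biases, activations):
        h = W @ z + c  # linear mapping
        z = f(h)       # activation function
    return z

# Example: D_x = 4 inputs, one hidden layer of 3 neurons, D_y = 2 outputs
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)), rng.normal(size=(2, 3))]
biases = [np.zeros(3), np.zeros(2)]
y_hat = forward(rng.normal(size=4), weights, biases, [relu, relu])
print(y_hat.shape)  # (2,)
```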

1.3 Activation Functions

As we have seen, we create a neuron in one layer by taking a linear mapping of the neurons in the previous layer and then applying some activation function. What exactly is this activation function? An activation function is a (typically) nonlinear function that allows the network to learn complex relationships between the predictor(s) and the target variable(s).

Suppose, for instance, the relationship between a target variable $y_n$ and a predictor $x_n$ is given by

$$
y_n = |x_n| + \epsilon_n,
$$

where $\epsilon_n$ is a noise term. Despite its simplicity, this relationship cannot be accurately fit by a linear model.

Ideally, we would apply some function to the predictor and use a different model depending on the result of this function. In the case above, $x_n > 0$ would "activate" the model $y_n \approx x_n$, and $x_n \leq 0$ would "activate" the model $y_n \approx -x_n$. Hence the name "activation function".

There are many commonly used activation functions, and deciding which function to use is a major consideration in modeling a neural network. Here we will limit our discussion to two of the most common functions: the ReLU (Rectified Linear Unit) and sigmoid functions. The linear activation function (which is really the absence of an activation function) is also discussed.

ReLU

ReLU is a simple yet extremely common activation function. It is defined as

$$
f(x) = \max(x, 0).
$$

How can such a simple function benefit a neural network? ReLU acts like a switch, selectively turning channels on and off. Consider fitting a neural network to the dataset above, generated with $y_n = |x_n| + \epsilon_n$. Let’s use a very simple network represented by the diagram below. This network has one predictor, a single hidden layer with two neurons, and one output variable.

Now let’s say we decide to use $f(x) = \text{ReLU}(x)$ and we land on the following parameters:

$$
\mathbf{W}^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \quad \mathbf{c}^{(1)} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \mathbf{W}^{(2)} = \begin{pmatrix} 1 & 1 \end{pmatrix}, \quad c^{(2)} = 0.
$$

This is equivalent to the following complete model

$$
\begin{aligned}
\mathbf{z}^{(1)} &= \text{ReLU}\left(\begin{pmatrix} 1 \\ -1 \end{pmatrix} x\right) \\
y &= \begin{pmatrix} 1 & 1 \end{pmatrix}\mathbf{z}^{(1)}.
\end{aligned}
$$

Will this model be able to fit our dataset? Suppose $x_n = c$ for some positive constant $c$. We will then get

$$
\begin{aligned}
\mathbf{z}^{(1)} &= \text{ReLU}\left(\begin{pmatrix} c \\ -c \end{pmatrix}\right) = \begin{pmatrix} c \\ 0 \end{pmatrix} \\
y &= \begin{pmatrix} 1 & 1 \end{pmatrix}\begin{pmatrix} c \\ 0 \end{pmatrix} = c.
\end{aligned}
$$

So we will predict $y_n = |x_n| = c$, a sensible result! Similarly, if $x_n = -c$, we would again obtain the valid prediction $y_n = |x_n| = c$. ReLU is able to achieve this result by activating a different channel depending on the value of $x_n$: if $x_n$ is greater than 0, it activates $y_n = x_n$, and if $x_n$ is less than 0, it activates $y_n = -x_n$.
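We can check this example numerically. The sketch below hard-codes the parameters from above; the `predict` helper is purely illustrative.

```python
import numpy as np

relu = lambda h: np.maximum(h, 0)

# Parameters from the example above
W1 = np.array([[1.0], [-1.0]])  # shape (2, 1)
c1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]])     # shape (1, 2)
c2 = 0.0

def predict(x):
    z1 = relu(W1 @ np.array([x]) + c1)  # hidden layer
    return (W2 @ z1 + c2).item()        # output

print(predict(3.0), predict(-3.0))  # 3.0 3.0, i.e. |x| in both cases
```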

As we will see in the next section, fitting a neural network requires taking gradients of our activation functions. Fortunately, ReLU has a straightforward derivative:

$$
\frac{\partial}{\partial x}\text{ReLU}(x) = \begin{cases} 1, & x > 0 \\ 0, & x \leq 0. \end{cases}
$$

Note that this derivative is not technically defined at 0. In practice, it is very unlikely that we will be applying an activation function to 0 exactly, though in that case the convention is to set its derivative equal to 0.

Sigmoid

A second common activation function is the logistic sigmoid function, often referred to as just the sigmoid function. This function was introduced in chapter 3 in the context of logistic regression. The sigmoid function is defined as

$$
\sigma(x) = \frac{1}{1 + \exp(-x)}.
$$

Note that the sigmoid function takes any real value and returns a value between 0 and 1. As a result, the sigmoid function is commonly applied to the last hidden layer in a network in order to return a probability estimate in the output layer. This makes it common in classification problems.

As we saw in chapter 3, a convenient fact about the sigmoid function is that we can express its derivative in terms of itself.

$$
\frac{\partial \sigma(x)}{\partial x} = \frac{\exp(-x)}{\left(1 + \exp(-x)\right)^2} = \frac{1}{1 + \exp(-x)} \cdot \frac{\exp(-x)}{1 + \exp(-x)} = \sigma(x)\left(1 - \sigma(x)\right).
$$

The Linear Activation Function

Another possible activation function is the "linear" activation function, which is the same as skipping the activation function altogether. The linear activation function simply returns its input. It is defined as

$$
f(x) = x,
$$

and has derivative

$$
\frac{\partial f(x)}{\partial x} = 1.
$$

The linear activation function is often used in the final layer of a neural network for regression. Rather than constraining the fitted values to some range (as the sigmoid function does) or setting the negative ones equal to 0 (as ReLU does), we want to leave them as they are.
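The three activation functions and their derivatives translate directly into code. A minimal sketch, assuming NumPy and the conventions above (in particular, taking ReLU’s derivative to be 0 at exactly 0):

```python
import numpy as np

def relu(h):
    return np.maximum(h, 0)

def relu_prime(h):
    return (h > 0).astype(float)  # convention: derivative is 0 at h = 0

def sigmoid(h):
    return 1 / (1 + np.exp(-h))

def sigmoid_prime(h):
    s = sigmoid(h)
    return s * (1 - s)  # sigma(h) * (1 - sigma(h))

def linear(h):
    return h

def linear_prime(h):
    return np.ones_like(h)
```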

2. Optimization

We have now seen that a neural network operates through a series of linear mappings and activation functions. The linear mapping for layer $\ell$ is determined by the parameters $\mathbf{W}^{(\ell)}$ and $\mathbf{c}^{(\ell)}$, also called the weights. This section discusses the process through which the weights in a neural network are fit, called back propagation.

The rest of this section requires a good amount of matrix differentiation, which is introduced in the math appendix. Note that we use the "numerator layout," meaning for $\mathbf{y} \in \mathbb{R}^m$ and $\mathbf{x} \in \mathbb{R}^n$, we write $\partial \mathbf{y}/\partial \mathbf{x}$ as

$$
\frac{\partial \mathbf{y}}{\partial \mathbf{x}} = \begin{pmatrix} \frac{\partial y_1}{\partial x_1} & \dots & \frac{\partial y_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_m}{\partial x_1} & \dots & \frac{\partial y_m}{\partial x_n} \end{pmatrix} \in \mathbb{R}^{m \times n}.
$$

2.1 Back Propagation

Suppose we choose some loss function $\mathcal{L}$ for our network to minimize. Note that because our target variable is multi-dimensional, $\mathcal{L}$ will be a vector of losses (e.g. the loss for the first target, the loss for the second target, etc.). To find the network’s optimal weights, we can conduct gradient descent, repeatedly taking the derivative of our loss function with respect to each weight and adjusting accordingly. As we will see, this involves finding the gradient of the network’s final weights first, then using the chain rule to find the gradients of the weights that came earlier. In this process, we move backward through the network, hence the name "back propagation."

Consider conducting gradient descent for the network above. Write the loss function as $\mathcal{L}(\hat{\mathbf{y}})$, where $\hat{\mathbf{y}}$ is the network’s output. Let’s start by writing out the derivative of $\mathcal{L}$ with respect to $\mathbf{W}^{(L)}$, the final matrix of weights in our network. We can do this with the chain rule, as below.

$$
\frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \mathbf{W}^{(L)}} = \frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \hat{\mathbf{y}}} \cdot \frac{\partial \hat{\mathbf{y}}}{\partial \mathbf{h}^{(L)}} \cdot \frac{\partial \mathbf{h}^{(L)}}{\partial \mathbf{W}^{(L)}}
$$

The gradient of $\mathbf{c}^{(L)}$ is found equivalently. The math behind these calculations is covered in the following section. Next, we want to find the gradient of $\mathbf{W}^{(L-1)}$, shown below.

$$
\frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \mathbf{W}^{(L-1)}} = \frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \hat{\mathbf{y}}} \cdot \frac{\partial \hat{\mathbf{y}}}{\partial \mathbf{h}^{(L)}} \cdot \frac{\partial \mathbf{h}^{(L)}}{\partial \mathbf{z}^{(L-1)}} \cdot \frac{\partial \mathbf{z}^{(L-1)}}{\partial \mathbf{h}^{(L-1)}} \cdot \frac{\partial \mathbf{h}^{(L-1)}}{\partial \mathbf{W}^{(L-1)}}
$$

This expression is pretty ugly, but there is a shortcut. This gradient and the gradient of $\mathbf{W}^{(L)}$ share the first two terms, whose product is the gradient of the loss with respect to $\mathbf{h}^{(L)}$. To save time (both in writing out the gradients and in calculating them in practice), we can record this gradient, $\nabla \mathbf{h}^{(L)} = \partial \mathcal{L}(\hat{\mathbf{y}})/\partial \mathbf{h}^{(L)}$, and apply it where necessary. We can do the same with $\nabla \mathbf{h}^{(L-1)}$, which simplifies the gradient of $\mathbf{W}^{(L-2)}$:

$$
\frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \mathbf{W}^{(L-2)}} = \nabla \mathbf{h}^{(L-1)} \cdot \frac{\partial \mathbf{h}^{(L-1)}}{\partial \mathbf{z}^{(L-2)}} \cdot \frac{\partial \mathbf{z}^{(L-2)}}{\partial \mathbf{h}^{(L-2)}} \cdot \frac{\partial \mathbf{h}^{(L-2)}}{\partial \mathbf{W}^{(L-2)}}.
$$

We continue this same process until we reach the first set of weights.

We’ve now seen intuitively how to find the gradients for our network’s many weights. To conduct back propagation, we simply use these gradients to run gradient descent. Next, let’s see how to actually calculate these gradients.

2.2 Calculating Gradients

In this section we will derive the gradients used in back propagation. For each iteration in this process we need to know the derivative of our loss function with respect to each weight in the entire network. For the network shown above, this requires calculating the following gradients:

$$
\frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \mathbf{W}^{(1)}}, \; \frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \mathbf{c}^{(1)}}, \; \dots, \; \frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \mathbf{W}^{(L)}}, \text{ and } \frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \mathbf{c}^{(L)}}.
$$

Since we will find these with the chain rule, we will need to calculate other gradients along the way. All the necessary gradients are derived below. Note that the following sub-sections cover the stages within a single layer of a network in reverse order (as back propagation does).

Note

Note that the rest of this section considers only one observation at a time. The vector $\mathbf{y}$, for instance, refers to the output variables for a single observation, rather than a vector of 1-dimensional output variables for several observations. Similarly, $\partial \mathcal{L}(\hat{\mathbf{y}})/\partial \hat{\mathbf{y}}$ refers to the derivative of the loss with respect to a single observation’s output. The final section discusses how to combine the derivatives from multiple observations.

For the following, let there be $L$ layers in total. Also let layer $\ell$ have size $D_\ell$, except the input and output layers, which have sizes $D_x$ and $D_y$, respectively.

2.2.1 Loss Functions and their Gradients

Back propagation begins where the network ends: the loss function $\mathcal{L}(\hat{\mathbf{y}})$. Let’s start by introducing some common loss functions and their derivatives with respect to our predictions, $\hat{\mathbf{y}}$. Later, using the chain rule, we will use these derivatives to calculate the derivatives with respect to our network’s weights.

A common loss function for quantitative output variables is the residual sum of squares. For a single observation, the loss is

$$
\mathcal{L}_\text{RSS}(\hat{\mathbf{y}}) = (\mathbf{y} - \hat{\mathbf{y}})^2.
$$

Note

Note that the loss is a function of both our predictions ($\hat{\mathbf{y}}$) and the true targets ($\mathbf{y}$). However, since the true targets are fixed, we can only manipulate $\hat{\mathbf{y}}$, so we write the loss as a function of $\hat{\mathbf{y}}$ only.

Note that we have a vector of losses because there are multiple output variables and we consider the loss for each variable independently. Now for the first step in back propagation, we calculate the derivative of this loss with respect to $\hat{\mathbf{y}}$, which is simply given by

$$
\frac{\partial \mathcal{L}_\text{RSS}(\hat{\mathbf{y}})}{\partial \hat{\mathbf{y}}} = -2(\mathbf{y} - \hat{\mathbf{y}})^\top \in \mathbb{R}^{1 \times D_y}.
$$

Since we are using the numerator layout convention, this derivative is a length-$D_y$ row vector, or equivalently a $1 \times D_y$ matrix.

For binary classification problems, a common loss function is the log loss or cross entropy, given by

$$
\mathcal{L}_\text{Log}(\hat{\mathbf{y}}) = -\left(\mathbf{y}\log\hat{\mathbf{y}} + (1 - \mathbf{y})\log(1 - \hat{\mathbf{y}})\right),
$$

where the operations are applied element-wise and the $i^\text{th}$ entry in $\hat{\mathbf{y}}$ gives the estimated probability that the $i^\text{th}$ output variable equals 1. The derivative of this loss function with respect to $\hat{\mathbf{y}}$ is given by

$$
\frac{\partial \mathcal{L}_\text{Log}(\hat{\mathbf{y}})}{\partial \hat{\mathbf{y}}} = \left(-\frac{\mathbf{y}}{\hat{\mathbf{y}}} + \frac{1 - \mathbf{y}}{1 - \hat{\mathbf{y}}}\right)^\top \in \mathbb{R}^{1 \times D_y}.
$$

Once we calculate $\partial \mathcal{L}(\hat{\mathbf{y}})/\partial \hat{\mathbf{y}}$, we can move further back into the network. Since $\hat{\mathbf{y}}$ is the result of an activation function, the next step in back propagation is to calculate the derivative of our activation functions.
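Both loss gradients are one-liners in code. A sketch assuming NumPy arrays of matching shapes (the function names are my own):

```python
import numpy as np

def rss_grad(y, y_hat):
    """d L_RSS / d y_hat: one entry per output variable."""
    return -2 * (y - y_hat)

def log_loss_grad(y, y_hat):
    """d L_Log / d y_hat for binary targets, with y_hat in (0, 1)."""
    return -y / y_hat + (1 - y) / (1 - y_hat)
```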

2.2.2 Gradients of the Activation Functions

Recall that $\mathbf{z}^{(\ell)}$, the output of layer $\ell$, is the result of an activation function applied to a linear mapping $\mathbf{h}^{(\ell)}$. This includes the output of the final layer, $\hat{\mathbf{y}}$, which we can also write as $\mathbf{z}^{(L)}$.

ReLU

Suppose we have $\mathbf{z}^{(\ell)} = f^{(\ell)}(\mathbf{h}^{(\ell)})$, where $f^{(\ell)}$ is the ReLU function. We are interested in $\partial \mathbf{z}^{(\ell)}/\partial \mathbf{h}^{(\ell)}$. For $i \neq j$, we have

$$
\frac{\partial z_i^{(\ell)}}{\partial h_j^{(\ell)}} = 0,
$$

since $z_i^{(\ell)}$ is not a function of $h_j^{(\ell)}$. Then using the ReLU derivative, we have

$$
\frac{\partial z_i^{(\ell)}}{\partial h_i^{(\ell)}} = \begin{cases} 1, & h_i^{(\ell)} > 0 \\ 0, & h_i^{(\ell)} \leq 0. \end{cases}
$$

We can then compactly write the entire derivative as

$$
\frac{\partial \mathbf{z}^{(\ell)}}{\partial \mathbf{h}^{(\ell)}} = \text{diag}\left(\mathbf{h}^{(\ell)} > 0\right) \in \mathbb{R}^{D_\ell \times D_\ell}.
$$

Sigmoid

Now suppose we have $\mathbf{z}^{(\ell)} = f^{(\ell)}(\mathbf{h}^{(\ell)})$, where $f^{(\ell)}$ is the sigmoid function. Again, the partial derivative $\partial z_i^{(\ell)}/\partial h_j^{(\ell)}$ is 0 for $i \neq j$. By the sigmoid derivative, we have

$$
\frac{\partial z_i^{(\ell)}}{\partial h_i^{(\ell)}} = \sigma\left(h_i^{(\ell)}\right)\left(1 - \sigma\left(h_i^{(\ell)}\right)\right).
$$

We can again write the entire result compactly as

$$
\frac{\partial \mathbf{z}^{(\ell)}}{\partial \mathbf{h}^{(\ell)}} = \text{diag}\left(\sigma\left(\mathbf{h}^{(\ell)}\right)\left(1 - \sigma\left(\mathbf{h}^{(\ell)}\right)\right)\right) \in \mathbb{R}^{D_\ell \times D_\ell}.
$$

Linear

Finally, suppose we have $\mathbf{z}^{(\ell)} = f^{(\ell)}(\mathbf{h}^{(\ell)})$, where $f^{(\ell)}$ is the linear function. We then have

$$
\frac{\partial z_i^{(\ell)}}{\partial h_j^{(\ell)}} = \begin{cases} 1, & i = j \\ 0, & i \neq j. \end{cases}
$$

The entire gradient is then simply

$$
\frac{\partial \mathbf{z}^{(\ell)}}{\partial \mathbf{h}^{(\ell)}} = I_{D_\ell} \in \mathbb{R}^{D_\ell \times D_\ell}.
$$
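Because each of these Jacobians is diagonal, they can be built directly from the element-wise derivatives of section 1.3. A small sketch under the same NumPy assumptions:

```python
import numpy as np

def relu_jacobian(h):
    """dz/dh for ReLU: diag(h > 0)."""
    return np.diag((h > 0).astype(float))

def sigmoid_jacobian(h):
    """dz/dh for sigmoid: diag(sigma(h) * (1 - sigma(h)))."""
    s = 1 / (1 + np.exp(-h))
    return np.diag(s * (1 - s))

def linear_jacobian(h):
    """dz/dh for the linear activation: the identity matrix."""
    return np.eye(len(h))
```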

2.2.3 Gradients of the Weights

We are now finally able to calculate the gradients of our weights. Specifically, we will calculate $\partial \mathbf{h}^{(\ell)}/\partial \mathbf{c}^{(\ell)}$ and $\partial \mathbf{h}^{(\ell)}/\partial \mathbf{W}^{(\ell)}$ which, when combined with our previous results through the chain rule, will allow us to obtain the derivative of the loss function with respect to the layer’s weights.

Recall that we obtain $\mathbf{h}^{(\ell)}$ through

$$
\mathbf{h}^{(\ell)} = \mathbf{W}^{(\ell)}\mathbf{z}^{(\ell-1)} + \mathbf{c}^{(\ell)},
$$

giving us the simple derivative

$$
\frac{\partial \mathbf{h}^{(\ell)}}{\partial \mathbf{c}^{(\ell)}} = I_{D_\ell} \in \mathbb{R}^{D_\ell \times D_\ell}.
$$

The derivative $\partial \mathbf{h}^{(\ell)}/\partial \mathbf{W}^{(\ell)}$ is more complicated. Since we are taking the derivative of a vector with respect to a matrix, our result will be a tensor. The shape of this tensor will be $D_\ell \times (D_\ell \times D_{\ell-1})$ since $\mathbf{h}^{(\ell)} \in \mathbb{R}^{D_\ell}$ and $\mathbf{W}^{(\ell)} \in \mathbb{R}^{D_\ell \times D_{\ell-1}}$. The first element of this tensor is given by $\partial h_1^{(\ell)}/\partial \mathbf{W}^{(\ell)}$. Using the expression for $\mathbf{h}^{(\ell)}$ above, we see that this is a matrix with $(\mathbf{z}^{(\ell-1)})^\top$ in the first row and 0s everywhere else. More generally, the $i^\text{th}$ entry in this derivative will have all 0s except $(\mathbf{z}^{(\ell-1)})^\top$ in its $i^\text{th}$ row. This is represented below.

$$
\frac{\partial \mathbf{h}^{(\ell)}}{\partial \mathbf{W}^{(\ell)}} = \begin{bmatrix} \frac{\partial h_1^{(\ell)}}{\partial \mathbf{W}^{(\ell)}} \\ \vdots \\ \frac{\partial h_{D_\ell}^{(\ell)}}{\partial \mathbf{W}^{(\ell)}} \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} (\mathbf{z}^{(\ell-1)})^\top \\ \vdots \\ \mathbf{0}^\top \end{bmatrix} \\ \vdots \\ \begin{bmatrix} \mathbf{0}^\top \\ \vdots \\ (\mathbf{z}^{(\ell-1)})^\top \end{bmatrix} \end{bmatrix} \in \mathbb{R}^{D_\ell \times (D_\ell \times D_{\ell-1})}.
$$

2.2.4 One Last Gradient

We now have all the results necessary to calculate the derivative of the loss function with respect to the weights in the final layer. For instance, we can evaluate

$$
\frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \mathbf{W}^{(L)}} = \frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \hat{\mathbf{y}}} \cdot \frac{\partial \hat{\mathbf{y}}}{\partial \mathbf{h}^{(L)}} \cdot \frac{\partial \mathbf{h}^{(L)}}{\partial \mathbf{W}^{(L)}}
$$

using the results from sections 2.2.1, 2.2.2, and 2.2.3. However, to obtain the derivative of the loss function with respect to weights in the previous layers, we need one more derivative: the derivative of $\mathbf{h}^{(\ell)}$, the linear mapping in layer $\ell$, with respect to $\mathbf{z}^{(\ell-1)}$, the output of the previous layer. Fortunately, this derivative is simple:

$$
\frac{\partial \mathbf{h}^{(\ell)}}{\partial \mathbf{z}^{(\ell-1)}} = \mathbf{W}^{(\ell)}.
$$

Now that we have $\partial \mathbf{h}^{(\ell)}/\partial \mathbf{z}^{(\ell-1)}$, we reuse the results from sections 2.2.2 and 2.2.3 to calculate $\partial \mathbf{z}^{(\ell-1)}/\partial \mathbf{h}^{(\ell-1)}$ and $\partial \mathbf{h}^{(\ell-1)}/\partial \mathbf{W}^{(\ell-1)}$ (respectively); this gives us all the necessary results to compute the gradient of the weights in the previous layer. We then rinse, lather, and repeat, working from layer $L-2$ back through the first layer.

2.3 Combining Results with the Chain Rule

We’ve seen lots of individual derivatives. Ultimately, we really care about the derivatives of the loss function with respect to the network’s weights. Let’s review by calculating the derivatives of the loss function with respect to the weights in the final layer for the familiar network above. Suppose $f^{(2)}$ is the sigmoid function and we use the log loss. For $\mathbf{W}^{(2)}$ we get the following.

$$
\begin{aligned}
\frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \mathbf{W}^{(2)}} &= \frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \hat{\mathbf{y}}} \cdot \frac{\partial \hat{\mathbf{y}}}{\partial \mathbf{h}^{(2)}} \cdot \frac{\partial \mathbf{h}^{(2)}}{\partial \mathbf{W}^{(2)}} \\
&= \left(-\frac{\mathbf{y}}{\hat{\mathbf{y}}} + \frac{1 - \mathbf{y}}{1 - \hat{\mathbf{y}}}\right)^\top \cdot \text{diag}\left(\sigma\left(\mathbf{h}^{(2)}\right)\left(1 - \sigma\left(\mathbf{h}^{(2)}\right)\right)\right) \cdot \mathcal{T} \\
&= \begin{bmatrix} \left(-\frac{y_1}{\hat{y}_1} + \frac{1 - y_1}{1 - \hat{y}_1}\right)\sigma\left(h_1^{(2)}\right)\left(1 - \sigma\left(h_1^{(2)}\right)\right)\left(\mathbf{z}^{(1)}\right)^\top \\ \vdots \\ \left(-\frac{y_{D_2}}{\hat{y}_{D_2}} + \frac{1 - y_{D_2}}{1 - \hat{y}_{D_2}}\right)\sigma\left(h_{D_2}^{(2)}\right)\left(1 - \sigma\left(h_{D_2}^{(2)}\right)\right)\left(\mathbf{z}^{(1)}\right)^\top \end{bmatrix} \in \mathbb{R}^{D_2 \times D_1},
\end{aligned}
$$

where $\mathcal{T}$ is the tensor derivative $\partial \mathbf{h}^{(2)}/\partial \mathbf{W}^{(2)}$ discussed in section 2.2.3.

For $\mathbf{c}^{(2)}$, we get

$$
\begin{aligned}
\frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \mathbf{c}^{(2)}} &= \frac{\partial \mathcal{L}(\hat{\mathbf{y}})}{\partial \hat{\mathbf{y}}} \cdot \frac{\partial \hat{\mathbf{y}}}{\partial \mathbf{h}^{(2)}} \cdot \frac{\partial \mathbf{h}^{(2)}}{\partial \mathbf{c}^{(2)}} \\
&= \left(-\frac{\mathbf{y}}{\hat{\mathbf{y}}} + \frac{1 - \mathbf{y}}{1 - \hat{\mathbf{y}}}\right)^\top \cdot \text{diag}\left(\sigma\left(\mathbf{h}^{(2)}\right)\left(1 - \sigma\left(\mathbf{h}^{(2)}\right)\right)\right) \cdot I_{D_2} \\
&= \begin{bmatrix} \left(-\frac{y_1}{\hat{y}_1} + \frac{1 - y_1}{1 - \hat{y}_1}\right)\sigma\left(h_1^{(2)}\right)\left(1 - \sigma\left(h_1^{(2)}\right)\right) & \dots & \left(-\frac{y_{D_2}}{\hat{y}_{D_2}} + \frac{1 - y_{D_2}}{1 - \hat{y}_{D_2}}\right)\sigma\left(h_{D_2}^{(2)}\right)\left(1 - \sigma\left(h_{D_2}^{(2)}\right)\right) \end{bmatrix} \in \mathbb{R}^{1 \times D_2}.
\end{aligned}
$$
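In code, these last-layer gradients reduce to a few lines. A sketch assuming NumPy, a sigmoid output activation, and the log loss; note that the product of the loss gradient and the sigmoid derivative simplifies to $\hat{\mathbf{y}} - \mathbf{y}$:

```python
import numpy as np

sigmoid = lambda h: 1 / (1 + np.exp(-h))

def last_layer_grads(y, h2, z1):
    """Gradients of the log loss w.r.t. W2 and c2 for one observation."""
    y_hat = sigmoid(h2)
    dL_dyhat = -y / y_hat + (1 - y) / (1 - y_hat)  # shape (D_2,)
    dL_dh2 = dL_dyhat * y_hat * (1 - y_hat)        # simplifies to y_hat - y
    dL_dW2 = np.outer(dL_dh2, z1)                  # shape (D_2, D_1)
    dL_dc2 = dL_dh2                                # shape (D_2,)
    return dL_dW2, dL_dc2
```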

3. Combining Observations

So far, we’ve only considered the derivative of the loss function for a single observation. When training a network, we will of course want to consider the entire dataset. One way to do so is to simply add the derivatives of the loss function with respect to the weights across observations. Since the loss over the dataset is the sum of the individual observation losses and the derivative of a sum is the sum of the derivatives, we can simply add the results above. For instance, to find the derivative of the loss with respect to the final matrix of weights $\mathbf{W}^{(L)}$, we could loop through observations and sum the individual derivatives:

$$
\frac{\partial \mathcal{L}\left(\{\hat{\mathbf{y}}_n\}_{n=1}^N\right)}{\partial \mathbf{W}^{(L)}} = \frac{\partial \sum_{n=1}^N \mathcal{L}(\hat{\mathbf{y}}_n)}{\partial \mathbf{W}^{(L)}} = \sum_{n=1}^N \frac{\partial \mathcal{L}(\hat{\mathbf{y}}_n)}{\partial \mathbf{W}^{(L)}}.
$$
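As a sketch, this naive approach is a plain loop over observations, where `grad_W_L` stands in for a hypothetical function returning the single-observation gradient derived above:

```python
def total_grad_W_L(X, Y, grad_W_L):
    """Sum the per-observation gradients of W^(L) across the dataset."""
    return sum(grad_W_L(x_n, y_n) for x_n, y_n in zip(X, Y))
```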

While straightforward, this approach is computationally inefficient. The rest of this section outlines a more complicated but much faster method.

3.1 A New Representation

So far, we’ve treated our predictors and outputs as vectors. The network starts with $\mathbf{x}$ and outputs $\mathbf{z}^{(1)}$. Then it predicts with $\mathbf{z}^{(1)}$ and outputs $\mathbf{z}^{(2)}$. It repeats this process until $\mathbf{z}^{(L-1)}$ outputs $\hat{\mathbf{y}}$. To incorporate multiple observations, we can turn these vectors into matrices. Again suppose our dataset consists of $N$ observations with $\mathbf{x}_n \in \mathbb{R}^{D_x}$ and $\mathbf{y}_n \in \mathbb{R}^{D_y}$. We start with $\mathbf{X} \in \mathbb{R}^{N \times D_x}$, whose $n^\text{th}$ row is $\mathbf{x}_n^\top$. Note that in $\mathbf{X}$, $\mathbf{x}_n$ is a row vector; to keep consistent with our previous sections, we want it to be a column vector. So, we’ll work with $\mathbf{X}^\top$ rather than $\mathbf{X}$.

Rather than feeding each observation through the network one at a time, we will feed all observations through together and give each observation its own column. Each column in $\mathbf{X}^\top$ represents an observation’s predictors. We then multiply this matrix by $\mathbf{W}^{(1)}$ and add $\mathbf{c}^{(1)}$ element-wise to get $\mathbf{H}^{(1)}$. Each column in $\mathbf{H}^{(1)}$ represents a vector of linear combinations of the corresponding column in $\mathbf{X}^\top$. We then pass $\mathbf{H}^{(1)}$ through an activation function to obtain $\mathbf{Z}^{(1)}$. Similarly, each column in $\mathbf{Z}^{(1)}$ represents the output vector for the corresponding observation in $\mathbf{X}^\top$. We then repeat, with $\mathbf{Z}^{(1)}$ acting as the matrix of predictors for the next layer. Ultimately, we will obtain a matrix $\hat{\mathbf{Y}} \in \mathbb{R}^{D_y \times N}$ whose $n^\text{th}$ column represents the vector of fitted values for the $n^\text{th}$ observation.
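A batched forward pass is then just the vectorized analogue of the single-observation sketch from section 1.2. A minimal version, again assuming NumPy:

```python
import numpy as np

def forward_batch(X, weights, biases, activations):
    """Feed all N observations through the network at once.

    X has shape (N, D_x); we work with its transpose so that each
    observation occupies a column, matching the text.
    """
    Z = X.T                            # (D_x, N)
    for W, c, f in zip(weights, biases, activations):
        H = W @ Z + c[:, None]         # add c element-wise to each column
        Z = f(H)
    return Z                           # Y_hat, shape (D_y, N)
```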

3.2 Gradients

While this new representation is more efficient, it also makes the gradients more complicated since we are taking derivatives with respect to matrices rather than vectors. Ordinarily, the derivative of one matrix with respect to another would be a four-dimensional tensor. Luckily, there’s a shortcut.

For each parameter $\theta$ in our network, we will find its gradient by asking "which values does $\theta$ affect in the next layer?" Supposing the answer is some set $\{\psi_1, \psi_2, \dots, \psi_n\}$, we will calculate the derivative of the loss function with respect to $\theta$ as

$$
\frac{\partial \mathcal{L}}{\partial \theta} = \sum_{i=1}^n \frac{\partial \mathcal{L}}{\partial \psi_i}\frac{\partial \psi_i}{\partial \theta}.
$$

Recall that our loss function is a vector $\mathcal{L}$ of size $D_y$ since we have $D_y$ output variables. This loss vector is a row-wise function of the prediction matrix, $\hat{\mathbf{Y}}$, meaning the $d^\text{th}$ entry in $\mathcal{L}$ is a function of only the $d^\text{th}$ row of $\hat{\mathbf{Y}}$ (which represents the fitted values for the $d^\text{th}$ output variable across observations). For the $(d, n)^\text{th}$ entry in $\hat{\mathbf{Y}}$, then, we only need to consider the derivative of the $d^\text{th}$ entry in $\mathcal{L}$—the derivative of any other entry in $\mathcal{L}$ with respect to $\hat{Y}_{d,n}$ is 0. We can then use the following gradient in place of a four-dimensional tensor.

$$
\frac{\partial \mathcal{L}}{\partial \hat{\mathbf{Y}}} = \begin{bmatrix} \frac{\partial \mathcal{L}_1}{\partial \hat{Y}_{1,1}} & \dots & \frac{\partial \mathcal{L}_1}{\partial \hat{Y}_{1,N}} \\ \vdots & \ddots & \vdots \\ \frac{\partial \mathcal{L}_{D_y}}{\partial \hat{Y}_{D_y,1}} & \dots & \frac{\partial \mathcal{L}_{D_y}}{\partial \hat{Y}_{D_y,N}} \end{bmatrix}
$$

Next, we consider the derivative of $\hat{\mathbf{Y}}$ with respect to $\mathbf{H}^{(L)}$. Note that $\hat{\mathbf{Y}}$ is an element-wise function of $\mathbf{H}^{(L)}$. This means we only need to consider the gradient of each element in the former with respect to its corresponding element in the latter. This gives us

$$
\frac{\partial \hat{\mathbf{Y}}}{\partial \mathbf{H}^{(L)}} = \begin{bmatrix} \frac{\partial \hat{Y}_{1,1}}{\partial H^{(L)}_{1,1}} & \dots & \frac{\partial \hat{Y}_{1,N}}{\partial H^{(L)}_{1,N}} \\ \vdots & \ddots & \vdots \\ \frac{\partial \hat{Y}_{D_y,1}}{\partial H^{(L)}_{D_y,1}} & \dots & \frac{\partial \hat{Y}_{D_y,N}}{\partial H^{(L)}_{D_y,N}} \end{bmatrix}.
$$

Now let’s use the shortcut described above. Since each element in $\mathbf{H}^{(L)}$ only affects the corresponding element in $\hat{\mathbf{Y}}$, we calculate $\partial \mathcal{L}/\partial \mathbf{H}^{(L)}$ by multiplying the two gradients above element-wise. I.e.,

$$
\frac{\partial \mathcal{L}}{\partial \mathbf{H}^{(L)}} = \frac{\partial \mathcal{L}}{\partial \hat{\mathbf{Y}}} \odot \frac{\partial \hat{\mathbf{Y}}}{\partial \mathbf{H}^{(L)}},
$$

where $\odot$ is the element-wise multiplication operator, also known as the Hadamard product.

Next up is $\mathbf{c}^{(L)}$. Whereas each element in $\mathbf{H}^{(L)}$ affected only one element in $\hat{\mathbf{Y}}$, each element in $\mathbf{c}^{(L)}$ affects $N$ elements in $\mathbf{H}^{(L)}$—every element in its corresponding row. Consider the first entry in $\mathbf{c}^{(L)}$. Since this entry affects each entry in the first row of $\mathbf{H}^{(L)}$, the chain rule gives us

$$
\frac{\partial \mathcal{L}}{\partial c^{(L)}_1} = \sum_{n=1}^N \frac{\partial \mathcal{L}}{\partial H^{(L)}_{1,n}}\frac{\partial H^{(L)}_{1,n}}{\partial c^{(L)}_1}.
$$

Fortunately, $\partial H^{(L)}_{1,n}/\partial c^{(L)}_1$ is just 1 since $c^{(L)}_1$ is an intercept term. This implies that the derivative of the loss function with respect to $\mathbf{c}^{(L)}$ is just the row sum of $\partial \mathcal{L}/\partial \mathbf{H}^{(L)}$, or

$$
\frac{\partial \mathcal{L}}{\partial c^{(L)}_i} = \sum_{n=1}^N \frac{\partial \mathcal{L}}{\partial H^{(L)}_{i,n}}.
$$

Next, we have $\mathbf{W}^{(L)}$. Using our shortcut, we ask "which values does the $(i,j)^\text{th}$ entry in $\mathbf{W}^{(L)}$ affect?" Since $\mathbf{H}^{(L)} = \mathbf{W}^{(L)}\mathbf{Z}^{(L-1)} + \mathbf{c}^{(L)}$ (with $\mathbf{c}^{(L)}$ added to each column), we have that

$$
H^{(L)}_{i,n} = \sum_k W^{(L)}_{i,k} Z^{(L-1)}_{k,n} + c^{(L)}_i \quad \forall\, n \in \{1, \dots, N\}.
$$

This tells us that $W^{(L)}_{i,j}$ affects each entry in the $i^\text{th}$ row of $\mathbf{H}^{(L)}$ and gives us the derivative $\partial H^{(L)}_{i,n}/\partial W^{(L)}_{i,j} = Z^{(L-1)}_{j,n}$. Therefore,

$$
\frac{\partial \mathcal{L}}{\partial W^{(L)}_{i,j}} = \sum_{n=1}^N \frac{\partial \mathcal{L}}{\partial H^{(L)}_{i,n}}\frac{\partial H^{(L)}_{i,n}}{\partial W^{(L)}_{i,j}} = \sum_{n=1}^N \left(\nabla \mathbf{H}^{(L)}\right)_{i,n} Z^{(L-1)}_{j,n},
$$

where $\nabla \mathbf{H}^{(L)}$ is the matrix representing $\partial \mathcal{L}/\partial \mathbf{H}^{(L)}$. This can be computed for each element in $\mathbf{W}^{(L)}$ using a tensor dot product, which will be covered in the construction section.

Finally, we have $\mathbf{Z}^{(L-1)}$. This case is symmetric to $\mathbf{W}^{(L)}$, and the same approach gives us the result

$$
\frac{\partial \mathcal{L}}{\partial Z^{(L-1)}_{i,n}} = \sum_{r=1}^{D_L} \frac{\partial \mathcal{L}}{\partial H^{(L)}_{r,n}}\frac{\partial H^{(L)}_{r,n}}{\partial Z^{(L-1)}_{i,n}} = \sum_{r=1}^{D_L} \left(\nabla \mathbf{H}^{(L)}\right)_{r,n} W^{(L)}_{r,i}.
$$

Again, the derivative for all of $\mathbf{Z}^{(L-1)}$ can be calculated at once using a tensor dot product.
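Putting the batched results together: the gradient with respect to $\mathbf{H}^{(L)}$ is a Hadamard product, the gradient of $\mathbf{c}^{(L)}$ is a row sum, and the gradients of $\mathbf{W}^{(L)}$ and $\mathbf{Z}^{(L-1)}$ are the matrix products that implement the tensor dot products above. A sketch under the same NumPy assumptions as before:

```python
import numpy as np

def layer_grads(dL_dY_hat, dY_hat_dH, Z_prev, W_L):
    """Batched last-layer gradients; one column per observation.

    dL_dY_hat, dY_hat_dH: (D_y, N) matrices as defined above.
    Z_prev: (D_{L-1}, N); W_L: (D_y, D_{L-1}).
    """
    dL_dH = dL_dY_hat * dY_hat_dH   # Hadamard product
    dL_dc = dL_dH.sum(axis=1)       # row sums: gradient of c^(L)
    dL_dW = dL_dH @ Z_prev.T        # sums over observations: gradient of W^(L)
    dL_dZ_prev = W_L.T @ dL_dH      # gradient of Z^(L-1), shape (D_{L-1}, N)
    return dL_dW, dL_dc, dL_dZ_prev
```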