August 09, 2019 · 15 mins read

Lesson 9 - Looking inside a neural network

While neural networks sometimes seem like black boxes that happen to be good at certain tasks, they really are not. After a quick setup of a fully connected neural network with one hidden layer, I’ll demonstrate how to look inside a neural network by visualising the hidden layer and getting a better understanding of what happens inside it.

The result of this project

This is the ninth post about my fast.ai journey. Read all posts here.

import numpy as np
from fastai import datasets
import gzip, pickle
from PIL import Image, ImageDraw
import matplotlib.pylab as plt
import cv2

I assume you are already familiar with the basics of neural networks; if not, it’s worth working through an introductory tutorial first.

Getting the data

Because the goal of this tutorial is to learn about neural networks and not image classification, I decided to simply use the well-known MNIST dataset.

MNIST_URL = 'http://deeplearning.net/data/mnist/mnist.pkl'

# Load data
path = datasets.download_data(MNIST_URL, ext='.gz')
with gzip.open(path, 'rb') as f:
    ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
    #y_train = np.array(y_train, dtype=np.float64)
    #y_valid = np.array(y_valid, dtype=np.float64)

Because we will be using the negative log likelihood as the cost function, we have to one-hot encode the labels. One-hot encoding simply replaces the label index (like 5) with a K-dimensional vector where the element at the label index is 1 and K is the number of classes. In other words, the label 5 (out of 10 classes) would be one-hot encoded as [0, 0, 0, 0, 0, 1, 0, 0, 0, 0].

In code:

# Preprocessing
def remap(y, K):
    """One-hot encode a vector of class labels (0..K-1) into an (m, K) matrix."""
    m = len(y)
    out = np.zeros((m, K))
    for index in range(m):
        out[index][y[index]] = 1  # MNIST labels run 0-9, so the label is also the column index
    return out
x_train, x_valid = x_train * 255, x_valid * 255
y_train = remap(y_train, 10)
y_valid = remap(y_valid, 10)

Next we normalise our data. A mean of zero and a standard deviation of one are mathematically convenient, as described in this document. We get a mean of zero by subtracting the mean from every data point, and a standard deviation of one by dividing by the initial standard deviation.

To keep the validation set consistent with the training set, we use the same mean and standard deviation for normalisation on the validation set. The standard deviation and mean, therefore, might not have the desired values in the validation set. This doesn’t matter, because we don’t use this dataset for training.

def normalize(x, m, s): return (x-m)/s
mean, std = x_train.mean(), x_train.std()
x_train = normalize(x_train, mean, std)
x_valid = normalize(x_valid, mean, std) # use same mean and std for validation set
mean, std = x_train.mean(), x_train.std() # sanity check: these should now be roughly 0 and 1

Loading the data

We could perform feedforward and backpropagation on every training example at once, but that would be very inefficient. By using batches, the weights can be updated more frequently, making for a shorter training time. We also use a sampler to shuffle the data. The final API is very similar to that of PyTorch.

As the data is separated into two distinct arrays (x and y) it is useful to create a data structure that holds complete training examples.

class Dataset():
    def __init__(self, x, y): self.x,self.y = x,y
    def __len__(self): return len(self.x)
    def __getitem__(self, i): return self.x[i],self.y[i]

Looping over the training examples in the same order every epoch can hurt the accuracy of the model, so let’s create a Sampler that shuffles the data upon access.

class Sampler():
    def __init__(self, ds, bs, shuffle=False):
        self.n = len(ds)
        self.bs = bs
        self.shuffle = shuffle
        
    def __iter__(self):
        self.idxs = np.random.permutation(self.n) if self.shuffle else np.arange(self.n)
        for i in range(0, self.n, self.bs): yield self.idxs[i:i+self.bs]
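
As a quick illustration (not part of the original post), the sampler yields arrays of indices one batch at a time; any object with a length works, since the sampler only uses len(ds):

for idxs in Sampler(range(10), bs=4):
    print(idxs)
# [0 1 2 3]
# [4 5 6 7]
# [8 9]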

Next, create the DataLoader. Because of the __iter__ method, we can loop over the data just like we would over an array.

def collate(b):
    xs,ys = zip(*b)
    return np.stack(xs), np.stack(ys)

class DataLoader():
    """ Abstraction for looping over datasets in batches """
    def __init__(self, ds, sampler, collate_fn = collate):
        self.ds, self.sampler, self.collate_fn = ds, sampler, collate_fn
        
    def __iter__(self):
        for s in self.sampler: yield self.collate_fn([self.ds[i] for i in s])

Now that we have the API in place, let’s load our data into it.

train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)

bs = 64
train_samp = Sampler(train_ds, bs, shuffle=True)
valid_samp = Sampler(valid_ds, bs, shuffle=False)

train_dl = DataLoader(train_ds, sampler=train_samp, collate_fn=collate)
valid_dl = DataLoader(valid_ds, sampler=valid_samp, collate_fn=collate)
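
To check that everything is wired up correctly, we can pull a single batch out of the loader (this snippet is just an illustration, not part of the original code):

xb, yb = next(iter(train_dl))
print(xb.shape, yb.shape)  # (64, 784) (64, 10)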

Activation functions

This neural network uses sigmoid activation functions. While it’s generally better to use a ReLU (rectified linear unit), for a small project like this sigmoid works just fine.

I assume you are familiar with the sigmoid function. If not, check out this post to learn more about it.

For the sake of completeness, here’s an implementation of the sigmoid function and its derivative:

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_(z):
    return sigmoid(z) * (1 - sigmoid(z))
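
As a quick sanity check (my addition, not from the original post), the analytical derivative agrees with a finite-difference approximation:

z = 0.3
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(np.isclose(numeric, sigmoid_(z)))  # True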

Feedforward

If we define the weights as w1 for layer 1 and w2 for layer 2, feedforward would look something like this:

def feedforward(x, w1, w2):
    # prepend a column of ones so the bias is part of the weight matrix
    bias = np.ones((len(x), 1))
    x_bias = np.hstack((bias, x))
    layer1 = sigmoid(x_bias @ w1)

    bias = np.ones((len(layer1), 1))
    layer1_bias = np.hstack((bias, layer1))
    output = sigmoid(layer1_bias @ w2)
    return output

Backpropagation

And backpropagation could be implemented like so:

def backprop(x, y, w1, w2):
    # forward pass, keeping the pre-activations so sigmoid_ can be applied to them
    bias = np.ones((len(x), 1))
    x_bias = np.hstack((bias, x))
    z1 = x_bias @ w1
    layer1 = sigmoid(z1)

    bias = np.ones((len(layer1), 1))
    layer1_bias = np.hstack((bias, layer1))
    z2 = layer1_bias @ w2
    output = sigmoid(z2)

    # error terms; note that sigmoid_ expects the pre-activations z, not the activations
    delta2 = 2 * (y - output) * sigmoid_(z2)
    delta1 = (delta2 @ w2[1:, :].T) * sigmoid_(z1)

    # updates (descent direction) for the non-bias weights of each layer
    w2_ = layer1.T @ delta2
    w1_ = x.T @ delta1
    return w2_, w1_

Training the neural network

Set the learning rate to 5e-3. This learning rate works very well with this model.

lr = 5e-03
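
Note that the weight matrices are never initialised in the snippets above. A minimal initialisation, assuming a hidden layer of 64 neurones (matching the 8 by 8 grid used later) and a bias row in each matrix, could look like this:

hidden = 64                                        # number of neurones in the hidden layer
w1 = np.random.randn(28 * 28 + 1, hidden) * 0.1    # input (784 pixels + bias) -> hidden
w2 = np.random.randn(hidden + 1, 10) * 0.1         # hidden (+ bias) -> 10 output classes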

The simplest training loop would look something like this:

imgs = []
for i in range(5):
    for (x, y) in train_dl:
        w2_, w1_ = backprop(x, y, w1, w2)
        w2[1:, :] += w2_ * lr
        w1[1:, :] += w1_ * lr

    # report the loss on the last training batch and on the full validation set
    train_predictions = feedforward(x, w1, w2)
    train_loss = (1 / len(x)) * np.sum(- y * np.log(train_predictions) - (1 - y) * np.log(1 - train_predictions))

    valid_predictions = feedforward(x_valid, w1, w2)
    valid_loss = (1 / len(x_valid)) * np.sum(- y_valid * np.log(valid_predictions) - (1 - y_valid) * np.log(1 - valid_predictions))
    print(i, 'training loss: ', train_loss, 'validation loss: ', valid_loss)

It prints the training loss (computed on the last batch) and the validation loss after each epoch.
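
The loop only tracks the loss; if you also want an accuracy figure, a small helper along these lines (my addition, not part of the original code) does the job:

def accuracy(preds, targets):
    # fraction of examples where the highest-scoring class matches the one-hot target
    return np.mean(np.argmax(preds, axis=1) == np.argmax(targets, axis=1))

print('validation accuracy:', accuracy(feedforward(x_valid, w1, w2), y_valid))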

You could make the training phase more efficient by using an optimizer such as Adam, but that’s not the focus of this post.

Visualizing the hidden layer

Visualizing a neurone

Now let’s get to the juicy part: looking inside the model.

The weight matrix for the first hidden layer, w1, has size 785 by 64: 785 is the size of the input layer (28 × 28 pixels) plus a bias, and 64 is the number of neurones in the hidden layer. Thus, each weight is associated with an input pixel and a neurone. We can visualise each neurone by plotting the values of the weights associated with each pixel. This gives us 28 by 28 images, the same size as the input.

Let’s first visualize one neurone by rescaling its weight values into the range (0, 255):

def visualize(neurone):
    # rescale the weights to (0, 255) so they can be rendered as a greyscale image
    neurone = (neurone - neurone.min()) / (neurone.max() - neurone.min()) * 255
    return Image.fromarray(neurone.astype(np.uint8)).convert('RGBA')
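
For example, to render the first neurone of the hidden layer (illustrative usage; the bias row of w1 is skipped, as in the grid code below):

im = visualize(w1[1:, 0].reshape(28, 28))  # weights connecting all 784 pixels to neurone 0
im.save('neurone_0.png')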

Before training, a neurone could look like this:

After training, however, much more recognizable shapes start to appear.

While it doesn’t look like a recognizable shape to us as humans, neural networks can use these weights to recognize digits.

Visualizing the hidden layer

By looping over all 64 neurones in the hidden layer, arranging them in an 8 by 8 grid (since 64 = 8 × 8), and visualizing each one, we get a beautiful grid of all neurones.

def generate_image(w):
    # each neurone's weights (excluding the bias row) form a size x size image
    size = int(np.sqrt(w[1:, :].shape[0]))
    img = Image.new('RGB', (28 * 8, 28 * 8))
    i = 0
    for row in range(8):
        for col in range(8):
            weights = w[1:, i].reshape((size, size))
            i += 1
            im = visualize(weights)
            draw = ImageDraw.Draw(im)
            draw.rectangle([(0, 0), (size-1, size-1)], width=1, outline='#ff0000')
            img.paste(im, (size * row, size * col))
    return img
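
Calling it on the trained weights and saving the result gives the full grid (illustrative usage):

generate_image(w1).save('hidden_layer.png')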

By drawing a transparent rectangle with a red border on the visualization of each neurone, the grid becomes clearer:

Animating the hidden layer

While seeing the result is interesting, seeing the network learn is even better.

The library I use for video composition is OpenCV, a very powerful framework for video editing with an easy API.

You can create a new clip like so:

fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('learn.mp4', fourcc, 70.0, (28 * 8, 28 * 8))

Writing to the video is easy too: after computing the new weights for a batch, add the following code:

out.write(cv2.cvtColor(np.asarray(generate_image(w1)), cv2.COLOR_RGB2BGR))  # PIL gives RGB, OpenCV expects BGR
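
In context, the call sits inside the batch loop. Here is a sketch based on the training loop above, writing one frame per weight update (at 70 fps, five epochs give roughly a minute of video):

for i in range(5):
    for (x, y) in train_dl:
        w2_, w1_ = backprop(x, y, w1, w2)
        w2[1:, :] += w2_ * lr
        w1[1:, :] += w1_ * lr
        # one frame of the hidden-layer grid per weight update
        out.write(cv2.cvtColor(np.asarray(generate_image(w1)), cv2.COLOR_RGB2BGR))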

Finally, to save the video to disk, use

out.release()