Simple 3-Layer Neural Network for MNIST Handwriting Recognition

I've extended my simple 1-Layer neural network to include a hidden layer and use the back propagation algorithm for updating connection weights. The size of the network (number of neurons per layer) is dynamic. Its accuracy in classifying the handwritten digits in the MNIST database improved from 85% to >91%.

In a previous blog post I introduced a simple 1-Layer neural network for MNIST handwriting recognition. It was based on a single layer of perceptrons whose connection weights are adjusted during a supervised learning process. The result was an 85% accuracy in classifying the digits in the MNIST testing dataset.

The network used neither a real activation function (only a simple linear "normalization") nor weighted error back propagation. Now I want to add both to see how they improve the network's effectiveness. At the same time I also want to make the code more versatile and re-usable, offering a standardized interface and allowing for dynamic network sizing.

From 1 Layer to 3 Layers

The first major change in the design is adding a hidden layer between input and output. So how does a 1-layer network become a 3-layer network? It's simply a different naming convention. The 1-layer network had only a single layer of perceptrons, the output layer. In my previous design, the input to the network was NOT part of the network, i.e. whenever this input was needed (when calculating a node's output and when updating a node's weights) I referred to a MNIST image, a variable outside of the network.

When I redesigned the network I found it advantageous to include the input feed inside the network. This allows the network to be treated as an independent object (data structure) without external references.

Therefore, by adding a hidden layer, plus treating the input as a network layer as well, the 1-layer network becomes a 3-layer network.

Dynamically Sized Network

One of the challenges when redesigning the network was making its size dynamic. In C, dynamically sized objects are not as easy to implement as in higher-level languages.

The Data Model

I decided to make use of a C feature originally referred to as the struct hack until it officially became part of C as the flexible array member. The idea is simple: place an empty array at the end of a struct definition and manually allocate as much memory for the struct as it actually needs.
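To illustrate the general idea (this example is not part of the network code itself, just a minimal sketch of the technique), allocating a struct with a flexible array member looks like this:

#include <stdlib.h>

// Hypothetical example: a struct whose trailing array is sized at allocation time
struct Buffer {
    int count;          // remember how many values were allocated
    double vals[];      // flexible array member, contributes nothing to sizeof(struct Buffer)
};

struct Buffer *createBuffer(int n){
    // allocate the struct header plus room for n doubles
    struct Buffer *b = malloc(sizeof(struct Buffer) + n * sizeof(double));
    b->count = n;
    return b;
}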

I make extensive use of this feature by stacking several layers of dynamically sized data structures.

Let's start at the smallest, innermost level of a network, the node. (Node refers to the same concept as perceptron or cell, as I called it previously.)

struct Node{
    double bias;
    double output;
    int wcount;
    double weights[];
};

The number of weights of a node normally depends on the number of nodes in the previous layer (assuming a fully connected design). Since we want this number to be dynamic we use an empty array or flexible array member. But, in order to be able to access each individual weight later, we need to remember how many weights a node actually has. This is what wcount (weight count) is for.

Next, we put several of these nodes together to form a layer

struct Layer{
    int ncount;
    Node nodes[];
};

and use ncount (node count) to remember how many nodes are actually stored inside this layer.

Lastly, we stack several (for now only 3: input, hidden and output) of such layers into a network:

struct Network{
    int inpNodeSize;
    int inpLayerSize;
    int hidNodeSize;
    int hidLayerSize;
    int outNodeSize;
    int outLayerSize;
    Layer layers[];
};

Since the number of layers (for now) is fixed, i.e. always 3 (input, hidden, output), I did not add a variable lcount (layer count) to remember the number of layers. This could be done later if the number of (hidden) layers inside the network is required to be dynamic.

Manual Memory Allocation

Once the data model for the network and its components is defined, we need to manually allocate memory for each part. If we assume that all layers are fully connected, i.e. each node connects to all nodes in the following layer, then the overall size of the network only depends on 3 numbers:

1. Size of the input vector (= number of pixels of a MNIST image)
2. Number of nodes in the hidden layer
3. Number of nodes in the output layer

Hence, we'll create an interface for creating the network which looks like this:

Network *createNetwork(int size_of_input_vector,
                       int number_of_nodes_in_hidden_layer,
                       int number_of_nodes_in_output_layer);

Now we calculate the size of each node type (input, hidden, output) as well as the required memory for each of the 3 layers. Adding up the layers' sizes then gives us the size of the overall network.

Network *createNetwork(int inpCount, int hidCount, int outCount){

    // Calculate size of INPUT Layer
    int inpNodeSize     = sizeof(Node);         // Input layer has 0 weights
    int inpLayerSize    = sizeof(Layer) + (inpCount * inpNodeSize);

    // Calculate size of HIDDEN Layer
    int hidWeightsCount = inpCount;
    int hidNodeSize     = sizeof(Node) + (hidWeightsCount * sizeof(double));
    int hidLayerSize    = sizeof(Layer) + (hidCount * hidNodeSize);

    // Calculate size of OUTPUT Layer
    int outWeightsCount = hidCount;
    int outNodeSize     = sizeof(Node) + (outWeightsCount * sizeof(double));
    int outLayerSize    = sizeof(Layer) + (outCount * outNodeSize);

    // Allocate memory block for the network
    Network *nn = (Network*)malloc(sizeof(Network) + inpLayerSize + hidLayerSize + outLayerSize);

    return nn;
}

Since we will need to refer to the above values throughout other parts of our code, we store them as part of the network:

    nn->inpNodeSize     = inpNodeSize;
    nn->inpLayerSize    = inpLayerSize;
    nn->hidNodeSize     = hidNodeSize;
    nn->hidLayerSize    = hidLayerSize;
    nn->outNodeSize     = outNodeSize;
    nn->outLayerSize    = outLayerSize;

Once the network is created, it's still only an empty memory block. We now need to fill this memory block with the 3-tier layer structure.

The way we do this is by creating each layer as a temporary, independent object, i.e. in a different memory block. Then we copy this memory block into the larger memory block allocated for the network and delete (free) the memory block of the temporary layer.

Obviously, the most critical point here is to copy the layer to the correct memory address as required by the data model. Since we calculated each node's and each layer's size, this can be easily done using a single byte pointer.

I put this into a separate function and call it initializing the network:

void initNetwork(Network *nn, int inpCount, int hidCount, int outCount){

    // Copy the input layer into the network's memory block and delete it
    Layer *il = createInputLayer(inpCount);
    memcpy(nn->layers,il,nn->inpLayerSize);
    free(il);

    // Move pointer to end of input layer = beginning of hidden layer
    uint8_t *sbptr = (uint8_t*) nn->layers;     // single byte pointer
    sbptr += nn->inpLayerSize;

    // Copy the hidden layer into the network's memory block and delete it
    Layer *hl = createLayer(hidCount, inpCount);
    memcpy(sbptr,hl,nn->hidLayerSize);
    free(hl);

    // Move pointer to end of hidden layer = beginning of output layer
    sbptr += nn->hidLayerSize;

    // Copy the output layer into the network's memory block and delete it
    Layer *ol = createLayer(outCount, hidCount);
    memcpy(sbptr,ol,nn->outLayerSize);
    free(ol);
}

Let's look at this in more detail: A single byte pointer is simply a pointer to memory in units of 1 byte. This means we can easily move it through the address space, either by incrementing it via pointer++ or by adding the number of bytes we want it to move forward via pointer += bytes_to_move_forward.

First, we define the pointer and make it point to the memory address where the first layer, the input layer, should be located:

uint8_t *sbptr = (uint8_t*) nn->layers;     // single byte pointer

Then, we move the pointer forward to the memory address of the 2nd layer, the hidden layer, simply by increasing the pointer by the size of the input layer:

sbptr += nn->inpLayerSize;

This should make the initNetwork() function above easier to follow. Inside it, we create the layers by calling the createInputLayer() and createLayer() functions, which create and return (a pointer to) a Layer object that is already pre-filled with default values.

Let's start with the createInputLayer() function:

Layer *createInputLayer(int inpCount){

    int inpNodeSize     = sizeof(Node);         // Input layer has 0 weights
    int inpLayerSize    = sizeof(Layer) + (inpCount * inpNodeSize);

    Layer *il = malloc(inpLayerSize);
    il->ncount = inpCount;

    // Create a default input layer node
    Node iln;
    iln.bias = 0;
    iln.output = 0;
    iln.wcount = 0;

    // Use a single byte pointer to fill in the network's content
    uint8_t *sbptr = (uint8_t*) il->nodes;

    // Copy the default input layer node x times
    for (int i=0;i<il->ncount;i++){
        memcpy(sbptr,&iln,inpNodeSize);
        sbptr += inpNodeSize;
    }

    return il;
}

The input layer is different from the other 2 layers (hidden and output) in two aspects:

  1. As I outlined above, the input layer is strictly speaking not a real network layer, i.e. its nodes are not perceptrons because this layer does not have any connections, and hence no weights, to a previous layer. For our code this means that the size of a node's weight array (weights[]) is 0 and therefore the size of an input node equals the size of the node struct itself, sizeof(Node).

  2. The second major difference of the input layer is that its main objective is storing the input that is fed into the network. But for coding consistency and simplicity I want to use the same data model with the same naming conventions for the input layer as for the other two layers. Therefore, I use the node's output variable to store the network's input. (Keep this point in mind for later!)

Next, let's look at the function that creates a hidden or an output layer. Since both layers share the same structure, we can use the same function.

Layer *createLayer(int nodeCount, int weightCount){

    int nodeSize = sizeof(Node) + (weightCount * sizeof(double));
    Layer *l = (Layer*)malloc(sizeof(Layer) + (nodeCount*nodeSize));

    l->ncount = nodeCount;

    // create a default node
    Node *dn = (Node*)malloc(sizeof(Node) + (weightCount*sizeof(double)));
    dn->bias = 0;
    dn->output = 0;
    dn->wcount = weightCount;
    for (int o=0;o<weightCount;o++) dn->weights[o] = 0; // will be initialized later

    uint8_t *sbptr = (uint8_t*) l->nodes;     // single byte pointer

    // copy the default node into the layer
    for (int i=0;i<nodeCount;i++) memcpy(sbptr+(i*nodeSize),dn,nodeSize);

    free(dn);

    return l;
}

The above code first creates an empty memory block for the layer and then fills it with n copies of an empty default node.

Lastly, the nodes' weights must be initialized with random values which we do via the following function:

void initWeights(Network *nn, LayerType ltype){

    int nodeSize = 0;
    if (ltype==HIDDEN) nodeSize=nn->hidNodeSize;
                  else nodeSize=nn->outNodeSize;

    Layer *l = getLayer(nn, ltype);

    uint8_t *sbptr = (uint8_t*) l->nodes;

    for (int o=0; o<l->ncount;o++){

        Node *n = (Node *)sbptr;

        for (int i=0; i<n->wcount; i++){
            n->weights[i] = rand()/(double)(RAND_MAX);
        }

        // init bias weight
        n->bias =  rand()/(double)(RAND_MAX);

        sbptr += nodeSize;
    }
}

I'll come back to this point of random initialization later (see below), because I had to learn how greatly these initial values impact network performance.

Training the Network

Once the network has been created and pre-filled with random weights we can start training it. The training algorithm uses the following steps:

1. Feed image data into the network
2. Calculate node outputs of *hidden* and *output* layers (= FEED FORWARD)
3. Back-propagate the error and adjust the weights (= FEED BACKWARD)
4. Classify the image (*guess* what digit is presented in the image)

1. Feeding Data into the Network

As outlined in my previous MNIST code example the data we want to feed into the network exists in the form of a MNIST_Image structure which holds a 28*28 pixel image.

In order to maintain a standardized interface for this neural network code, I don't want to feed the MNIST_Image object directly into the network. Instead, I first convert it to a neutral Vector structure.

The Vector makes again use of C's flexible array member functionality so that it can be of dynamic size.

struct Vector{
    int size;
    double vals[];
};

We convert the MNIST_Image structure into this Vector structure using the following function:

Vector *getVectorFromImage(MNIST_Image *img){

    Vector *v = (Vector*)malloc(sizeof(Vector) + (28 * 28 * sizeof(double)));

    v->size = 28 * 28;

    for (int i=0;i<v->size;i++)
        v->vals[i] = img->pixel[i] ? 1 : 0;

    return v;
}

Now we can feed this input vector into the network:

void feedInput(Network *nn, Vector *v) {

    Layer *il;
    il = nn->layers;

    Node *iln;
    iln = il->nodes;

    // Copy the vector content to the "output" field of the input layer nodes
    for (int i=0; i<v->size;i++){
        iln->output = v->vals[i];
        iln++;
    }
}

As I explained above, I chose to utilize the output field of an input node to hold the image's pixels.

2. Feed Forward

Once the network is filled with an image's pixels, we can start feeding this data forward, from the input layer to the hidden layer and from the hidden layer to the output layer.

void feedForwardNetwork(Network *nn){
    calcLayer(nn, HIDDEN);
    calcLayer(nn, OUTPUT);
}

Feed forward means calculating each node's output value by multiplying its weights with the output values of the previous layer's nodes it is connected to. This mechanism is the same as in my simple 1-Layer NN MNIST code, so feel free to check there if the code below is not clear.
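Written as a formula (my shorthand, not new functionality), the output of a node j is

    \text{output}_j = f\left(\text{bias}_j + \sum_i w_{j,i} \cdot \text{output}_i\right)

where the sum runs over the nodes i of the previous layer and f is the activation function discussed further below. The code splits this into two steps per node: computing the weighted sum and then applying the activation function.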

void calcLayer(Network *nn, LayerType ltype){
    Layer *l;
    l = getLayer(nn, ltype);

    for (int i=0;i<l->ncount;i++){
        calcNodeOutput(nn, ltype, i);
        activateNode(nn,ltype,i);
    }
}

Calculating the hidden layer and the output layer works the same way, hence we can use the same function. However, in order to know which type of layer we're working on, we introduce a

typedef enum LayerType {INPUT, HIDDEN, OUTPUT} LayerType;

and pass it as an argument to the calcLayer() function which itself applies the following steps to each of its nodes:

1. Calculating the node's output
2. Executing an 'activation function' on the node's output

So, let's first calculate a node's output value:

void calcNodeOutput(Network *nn, LayerType ltype, int id){

    Layer *calcLayer = getLayer(nn, ltype);
    Node *calcNode = getNode(calcLayer, id);

    Layer *prevLayer;
    int prevLayerNodeSize = 0;

    if (ltype==HIDDEN) {
        prevLayer = getLayer(nn, INPUT);
        prevLayerNodeSize = nn->inpNodeSize;
    }
    else {
        prevLayer = getLayer(nn, HIDDEN);
        prevLayerNodeSize = nn->hidNodeSize;
    }

    uint8_t *sbptr = (uint8_t*) prevLayer->nodes;

    // Start by adding the bias
    calcNode->output = calcNode->bias;

    for (int i=0; i<prevLayer->ncount;i++){
        Node *prevLayerNode = (Node*)sbptr;
        calcNode->output += prevLayerNode->output * calcNode->weights[i];
        sbptr += prevLayerNodeSize;
    }
}

To do this we need to loop through the node's connections, i.e. loop through its weights and the output values of the previous layer's nodes it is connected to. However, since we defined the network's components (layers, nodes, weights) using flexible array members, the compiler does not know where one component ends and the next one starts. Hence, it is not possible to locate, for example, a node by simply referring to the n-th member of the node array as layer.nodes[n]!

Since both a layer's and a node's size are unknown to the compiler I use a single byte pointer to navigate through the allocated memory space. The above code does so by calling a getLayer() and getNode() function to access a particular layer or a particular node inside the network.

Node *getNode(Layer *l, int nodeId) {

    int nodeSize = sizeof(Node) + (l->nodes[0].wcount * sizeof(double));
    uint8_t *sbptr = (uint8_t*) l->nodes;

    sbptr += nodeId * nodeSize;

    return (Node*) sbptr;
}

Let's take a more detailed look at how this works. To access a particular node we simply set the pointer to the first node of this layer and then move the pointer forward by the number of nodes (given as the node's id) times the size of a node. The latter is calculated by adding the default size of a node sizeof(Node) to the product of the number of weights (wcount) times the size of a weight sizeof(double).

Accessing a particular Layer works similarly. We set the pointer to the first layer (given by nn->layers) and then move it forward as required.

To access the input layer, we don't need to move forward at all since it is the first layer in the network. To access the hidden layer, we need to move the pointer forward by the size of the input layer. And, to access the output layer, we need to move the pointer forward by the size of the input layer plus the size of the hidden layer.

Layer *getLayer(Network *nn, LayerType ltype){

    Layer *l;

    switch (ltype) {
        case INPUT:{
            l = nn->layers;
            break;
        }
        case HIDDEN:{
            uint8_t *sbptr = (uint8_t*) nn->layers;
            sbptr += nn->inpLayerSize;
            l = (Layer*)sbptr;
            break;
        }

        default:{ // OUTPUT
            uint8_t *sbptr = (uint8_t*) nn->layers;
            sbptr += nn->inpLayerSize + nn->hidLayerSize;
            l = (Layer*)sbptr;
            break;
        }
    }

    return l;
}

If you've followed and understood this post so far, the above code should be self-explanatory. Yet, one line is worth highlighting:

l = (Layer*)sbptr;

I'm using a single byte pointer to move through the address space but I actually need to return a Layer pointer as reference. Therefore, I cast the single byte pointer into a Layer pointer and make it point to the same address.

Activation Function

After the node's output value has been calculated we need to pass this value through an activation function. The purpose of the activation function is to constrain the output value (which could be very large) to a standard range, for example, to between 0 and +1, or to between -1 and +1.

The exact range depends on which activation function you use. The one most often referred to in the context of designing neural networks is the SIGMOID function. I'm not going into the mathematical details of this function as this is outside the scope of this post. If you're interested, there are plenty of good explanations of SIGMOID and other activation functions on the web.
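For reference, the two functions used in this code, together with their output ranges, are:

    \sigma(x) = \frac{1}{1 + e^{-x}} \in (0,\,1) \qquad\qquad \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \in (-1,\,1)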

Since I want the network to be flexible as to which activation function it uses, I decided to make this a parameter of the network. The current code supports two in particular, SIGMOID and TANH, but this list could be expanded as needed.

typedef enum ActFctType {SIGMOID, TANH} ActFctType;

What type of activation function is desired can be defined after the network has been created. It is also possible to choose a different activation function for hidden and output layer.

    nn->hidLayerActType = SIGMOID;
    nn->outLayerActType = SIGMOID;

The reason why I made this a network parameter instead of a function argument passed into the activation function is that the back-propagation algorithm (see below) requires the derivative of this function. We need to remember how a node was activated because when we later adjust its weights the derivative of the same function must be used.
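For reference, the standard derivatives of the two supported activation functions are:

    \sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr) \qquad\qquad \tanh'(x) = 1 - \tanh^2(x)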

The different types of activation function the network supports are implemented in the activateNode() function below, which applies what has been defined in the nn->hidLayerActType and nn->outLayerActType variables to the hidden and output layer respectively:

void activateNode(Network *nn, LayerType ltype, int id){

    Layer *l = getLayer(nn, ltype);
    Node *n = getNode(l, id);

    ActFctType actFct;

    if (ltype==HIDDEN) actFct = nn->hidLayerActType;
    else actFct = nn->outLayerActType;

    if (actFct==TANH)   n->output = tanh(n->output);
    else n->output = 1 / (1 + (exp((double)-n->output)) );
}

3. Feed Backward (Error Back-Propagation)

After we finish the feed forward we start to back-propagate the error. The error here refers to the difference between the desired or target output (which in MNIST is provided as the image's label) and the actual output of the output layer nodes.

Let's do this step by step, or layer by layer:

void backPropagateNetwork(Network *nn, int targetClassification){

    backPropagateOutputLayer(nn, targetClassification);

    backPropagateHiddenLayer(nn, targetClassification);
}

We first back propagate the output layer:

void backPropagateOutputLayer(Network *nn, int targetClassification){

    Layer *ol = getLayer(nn, OUTPUT);

    for (int o=0;o<ol->ncount;o++){

        Node *on = getNode(ol,o);

        int targetOutput = (o==targetClassification)?1:0;

        double errorDelta = targetOutput - on->output;
        double errorSignal = errorDelta * getActFctDerivative(nn, OUTPUT, on->output);

        updateNodeWeights(nn, OUTPUT, o, errorSignal);

    }
}

Our targetClassification is the image's label, i.e. a single digit 0-9. The targetOutput of an output node, however, is a binary value, i.e. either 0 or 1.

For example: if we train on an image showing a "3", the corresponding label will be the integer "3". The targetOutput of node 3 (which is actually the 4th node since we start counting at 0) will then be "1" while the targetOutput of the other 9 nodes will be "0". This is done via the line below:

int targetOutput = (o==targetClassification)?1:0;

The back propagation of the hidden layer works the same way, yet the actual algorithm is a little more complex since it first requires calculating, for each hidden node, the sum of the weighted error signals of the output nodes it connects to. I'm not going into the details of the back propagation algorithm in this post. If you're interested, there are plenty of detailed and more qualified sources on this subject on the web.
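To give an idea of what this looks like in code, below is a minimal sketch of a backPropagateHiddenLayer() function, reusing the helper functions introduced above. It recomputes each output node's error signal, sums those signals weighted by the connection to the hidden node, and then updates the hidden node's weights. Treat this as an illustration of the idea; it may differ in detail from the published code in the repository.

void backPropagateHiddenLayer(Network *nn, int targetClassification){

    Layer *ol = getLayer(nn, OUTPUT);
    Layer *hl = getLayer(nn, HIDDEN);

    for (int h=0; h<hl->ncount; h++){

        Node *hn = getNode(hl, h);
        double outputErrorSum = 0;

        // Sum the weighted error signals of all output nodes this hidden node connects to
        for (int o=0; o<ol->ncount; o++){
            Node *on = getNode(ol, o);
            int targetOutput = (o==targetClassification)?1:0;

            double errorDelta  = targetOutput - on->output;
            double errorSignal = errorDelta * getActFctDerivative(nn, OUTPUT, on->output);

            outputErrorSum += errorSignal * on->weights[h];
        }

        // Multiply by the derivative of the hidden node's own activation
        double hiddenErrorSignal = outputErrorSum * getActFctDerivative(nn, HIDDEN, hn->output);

        updateNodeWeights(nn, HIDDEN, h, hiddenErrorSignal);
    }
}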

In both of the above back propagation functions (for the output layer and for the hidden layer) we need to get the derivative of the activation function and update the node's weights.

The calculation of the derivatives of the supported SIGMOID and TANH functions is implemented in the getActFctDerivative() function:

double getActFctDerivative(Network *nn, LayerType ltype, double outVal){

    double dVal = 0;
    ActFctType actFct;

    if (ltype==HIDDEN) actFct = nn->hidLayerActType;
                  else actFct = nn->outLayerActType;

    if (actFct==TANH) dVal = 1-pow(tanh(outVal),2);
                 else dVal = outVal * (1-outVal);

    return dVal;
}

Lastly, each node's weights are updated using the back propagated error:

void updateNodeWeights(Network *nn, LayerType ltype, int id, double error){

    Layer *updateLayer = getLayer(nn, ltype);
    Node *updateNode = getNode(updateLayer, id);

    Layer *prevLayer;
    int prevLayerNodeSize = 0;
    if (ltype==HIDDEN) {
        prevLayer = getLayer(nn, INPUT);
        prevLayerNodeSize = nn->inpNodeSize;
    } else {
        prevLayer = getLayer(nn, HIDDEN);
        prevLayerNodeSize = nn->hidNodeSize;
    }

    uint8_t *sbptr = (uint8_t*) prevLayer->nodes;

    for (int i=0; i<updateNode->wcount; i++){
        Node *prevLayerNode = (Node*)sbptr;
        updateNode->weights[i] += (nn->learningRate * prevLayerNode->output * error);
        sbptr += prevLayerNodeSize;
    }

    // update bias weight
    updateNode->bias += (nn->learningRate * 1 * error);
}

On a side note, I decided to make the learning rate part of the network (instead of a global constant). It can be set as follows:

nn->learningRate    = 0.5;

4. Classify Output

Now that the error has been back propagated and the weights have been updated, we can query our network to attempt to classify the image. This is simply done by retrieving the index of the output node with the highest value.

int getNetworkClassification(Network *nn){

    Layer *l = getLayer(nn, OUTPUT);

    double maxOut = 0;
    int maxInd = 0;

    for (int i=0; i<l->ncount; i++){

        Node *on = getNode(l,i);

        if (on->output > maxOut){
            maxOut = on->output;
            maxInd = i;
        }
    }

    return maxInd;
}
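To show how the pieces fit together, here is a rough sketch of a training run over the training images. The image/label reading functions are placeholders (see the previous post and the repository for the actual MNIST file handling), and whether createNetwork() already calls initNetwork() and initWeights() internally depends on the final implementation:

// Sketch only: getNextImage() and getNextLabel() are placeholders for the
// MNIST file-reading routines used in the previous post.
void trainNetworkSketch(int trainingCount){

    Network *nn = createNetwork(28*28, 20, 10);
    initNetwork(nn, 28*28, 20, 10);
    initWeights(nn, HIDDEN);
    initWeights(nn, OUTPUT);

    nn->learningRate    = 0.5;
    nn->hidLayerActType = SIGMOID;
    nn->outLayerActType = SIGMOID;

    int errCount = 0;

    for (int i=0; i<trainingCount; i++){

        MNIST_Image img = getNextImage();   // placeholder: read next image
        int label       = getNextLabel();   // placeholder: read its label

        Vector *inpVector = getVectorFromImage(&img);
        feedInput(nn, inpVector);           // 1. feed image data into the network
        free(inpVector);

        feedForwardNetwork(nn);             // 2. feed forward (hidden + output layer)

        backPropagateNetwork(nn, label);    // 3. back-propagate the error

        int classification = getNetworkClassification(nn);  // 4. classify
        if (classification != label) errCount++;
    }

    free(nn);
}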

That's it. We're done. So I thought. Only to discover that this is when the other, less obvious part of building a neural network starts.

Fine-tuning Network Performance

The neural network's accuracy is defined as the ratio of correct classifications (in the testing set) to the total number of images processed.

Using the code above, my 3-layer network achieves an out-of-the-box accuracy of (only) 91%, which is slightly better than the 85% of the simple 1-layer network I built before. Why is the improvement so small?

This post so far may give the impression that building and coding a neural network is a pretty straightforward and deterministic exercise. Unfortunately, it's not.

There are a number of parameters that may strongly impact the network's performance. Sometimes even a slight change dramatically affects the network's accuracy, which during the coding process for this post often led me to believe that my algorithm and/or code were wrong when they were not. The parameters were simply set poorly.

In particular, I found the following parameters to most significantly impact the network's performance:

1. Number of nodes in the hidden layer
2. Type of activation function
3. Values of initial weights
4. Learning rate

Let's go through them.

Number of Nodes in the Hidden Layer

This one is obvious. The higher the number of hidden nodes the more the network will adapt to the training data and "remember" it, thereby preventing generalization. But generalization, i.e. the ability to apply features that have been learned during training to completely new and unknown data sets, is exactly what we do want from our network.

On the other hand, the smaller the number of nodes in the hidden layer, the less complexity the network will be able to grasp. For example, in our MNIST database, there may be many different ways of hand-writing a "4". The more complex our data set, the more hidden nodes are needed.

Via numerous trials I found that using 20 nodes in the hidden layer achieves the best result.

Activation Function

This one is also obvious, albeit a little less so. I found that changing the activation function requires changing additional parameters as well to avoid a significant performance hit. In particular, the activation function is linked to which initial weights are chosen (see below) and which learning rate is defined (see below).

In my tests I found that SIGMOID performed slightly better than TANH, although I suppose this is rather due to my overall design. Based on my reading I had expected TANH to outperform SIGMOID.

Initial Weights

This one is a lot less obvious. The weights in neural networks are usually initialized with random values between 0 and +1 or between -1 and +1. In fact, in my previous 1-layer MNIST network I had found that initializing all weights to 0.5 leads to almost the same result as initializing them to a random value between 0 and +1.

Now, with the added hidden layer and the use of an activation function, the network behaves very differently.

I started by using random values between 0 and +1 and wondered about the disastrous performance. I then changed the initialization to include negative values, i.e. random values between -1 and +1, which improved performance a lot. If you understand how SIGMOID and TANH work, though, it's less of a surprise why this subtle change has a rather large impact.

Yet, while I was trying to further fine-tune the network, I found that another subtle change significantly improved the network's accuracy: instead of using random values between -1 and +1 I used values between 0 and +1 and made every second value negative.

    for (int o=0;o<weightCount;o++) {
        dn->weights[o] = (0.5*(rand()/(double)(RAND_MAX)));
        if (o%2) dn->weights[o] = -dn->weights[o];  // make half of the numbers negative
    }

Wow! This was unexpected. It led me to the conclusion that for best performance the network needs to be initialized not only randomly but evenly random or symmetrically random.

Learning Rate

While this parameter is obvious, its significant impact on the network's performance surprised me several times.

The learning rate defines the factor by which an error is applied as a weight update. The higher the learning rate the faster the network adapts, i.e. the faster it should reach its point of optimal performance. However, if the rate is too high the network may jump over the optimum and may settle in a point of lower accuracy.

On the other hand, a smaller learning rate allows the network to make very fine changes, but it may take so long that the network reaches the end of the dataset before reaching its optimal point of performance.
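Expressed as a formula (simply restating what updateNodeWeights() does above), each weight is changed by

    \Delta w_i = \text{learningRate} \cdot \text{error} \cdot \text{output}_i^{\,\text{prev}}

where error is the back-propagated error signal of the node and output_i^prev is the output of the i-th node in the previous layer; the bias is updated the same way with an input of 1.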

In my tests I found that the optimal learning rate depends on the chosen type of activation function. While for SIGMOID my network did well using a learning rate of 0.2 (91% accuracy), I had to change it to 0.004 to achieve highest performance with TANH (78% accuracy).

Conclusion

Coding this 3-layer neural network to recognize the MNIST digits has been an interesting exercise. It provided valuable insights into the unpredictability of neural networks. In particular, it alerted me to how small parameter changes can cause very different results.

Overall, the network's performance, i.e. its accuracy in recognizing the MNIST digits, is still disappointing. Further fine-tuning is required, e.g. using a dynamic learning rate. I experimented with different parameters and algorithm changes which helped push accuracy further into the 9x% range (but I did not implement them in the published code).

Next, instead of spending much more time on fine-tuning this rather simple feed forward network to further improve its performance on MNIST, I'd rather move on to using a convolutional network. ;)


Code & Documentation

You can find all the code for this exercise on my Github project page, including full code documentation.

When I run it on my 2010 MacBook Pro, using 784 input nodes, 20 hidden nodes and 10 output nodes, it takes about 19 seconds to process all 70,000 images (with image rendering turned off) and achieves an accuracy on the testing set of 91.5%.

Happy Hacking!