# The Math Behind the Neural Network

Last week I gave a brief introduction to neural networks, but left out most of the math. It turns out that, like genetic algorithms, neural nets have elegant mathematical properties that let programmers write efficient and effective neural network programs.

Remember how each neuron takes in charge, checks whether that charge is higher than a certain threshold, and then sends out charge to other neurons? Well, it turns out that this can be easily represented as an equation. In the simplest case you have one neuron with a single neuron before it, and a single neuron after it. The neuron has a single value, say T, which represents its threshold. If the charge coming into the neuron, say C, is larger than T, {if(C > T)}, then it will fire. C, in this case, is simply the charge from the one neuron before this one.

Okay, so we have the math logic, {if(C > T) fire!}. This neuron now gives off charge to the next one down the line. We call this amount of charge the weight of the connection, W. The weight determines how much of the neuron's charge goes to the next neuron. If a pulse has 1 charge, then the next neuron gets W * 1 = W charge.
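The single-neuron logic above can be sketched in a few lines of code. This is a minimal illustration, not my actual program; the threshold and weight values are made up.

```python
def neuron_output(incoming_charge, threshold, weight):
    """Fire if the incoming charge exceeds the threshold;
    a pulse carries 1 charge, so the next neuron receives W * 1 = W."""
    if incoming_charge > threshold:
        return weight * 1
    return 0

print(neuron_output(1.5, 1.0, 0.7))  # above threshold: fires, passes on 0.7
print(neuron_output(0.5, 1.0, 0.7))  # below threshold: passes on 0
```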

The great thing about this simple mathematical model is that it is easy to expand. Consider the following setup, 2 neurons → 1 neuron → 2 neurons. Here, C for the middle neuron is {C = Winput1 + Winput2}, and if C > T, it fires an amount Woutput1 to one output neuron and Woutput2 to the other. Simple. Easy. Efficient.
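The 2 → 1 → 2 setup can be sketched the same way. Again, all the numbers here are illustrative, and I'm assuming both input neurons fired so their full weights arrive as charge.

```python
def middle_neuron(w_in1, w_in2, threshold, w_out1, w_out2):
    """The lone middle neuron of a 2 -> 1 -> 2 network."""
    charge = w_in1 + w_in2        # C = Winput1 + Winput2
    if charge > threshold:        # if C > T, fire
        return (w_out1, w_out2)   # charge sent to the two output neurons
    return (0.0, 0.0)

print(middle_neuron(0.6, 0.5, 1.0, 0.3, 0.9))  # 1.1 > 1.0, so it fires
```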

Now consider 2 → 3 → 2. The C values for the middle layer (the 3 neurons) can be set up as follows:

```
F1*W1,1 + F2*W2,1 = C1
F1*W1,2 + F2*W2,2 = C2
F1*W1,3 + F2*W2,3 = C3
```

F is 0 or 1, depending on whether the previous neuron fired or not. Wn,m is the weight of the connection from previous-layer neuron n to next-layer neuron m.
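Written out directly in code, the three equations look like this. The F and W values are made up for the sake of the example.

```python
F = [1, 0]             # input neuron 1 fired, input neuron 2 did not
W = [[0.2, 0.5, 0.9],  # weights from input neuron 1 to middle neurons 1-3
     [0.4, 0.1, 0.3]]  # weights from input neuron 2 to middle neurons 1-3

# C[m] = F1*W1,m + F2*W2,m for each of the three middle neurons
C = [F[0] * W[0][m] + F[1] * W[1][m] for m in range(3)]
print(C)
```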

For those sharper mathematical fellows, this is a simple system of linear equations: each C is just a weighted sum of the F values, with no squares or products of unknowns. Because of that, the whole system can be written compactly as a matrix equation. The previous example reduces to the following matrix equation:

```
[W11  W21]   [F1]   [C1]
[W12  W22] * [F2] = [C2]
[W13  W23]          [C3]
```

The first matrix is called the Weight Matrix, the second I call the Pulse Matrix, and the last is the Charge Matrix. The Weight Matrix holds the strengths of the connections between two neuron layers, the Pulse Matrix records whether each neuron in the previous layer fired, and the Charge Matrix is the total charge arriving at each neuron in the next layer. To get the next Pulse Matrix, all you need to do is check each entry of the Charge Matrix against the threshold of the neuron it represents.
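Here is the same 2 → 3 step as an actual matrix product, using NumPy. The weights, pulses, and thresholds are all illustrative values, not ones from my program.

```python
import numpy as np

W = np.array([[0.2, 0.4],   # row m holds the weights into middle neuron m
              [0.5, 0.1],
              [0.9, 0.3]])
F = np.array([1, 0])        # pulse vector from the 2 input neurons

C = W @ F                   # charge arriving at the 3 middle neurons

T = np.array([0.1, 0.6, 0.5])    # thresholds for the middle neurons
next_F = (C > T).astype(int)     # the next Pulse Matrix
print(C, next_F)
```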

Using this math you can reduce a very complicated neural network into a streamlined set of matrix equations. Instead of evolving a set of neuron objects, you can adjust the contents of each layer's weight matrix and threshold matrix, and presto, you get an evolved creation. The following is a demonstration video for a juiced up version of my last program, now using matrix math.  (Note: the creatures now are more like tanks, with a left tread and a right tread.  They adjust their motion by adjusting the speed of each tread.  Hence the tight turns.)
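The streamlined scheme above amounts to repeating the matrix step once per layer. This is a hedged sketch of that idea, with made-up shapes and values rather than my program's actual matrices.

```python
import numpy as np

def feed_forward(pulses, layers):
    """Run a pulse vector through the network.
    layers: list of (weight_matrix, threshold_vector) pairs, one per layer."""
    F = np.asarray(pulses)
    for W, T in layers:
        C = W @ F                # charge arriving at this layer
        F = (C > T).astype(int)  # fire wherever charge beats the threshold
    return F

# A 2 -> 3 -> 2 network with illustrative weights and thresholds
layers = [
    (np.array([[0.8, 0.2], [0.3, 0.9], [0.5, 0.5]]), np.array([0.4, 0.4, 0.4])),
    (np.array([[0.6, 0.1, 0.7], [0.2, 0.8, 0.3]]), np.array([0.5, 0.5])),
]
print(feed_forward([1, 1], layers))
```

To "evolve" a creature under this scheme, you mutate the entries of each layer's weight matrix and threshold vector rather than individual neuron objects.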

The code for my upgraded matrix math neural net can be found here.
