Working Of A Neural Network
Artificial neural networks (ANNs), also known simply as neural networks, are computing systems inspired by the biological neural networks that make up animal brains. An ANN is built from a network of connected units or nodes known as artificial neurons, which are loosely modelled on the neurons in the human brain.
As seen in the image above, a neural network is basically divided into three layers: the input layer, the hidden layer, and the output layer.
Input Layer
The input layer brings the initial data into the system for further processing by subsequent layers of artificial neurons. It is the very beginning of the workflow for the artificial neural network.
Before learning about the working of the hidden layer, let’s first understand what weights and biases are in a neural network.
What are Weights In A Neural Network
Weights represent the strength of the connection between units; a weight can increase or reduce the importance of an input value. A weight near 0 indicates that a change in the input has little effect on the output, while a negative weight indicates that increasing the input will reduce the output. So, in essence, the weight determines how much influence an input has on the output.
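To make this concrete, here is a minimal sketch (with made-up values, not from the article) of how a weight scales an input’s influence on a neuron’s output:

```python
# Sketch: a weight scales how much an input contributes to the output.
def neuron_output(x, w):
    """Contribution of input x to the output, given weight w."""
    return x * w

x = 2.0
print(neuron_output(x, 0.9))   # large positive weight: strong influence
print(neuron_output(x, 0.0))   # weight near 0: input has almost no effect
print(neuron_output(x, -0.5))  # negative weight: increasing x lowers the output
```

With the same input, changing only the weight flips the output from strongly positive to zero to negative, which is exactly the behaviour described above.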
Significance Of Weights In A Neural Network
Assume we want to train a model that can tell whether an image is of a cat or a dog. First, we train the model with images of dogs. The recognizable features of a dog, or the features that distinguish a dog from a cat, can be its eyes, ears, nose, and so on.
Assume, for example, that the dog has a distinctive nose, and that the weights range from 0 to 1. To give the nose more importance, we assign its connections (coloured in blue) a weight of 0.6, and the other features, such as the ears and eyes, a weight of 0.2 each. I hope this example has helped you understand the significance of weights in a neural network.
What Is Bias?
In neural networks, the activation function takes an input ‘x’ multiplied by a weight ‘w’. By adding a constant (the bias) to this input, you can shift the activation function. Bias in neural networks is analogous to the constant in a linear function, where the constant value effectively translates the line.
In the absence of bias, the activation function’s input is ‘x’ multiplied by the connection weight ‘w0’.
With a bias, the activation function’s input is ‘x’ times the connection weight ‘w0’ plus the bias times the connection weight for the bias ‘w1’. This shifts the activation function’s input by a constant amount (b * w1).
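The shift can be seen directly in a short sketch. The values of x, w0, b, and w1 below are illustrative, not from the article, and the sigmoid is used as the activation function since it appears later in this article:

```python
import math

def sigmoid(t):
    """Sigmoid activation: squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-t))

x, w0 = 1.0, 0.5   # input and its connection weight (illustrative values)
b, w1 = 1.0, -2.0  # bias input and the bias's connection weight

without_bias = sigmoid(x * w0)            # input is just x * w0
with_bias = sigmoid(x * w0 + b * w1)      # input is shifted by b * w1
print(without_bias, with_bias)
```

Same x and w0, but the added b * w1 term moves the activation’s input (and hence its output), which is the shifting behaviour described above.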
Hidden Layer
Finally, we can look at what happens inside the hidden layer. Once the inputs and their weights reach the hidden layer, two specific operations take place.
(1) Summation Of Weights and Inputs
In this step we compute the following:

Let’s say our output is ‘y’, our inputs are ‘x1, x2, x3’, and the corresponding weights are ‘w1, w2, w3’.

y = x1*w1 + x2*w2 + x3*w3 + bias     (equation 1)

Here we also add the bias; now you know what the importance of bias is and why we add it.
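Equation 1 can be sketched in a few lines of code. The input and weight values below are made up for illustration (the nose feature is weighted higher, as in the dog example earlier):

```python
# Sketch of equation 1: y = x1*w1 + x2*w2 + x3*w3 + bias
def weighted_sum(xs, ws, bias):
    """Sum of each input times its weight, plus the bias."""
    return sum(x * w for x, w in zip(xs, ws)) + bias

xs = [0.5, 0.3, 0.2]   # illustrative inputs (nose, ears, eyes)
ws = [0.6, 0.2, 0.2]   # illustrative weights (nose weighted higher)
bias = 0.1
y = weighted_sum(xs, ws, bias)
print(y)  # approximately 0.5
```

This value of y is what gets passed to the activation function in the next step.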
(2) Applying Activation Function z =Act(y)
After that we apply an activation function; let’s say we use the sigmoid, which is given by z = 1/(1 + e^(-y)).

So the value of y from equation 1 is passed into the sigmoid activation function z = 1/(1 + e^(-y)), which gives us a value between 0 and 1. If the value is < 0.5 the neuron will not be activated, and if the value is > 0.5 the neuron will be activated (as seen in the dog image prediction example before).
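Putting the two hidden-layer steps together, here is a sketch that feeds an illustrative weighted sum through the sigmoid and applies the 0.5 threshold described above (the value of y is made up for the example):

```python
import math

# Step 2: sigmoid activation z = 1/(1 + e^(-y)), then the 0.5 threshold.
def sigmoid(y):
    return 1.0 / (1.0 + math.exp(-y))

y = 0.5                # illustrative weighted sum from equation 1
z = sigmoid(y)         # squashed into (0, 1)
activated = z > 0.5    # threshold described in the article
print(round(z, 4), activated)
```

Since y is positive, the sigmoid output is above 0.5 and the neuron is activated.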
Output Layer
The output layer is the final layer: it holds the result or output of the problem. Raw images are passed to the input layer, and we receive the result at the output layer.
I hope you found this short article on neural networks helpful. Do share it with your friends and colleagues.