Neural networks are all about finding patterns; in other words, the weights *are* the patterns.

Suppose we get two inputs, and to get the right result we need to ignore the second input entirely: our weight for the second input will be 0. And suppose the first input directly affects the chance of getting A: our weight for the first input will be 1.

Adding up these weighted inputs gives us the chance of getting A:

w1 * i1 + w2 * i2 + b = (chance that it's A)
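A minimal sketch of that weighted sum, using the weights from the toy example above (1 for the first input, 0 for the second; the inputs `0.8` and `0.3` are made-up values just for illustration):

```python
# Hypothetical two-input example: weight 1 passes the first input
# through unchanged, weight 0 ignores the second input entirely.
w1, w2 = 1.0, 0.0   # weights from the example in the text
b = 0.0             # no bias correction yet

def chance_of_A(i1, i2):
    # weighted sum of the inputs plus the bias
    return w1 * i1 + w2 * i2 + b

print(chance_of_A(0.8, 0.3))  # only the first input matters here
```

Because `w2` is 0, the second input has no say in the result at all.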

**b** here is the bias, and it is what it sounds like. (The bias says, "I think the prediction is off by a fixed amount. No worries, why go to the trouble of shrinking one weight and fattening up its neighbor? From now on, I'll absorb that offset for you.")

Simple! But there's a slight problem: we want our model's outputs on a uniform scale, so we normalize them. (Normalization methods vary depending on the case; here, the logistic function won't be terrible. The logistic function is the most common example of a sigmoid, the S-shaped family of squashing functions.)

Logistic functions are used to articulate growth (things grow fast, then slow down after a case-specific point), which is quite similar to what we want.
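Here's what that squashing looks like in practice. The standard logistic function maps any real number into the range (0, 1):

```python
import math

def logistic(x):
    # squashes any real number into the open interval (0, 1)
    return 1 / (1 + math.exp(-x))

print(logistic(0))    # exactly 0.5, the midpoint
print(logistic(5))    # close to 1: the "growth" has flattened out
print(logistic(-5))   # close to 0
```

Large positive inputs saturate near 1 and large negative inputs near 0, which is exactly the fast-then-slow growth described above.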

A neural network is a jungle of weights (and biases) that adapts toward maximum accuracy through repeated training on a wide variety of data.

Store these weights along with the network structure, and you have an ML model.
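To make that concrete, here's a minimal sketch of "a model is just structure plus numbers" (the layer sizes and weights are the toy values from earlier; the JSON layout is my own assumption, not any standard format):

```python
import json

# A "model" is nothing more than the structure and the learned numbers.
model = {
    "layers": [2, 1],            # two inputs feeding one output
    "weights": [[1.0, 0.0]],     # the weights from the earlier example
    "biases": [0.0],
}

# serialize it...
saved = json.dumps(model)

# ...and anyone who loads it back can predict without retraining
restored = json.loads(saved)
print(restored["weights"])
```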

But how does a neural network find these weights (patterns)?

The cost function: a single number measuring how far the network's predictions are from the right answers.

We find which adjustments to the weights and biases most efficiently shrink the cost for one training example, then average those adjustments across all the training examples.

This is backpropagation, also called the backward pass.
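The minimize-and-average loop above can be sketched with a single weight and bias. This is a toy of my own construction (made-up data, mean-squared-error cost, hand-derived gradients, no multi-layer backprop), just to show cost going down as the adjustments are averaged over all examples:

```python
# Toy sketch: fit y = 2x with gradient descent on a squared-error cost.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # made-up (input, target) pairs

w, b = 0.0, 0.0   # start with no pattern learned at all
lr = 0.05         # learning rate: how big each adjustment step is

for step in range(500):
    # gradient of the cost w.r.t. w and b, averaged over every example
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # nudge the weight and bias downhill on the cost surface
    w -= lr * dw
    b -= lr * db

print(w, b)  # w should approach 2, b should approach 0
```

A real network repeats this same downhill nudge for every weight and bias in the jungle, layer by layer, which is where the "back" in backpropagation comes from.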