Neuron (neural network)


Definition

Block diagram of a generic neuron with inputs $I_1, \dots, I_n$
A neuron in a neural network has:
  • an output domain, $O$, typically $[-1,1] \subseteq \mathbb{R}$ or $[0,1] \subseteq \mathbb{R}$
    • Usually $\{0,1\}$ for input and output neurons
  • some inputs, $I_i$, typically $I_i \in \mathbb{R}$
  • some weights, one for each input, $w_i$, again $w_i \in \mathbb{R}$
  • a way to combine each input with a weight (typically multiplication, $I_i w_i$), creating an "input activation", $A_i \in \mathbb{R}$
  • a bias, $\theta$ (of the same type as the result of combining an input with a weight. Typically this can be simulated by having a fixed "on" input and treating the bias as another weight) - another input activation, $A_0$
  • a way to combine the input activations, typically: $\sum_{j=0}^{n} A_j = \sum_{j=1}^{n} I_j w_j + \theta$
  • an activation function, $A(\cdot): \mathbb{R} \to O \subseteq \mathbb{R}$, which maps the combined input activations to an output value.

In the example to the right, the output of the neuron would be:

  • $A\left(\sum_{i=1}^{n}(I_i w_i) + \theta\right)$
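As a rough illustration of the definition above (a minimal sketch, not part of the source; the function names and the choice of a sigmoid activation are assumptions), the output of a generic neuron can be computed as:

<syntaxhighlight lang="python">
import math

def neuron_output(inputs, weights, bias, activation):
    """Sketch of a generic neuron: weight each input, add the bias, apply the activation."""
    # Input activations A_i = I_i * w_i, with the bias theta playing the role of A_0.
    combined = sum(i * w for i, w in zip(inputs, weights)) + bias
    return activation(combined)

# Example with a sigmoid activation, which maps R into (0, 1).
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
print(neuron_output([0.5, 1.0, -0.3], [0.8, -0.2, 0.4], bias=0.1, activation=sigmoid))
</syntaxhighlight>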

Specific models

For an exhaustive list see Category:Types of neuron in a neural network

McCulloch-Pitts neuron

Diagram of a McCulloch-Pitts neuron
The McCulloch-Pitts neuron has[1]:
  • Inputs: $(I_1, \dots, I_n) \in \mathbb{R}^n$
    • Usually each $I_i$ is confined to $[0,1] \subseteq \mathbb{R}$ or $[-1,1] \subseteq \mathbb{R}$
  • A set of weights, one for each input: $(w_1, \dots, w_n) \in \mathbb{R}^n$
  • A bias: $\theta \in \mathbb{R}$
  • An activation function, $A: \mathbb{R} \to \mathbb{R}$
    • It is more common to see $A: \mathbb{R} \to [-1,1] \subseteq \mathbb{R}$ or sometimes $A: \mathbb{R} \to [0,1] \subseteq \mathbb{R}$ than the entirety of $\mathbb{R}$

The output of the neuron is given by:

$\mathrm{Output} := A\left(\sum_{i=1}^{n}(I_i w_i) + \theta\right)$
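The sketch below illustrates this output formula with a hard-threshold (step) activation, a common choice for McCulloch-Pitts neurons; note that the definition above only requires $A: \mathbb{R} \to \mathbb{R}$, so the step function and the particular weights and bias are assumptions for the example.

<syntaxhighlight lang="python">
def mcculloch_pitts(inputs, weights, theta):
    """Sketch of a McCulloch-Pitts neuron with a step activation (assumed)."""
    net = sum(i * w for i, w in zip(inputs, weights)) + theta
    return 1 if net >= 0 else 0  # step activation A

# With these (assumed) weights and bias the neuron behaves like a logical AND
# on binary inputs: it only fires when both inputs are 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcculloch_pitts([a, b], [1, 1], theta=-1.5))
</syntaxhighlight>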

References

  1. Ke-Lin Du and M. N. S. Swamy, Neural Networks and Statistical Learning
