A Hopfield^{(2)} neural network is a type of artificial neural network invented by John Hopfield in 1982. It usually works by first learning a number of binary patterns and then returning the one that is the most similar to a given input.
What defines a Hopfield network
It is composed of a single layer of nodes or units, each of which is connected to all the others but not to itself. It is therefore a feedback network, which means that its outputs are redirected to its inputs. Every unit acts as both an input and an output of the network, so the number of nodes, inputs and outputs of the network are all equal. Additionally, each neuron in a Hopfield network has a binary state or activation value, usually represented as −1 or 1, which is its particular output. The state of each node generally converges, meaning that it becomes fixed after a certain number of updates.
An example of a Hopfield neural network with 4 nodes:
How a Hopfield network is updated
The nodes of a Hopfield network can be updated either synchronously or asynchronously. In a synchronously updated network, a clock coordinates the nodes so that they are all updated at the same time. In an asynchronously updated network, the nodes are updated one at a time, with the node to update selected at random. Since there is no clock in biological systems such as the brain to synchronise the updates of the nodes, asynchronous networks are considered more realistic.

In both cases there exists a connectivity weight between any two distinct nodes, since all nodes are connected. The weight between any given node and itself is taken to be 0. We denote the connectivity weight between node n and node m by w_{nm} = w_{mn}. This connectivity weight determines how much impact the state of node j has on the new state of node i when node i is updated, and vice versa. If the weight between two nodes is positive, their states will tend to converge; if it is negative, they will tend to diverge. The weight can thus be seen as an indicator of the final state of a node with respect to the final states of the others.

Every node i also has a threshold θ_{i} that determines whether or not the input is sufficient to give it a state of 1. If the total input of a node is greater than or equal to its threshold, the node’s new state is 1; otherwise it is −1.
The input received by a node i, also called the weighted input sum of node i, can be defined as:

I_{i}=\sum\limits_{j=1, j\neq i}^{n}w_{ij}s_{j}
Where:
 w_{ij} is the connectivity weight between i and j.
 s_{j} is the state of node j.
And the new state of node i is then given by:

s_{i} = sign(I_{i} - \theta_{i})
Where:
 sign(x) = 1 ∀x ≥ 0
 sign(x) = −1 ∀x < 0
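As a concrete illustration, here is a minimal sketch of a single asynchronous update in Python; the weights, thresholds and starting states below are hypothetical choices, not values from the text:

```python
import numpy as np

def update_node(states, weights, thresholds, i):
    """Asynchronously update node i: it becomes 1 if its weighted
    input sum reaches its threshold, and -1 otherwise."""
    weighted_sum = weights[i] @ states          # I_i = sum over j of w_ij * s_j
    states[i] = 1 if weighted_sum >= thresholds[i] else -1
    return states

# Tiny 3-node example with hand-picked symmetric, zero-diagonal weights.
W = np.array([[ 0.0, 1.0, -1.0],
              [ 1.0, 0.0,  1.0],
              [-1.0, 1.0,  0.0]])
theta = np.zeros(3)
s = np.array([1, -1, 1])
s = update_node(s, W, theta, 0)   # I_0 = 1*(-1) + (-1)*1 = -2 < 0, so s_0 becomes -1
print(s)
```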
We can also describe the update of a node in terms of matrices:
The respective states of all neurons can be expressed as a row matrix S:

S = \begin{pmatrix} s_{1} & s_{2} & \cdots & s_{n} \end{pmatrix}
All the connectivity weights of a Hopfield network can be represented in a weight matrix W:

W = \begin{pmatrix} 0 & w_{12} & \cdots & w_{1n} \\ w_{21} & 0 & \cdots & w_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ w_{n1} & w_{n2} & \cdots & 0 \end{pmatrix}
The weight matrix of a Hopfield network is always a symmetric, zero-diagonal matrix. This follows from the fact that no neuron is connected to itself and the fact that w_{ij} = w_{ji} for any two nodes i and j.
The thresholds of all neurons form a column matrix T:

T = \begin{pmatrix} \theta_{1} \\ \theta_{2} \\ \vdots \\ \theta_{n} \end{pmatrix}
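With these matrices, a natural way to write a synchronous update is S′ = sign(SW − Tᵀ); this matrix form is a sketch based on the definitions above rather than a formula spelled out in the text, and the weights below are hypothetical:

```python
import numpy as np

def sign(x):
    # sign as defined earlier: +1 for x >= 0, -1 otherwise
    return np.where(x >= 0, 1, -1)

S = np.array([[1, -1, 1, -1]])            # row matrix of states
W = np.array([[ 0.0,  1.0, -1.0,  0.5],
              [ 1.0,  0.0,  0.5, -1.0],
              [-1.0,  0.5,  0.0,  1.0],
              [ 0.5, -1.0,  1.0,  0.0]])  # symmetric, zero diagonal
T = np.zeros((4, 1))                      # column matrix of thresholds

# Synchronous update in matrix form: every node updated at once.
S_new = sign(S @ W - T.T)
print(S_new)
```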
Training a Hopfield network
There exist several rules to train a Hopfield network all of which are based on the same process: the modification of the synaptic weights. Given an input to learn, the connectivity weight between nodes with the same state is increased whereas the weight between nodes with opposite states is decreased. A learning rule can have two useful properties: being local and being incremental. It is local if for each update it uses only the information from both nodes whose connectivity weight is updated. It is incremental if it does not require any information about the other patterns previously learnt by the network. Another major property of a learning rule is the capacity. It corresponds to the number of patterns per neuron that can be recalled by the network. The most common learning rule for Hopfield networks is Hebbian^{(3)} learning, invented by Donald Hebb in 1949.
For a network with N nodes, Hebbian learning updates the connectivity weights according to the formula:

w_{ij}=\frac{1}{N}\sum\limits_{\mu=1}^{p}\varepsilon_{i}^{\mu}\varepsilon_{j}^{\mu}

Where:
 ε_{x}^{μ} is the state of node x in pattern μ.
 p is the number of patterns learnt.
As the formula describes, Hebbian learning consists of setting the weight between nodes i and j to the sum, over all learnt patterns, of the product of the states of i and j, scaled by 1/N. While the Hebbian learning rule is the most well-known, the Storkey learning rule, invented by Amos Storkey in 1997, is also both local and incremental and provides a network with a greater capacity.
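The Hebbian rule can be sketched in a few lines of NumPy; the two patterns below are hypothetical, and the scaling by the number of nodes N follows the convention described above:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian learning for a Hopfield network: w_ij is the sum over
    patterns of the product of the states of i and j, scaled by 1/N."""
    patterns = np.asarray(patterns, dtype=float)
    p, N = patterns.shape
    W = (patterns.T @ patterns) / N   # (i, j) entry: sum_mu e_i^mu * e_j^mu / N
    np.fill_diagonal(W, 0.0)          # no self-connections
    return W

patterns = [[1, -1,  1, -1],
            [1,  1, -1, -1]]
W = hebbian_weights(patterns)
print(W)
```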
The Storkey learning rule was mathematically defined by Amos Storkey as:

w_{ij}^{\nu}=w_{ij}^{\nu-1}+\frac{1}{n}\xi_{i}^{\nu}\xi_{j}^{\nu}-\frac{1}{n}\xi_{i}^{\nu}h_{ji}^{\nu}-\frac{1}{n}h_{ij}^{\nu}\xi_{j}^{\nu}
Where:
 w^{ν}_{ij} represents the weight between i and j after the ν^{th} pattern has been learnt.
 ξ^{ν} is the new learning pattern.

h_{ij}^{\nu}=\sum\limits_{k=1, k\neq i,j}^{n}w_{ik}^{\nu-1}\xi_{k}^{\nu} is a form of local field.^{(1)}
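A direct translation of the Storkey rule into NumPy might look like the sketch below; the starting weights and the pattern are hypothetical, and the local field h_ij is computed by subtracting the excluded terms from the full field (with a zero diagonal, only the w_ij ξ_j term remains to subtract):

```python
import numpy as np

def storkey_update(W, xi):
    """One Storkey learning step: incorporate pattern xi into the
    zero-diagonal weight matrix W."""
    n = len(xi)
    h = W @ xi                          # full local field at each node
    # h_ij = sum over k != i, j of w_ik * xi_k = h_i - w_ij * xi_j (w_ii = 0)
    H = h[:, None] - W * xi[None, :]
    W_new = W + (np.outer(xi, xi) - xi[:, None] * H.T - H * xi[None, :]) / n
    np.fill_diagonal(W_new, 0.0)        # keep the zero diagonal
    return W_new

W = np.zeros((4, 4))                    # start from an untrained network
xi = np.array([1.0, -1.0, 1.0, -1.0])
W = storkey_update(W, xi)
print(W)
```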
The following graph published by Storkey shows the difference in capacity between Hebbian learning and Storkey learning:
According to the model deduced from the above graph, for n nodes we have:
 Hebbian learning absolute capacity: \frac{n}{2\ln n}
 Storkey learning absolute capacity: \frac{n}{\sqrt{2\ln n}}
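These asymptotic expressions can be evaluated numerically; the sketch below assumes the capacities n/(2 ln n) for Hebbian learning and n/√(2 ln n) for Storkey learning:

```python
import math

def hebbian_capacity(n):
    # Absolute capacity of Hebbian learning: n / (2 ln n)
    return n / (2 * math.log(n))

def storkey_capacity(n):
    # Absolute capacity of Storkey learning: n / sqrt(2 ln n)
    return n / math.sqrt(2 * math.log(n))

for n in (100, 1000):
    print(n, round(hebbian_capacity(n)), round(storkey_capacity(n)))
```

For a 100-node network, Storkey learning recalls roughly three times as many patterns as Hebbian learning, and the gap widens as n grows.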
The energy of a Hopfield network
In a Hopfield network each node has an associated energy, which is defined as:

E_{i} = -\frac{1}{2}I_{i}s_{i} + \theta_{i}s_{i}
Where:
 I_{i} is the weighted input sum of node i.
 s_{i} is the state of node i.
From this expression we can deduce the total energy of a network with n nodes:

E = -\frac{1}{2}\sum\limits_{i=1}^{n}\sum\limits_{j=1}^{n}w_{ij}s_{i}s_{j} + \sum\limits_{i=1}^{n}\theta_{i}s_{i}
This expression can now be written more compactly using the previously defined matrices:

E = -\frac{1}{2}SWS^{T} + ST
If a neuron changes sign when updated, the energy of the network decreases; otherwise it stays constant. The energy therefore indicates whether or not the network is modified when an update occurs, and it reaches a local minimum when the network becomes stable. Each training pattern corresponds to a local minimum of the energy, called an attractor on the graph of the energy, so once the network reaches a training pattern it stops evolving. However, some local minima may not correspond to any training pattern; these are called spurious minima and constitute an obvious limitation of the Hopfield network.
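The whole process — Hebbian training, asynchronous recall and the non-increasing energy — can be demonstrated in one short script; the stored pattern, the flipped bit and the number of update steps are all hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(W, theta, s):
    # E = -1/2 * sum_ij w_ij s_i s_j + sum_i theta_i s_i
    return -0.5 * s @ W @ s + theta @ s

def recall(W, theta, s, steps=100):
    """Asynchronous recall, recording the energy after every update."""
    s = s.copy()
    energies = [energy(W, theta, s)]
    for _ in range(steps):
        i = rng.integers(len(s))                     # pick a random node
        s[i] = 1 if W[i] @ s >= theta[i] else -1     # threshold rule
        energies.append(energy(W, theta, s))
    return s, energies

# Store one pattern with Hebbian learning, then recall from a noisy copy.
pattern = np.array([1.0, 1.0, -1.0, -1.0, 1.0, -1.0])
W = np.outer(pattern, pattern) / len(pattern)
np.fill_diagonal(W, 0.0)
theta = np.zeros(len(pattern))

noisy = pattern.copy()
noisy[0] = -noisy[0]                                 # flip one bit
recalled, energies = recall(W, theta, noisy)
print(recalled, energies[0], energies[-1])
```

Running this, the network restores the flipped bit, and the recorded energies never increase between updates, ending at the attractor corresponding to the stored pattern.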