Introduction to Autoencoders

1 Introduction

An autoencoder is a neural network trained to copy its input to its output, typically for the purpose of dimensionality reduction - reducing the number of random variables under consideration. It consists of an encoder, which maps the input to a hidden layer (or several layers) containing a code that describes the input, and a decoder, which reconstructs the input from that code. An autoencoder becomes useful when the hidden layer is smaller than the input layer: the network is then forced to learn a compressed representation of the data by exploiting correlations within it. This facilitates the classification, visualisation, communication and storage of data [1]. Autoencoders are a form of unsupervised learning, meaning they require only unlabelled data - a set of inputs rather than input-output pairs.
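As a concrete illustration, the sketch below builds an undercomplete autoencoder in PyTorch (an assumed library choice): a 784-dimensional input is squeezed through a 32-unit code and then reconstructed. The layer sizes and sigmoid activations are illustrative assumptions, not values from the text.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder: maps the input to a smaller hidden code.
        self.encoder = nn.Sequential(nn.Linear(input_dim, code_dim), nn.Sigmoid())
        # Decoder: reconstructs the input from the hidden code.
        self.decoder = nn.Sequential(nn.Linear(code_dim, input_dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)     # compressed representation
        return self.decoder(code)  # reconstruction of x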

Through an unsupervised learning algorithm, for linear (real-valued) reconstructions the autoencoder tries to learn a function $h_{W,b}(x)\approx x$ so as to minimise the squared reconstruction error:

$L(x,y)=\sum_{i=1}^{d_{x}}(x_{i}-y_{i})^{2}$

where $x$ is the input data and $y=h_{W,b}(x)$ is the reconstruction. However, when the decoder's activation function is the sigmoid function, the cross-entropy loss function is typically used instead:

$L(x,y)=-\sum_{i=1}^{d_{x}}\left[x_{i}\log(y_{i})+(1-x_{i})\log(1-y_{i})\right]$
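A minimal sketch of the two reconstruction losses above, assuming PyTorch and inputs scaled to [0, 1]; the batch of inputs x and reconstructions y are random stand-ins.

import torch
import torch.nn.functional as F

x = torch.rand(16, 784)   # hypothetical batch of inputs in [0, 1]
y = torch.rand(16, 784)   # stand-in reconstructions h_{W,b}(x) from a decoder

squared_error = torch.sum((x - y) ** 2)                        # squared-error loss L(x, y)
cross_entropy = F.binary_cross_entropy(y, x, reduction="sum")  # cross-entropy loss L(x, y)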

We can obtain good weights by starting from random values and following the gradient of the loss, which is computed by using the chain rule to back-propagate error derivatives through the decoder network and then through the encoder network.
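A rough sketch of such gradient-based training, assuming PyTorch: automatic differentiation applies the chain rule so the error derivatives flow back through the decoder and then the encoder. The layer sizes, learning rate and random data below are illustrative assumptions.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 32), nn.Sigmoid(),   # encoder
    nn.Linear(32, 784), nn.Sigmoid(),   # decoder
)
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
data = torch.rand(256, 784)             # stand-in for real training data

for epoch in range(10):
    reconstruction = model(data)
    loss = torch.mean((data - reconstruction) ** 2)
    optimiser.zero_grad()
    loss.backward()   # chain rule: error derivatives flow decoder -> encoder
    optimiser.step()  # gradient step from the randomly initialised weights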


Figure 1: Layers in an autoencoder [2].

2 Uses

The most intuitive application of autoencoders is data compression. Given a 256 x 256 px image, for example, a 28 x 28 px representation may be learned, which is much easier to handle.
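A hypothetical sketch of this set-up: after training, the encoder alone acts as the compressor, mapping a flattened 256 x 256 greyscale image to a 28 x 28 code, and the decoder restores an approximation. The layer shapes and sigmoid activations are assumptions, and a real application would of course train the pair first.

import torch
import torch.nn as nn

# Encoder/decoder sized so a flattened 256x256 image (65,536 values)
# is summarised by a 28x28 = 784-value code.
encoder = nn.Sequential(nn.Linear(256 * 256, 28 * 28), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(28 * 28, 256 * 256), nn.Sigmoid())

image = torch.rand(1, 256 * 256)   # stand-in for a real image
code = encoder(image)              # compact representation to store or transmit
restored = decoder(code)           # approximate reconstruction of the image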

We can also use denoising autoencoders to reconstruct clean data from corrupted inputs, often in the form of images.
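A small sketch of the denoising idea, under assumed sizes and noise level: the network is fed a corrupted input, but the loss compares its reconstruction against the clean original.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 32), nn.Sigmoid(),
    nn.Linear(32, 784), nn.Sigmoid(),
)
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
clean = torch.rand(256, 784)   # stand-in for clean training images

for epoch in range(10):
    noisy = (clean + 0.3 * torch.randn_like(clean)).clamp(0, 1)  # corrupt the input
    reconstruction = model(noisy)
    loss = torch.mean((clean - reconstruction) ** 2)  # compare against the *clean* target
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()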

Another use is to pre-train deep networks with stacked denoising autoencoders. This allows us to optimise deep networks more effectively and to avoid becoming stuck in the local minima that random initialisation of weights can lead to.
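One possible sketch of such greedy layer-wise pre-training, with assumed layer widths, noise level and training details: each denoising autoencoder is trained on the codes produced by the previous one, and the resulting encoders initialise the deep network in place of random weights.

import torch
import torch.nn as nn

layer_sizes = [784, 256, 64]                 # illustrative layer widths
encoders = []
features = torch.rand(512, layer_sizes[0])   # stand-in for training data

for in_dim, out_dim in zip(layer_sizes[:-1], layer_sizes[1:]):
    encoder = nn.Sequential(nn.Linear(in_dim, out_dim), nn.Sigmoid())
    decoder = nn.Sequential(nn.Linear(out_dim, in_dim), nn.Sigmoid())
    optimiser = torch.optim.SGD(
        list(encoder.parameters()) + list(decoder.parameters()), lr=0.1)
    for epoch in range(10):
        noisy = features + 0.3 * torch.randn_like(features)   # denoising variant
        loss = torch.mean((features - decoder(encoder(noisy))) ** 2)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    encoders.append(encoder)
    features = encoder(features).detach()    # codes become the next layer's input

# The pre-trained encoders initialise the lower layers of a deep network,
# here topped with an assumed 10-way output layer, instead of random weights.
deep_net = nn.Sequential(*encoders, nn.Linear(layer_sizes[-1], 10))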

3 History

The idea of autoencoders was first mentioned in 1986, in an article extensively analysing back-propagation [3]. In the following years, the idea resurfaced in further research papers. A 1989 paper by Baldi and Hornik helped establish autoencoders further by offering "a precise description of the salient features of the surface attached to E [the error function] when the units are linear" [4]. Another notable paper, by Hinton and Zemel in 1994, describes a new objective function for training autoencoders that allows them to discover non-linear, factorial representations [5]. However, it is hard to attribute the ideas behind autoencoders precisely, because the literature is diverse and the terminology has evolved over time. A more recently emerging learning algorithm is the extreme learning machine, in which the hidden node parameters are generated randomly and only the output weights are computed, so that a linear model is learned far faster than with back-propagation. It is worth noting that all of the autoencoder variants in current use have been defined only within the last ten years.

References

[1]   Hinton GE, Salakhutdinov RR. Reducing the Dimensionality of Data with Neural Networks. Science; 2006.

[2]   Nelwamondo FV, et al. A dynamic programming approach to missing data estimation using neural networks. Available from: https://www.researchgate.net/figure/222834127_fig1.

[3]   Rumelhart DE, Hinton GE, Williams RJ. Learning internal representations by error propagation. In: Parallel distributed processing: explorations in the microstructure of cognition. Available from: http://dl.acm.org/citation.cfm?id=104293.

[4]   Baldi P, Hornik K. Neural Networks and Principal Component Analysis: Learning from Examples Without Local Minima. Available from: https://www.researchgate.net/publication/222438485.

[5]   Hinton GE, Zemel RS. Autoencoders, Minimum Description Length, and Helmholtz Free Energy. Advances in Neural Information Processing Systems. Available from: https://www.cs.toronto.edu/~hinton/absps/cvq.pdf.