Energy Optimisation of Cascading Neural-Network
Classifiers
Vinamra Agrawal and Anandha Gopalan
Abstract:
Artificial Intelligence is increasingly being used to improve different
facets of society, such as healthcare, education, transport, and security.
One of the popular building blocks for such AI systems is the Neural
Network, which allows us to recognise complex patterns in large amounts of
data. With the exponential growth of data, Neural Networks have become
increasingly crucial for solving ever more challenging problems.
As a result, the computational and energy requirements of these algorithms
have grown immensely, making them an increasingly significant contributor
to climate change. In this paper, we present techniques to
reduce the energy use of Neural Networks without significantly reducing
their accuracy or requiring any specialised hardware. In particular, our
work focuses on Cascading Neural Networks: by reducing the dimensionality
of the input space, we can build simpler and therefore more
energy-efficient classifiers. We reduce the input complexity by extracting
semantic features (colour, edges, etc.) from the input images and by
applying systematic techniques such as Linear Discriminant Analysis (LDA).
We also introduce an algorithm that arranges these classifiers so as to
maximise the gain in energy efficiency (a sketch of such a cascade follows
the abstract).
results show a 13% reduction in energy usage compared to the popular
Scalable-effort classifier and a 35% reduction compared to the Keras CNN
for Cifar10. Finally, we also reduce the energy usage of the full-input
neural network (often used as the last stage in the cascading technique) by
using Bayesian optimisation with adjustable parameters and minimal
assumptions to search for the best model under a given energy constraint
(also sketched below). Using this technique, we achieve significant energy
savings of 29% and 34% for MNIST and Cifar10, respectively.
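To make the cascade concrete, below is a minimal sketch in Python (not the
authors' exact pipeline): a cheap classifier trained on an LDA-reduced
input handles confident cases and defers the rest to an expensive
full-input model. The scikit-learn stages, the helper names, and the 0.9
confidence threshold are illustrative assumptions.

    # Sketch: two-stage cascade with an LDA-reduced first stage.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression

    def build_cheap_stage(X_train, y_train, n_components=9):
        # LDA projects onto at most (n_classes - 1) discriminative
        # directions, so the cheap classifier sees far fewer features.
        lda = LinearDiscriminantAnalysis(n_components=n_components)
        Z = lda.fit_transform(X_train, y_train)
        clf = LogisticRegression(max_iter=1000).fit(Z, y_train)
        return lda, clf

    def cascade_predict(x, lda, cheap_clf, full_model, threshold=0.9):
        # Stage 1: classify in the reduced space; accept only
        # sufficiently confident predictions (the low-energy path).
        z = lda.transform(x.reshape(1, -1))
        probs = cheap_clf.predict_proba(z)[0]
        if probs.max() >= threshold:
            return int(np.argmax(probs))
        # Stage 2: fall back to the expensive full-input network
        # (any model exposing predict() that returns class scores).
        return int(np.argmax(full_model.predict(x.reshape(1, -1))))

The threshold trades energy for accuracy: the more inputs the cheap stage
accepts, the less often the expensive network runs.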
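The constrained model search can likewise be sketched with an
off-the-shelf Bayesian optimiser; scikit-optimize's gp_minimize is one
possible choice, not necessarily the one used in the paper. Here,
train_and_measure() is a hypothetical helper that trains a candidate
network and returns its validation accuracy and measured energy per
inference; the search space, budget, and penalty weight are assumptions.

    # Sketch: Bayesian optimisation of architecture parameters under
    # an energy budget.
    from skopt import gp_minimize
    from skopt.space import Integer

    ENERGY_BUDGET = 1.0  # assumed constraint, in arbitrary units

    search_space = [
        Integer(16, 256, name="units_per_layer"),
        Integer(1, 4, name="num_layers"),
    ]

    def objective(params):
        units, layers = params
        acc, energy = train_and_measure(units, layers)  # hypothetical
        # Minimise classification error; heavily penalise candidates
        # that exceed the energy budget.
        penalty = 10.0 * max(0.0, energy - ENERGY_BUDGET)
        return (1.0 - acc) + penalty

    result = gp_minimize(objective, search_space, n_calls=30,
                         random_state=0)
    print("best configuration:", result.x)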