The conundrum concerns the substrate of the neural and symbolic methods used nowadays. While the mathematics behind deep learning assumes continuous values, which favour gradient-based optimisation, symbolic approaches are inherently discrete. So if we follow the full journey of a neuro-symbolic AI, we find ourselves at the intersection of two very stubborn siblings that do not want to get along. To make continuous and discrete less ambiguous in this context, let’s take discrete to mean things we can reasonably count, such as the predicates in a logic program, and continuous to cover things we generally measure, such as the activation value of a neuron in a neural network. Now the conundrum reveals itself at several steps along the way:

  1. We start off with the voltage across a wire and use it to build symbol-manipulating machines, 1s and 0s to be precise. This is what computer science is built on: symbol-manipulating machines. Yet voltage is something we measure, so we attempt our first jump from continuous to discrete land well before we even consider any form of neuro-symbolic AI.
  2. We then approximate continuous real numbers using floating point, built from 32 or 64 of these 1s and 0s. The approximation has limitations, but with enough bits it looks continuous enough for us to carry out everyday critical computation, from calculating our salaries to plotting trajectories to other planets (the first sketch after this list shows how leaky the approximation is). Now it seems like we are back in continuous land, on top of which we build deep learning models, compute gradients and update parameters.
  3. Finally, we want to get back to some sort of symbolic reasoning (?), which is discrete in nature. This discreteness stops us from computing gradients, and therefore from simply integrating symbolic components into deep learning architectures. So we come up with methods to convert the fragile continuous approximation back into some symbolic form, or to convert symbolic stuff into continuous relaxations (the second sketch below shows one such relaxation). We then call the result neuro-symbolic computing.
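
To see how leaky step 2’s “continuous enough” really is, here is a minimal Python sketch using only the standard library; the specific numbers are illustrative:

```python
# Floats are a discrete approximation of the reals: there are finitely
# many of them, and the gaps between neighbours show up quickly.
print(0.1 + 0.2 == 0.3)        # False: the sum is 0.30000000000000004
print(1.0 + 1e-17 == 1.0)      # True: 1e-17 falls below the gap around 1.0

import sys
print(sys.float_info.epsilon)  # ~2.22e-16, the gap just above 1.0 in 64-bit
```

So even our “continuous” land is secretly discrete; it just has gaps small enough that gradient-based optimisation does not usually notice.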
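
To make step 3’s relaxation idea concrete, here is a minimal PyTorch sketch (my illustration, not a method named above) of the straight-through estimator, one standard way to let gradients bypass a discrete choice: the forward pass commits to a hard one-hot “symbol”, while the backward pass uses the gradient of the underlying softmax.

```python
import torch

def straight_through_sample(logits: torch.Tensor) -> torch.Tensor:
    """Emit a hard one-hot 'symbol' in the forward pass while letting
    gradients flow through the soft probabilities in the backward pass."""
    probs = torch.softmax(logits, dim=-1)
    index = torch.multinomial(probs, num_samples=1)          # discrete choice
    hard = torch.zeros_like(probs).scatter_(-1, index, 1.0)  # one-hot symbol
    # Straight-through trick: value of `hard`, gradient of `probs`.
    return hard + probs - probs.detach()

logits = torch.randn(1, 4, requires_grad=True)
symbol = straight_through_sample(logits)   # one-hot, e.g. [[0., 0., 1., 0.]]
loss = (symbol * torch.arange(4.0)).sum()  # any downstream loss
loss.backward()
print(logits.grad)                         # gradients despite the discrete step
```

PyTorch’s built-in `torch.nn.functional.gumbel_softmax(logits, hard=True)` implements a closely related relaxation, with a temperature that controls how soft the backward pass is.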

In effect, we went from continuous voltage, to binary symbols in a computer, to “continuous” floating point numbers, and back to discrete symbolic reasoning. Or at least we tried to. It seems like a lot of convoluted indirection for something the human brain does so easily. In fact, we do it so effortlessly that in daily life one is unlikely to notice which information is processed in a discrete fashion and which in a continuous one. Perhaps the solution to the conundrum is to rethink the substrate on which we build neuro-symbolic methods, or even deep learning itself.

It is worth mentioning Spiking Neural Networks, which mimic the action potentials of biological neurons and construct neural networks out of them. Although they are biologically more plausible than linear-algebra-based artificial neural networks, few methods exist to train them, and none come close to the effectiveness of gradient-based optimisation. Because spikes are discrete in nature, they are not amenable to gradient-based optimisation at first glance, yet the human brain seems to manage just fine.
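
One workaround from the SNN literature (not claimed above, and by no means a solved problem) is the surrogate gradient trick: keep the discrete spike in the forward pass, but substitute a smooth, made-up derivative in the backward pass. A minimal PyTorch sketch, using a fast-sigmoid surrogate as one common choice:

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate
    derivative in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()   # hard 0/1 spikes

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate: pretend dS/dv = 1 / (1 + |v|)^2.
        return grad_output / (1.0 + v.abs()) ** 2

v = torch.randn(5, requires_grad=True)   # membrane potentials
spikes = SpikeSurrogate.apply(v)         # e.g. tensor([1., 0., 1., 0., 0.])
spikes.sum().backward()                  # zero gradient with plain Heaviside,
print(v.grad)                            # non-zero with the surrogate
```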

If it is of any interest, there are other computing paradigms, particularly neuromorphic engineering, which builds analogue circuits to mimic neuro-biological systems. It is an active area of research and could provide some insight into how artificial neural networks might be realised in a more natural setting.