Brain-based computing could help tech companies overcome the current constraints of chip design.
What’s the best computer in the world? The most souped-up, high-end gaming rig? Whatever supercomputer took the number one spot in the TOP500 this year? The kit inside the datacentres that Apple or Microsoft rely on? Nope: it’s the one inside your skull.
As computers go, brains are way ahead of the competition. They’re small, lightweight, have low energy consumption, and are amazingly adaptable. And they’re also set to be the model for the next wave of advanced computing.
These brain-inspired designs are known collectively as ‘neuromorphic computing’. Even the most advanced computers don’t come close to the human brain — or even most mammal brains — but our grey matter can give engineers and developers a few pointers on how to make computing infrastructure more efficient, by mimicking the brain’s own synapses and neurones.
First, the biology. Neurones are nerve cells, and work as the cabling that carries messages from one part of the body to another. Those messages are passed from neurone to neurone until they reach the part of the body where they can produce an effect — by making us aware of pain, moving a muscle, or forming a sentence, for example.
Neurones pass messages to each other across a gap called a synapse. Once a neurone has received enough input to trigger it, it fires a chemical or electrical impulse, known as an action potential, on to the next neurone, or to another cell, such as a muscle or gland.
Next, the technology. Neuromorphic computing software seeks to recreate these action potentials through spiking neural networks (SNNs). SNNs are made of artificial neurones that signal to other neurones by generating their own action potentials, conveying information as they go. The strength and timing of the messages cause the neurones to remap the connections between them, allowing the SNN to ‘learn’ as inputs change, much as the brain does.
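To make the spiking idea concrete, here is a minimal sketch of a leaky integrate-and-fire neurone, one common building block of SNNs. The class name, parameter names, and values are illustrative assumptions, not taken from any particular neuromorphic framework:

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neurone.
# All names and constants here are illustrative, not from a real library.

class LIFNeurone:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # potential needed before the neurone fires
        self.leak = leak            # fraction of potential retained each step
        self.potential = 0.0        # current membrane potential

    def step(self, input_current):
        """Integrate one input; return 1 if the neurone spikes, else 0."""
        # Old potential decays (leaks) before the new input is added in.
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing, like a real neurone
            return 1
        return 0

neurone = LIFNeurone()
spikes = [neurone.step(0.4) for _ in range(6)]
print(spikes)  # → [0, 0, 1, 0, 0, 1]
```

The leak term is what makes timing matter: a single weak input fades away, but inputs arriving in quick succession accumulate until the threshold is crossed and a spike is emitted, loosely mirroring how a biological neurone only fires once it has received enough input.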