Chinese scientists have developed a novel computing architecture inspired by the human brain that could lead to more efficient and powerful artificial intelligence (AI) systems. The researchers argue that this approach, which focuses on increasing the internal complexity of individual artificial neurons, could be a path to achieving artificial general intelligence (AGI). Current AI models rely on scaling up neural networks to become larger and more complex, an approach the researchers call “big model with external complexity.” However, this method faces challenges such as high energy consumption and a heavy demand for computing resources.
In contrast, the human brain has 100 billion neurons and nearly 1,000 trillion synaptic connections, yet consumes only around 20 watts of power. Each neuron in the brain has a rich and diverse internal structure that contributes to this efficiency.
Novel neuron-inspired AI architecture
The scientists built a Hodgkin-Huxley (HH) network in which each artificial neuron was itself an HH model whose internal complexity could be scaled. The HH model describes how a neuron generates electrical activity through the dynamics of its ion channels, and it captures the timing and shape of neuronal spikes with high fidelity, making it a suitable building block for a deep network intended to mimic the brain's cognitive processes. The study demonstrated that this model can handle complex tasks efficiently and reliably.
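To give a concrete sense of the internal complexity an HH neuron carries, the sketch below simulates a single Hodgkin-Huxley neuron in Python using the textbook squid-axon parameters from the original 1952 formulation. It illustrates the general HH formalism only, not the authors' code; the function name `simulate_hh`, the injected current, and the integration settings are assumptions chosen for the demo.

```python
import numpy as np

# Classic Hodgkin-Huxley squid-axon parameters (Hodgkin & Huxley, 1952).
# Units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2.
C_M = 1.0                    # membrane capacitance
G_NA, E_NA = 120.0, 50.0     # sodium conductance / reversal potential
G_K,  E_K  = 36.0, -77.0     # potassium conductance / reversal potential
G_L,  E_L  = 0.3,  -54.387   # leak conductance / reversal potential

# Voltage-dependent opening (alpha) and closing (beta) rates for the
# three gating variables m, h, and n.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

def simulate_hh(i_ext=10.0, t_max=50.0, dt=0.01):
    """Integrate one HH neuron with forward Euler; return time and voltage."""
    steps = int(t_max / dt)
    v = -65.0  # resting potential
    # Start each gate at its steady-state value for the resting potential.
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    t_trace, v_trace = np.empty(steps), np.empty(steps)
    for i in range(steps):
        # Ionic currents through sodium, potassium, and leak channels.
        i_na = G_NA * m**3 * h * (v - E_NA)
        i_k  = G_K * n**4 * (v - E_K)
        i_l  = G_L * (v - E_L)
        # Membrane equation: C * dV/dt = I_ext - I_Na - I_K - I_L.
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        # Gate kinetics: dx/dt = alpha(V) * (1 - x) - beta(V) * x.
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        t_trace[i], v_trace[i] = i * dt, v
    return t_trace, v_trace

t, v = simulate_hh()
# Count spikes as upward crossings of 0 mV.
spikes = np.sum((v[:-1] < 0.0) & (v[1:] >= 0.0))
print(f"{spikes} spikes in {t[-1]:.0f} ms at I_ext = 10 uA/cm^2")
```

Each gating variable is extra internal state that every neuron carries, which is the "internal complexity" the researchers scale up, in contrast to the conventional route of adding ever more simple neurons to the network.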
A small model based on this architecture matched the performance of a much larger conventional network built from simpler artificial neurons. Although AGI remains an elusive milestone, some researchers believe it is only a matter of years before humanity builds the first such model. The scientists behind this study hope their novel computing architecture will contribute to the development of more efficient and powerful AI systems, potentially leading to AGI in the future.