|Talks|

Brain-inspired sparse network science for next generation efficient and sustainable AI

Visiting speaker
Hybrid
Past Talk
Carlo Vittorio Cannistraci
Prof. PhD. Eng.
Mar 16, 2026
1:00 pm
EST
In-person
Network Science Institute, 11th floor, 177 Huntington Ave, Boston, MA 02115
The Roux Institute, 100 Fore Street, Portland, ME 04101
Network Science Institute, 2nd floor, 58 St Katharine's Way, London E1W 1LP, UK
Portsoken Street, London, E1 8PH, UK

Talk recording

Artificial neural networks (ANNs) are foundational to contemporary artificial intelligence (AI), yet their conventional fully connected architectures are computationally inefficient: today's large language models consume power at rates more than 100 times that of the human brain. In stark contrast, the brain's inherently sparse connectivity enables exceptional capabilities at minimal cost, learning on just a few watts. Brain-inspired network science can therefore play a key role in designing low-consumption, efficient deep learning, and we need new concepts and theories for an ecological and sustainable approach to AI. Some of these computing paradigms can be inspired by the physics of the brain's network architecture and its complex systems biology.

At the Center for Complex Network Intelligence (CCNI) within the Tsinghua Laboratory of Brain and Intelligence (THBI), our research focuses on three pivotal features of brain networks that contribute to efficient computation: (1) connectivity sparsity, implementing sparse connections to reduce computational overhead while maintaining performance; (2) connectivity morphology, exploring the spatial patterns of neural connections to optimize information processing; and (3) neuro-glia coupling, investigating the interactions between neurons and glial cells to enhance computational efficiency.

This talk will introduce the Cannistraci-Hebb training soft rule (CHTs), a brain-inspired network science theory that takes a gradient-free approach, relying solely on network topology to predict sparse connectivity during dynamic sparse training. CHTs can produce ultra-sparse networks, with approximately 1% connectivity, that outperform fully connected networks on various tasks. Additionally, we will discuss our recent study on the relationship between sparse morphological connectivity and spatiotemporal intelligence: neuromorphic dendritic network computation with silent synapses, a model that emulates visual motion perception by integrating synaptic organization with a dendritic tree-like morphology. The model performs exceptionally well on visual motion perception tasks, underscoring the potential of bio-inspired approaches to enhance the transparency and efficiency of modern AI systems.
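To make the gradient-free idea concrete, here is a minimal sketch of one prune-and-regrow step in dynamic sparse training. This is not the actual CHTs implementation: the weakest active links in a layer's sparse mask are pruned by weight magnitude, and the same number of inactive links are regrown by ranking them with a simple length-3 path count in the bipartite layer graph, used here as a stand-in for the Cannistraci-Hebb topological score. All names (`dst_mask_update`, `prune_frac`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dst_mask_update(mask, weights, prune_frac=0.3):
    """One prune-and-regrow step of dynamic sparse training.

    Pruning removes the weakest active links by |weight|; regrowth is
    gradient-free, ranking inactive links by a length-3 path count in
    the bipartite layer graph (a simplified stand-in for the
    Cannistraci-Hebb topological link predictor used by CHTs).
    """
    m = mask.astype(bool).copy()
    n_prune = int(prune_frac * m.sum())
    # Prune: drop the n_prune active links with smallest magnitude.
    active = np.argwhere(m)                       # row-major order
    order = np.argsort(np.abs(weights[m]))        # same ordering
    for i, j in active[order[:n_prune]]:
        m[i, j] = False
    # Regrow: score every link by the number of length-3 paths between
    # its endpoints (mask @ mask.T @ mask); no gradients are needed.
    a = m.astype(int)
    paths3 = a @ a.T @ a
    paths3[m] = -1                                # exclude active links
    top = np.argsort(paths3, axis=None)[::-1][:n_prune]
    for idx in top:
        i, j = np.unravel_index(idx, m.shape)
        m[i, j] = True                            # activate regrown link
    return m

# Toy layer: a 20x20 weight matrix at roughly 10% density.
mask = rng.random((20, 20)) < 0.1
weights = rng.normal(size=(20, 20)) * mask
new_mask = dst_mask_update(mask, weights)
assert new_mask.sum() == mask.sum()               # density is preserved
```

Because pruning and regrowth move the same number of links, the overall density stays fixed while the topology evolves toward links that the network structure itself predicts, which is the core intuition behind topology-driven dynamic sparse training.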
About the speaker
Carlo Vittorio Cannistraci is a theoretical engineer and computational innovator. He is a Chair Professor in the Tsinghua Laboratory of Brain and Intelligence (THBI) and a member of the Department of Psychological and Cognitive Sciences, the Department of Computer Science and Technology, and the School of Biomedical Engineering at Tsinghua University. He directs the Center for Complex Network Intelligence (CCNI) in THBI, which pioneers algorithms at the interface of information science, the physics of complex systems, complex networks, and machine intelligence, with a focus on brain- and life-inspired computing for efficient artificial intelligence and big data analysis. These computational methods are often applied to precision biomedicine, neuroscience, and the social and economic sciences.