We explore the intersection of "Networks & AI", investigating how connectivity patterns in both natural and artificial learning systems shape learning outcomes, performance, and robustness. Central to this work are the underlying mechanisms of human cognition, the evolution of AI models, and the collaboration between humans and AI agents in complex environments. Our research also addresses trustworthy machine learning in network science, aiming to mitigate the risks of deploying ML and to ensure transparency, fairness, and reliability in artificial intelligence systems.
Our focus
AI in Neural Networks
By viewing neural systems as complex adaptive networks, this research bridges neuroscience, network science, and artificial intelligence to investigate how patterns of information sharing and specific network motifs evolve during learning, and what principles govern this reorganization. Using tools from information theory and network science, we aim to understand how the structure and dynamics of cognitive functions affect learning, adaptation, and the ability to solve complex problems in biological and artificial systems.
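As a concrete illustration of this kind of analysis, the minimal sketch below builds a functional-connectivity graph from pairwise mutual information between hidden-unit activations of a toy network and counts a simple motif (triangles). The toy network, the binned mutual-information estimator, and the threshold are illustrative assumptions, not the group's actual pipeline.

```python
# Minimal sketch (illustrative assumptions throughout): estimate information sharing
# between units of a toy network and count a simple motif in the resulting graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def binned_mutual_info(x, y, bins=8):
    """Histogram-based mutual information estimate (in nats) between two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

# Toy "neural system": activations of 20 hidden units over 500 stimuli.
stimuli = rng.normal(size=(500, 10))
weights = rng.normal(size=(10, 20))
activations = np.tanh(stimuli @ weights)           # (samples, units)

# Functional-connectivity graph: link unit pairs with high information sharing.
n_units = activations.shape[1]
G = nx.Graph()
G.add_nodes_from(range(n_units))
threshold = 0.3                                    # arbitrary illustrative cutoff
for i in range(n_units):
    for j in range(i + 1, n_units):
        mi = binned_mutual_info(activations[:, i], activations[:, j])
        if mi > threshold:
            G.add_edge(i, j, weight=mi)

# One simple motif statistic; tracking it across training checkpoints would show
# how the motif structure reorganizes during learning.
print("edges:", G.number_of_edges())
print("total triangles:", sum(nx.triangles(G).values()) // 3)
```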
Trustworthy Networks
Our research into trustworthy AI systems examines issues of explainability, transparency, stability, and robustness within networked data, a critical approach as AI becomes increasingly integrated into large-scale societal infrastructure. We highlight the potential for epistemic instability created by AI systems and the ethical implications of machine learning algorithms, such as fairness, as we work to understand and mitigate the broader societal risks of deploying AI.
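The sketch below shows two standard group-fairness diagnostics (demographic parity difference and equal-opportunity difference) applied to node-level predictions. The synthetic arrays and the choice of metrics are illustrative assumptions rather than the group's specific methodology.

```python
# Minimal sketch of two standard group-fairness diagnostics on synthetic
# node-level predictions; data and metric choices are illustrative assumptions.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """|P(y_hat = 1 | group = 0) - P(y_hat = 1 | group = 1)|"""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_pred, y_true, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, size=n)        # a binary protected attribute per node
y_true = rng.integers(0, 2, size=n)       # ground-truth labels
y_pred = rng.integers(0, 2, size=n)       # predictions from some graph ML model

print("demographic parity diff:", demographic_parity_diff(y_pred, group))
print("equal opportunity diff :", equal_opportunity_diff(y_pred, y_true, group))
```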
Human-AI Teams
This research investigates how interactions between humans and AI systems shape collaboration, affect performance, and influence user preferences in a dynamic cycle of information exchange. While human-AI teaming can improve performance on cognitive tasks and expand collective intelligence, the continuous feedback loop between humans and AI recommender systems can also produce unintended outcomes such as polarization, inequality, and loss of information diversity. Examining social media, retail, mapping, and content-generation ecosystems, this research proposes "society-centred AI", which considers societal impacts rather than focusing on individuals alone.
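The toy simulation below illustrates the feedback loop described above (it is not a model from the group's papers): a recommender that over-weights already popular items shifts user consumption, which further reinforces those items, so the entropy of what gets consumed, a simple diversity measure, drops over time. All parameters are arbitrary assumptions.

```python
# Toy popularity-bias feedback loop (illustrative parameters throughout):
# recommendations favour popular items, users mostly follow recommendations,
# and consumption diversity (entropy) shrinks as popularity concentrates.
import numpy as np

rng = np.random.default_rng(42)
n_items, n_users, n_rounds = 50, 200, 30
popularity = np.ones(n_items)              # start with uniform item popularity

def entropy(p):
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for t in range(n_rounds):
    # Recommender over-weights popular items (popularity bias).
    rec_probs = popularity ** 1.5
    rec_probs /= rec_probs.sum()
    # Users mostly follow recommendations, occasionally explore at random.
    follows = rng.random(n_users) < 0.9
    choices = np.where(
        follows,
        rng.choice(n_items, size=n_users, p=rec_probs),
        rng.integers(0, n_items, size=n_users),
    )
    popularity += np.bincount(choices, minlength=n_items)
    if t % 10 == 0 or t == n_rounds - 1:
        print(f"round {t:2d}: consumption entropy = {entropy(popularity):.3f}")
```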


