Network Science of AI

Deciphering the network structures underlying natural and artificial learning systems through their connectivity patterns

We explore the intersection of "Networks & AI", investigating how connectivity patterns within both natural and artificial learning systems influence learning outcomes, performance, and overall robustness. Central to this work is the exploration of the mechanisms underlying human cognition, the evolution of AI models, and the collaboration between humans and AI agents within complex environments. Our research also addresses trustworthy machine learning in network science, mitigating the risks of using ML and ensuring transparency, fairness, and reliability in artificial intelligence systems.

Our focus

AI in Neural Networks

Read more

Trustworthy Networks

Read more

Human-AI Teams

Read more

Auditing the Stability and Personalization of Large Language Models

Read more
Explore our research

Featured projects

Universal laws governing the generalization-identification tradeoff in intelligent systems

This research reveals universal laws governing how intelligent systems, from brains to neural networks, balance two competing demands: generalizing across similar items while maintaining distinct identities for different items. The study derives closed-form solutions showing that 'semantic resolution', how finely a system can distinguish between representations in its internal networks, determines this fundamental trade-off. The theory explains capacity limits across biological and artificial intelligence, providing a rigorous foundation for understanding why even advanced systems struggle with multi-object reasoning tasks.

Read more

Probing the veracity of LLMs

This project explores the epistemological challenges of large language models, introducing sAwMIL (Sparse Aware Multiple-Instance Learning), a new approach for evaluating factuality and uncertainty in AI outputs. sAwMIL advances the state of the art in probing the veracity of LLMs, providing a reliable method for verifying what LLMs "know" and how certain they are of their internal probabilistic knowledge.

Read more

Human-AI coevolution

This study introduces human-AI coevolution as a new interdisciplinary field examining how humans and AI algorithms continuously influence each other through feedback loops. Focusing on recommender systems and digital assistants, we explore how user choices generate training data that shape AI models, which in turn influence future user preferences. The research outlines methodological approaches, provides real-world examples across different human-AI ecosystems, and addresses the scientific, legal, and socio-political challenges of studying these complex, often unintended systemic outcomes at the intersection of AI and complexity science.

Read more