When do neural ordinary differential equations generalize on complex networks?

Moritz Laber, Tina Eliassi-Rad, Brennan Klein
arXiv
February 9, 2026

Neural ordinary differential equations (neural ODEs) can effectively learn dynamical systems from time series data, but their behavior on graph-structured data remains poorly understood, especially when they are applied to graphs whose size or structure differs from those seen during training. We study neural ODEs (nODEs) with vector fields following the BarabΓ‘si-Barzel form, trained on synthetic data from five common dynamical systems on graphs. Using the π•Š1 model to generate graphs with realistic and tunable structure, we find that degree heterogeneity and the type of dynamical system are the primary factors determining nODEs' ability to generalize across graph sizes and properties. This extends to nODEs' ability to capture fixed points and to maintain performance amid missing data. Average clustering plays a secondary role in determining nODE performance. Our findings highlight nODEs as a powerful approach to understanding complex systems, but they underscore challenges arising from degree heterogeneity and clustering in realistic graphs.
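The BarabΓ‘si-Barzel form referenced above writes node dynamics as dx_i/dt = F(x_i) + Ξ£_j A_ij G(x_i, x_j), i.e. a self-interaction term plus adjacency-weighted pairwise interactions. Below is a minimal NumPy sketch of a nODE with this structure, under stated assumptions: F and G are small randomly initialized MLPs (untrained stand-ins for learned parameters), integration is plain forward Euler, and the names `mlp`, `vector_field`, and `integrate` are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random weights: hypothetical stand-ins for trained nODE parameters.
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # Tanh MLP; last layer is linear.
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

F = mlp([1, 16, 1])   # self-term F(x_i)
G = mlp([2, 16, 1])   # pairwise interaction term G(x_i, x_j)

def vector_field(x, A):
    """BarabΓ‘si-Barzel form: dx_i/dt = F(x_i) + sum_j A_ij G(x_i, x_j)."""
    n = len(x)
    self_term = forward(F, x[:, None])[:, 0]
    xi = np.repeat(x, n).reshape(n, n)   # x_i broadcast over j
    xj = np.tile(x, n).reshape(n, n)     # x_j broadcast over i
    pairs = np.stack([xi, xj], axis=-1).reshape(-1, 2)
    G_vals = forward(G, pairs)[:, 0].reshape(n, n)
    return self_term + (A * G_vals).sum(axis=1)

def integrate(x0, A, dt=0.01, steps=100):
    # Forward-Euler rollout; a trained nODE would typically use an
    # adaptive ODE solver instead.
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * vector_field(x, A)
    return x
```

The decomposition into F and G is what lets a trained model be evaluated on graphs of different sizes: the learned functions act per node and per edge, so only the adjacency matrix A changes at test time.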
