Neural ordinary differential equations (neural ODEs) can effectively learn dynamical systems from time-series data, but their behavior on graph-structured data remains poorly understood, especially when they are applied to graphs whose size or structure differs from those encountered during training. We study neural ODEs (NODEs) with vector fields of the Barabási-Barzel form, trained on synthetic data from five common dynamical systems on graphs. Using the S1-model to generate graphs with realistic and tunable structure, we find that degree heterogeneity and the type of dynamical system are the primary factors determining NODEs' ability to generalize across graph sizes and properties. The same factors govern NODEs' ability to capture fixed points and to maintain performance amid missing data. Average clustering plays a secondary role in determining NODE performance. Our findings highlight NODEs as a powerful approach to understanding complex systems, but they also underscore the challenges posed by the degree heterogeneity and clustering of realistic graphs.
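For concreteness, the Barabási-Barzel form referenced above writes the dynamics of node i as dx_i/dt = F(x_i) + Σ_j A_ij G(x_i, x_j), where A is the adjacency matrix, F captures self-dynamics, and G captures pairwise coupling; in a NODE, F and G are replaced by learned networks. The sketch below illustrates this form numerically, assuming fixed analytic F and G rather than trained networks, with simple diffusion (F = 0, G(x_i, x_j) = x_j - x_i) as an illustrative choice; the forward-Euler integrator and helper names are ours, not the paper's.

```python
import numpy as np

def vector_field(x, A, F, G):
    """Evaluate dx_i/dt = F(x_i) + sum_j A_ij G(x_i, x_j) for all nodes i."""
    Xi = x[:, None]  # shape (n, 1): x_i broadcast across columns
    Xj = x[None, :]  # shape (1, n): x_j broadcast across rows
    return F(x) + (A * G(Xi, Xj)).sum(axis=1)

def euler_trajectory(x0, A, F, G, dt=0.01, steps=1000):
    """Forward-Euler integration of the Barabasi-Barzel-form dynamics."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * vector_field(xs[-1], A, F, G))
    return np.array(xs)

# Illustrative system: pure diffusion on a 3-node path graph.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
x0 = np.array([1.0, 0.0, 0.0])
traj = euler_trajectory(x0, A,
                        F=lambda x: np.zeros_like(x),       # no self-dynamics
                        G=lambda xi, xj: xj - xi)           # diffusive coupling
# Diffusion conserves total mass and relaxes toward the uniform fixed point,
# one of the fixed-point behaviors a trained NODE would need to capture.
```

The two broadcasting views `Xi` and `Xj` let `G` be any elementwise pairwise function without explicit loops, which is also how such vector fields are typically batched when F and G are neural networks.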



