4 Conclusion
We have demonstrated an approach for imposing physically motivated inductive biases on graph networks to learn interpretable representations with improved zero-shot generalization. We have shown experimentally that graph network models implementing this inductive bias can learn message representations equivalent to the true force vectors in n-body gravitational and spring-like simulations in 2D and 3D. We have also demonstrated a generic technique for recovering an unknown force law: fitting symbolic regression models, which produce explicit algebraic equations, to the trained model's message function. Because GNs have more explicit sub-structure than their more homogeneous deep learning relatives (e.g., plain MLPs, convolutional networks), we can draw more fine-grained interpretations of their learned representations and computations. Finally, we have demonstrated that our model generalizes better at inference time to systems with more bodies than were seen during training.
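The force-law recovery step above can be illustrated with a minimal sketch. Here the true gravitational force component stands in for a trained GN's message function (which the experiments show the messages become equivalent to), and a simple linear fit over a hand-chosen dictionary of candidate algebraic terms plays the role of the symbolic regression; a real application would instead query the trained message MLP and use a full symbolic regression package. All variable names and the candidate dictionary are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample random particle pairs: 2D positions and masses.
N = 2000
x1, x2 = rng.uniform(-1, 1, (N, 2)), rng.uniform(-1, 1, (N, 2))
m1, m2 = rng.uniform(0.5, 2, N), rng.uniform(0.5, 2, N)
dx = x2 - x1
r = np.linalg.norm(dx, axis=1) + 1e-9  # pairwise separations

# Stand-in for one dimension of a trained GN's message function:
# the x-component of the true gravitational force on particle 1.
msg = m1 * m2 * dx[:, 0] / r**3

# Dictionary of candidate algebraic terms for the regression.
basis = {
    "m1*m2*dx/r^3": m1 * m2 * dx[:, 0] / r**3,
    "dx/r^2":       dx[:, 0] / r**2,
    "m1*dx/r":      m1 * dx[:, 0] / r,
    "dx":           dx[:, 0],
}
A = np.stack(list(basis.values()), axis=1)

# Least-squares fit of the message onto the candidate terms.
coef, *_ = np.linalg.lstsq(A, msg, rcond=None)
best = max(zip(basis, np.abs(coef)), key=lambda t: t[1])[0]
print(best)  # the inverse-square gravity term dominates the fit
```

In practice a genetic-programming-based symbolic regressor searches over the space of algebraic expressions rather than a fixed dictionary, but the recovery principle is the same: the learned message is well explained by a single compact force-law term.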
Acknowledgments: Miles Cranmer and Rui Xu thank Professor S.Y. Kung for insightful suggestions
on early work, as well as Zejiang Hou for his comments on an early presentation. Miles Cranmer
would like to thank David Spergel for advice on this project, and Thomas Kipf, Alvaro Sanchez,
and members of the DeepMind team for helpful comments on a draft of this paper. We thank the
referees for insightful comments that both improved this paper and inspired future work.