Abstract

Recursive neural networks (RNNs) and graph neural networks (GNNs) are two connectionist models that can directly process graphs. RNNs and GNNs exploit a similar processing framework, but they can be applied to different input domains: RNNs require the input graphs to be directed and acyclic, whereas GNNs can process any kind of graph. The aim of this paper is to understand whether this difference affects the behaviour of the models in a real application. An experimental comparison on an image classification problem is presented, showing that GNNs outperform RNNs. Moreover, the main differences between the models are discussed with respect to their input domains, their approximation capabilities, and their learning algorithms.
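
To make the difference in input domains concrete, the following is a minimal sketch (in Python with NumPy, not the code used in the paper): a recursive-network-style pass computes node states once in topological order and therefore needs an acyclic input graph, while a GNN-style pass iterates a contractive state update on an arbitrary graph, cycles included, until the states settle. The transition function, dimensions, and parameter values are illustrative assumptions.

# Sketch contrasting the two processing schemes the abstract refers to:
# recursive networks visit a directed acyclic graph once in topological
# order, while graph neural networks iterate a contractive state update
# on an arbitrary graph until the node states converge to a fixed point.
# Function names, dimensions, and the toy transition are illustrative.
import numpy as np

STATE_DIM = 4
rng = np.random.default_rng(0)
# Shared toy parameters; small scale keeps the GNN update contractive.
W = rng.standard_normal((STATE_DIM, STATE_DIM)) * 0.1
B = rng.standard_normal((STATE_DIM, STATE_DIM)) * 0.1


def transition(label, neighbour_states):
    """Toy transition: combine a node's label with its neighbours' states."""
    agg = sum(neighbour_states, np.zeros(STATE_DIM))
    return np.tanh(W @ label + B @ agg)


def recursive_nn_states(dag, labels):
    """RNN-style pass: needs a DAG, states computed once in topological order."""
    order, indeg = [], {v: 0 for v in dag}
    for v in dag:
        for u in dag[v]:
            indeg[u] += 1
    frontier = [v for v in dag if indeg[v] == 0]
    while frontier:
        v = frontier.pop()
        order.append(v)
        for u in dag[v]:
            indeg[u] -= 1
            if indeg[u] == 0:
                frontier.append(u)
    if len(order) != len(dag):
        raise ValueError("recursive networks need an acyclic input graph")
    states = {}
    for v in reversed(order):  # children before parents
        states[v] = transition(labels[v], [states[c] for c in dag[v]])
    return states


def gnn_states(graph, labels, iters=100):
    """GNN-style pass: any graph (cycles allowed), iterate towards a fixed point."""
    states = {v: np.zeros(STATE_DIM) for v in graph}
    for _ in range(iters):
        states = {v: transition(labels[v], [states[u] for u in graph[v]])
                  for v in graph}
    return states


if __name__ == "__main__":
    labels = {v: rng.standard_normal(STATE_DIM) for v in "abc"}
    dag = {"a": ["b", "c"], "b": ["c"], "c": []}   # acyclic: both models apply
    cyclic = {"a": ["b"], "b": ["c"], "c": ["a"]}  # cyclic: only the GNN applies
    print(recursive_nn_states(dag, labels)["a"])
    print(gnn_states(cyclic, labels)["a"])

Running the sketch, the GNN-style pass converges on the cyclic example, whereas the recursive-style pass raises an error on any input graph that contains a cycle, which is exactly the restriction the abstract describes.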

Keywords

Recurrent neural network, Connectionism, Computer science, Exploit, Artificial intelligence, Artificial neural network, Graph, Directed acyclic graph, Process (computing), Directed graph, Machine learning, Theoretical computer science, Algorithm

Publication Info

Year: 2006
Type: Article
Pages: 778-785
Citations: 39
Access: Closed

Citation Metrics

OpenAlex: 39

Cite This

Vincenzo Di Massa, Gabriele Monfardini, Lorenzo Sarti et al. (2006). A Comparison between Recursive Neural Networks and Graph Neural Networks. The 2006 IEEE International Joint Conference on Neural Network Proceedings, 778-785. https://doi.org/10.1109/ijcnn.2006.246763

Identifiers

DOI: 10.1109/ijcnn.2006.246763