Abstract

Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search to within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) and uses much less memory (2.2GB vs. 11.2GB). For training a 4-layer GCN on this data, our algorithm finishes in around 36 minutes, while all existing GCN training algorithms fail due to out-of-memory issues. Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, whereas the previous best result was 98.71 by [16]. Our code is publicly available at https://github.com/google-research/google-research/tree/master/cluster_gcn.
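To make the training procedure described above concrete, here is a minimal sketch of Cluster-GCN-style mini-batch training. It assumes PyTorch, a dense NumPy adjacency matrix, and a precomputed list of node clusters; all function and variable names (normalize_adj, train_cluster_gcn, etc.) are illustrative, not the authors' actual API. The real implementation uses METIS for partitioning, sparse matrices, and stochastic multiple-cluster batches.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    adj = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.power(adj.sum(axis=1), -0.5)
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]


class GCN(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim, num_layers=3):
        super().__init__()
        dims = [in_dim] + [hid_dim] * (num_layers - 1) + [out_dim]
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers))

    def forward(self, x, adj):
        # Each layer propagates only over the (sub)graph it is given,
        # X <- sigma(A_hat X W), so per-step cost scales with the subgraph,
        # not with an exponentially growing multi-hop neighborhood.
        for i, layer in enumerate(self.layers):
            x = adj @ layer(x)
            if i < len(self.layers) - 1:
                x = F.relu(x)
        return x


def train_cluster_gcn(adj, features, labels, clusters, model, epochs=10):
    """clusters: list of node-index arrays produced by a graph clustering
    algorithm (e.g. METIS); each SGD step trains on one dense block."""
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        for nodes in clusters:
            # Restrict features and neighborhood search to the block:
            # edges leaving the cluster are dropped for this step.
            sub_adj = torch.tensor(
                normalize_adj(adj[np.ix_(nodes, nodes)]), dtype=torch.float32)
            sub_x = torch.tensor(features[nodes], dtype=torch.float32)
            sub_y = torch.tensor(labels[nodes])
            opt.zero_grad()
            loss = F.cross_entropy(model(sub_x, sub_adj), sub_y)
            loss.backward()
            opt.step()
```

Because a clustering algorithm groups densely connected nodes together, few edges cross cluster boundaries, so truncating the adjacency matrix to the block discards little information while keeping each step's memory footprint proportional to the cluster size rather than to the whole graph.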

Keywords

Computer science, Scalability, Cluster analysis, Graph, Node (physics), Algorithm, Theoretical computer science, Artificial intelligence

Related Publications

GeePS (2016; 296 citations)

Large-scale deep learning requires huge computational resources to train a multi-layer neural network. Recent systems propose using 100s to 1000s of machines to train networks w...

Publication Info

Year: 2019
Type: Conference paper (KDD '19)
Pages: 257-266
Citations: 1120
Access: Closed

Citation Metrics

1120 citations (OpenAlex)

Cite This

Wei-Lin Chiang, Xuanqing Liu, Si Si et al. (2019). Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '19), 257-266. https://doi.org/10.1145/3292500.3330925

Identifiers

DOI
10.1145/3292500.3330925