Abstract

The purpose of this study is to determine whether current video datasets have sufficient data for training very deep convolutional neural networks (CNNs) with spatio-temporal three-dimensional (3D) kernels. Recently, the performance levels of 3D CNNs in the field of action recognition have improved significantly. However, to date, conventional research has only explored relatively shallow 3D architectures. We examine various 3D CNN architectures, from relatively shallow to very deep, on current video datasets. Based on the results of those experiments, we draw the following conclusions: (i) ResNet-18 training resulted in significant overfitting for UCF-101, HMDB-51, and ActivityNet, but not for Kinetics. (ii) The Kinetics dataset has sufficient data for training deep 3D CNNs, and enables training of ResNets with up to 152 layers, interestingly similar to 2D ResNets on ImageNet. ResNeXt-101 achieved 78.4% average accuracy on the Kinetics test set. (iii) Simple 3D architectures pretrained on Kinetics outperform complex 2D architectures, and the pretrained ResNeXt-101 achieved 94.5% and 70.2% accuracy on UCF-101 and HMDB-51, respectively. The use of 2D CNNs trained on ImageNet has produced significant progress in various image tasks. We believe that using deep 3D CNNs together with Kinetics will retrace the successful history of 2D CNNs and ImageNet, and stimulate advances in computer vision for videos. The code and pretrained models used in this study are publicly available [1].
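The spatio-temporal 3D kernels discussed above extend 2D convolutions with a temporal axis, so a kernel slides over frames as well as over height and width. As a minimal sketch (the helper name and the default kernel/stride/padding values are illustrative, not taken from the paper), the output shape of a single 3D convolution can be computed per axis with the standard convolution formula:

```python
def conv3d_output_shape(frames, height, width, kernel=3, stride=1, padding=1):
    """Output (frames, height, width) of one 3D convolution layer.

    Each axis follows the usual formula: floor((n + 2p - k) / s) + 1,
    applied to the temporal dimension exactly as to the spatial ones.
    """
    def out(n):
        return (n + 2 * padding - kernel) // stride + 1

    return (out(frames), out(height), out(width))


# A 16-frame 112x112 clip through a 3x3x3 kernel with stride 1 and
# padding 1 keeps its spatio-temporal size:
print(conv3d_output_shape(16, 112, 112))            # (16, 112, 112)

# Stride 2 downsamples the temporal axis along with the spatial ones:
print(conv3d_output_shape(16, 112, 112, stride=2))  # (8, 56, 56)
```

This is why clip length matters for deep 3D ResNets: strided 3D convolutions consume the temporal dimension just as they consume spatial resolution.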

Keywords

Overfitting; Convolutional neural network; Computer science; Artificial intelligence; Pattern recognition (psychology); Deep neural networks; Contextual image classification; Deep learning; Training set; Machine learning; Image (mathematics); Artificial neural network

Publication Info

Year: 2018
Type: article
Citations: 2139
Access: Closed

Citation Metrics

OpenAlex: 2139
Influential: 297
CrossRef: 1556

Cite This

Kensho Hara, Hirokatsu Kataoka, Yutaka Satoh (2018). Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet? 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2018.00685

Identifiers

DOI
10.1109/cvpr.2018.00685
arXiv
1711.09577
