Abstract

There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms, such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs), requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability for deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates the optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that is competitive with state-of-the-art, hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPUs. We also demonstrate TVM's ability to target new accelerator back-ends, such as an FPGA-based generic deep learning accelerator. The system is open-sourced and in production use inside several major companies.
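As a toy illustration of the operator fusion the abstract mentions (a plain-Python sketch, not TVM's actual implementation or API): fusing two elementwise operators into a single loop avoids materializing the intermediate tensor, which is the main memory-traffic saving such a compiler pass exploits.

```python
# Sketch only: contrasts an unfused vs. a fused add-then-ReLU,
# the kind of elementwise operator pair a fusion pass would merge.

def add_then_relu_unfused(a, b):
    # Two passes over the data: the intermediate list `tmp`
    # is fully materialized in memory before ReLU runs.
    tmp = [x + y for x, y in zip(a, b)]
    return [max(t, 0) for t in tmp]

def add_then_relu_fused(a, b):
    # One pass: add and ReLU applied per element,
    # with no intermediate buffer allocated.
    return [max(x + y, 0) for x, y in zip(a, b)]
```

Both functions compute the same result; a fusion pass rewrites the first form into the second so the intermediate never touches memory.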

Keywords

Computer science, Compiler, Deep learning, Vendor, Field-programmable gate array, Software portability, Computer architecture, Embedded system, Artificial intelligence, Operating system

Related Publications

LINQits

We present LINQits, a flexible hardware template that can be mapped onto programmable logic or ASICs in a heterogeneous system-on-chip for a mobile device or server. Unlike fixe...

2013 · 97 citations

Publication Info

Year: 2018
Type: article
Pages: 578–594
Citations: 902
Access: Closed

Citation Metrics

902 citations (OpenAlex)

Cite This

Tianqi Chen, Thierry Moreau, Ziheng Jiang et al. (2018). TVM: an automated end-to-end optimizing compiler for deep learning. Operating Systems Design and Implementation, 578–594. https://doi.org/10.5555/3291168.3291211

Identifiers

DOI
10.5555/3291168.3291211