Abstract

We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully connected (nonconvolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, ϕ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis.
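For concreteness, the sketch below illustrates the core idea described in the abstract, written here in PyTorch (it is not the authors' implementation): a small fully connected network maps a 5D coordinate (x, y, z, θ, ϕ) to a volume density and an RGB radiance, and samples along a camera ray are composited with the standard volume-rendering quadrature C = Σ_i T_i (1 − exp(−σ_i δ_i)) c_i with transmittance T_i = exp(−Σ_{j<i} σ_j δ_j). The network width and depth, the sample count, and the sample spacings are illustrative placeholders, and the published model additionally uses positional encodings of the inputs and a deeper architecture, which this sketch omits.

```python
# Minimal sketch (not the authors' code): a fully connected network mapping a
# 5D coordinate (x, y, z, theta, phi) to volume density and view-dependent
# color, plus numerical volume rendering to composite ray samples into a pixel.
import torch
import torch.nn as nn


class RadianceFieldMLP(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        # Input: 3D position + 2D viewing direction = 5 values (no positional
        # encoding here; the published model encodes the inputs first).
        self.trunk = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)  # volume density sigma
        self.color_head = nn.Linear(hidden, 3)    # emitted RGB radiance

    def forward(self, coords_5d: torch.Tensor):
        h = self.trunk(coords_5d)
        sigma = torch.relu(self.density_head(h))   # keep density non-negative
        rgb = torch.sigmoid(self.color_head(h))    # keep colors in [0, 1]
        return sigma, rgb


def composite_ray(sigma: torch.Tensor, rgb: torch.Tensor, deltas: torch.Tensor):
    """Composite N ordered near-to-far samples along one ray into a color.

    sigma:  (N, 1) densities, rgb: (N, 3) colors, deltas: (N,) sample spacings.
    Implements C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i, where
    T_i is the product of (1 - alpha_j) over earlier samples j < i.
    """
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)               # (N,)
    transmittance = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)   # (N,)
    weights = transmittance * alpha                                    # (N,)
    return (weights.unsqueeze(-1) * rgb).sum(dim=0)                    # (3,)


# Usage: query the network at hypothetical sample points along a ray, composite.
model = RadianceFieldMLP()
samples = torch.randn(64, 5)       # placeholder 5D query coordinates
deltas = torch.full((64,), 0.03)   # placeholder distances between samples
sigma, rgb = model(samples)
pixel_color = composite_ray(sigma, rgb, deltas)
```

Because every step above is differentiable, a photometric loss between rendered and observed pixel colors can be backpropagated directly into the network weights, which is what makes posed images alone sufficient supervision.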

Keywords

Rendering (computer graphics), Computer science, Radiance, Artificial intelligence, Differentiable function, Computer vision, Volume rendering, Computer graphics (images), Image-based modeling and rendering, View synthesis, Artificial neural network, Mathematics, Optics, Physics


Publication Info

Year: 2021
Type: Article
Volume: 65
Issue: 1
Pages: 99-106
Citations: 4497
Access: Closed

Citation Metrics

Citations: 4497 (OpenAlex)

Cite This

Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik et al. (2021). NeRF. Communications of the ACM, 65(1), 99-106. https://doi.org/10.1145/3503250

Identifiers

DOI: 10.1145/3503250