Neural Screen Space Rendering of Direct Illumination (best conference paper award)
conference contribution
posted on 2021-11-30, 02:01, authored by Christian Suppan, Andrew Chalmers, Junhong Zhao, Taehyun Rhee
Neural rendering is a class of methods that use deep learning to produce novel images of scenes from more limited information
than traditional rendering methods. This is useful for information-scarce applications like mixed reality or semantic photo
synthesis, but comes at the cost of control over the final appearance. We introduce the Neural Direct-illumination Renderer
(NDR), a neural screen space renderer capable of rendering direct-illumination images of any geometry with opaque materials
under distant illumination. The NDR uses screen space buffers describing material, geometry, and illumination as inputs to
provide direct control over the output. We introduce the use of intrinsic image decomposition to allow a Convolutional Neural
Network (CNN) to learn a mapping from a large number of pixel buffers to rendered images. The NDR predicts shading maps,
which are subsequently combined with albedo maps to create a rendered image. We show that the NDR produces plausible
images that can be edited by modifying the input maps, and that it marginally outperforms the state of the art while also
providing more functionality.
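
To make the pipeline concrete, below is a minimal sketch of the idea described in the abstract: a CNN maps stacked screen-space buffers to a shading map, which is then composited with the albedo buffer. It assumes the standard multiplicative intrinsic-image composition (image = albedo x shading); the network, buffer names, and sizes are illustrative placeholders, not the paper's actual architecture.

    import torch
    import torch.nn as nn

    class TinyShadingNet(nn.Module):
        """Toy stand-in for the NDR's CNN: screen-space buffers in, shading map out."""
        def __init__(self, in_channels: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB shading in [0, 1]
            )

        def forward(self, buffers: torch.Tensor) -> torch.Tensor:
            return self.net(buffers)

    # Hypothetical per-pixel buffers for one 64x64 frame, shaped (N, C, H, W):
    albedo   = torch.rand(1, 3, 64, 64)   # material colour (G-buffer)
    normals  = torch.rand(1, 3, 64, 64)   # geometry (G-buffer)
    lighting = torch.rand(1, 3, 64, 64)   # encoding of the distant illumination

    # The CNN sees the concatenated buffers and predicts only the shading map.
    model = TinyShadingNet(in_channels=9)
    shading = model(torch.cat([albedo, normals, lighting], dim=1))

    # Intrinsic-image composition: rendered image = albedo * shading,
    # so editing an input buffer (e.g. the albedo) directly edits the output.
    rendered = albedo * shading

Factoring the output into albedo and shading is what keeps the renderer editable: the CNN only has to learn illumination effects, while material colour passes through from the input buffers untouched.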
History
Preferred citation
Suppan, C., Chalmers, A., Zhao, J. & Rhee, T. (n.d.). Neural Screen Space Rendering of Direct Illumination (best conference paper award). https://www.wgtn.ac.nz/__data/assets/pdf_file/0008/1975814/neural-ssr-direct-illumination.pdf