Deep Reflectance Estimation from Single RGBD Images
The photo-realistic reconstruction of real environments has long been one of the goals of computer graphics. Obtaining reflectance, that is, how a material reflects and scatters light, is key to achieving this. Reflectance capture is already integral to a plethora of applications ranging from digital entertainment to cultural preservation to medical diagnosis. However, measuring reflectance directly remains expensive, and extracting it from images requires solving the ill-posed inverse rendering problem. In this thesis, we consider how reflectance might be captured from single, casually captured images by leveraging technology that already exists, such as readily available consumer sensors. This is a very active area of research, and the resurgence of deep learning has enabled the development of several few-shot reflectance estimators. We advance this field by testing the hypothesis that previously unused depth and illumination inputs can benefit deep-learning-based reflectance estimation. The wide dissemination of consumer depth sensors through mobile phones and advances in deep inverse rendering of illumination have made these readily available as data sources, but their use in reflectance estimation has not been explored. We provide strong evidence to support this hypothesis by developing a deep reflectance estimator that uses these novel inputs to estimate spatially varying BRDF parameters of arbitrary objects under uncontrolled illumination from single RGBD images, and that outperforms similar existing methods. We achieve this by implementing a state-of-the-art neural renderer, developing a novel hierarchical estimator architecture, and generating large synthetic datasets for training. Our method produces plausible estimates of normals, diffuse albedo, specular albedo, and roughness from a single real-world photograph.
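To make the inverse problem referred to above concrete, the following is a minimal sketch of the rendering equation with a spatially varying BRDF, written in terms of the four parameter maps estimated in this work; the parameterisation shown (an isotropic microfacet-style BRDF with per-point normal, diffuse albedo, specular albedo, and roughness) is an assumption stated for illustration rather than the exact model used in the thesis.

\begin{equation}
  L_o(\mathbf{x}, \omega_o) =
    \int_{\Omega} f_r\big(\mathbf{x}, \omega_i, \omega_o;\;
      \mathbf{n}(\mathbf{x}), \rho_d(\mathbf{x}), \rho_s(\mathbf{x}), \alpha(\mathbf{x})\big)\,
    L_i(\mathbf{x}, \omega_i)\,
    \big(\mathbf{n}(\mathbf{x}) \cdot \omega_i\big)\, \mathrm{d}\omega_i
\end{equation}

Here $L_o$ is the outgoing radiance observed at surface point $\mathbf{x}$, $L_i$ the incident illumination, and $\mathbf{n}$, $\rho_d$, $\rho_s$, $\alpha$ the per-pixel normal, diffuse albedo, specular albedo, and roughness maps. Forward rendering evaluates this integral given the parameters; inverse rendering, as considered in this thesis, recovers the parameters from an observed image, which is ill-posed because many combinations of reflectance and illumination can produce the same pixel values.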