ORCa: Glossy Objects as Radiance-Field Cameras

Abstract


Reflections on glossy objects contain valuable, hidden information about the surrounding environment. By converting these objects into cameras, we can unlock exciting applications, including imaging beyond the camera's field of view and from seemingly impossible vantage points, e.g., from reflections on the human eye. However, this task is challenging because reflections depend jointly on object geometry, material properties, the 3D environment, and the observer's viewing direction. Our approach converts glossy objects with unknown geometry into radiance-field cameras that image the world from the object's perspective. Our key insight is to convert the object surface into a virtual sensor that captures cast reflections as a 2D projection of the 5D environment radiance field visible to the object. We show that recovering the environment radiance field enables depth and radiance estimation from the object to its surroundings, as well as beyond field-of-view novel-view synthesis, i.e., rendering of novel views that are directly visible only to the glossy object in the scene, not to the observer. Moreover, using the radiance field we can image around occluders caused by nearby objects in the scene. Our method is trained end-to-end on multi-view images of the object and jointly estimates object geometry, diffuse radiance, and the 5D environment radiance field.
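As a concrete illustration of this decomposition, the sketch below composites diffuse and specular radiance at one surface point. The callables `diffuse_radiance(x)` and `env_field(origin, direction)` are hypothetical stand-ins for the learned components; this is a minimal sketch of the idea, not the paper's implementation.

```python
import numpy as np

def reflect(d, n):
    """Mirror-reflect a unit view direction d about a unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def render_pixel(x, d, n, diffuse_radiance, env_field):
    """Outgoing radiance at surface point x seen along direction d.

    diffuse_radiance(x) -> RGB           view-independent object appearance
    env_field(origin, direction) -> RGB  query into the 5D environment field
    (both are assumed learned components, named here for illustration)
    """
    r = reflect(d, n)                      # specular ray leaving the surface
    specular = env_field(x, r)             # environment radiance along r
    return diffuse_radiance(x) + specular  # diffuse + specular composite
```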

Modeling Unknown Object Surfaces as Virtual Sensors with Virtual Pixels

From multi-view images of an object with unknown geometry and diffuse texture, we convert the object into a radiance-field camera by modeling reflections as a 2D projection of a 5D environment radiance field onto the object surface. The unknown object surface is modeled as a virtual sensor with virtual pixels that, unlike traditional pixels, receive incoming radiance that depends on both the viewpoint and the local geometry. Using these virtual pixels, we capture the environment radiance field. The radiance field is then queried at novel viewpoints closer to the object to enable beyond field-of-view novel-view synthesis. Our framework can disentangle complex diffuse and specular radiance to uncover the hidden environment: notice how the fireplace is occluded in the multi-view images, yet our framework can recover it and interpolate between reflections to render the complete fireplace.
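The sketch below shows how each observed pixel becomes one sample of the 5D environment radiance field. Here `intersect_surface`, which returns a hit point and normal on the estimated geometry, is a hypothetical helper, and inputs are assumed to be NumPy arrays.

```python
def virtual_pixel_samples(camera_origins, pixel_ray_dirs, intersect_surface):
    """Map real pixels to virtual-pixel rays, i.e. samples of the 5D field.

    Unlike a traditional pixel, the radiance arriving at a virtual pixel
    depends on both the viewpoint (through the incident ray) and the local
    geometry (through the surface normal at the hit point).
    """
    samples = []
    for o, d in zip(camera_origins, pixel_ray_dirs):
        x, n = intersect_surface(o, d)   # hit on the estimated object surface
        r = d - 2.0 * (d @ n) * n        # reflected ray = virtual-pixel ray
        samples.append((x, r))           # one (origin, direction) observation
    return samples
```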


Why 5D Environment Radiance Fields?

Modeling reflections on object surfaces as a 5D environment radiance field enables beyond field-of-view novel-view synthesis, including rendering of the environment from translated virtual camera views. The depth and environment radiance of translated and parallax views can further enable imaging behind occluders, for example revealing the tails hidden behind the occluding Pokemon figures in the foreground.
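A minimal sketch of rendering from a translated virtual camera, assuming a recovered `env_field(origin, direction) -> RGB` query; the pinhole camera model and all parameter names here are illustrative assumptions, not the paper's API.

```python
import numpy as np

def render_virtual_view(env_field, cam_origin, R, fx, fy, cx, cy, H, W):
    """Render an H x W image from a virtual camera placed near the object.

    R is the 3x3 camera-to-world rotation and (fx, fy, cx, cy) are pinhole
    intrinsics. Every pixel queries the recovered environment radiance field,
    so the rendered view is not limited to what the real camera observed.
    """
    img = np.zeros((H, W, 3))
    for v in range(H):
        for u in range(W):
            d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
            d = R @ (d_cam / np.linalg.norm(d_cam))  # world-space ray direction
            img[v, u] = env_field(cam_origin, d)     # beyond-FoV radiance
    return img
```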

Environment Radiance Field Enables Depth Estimation

Because the recovered 5D environment radiance field is volumetric, it lets us estimate not only radiance but also depth from the object to its surroundings. As before, we query the field at viewpoints closer to the object for beyond field-of-view novel-view synthesis: the fireplace is occluded in the multi-view images, yet our framework recovers it and interpolates between reflections to render novel views together with depth.
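The following sketch shows how depth falls out of a volumetric field: the expected termination distance along a virtual ray, computed with standard volume-rendering weights. The density query `sigma(x)` is an assumed NeRF-style component used for illustration, not the paper's exact estimator.

```python
import numpy as np

def expected_depth(sigma, origin, direction, t_near, t_far, n_samples=128):
    """Depth along a ray as the expected termination distance under sigma."""
    t = np.linspace(t_near, t_far, n_samples)
    delta = np.diff(t, append=t_far)                 # spacing between samples
    pts = origin[None, :] + t[:, None] * direction[None, :]
    density = np.array([sigma(p) for p in pts])      # per-sample density
    alpha = 1.0 - np.exp(-density * delta)           # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha                          # volume-rendering weights
    return float((weights * t).sum())                # expected depth
```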

Virtual Pixels Cast Virtual Cones

We image the world through the object by modeling each pixel's specular radiance as a projection of the 5D radiance field of the environment onto the object's surface. We capture the radiance field by treating the surface area on the object viewed by the pixel, dS_t, as a virtual pixel with its center of projection at v_o. We cast virtual cones through this virtual sensor to capture the 5D radiance field of the environment. Our formulation is more physically informed and samples the full incoming radiance by accounting for the intersection area between the real pixel's footprint and the object (right).
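The sketch below approximates a virtual cone with a Monte Carlo bundle of jittered rays through one real pixel. The analytic cone with apex v_o described above is replaced here by this sampling stand-in, and `intersect_surface` is again a hypothetical helper; treat it as an illustration under those assumptions.

```python
import numpy as np

def cast_virtual_cone(cam_origin, d, pixel_radius, intersect_surface, k=8):
    """Approximate the virtual cone cast from the surface patch dS_t.

    Jitter the central pixel ray within its footprint, reflect each jittered
    ray off the estimated surface, and return the reflected bundle: its spread
    encodes both the real pixel's footprint and the local surface curvature.
    """
    rng = np.random.default_rng(0)
    bundle = []
    for _ in range(k):
        dj = d + rng.normal(scale=pixel_radius, size=3)  # jitter in footprint
        dj = dj / np.linalg.norm(dj)
        x, n = intersect_surface(cam_origin, dj)  # hit on the object surface
        r = dj - 2.0 * (dj @ n) * n               # reflect about local normal
        bundle.append((x, r))                     # one ray of the virtual cone
    return bundle
```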

BibTeX

Acknowledgements

The website template was borrowed from Michaël Gharbi.