X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation
Event Type: Technical Papers, Technical Papers Q&A
Time: Sunday, 13 December 2020, 9:06 - 9:12 SGT
Location: Zoom Room 2
Description: We suggest representing an X-Field, a set of 2D images taken across different view, time, or illumination conditions (i.e., video, light field, reflectance field, or combinations thereof), by learning a neural network (NN) that maps view, time, or light coordinates to 2D images. Executing this NN at new coordinates results in joint view, time, and light interpolation. The key idea that makes this workable is an NN that already knows the "basic tricks" of graphics (lighting, 3D projection, occlusion) in a hard-coded and differentiable form. The NN represents the input to that rendering as an implicit map that, for any view, time, or light coordinate and for any pixel, quantifies how that pixel will move if the view, time, or light coordinates change (the Jacobian of pixel position with respect to view, time, illumination, etc.). An X-Field is trained for one scene within minutes, leading to a compact set of trainable parameters and, consequently, real-time navigation in view, time, and illumination.
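The core idea above, a network that maps an X-Field coordinate to a per-pixel Jacobian of pixel motion, which is then used to warp captured images to novel coordinates, can be sketched as follows. This is a minimal, hypothetical illustration in numpy, not the paper's implementation: the tiny MLP, its parameters, and the nearest-neighbour warp (the paper uses soft, differentiable warping) are all simplifying assumptions.

```python
import numpy as np

H, W = 8, 8  # toy image resolution


def mlp_jacobian(coord, params):
    """Tiny MLP mapping a 1D X-Field coordinate (e.g. time t in [0, 1])
    to a per-pixel flow Jacobian d(pixel position)/d(coordinate).
    Hypothetical stand-in for the coordinate-to-Jacobian network."""
    w1, b1, w2, b2 = params
    h = np.tanh(coord @ w1 + b1)       # hidden layer
    jac = h @ w2 + b2                  # flat per-pixel 2D motion
    return jac.reshape(H, W, 2)        # (dy, dx) per pixel


def warp(image, flow):
    """Backward-warp an image by a dense flow field. Nearest-neighbour
    sampling keeps the sketch short; a differentiable (bilinear)
    sampler would be needed for training."""
    ys, xs = np.mgrid[0:H, 0:W]
    sy = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    sx = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return image[sy, sx]


# Interpolate toward a novel coordinate from an image captured at t = 0.
rng = np.random.default_rng(0)
params = (rng.normal(size=(1, 16)), np.zeros(16),
          rng.normal(size=(16, H * W * 2)) * 0.1, np.zeros(H * W * 2))
img0 = rng.random((H, W, 3))           # image captured at coordinate 0

t = np.array([[0.5]])                  # novel X-Field coordinate
jac = mlp_jacobian(t, params)          # per-pixel Jacobian at t
flow = jac * (0.0 - t)                 # first-order motion back to t = 0
novel_view = warp(img0, flow)          # img0 warped to coordinate t
```

In the full method, images captured at several neighbouring coordinates are each warped to the query coordinate in this way and then blended, and the network is trained so that warping reproduces the captured images at their own coordinates.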