Fast Object Compositional Neural Radiance Fields
Published: 2024
Publication Name: Proceedings of IEEE ICIR 2024
Publication URL: https://icir.ieee.org/2024-program/
Abstract:
Neural Radiance Field (NeRF) techniques have
demonstrated a remarkable ability to generate high-quality novel
views of diverse scenes. However, they encode the entire scene as a
single monolithic representation and are unaware of the individual
‘objects’ within it. We
present a pipeline for end-to-end object-level reconstruction
while maintaining scene fidelity. Our pipeline uses an object
segmentation model to decompose the scene into semantic objects
and encode them into separate NeRF models. First, this allows us to
learn class-level NeRF models that can reconstruct objects of a
given class from multi-view images. Second, it lets us move, edit,
and recombine these objects within a single scene. Individual
objects are segmented using You Only Look Once (YOLO) [33],
and a separate NeRF model is trained for each object to achieve
object-level reconstruction. Each object is then reconstructed
independently before all objects are recombined into a single
scene. This process enables
potential downstream applications such as NeRF editing and
mesh extraction.
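
The recombination step described above can be illustrated with a minimal sketch. The abstract does not specify the compositing rule, so the code below assumes the standard object-compositional approach: densities from the per-object NeRF models are summed along each ray, sample colors are mixed in proportion to each object's density, and the result is alpha-composited with the usual volume-rendering weights. The function name and toy inputs are hypothetical, for illustration only.

```python
import numpy as np

def composite_objects(sigmas, colors, deltas):
    """Render one ray from several per-object radiance fields.

    sigmas: (num_objects, num_samples) densities queried from each object's NeRF
    colors: (num_objects, num_samples, 3) RGB values from each object's NeRF
    deltas: (num_samples,) distances between consecutive ray samples
    """
    # Assumed compositing rule: densities of independent objects add up.
    sigma_total = sigmas.sum(axis=0)                      # (num_samples,)

    # Sample color is a density-weighted mixture of the object colors.
    denom = np.maximum(sigma_total, 1e-10)
    mix = sigmas / denom[None, :]                         # (num_objects, num_samples)
    color = (mix[..., None] * colors).sum(axis=0)         # (num_samples, 3)

    # Standard volume-rendering quadrature: alpha per sample, transmittance,
    # then accumulate weighted colors along the ray.
    alpha = 1.0 - np.exp(-sigma_total * deltas)           # (num_samples,)
    transmittance = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = transmittance * alpha                       # (num_samples,)
    return (weights[:, None] * color).sum(axis=0)         # final RGB

# Toy scene: a dense red object and an empty (zero-density) blue object.
sigmas = np.array([[10.0, 10.0, 10.0],
                   [0.0, 0.0, 0.0]])
colors = np.stack([np.tile([1.0, 0.0, 0.0], (3, 1)),
                   np.tile([0.0, 0.0, 1.0], (3, 1))])
deltas = np.ones(3)
rgb = composite_objects(sigmas, colors, deltas)  # essentially pure red
```

Because the empty object contributes no density, the composited ray reproduces the red object alone, which is the behavior that lets independently trained object models be dropped into, moved within, or removed from a shared scene.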