If you’ve ever tried to create 3DGS scenes from photos taken with different cameras or lighting conditions, you know the pain. The colors shift, the exposure varies, and your splats end up looking… well, weird.

Most current solutions throw neural networks at the problem. They fit the training views but fall apart on novel ones. @NVIDIAAIDev’s just-dropped paper, PPISP (Physically-Plausible Image Signal Processing), takes a different approach: it models the actual camera pipeline.

TLDR: It separates what changes from what doesn’t. Camera-specific things like vignetting and sensor response stay constant. Per-frame things like exposure and white balance get their own parameters.
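To make that split concrete, here’s a minimal, hypothetical sketch of the idea (illustrative names only, not the paper’s actual implementation): camera-level parameters shared by every frame, and per-frame parameters with one entry per photo.

```python
import torch
import torch.nn as nn

class ISPSketch(nn.Module):
    """Hypothetical sketch of the split described above: camera-level parameters
    shared by every frame vs. per-frame parameters. Not PPISP's implementation."""

    def __init__(self, num_frames: int):
        super().__init__()
        # Camera-specific: constant across all frames from the same camera.
        self.vignetting = nn.Parameter(torch.zeros(2))             # radial falloff coefficients
        self.gamma = nn.Parameter(torch.tensor(2.2))               # stand-in for the sensor response curve
        # Per-frame: one value (or triple) per input photo.
        self.log_exposure = nn.Parameter(torch.zeros(num_frames))      # exposure gain, log space
        self.white_balance = nn.Parameter(torch.ones(num_frames, 3))   # per-channel WB gains

    def forward(self, linear_rgb: torch.Tensor, frame_idx: int, radius: torch.Tensor) -> torch.Tensor:
        # linear_rgb: (H, W, 3) rendered radiance; radius: (H, W) distance from image center in [0, 1]
        img = linear_rgb * self.log_exposure[frame_idx].exp()      # per-frame exposure
        img = img * self.white_balance[frame_idx]                  # per-frame white balance
        falloff = 1 + self.vignetting[0] * radius**2 + self.vignetting[1] * radius**4
        img = img * falloff.unsqueeze(-1)                          # camera-specific vignetting
        return img.clamp(min=1e-6) ** (1.0 / self.gamma)           # camera-specific tone response
```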

Because it models real camera physics, you can manually adjust exposure or white balance like you would in Lightroom.
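Since exposure and white balance are explicit, interpretable numbers rather than activations buried in a network, tweaking them is just editing values. Continuing the hypothetical sketch above:

```python
# Nudge one frame's settings by hand, Lightroom-style (uses the ISPSketch sketch above):
isp = ISPSketch(num_frames=120)
with torch.no_grad():
    isp.log_exposure[7] += 0.3                                 # ~1.35x brighter exposure for frame 7
    isp.white_balance[7] *= torch.tensor([1.05, 1.00, 0.95])   # warm the white balance slightly
```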

It then predicts what exposure/WB settings a real camera would use for new viewpoints. Think of it as auto-exposure for 3D reconstructions.
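As a rough illustration of that idea (not PPISP’s actual predictor), you could imagine blending the learned per-frame settings of the nearest training cameras for a novel viewpoint:

```python
import numpy as np

def guess_frame_params(novel_pos, train_pos, train_log_exposure, train_wb, k=3):
    """Toy stand-in for novel-view auto-exposure: inverse-distance blend of the
    learned exposure/white-balance of the k nearest training cameras.
    (PPISP's actual prediction is more principled than this.)"""
    d = np.linalg.norm(train_pos - novel_pos, axis=1)        # (N,) camera-center distances
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)
    w /= w.sum()
    log_exposure = float(w @ train_log_exposure[nearest])    # scalar log exposure gain
    white_balance = w @ train_wb[nearest]                    # (3,) per-channel WB gains
    return log_exposure, white_balance
```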

The results beat SOTA methods on standard benchmarks.

You can learn more from Radiance Fields’ (Gaussian Splatting and NeRFs) full coverage on their website: https://radiancefields.com/nvidia-announces-ppisp-for-radiance-fields

#3D #3DGS #ComputerVision
