Deblur-GS: 3D Gaussian Splatting from Camera Motion Blurred Images

I3D 2024

University of Science and Technology of China

Deblur-GS reconstructs a sharp Gaussian scene from motion-blurred images.

Abstract

Novel view synthesis has undergone a revolution thanks to radiance field methods. 3D Gaussian splatting (3DGS) successfully addresses the prolonged training times and slow rendering speeds of Neural Radiance Fields (NeRF) while preserving reconstruction quality. However, 3DGS remains heavily reliant on the quality of the input images and of the initial camera poses: when the input images are blurred, the reconstruction suffers from blur and artifacts. In this paper, we propose Deblur-GS, a method for reconstructing 3D Gaussian points that form a sharp radiance field from a set of camera-motion-blurred images. We model motion blur as a joint optimization problem over camera trajectory estimation and time sampling, and jointly optimize the parameters of the Gaussian points and the camera trajectory during the shutter time. Deblur-GS consistently achieves superior performance and rendering quality compared to previous methods, as demonstrated in evaluations on both synthetic and real datasets.
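The blur formation model described above can be sketched as follows: a blurred observation is synthesized by averaging sharp renders at camera poses sampled along the shutter-time trajectory, which can then be compared against the captured blurred image during joint optimization. This is a minimal illustrative sketch, not the paper's implementation: the `render` function stands in for a real differentiable Gaussian-splatting renderer, and poses are interpolated linearly rather than on SE(3).

```python
import numpy as np

def interpolate_pose(pose_start, pose_end, t):
    # Linear interpolation between the shutter-open and shutter-close
    # poses (a real implementation would interpolate on SE(3)).
    return (1.0 - t) * pose_start + t * pose_end

def render(gaussians, pose):
    # Placeholder for a differentiable Gaussian-splatting renderer:
    # here it just lights one pixel determined by the pose translation,
    # so the averaging effect of blur is easy to see.
    h, w = 8, 8
    img = np.zeros((h, w))
    cx, cy = int(pose[0]) % w, int(pose[1]) % h
    img[cy, cx] = 1.0
    return img

def synthesize_blur(gaussians, pose_start, pose_end, n_samples=8):
    # Average sharp renders at poses sampled over the shutter interval;
    # the result mimics the physical motion-blur formation process.
    samples = np.linspace(0.0, 1.0, n_samples)
    renders = [render(gaussians, interpolate_pose(pose_start, pose_end, t))
               for t in samples]
    return np.mean(renders, axis=0)
```

In the actual method, the trajectory endpoints (and the scene's Gaussian parameters) would be optimized so that the synthesized blur matches the input photographs, e.g. via a photometric loss.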

Demo

BibTeX

@article{Chen_deblurgs2024,
        author       = {Chen, Wenbo and Liu, Ligang},
        title        = {Deblur-GS: 3D Gaussian Splatting from Camera Motion Blurred Images},
        journal      = {Proc. ACM Comput. Graph. Interact. Tech. (Proceedings of I3D 2024)},
        year         = {2024},
        volume       = {7},
        number       = {1},
        numpages     = {13},
        location     = {Philadelphia, PA, USA},
        url          = {http://doi.acm.org/10.1145/3651301},
        doi          = {10.1145/3651301},
        publisher    = {ACM Press},
        address      = {New York, NY, USA},
     }