Reconstructing clean, distractor-free 3D scenes from real-world captures remains a significant challenge, particularly in highly dynamic and cluttered settings such as egocentric videos. To tackle this problem, we introduce DeGauss, a simple and robust self-supervised framework for dynamic scene reconstruction based on a decoupled dynamic-static Gaussian Splatting design. DeGauss models dynamic elements with foreground Gaussians and static content with background Gaussians, using a probabilistic mask to coordinate their composition and enable independent yet complementary optimization. DeGauss generalizes robustly across a wide range of real-world scenarios, from casual image collections to long, dynamic egocentric videos, without relying on complex heuristics or extensive supervision. Experiments on benchmarks including NeRF On-the-go, ADT, AEA, Hot3D, and EPIC-Fields demonstrate that DeGauss consistently outperforms existing methods, establishing a strong baseline for generalizable, distractor-free 3D reconstruction in highly dynamic, interaction-rich environments.
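To make the composition step concrete, below is a minimal PyTorch sketch of the mask-based blending described above. All names here (compose, fg_rgb, bg_rgb) are illustrative assumptions rather than DeGauss's actual API; the sketch only shows the general decoupled compositing idea, in which each Gaussian branch is rendered independently and a per-pixel probabilistic mask blends the two renders.

import torch

def compose(fg_rgb: torch.Tensor, bg_rgb: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Blend foreground (dynamic) and background (static) renders.

    Illustrative sketch, not the paper's implementation.
    fg_rgb, bg_rgb: (3, H, W) RGB images rendered from the foreground
        and background Gaussian branches.
    mask: (1, H, W) probability in [0, 1] that each pixel is dynamic.
    """
    return mask * fg_rgb + (1.0 - mask) * bg_rgb

# Usage with dummy renders standing in for the two branch outputs:
H, W = 480, 640
fg, bg = torch.rand(3, H, W), torch.rand(3, H, W)
mask = torch.sigmoid(torch.randn(1, H, W))  # probabilistic, differentiable
composed = compose(fg, bg, mask)            # supervised against the input frame

Because the blend is differentiable, gradients flow to both branches and to the mask, which is what allows the foreground and background Gaussians to be optimized independently yet complementarily.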
We present on-the-fly rendering comparisons of scenes reconstructed with 3D-GS, SpotlessSplats, and our approach on the NeRF On-the-go and Aria Everyday Activities datasets. Our method achieves high rendering quality with smooth depth geometry.
Drag the slider to compare reconstruction methods on the NeRF On-the-go dataset.
Drag the slider to compare reconstruction methods on the Aria datasets.
Unlike NeuralDiff, where the dynamic foreground tends to dominate the reconstruction, our method leverages a decoupled two-branch design to achieve flexible and accurate dynamic-static decomposition, yielding a static reconstruction rich in fine detail. (left to right: Input, Composed Render, Foreground Render, Background Render, -/Composition Mask)
Our method can also serve as a high-quality and efficient dynamic scene representation through its decoupled foreground-background branches. We show results on the Neu3D and HyperNeRF datasets. (left to right: -/Input, Composed Render, Foreground Render, -/Background Render, Composition Mask)
Please also check out these related works on robust 3D reconstruction and efficient dynamic scene modeling:
SpotlessSplats: Ignoring Distractors in 3D Gaussian Splatting
WildGaussians: 3D Gaussian Splatting in the Wild
DeSplat: Decomposed Gaussian Splatting for Distractor-Free Rendering
HybridGS: Decoupling Transients and Statics with 2D and 3D Gaussian Splatting
NeuralDiff: Segmenting 3D objects that move in egocentric videos
@misc{wang2025degaussdynamicstaticdecompositiongaussian,
  title={DeGauss: Dynamic-Static Decomposition with Gaussian Splatting for Distractor-free 3D Reconstruction},
  author={Rui Wang and Quentin Lohmeyer and Mirko Meboldt and Siyu Tang},
  year={2025},
  eprint={2503.13176},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.13176},
}