Abstract
Existing feed-forward 3D Gaussian Splatting methods predict pixel-aligned primitives, leading to a quadratic growth in primitive count as resolution increases. This fundamentally limits their scalability, making high-resolution synthesis such as 4K intractable. We introduce LGTM (Less Gaussians, Texture More), a feed-forward framework that overcomes this resolution scaling barrier. By predicting compact Gaussian primitives coupled with per-primitive textures, LGTM decouples geometric complexity from rendering resolution. This approach enables high-fidelity 4K novel view synthesis without per-scene optimization, a capability previously out of reach for feed-forward methods, all while using significantly fewer Gaussian primitives. Project page: https://yxlao.github.io/lgtm/
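To make the scaling argument concrete, here is a back-of-the-envelope sketch (not from the paper; all numbers and the fixed primitive budget are illustrative assumptions) contrasting pixel-aligned prediction, where the primitive count grows quadratically with resolution, against a fixed budget of textured primitives:

```python
# Illustrative comparison: pixel-aligned Gaussians vs. a fixed budget of
# textured primitives. Numbers are hypothetical, chosen only to show scaling.

def pixel_aligned_count(height: int, width: int, views: int) -> int:
    """Pixel-aligned methods predict one Gaussian per input pixel,
    so the primitive count grows with height * width (quadratic in
    linear resolution)."""
    return height * width * views

def textured_budget(num_primitives: int) -> int:
    """With per-primitive textures, the geometric budget stays fixed;
    added output detail is carried by the textures, not by more Gaussians."""
    return num_primitives

# Scaling from 256x256 up to 4K (3840x2160), assuming 2 input views.
for h, w in [(256, 256), (1080, 1920), (2160, 3840)]:
    pa = pixel_aligned_count(h, w, views=2)
    print(f"{w}x{h}: pixel-aligned = {pa:,} Gaussians; "
          f"fixed textured budget = {textured_budget(65_536):,}")
```

At 4K the pixel-aligned count reaches tens of millions of primitives, while the textured-primitive budget is unchanged; this is the decoupling of geometric complexity from rendering resolution that the abstract describes.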
Community
The decoupling of geometric complexity from rendering resolution is a smart approach — similar to what we've seen work in neural radiance fields with feature grids. The per-primitive textures remind me of UV atlas techniques from traditional graphics, but applied to feed-forward networks. One question: does the texture prediction add significant memory overhead during inference? 4K synthesis is impressive, but I'm curious about the VRAM footprint compared to pixel-aligned approaches at the same output resolution.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ViewSplat: View-Adaptive Dynamic Gaussian Splatting for Feed-Forward Synthesis (2026)
- SurfSplat: Conquering Feedforward 2D Gaussian Splatting with Surface Continuity Priors (2026)
- SR3R: Rethinking Super-Resolution 3D Reconstruction With Feed-Forward Gaussian Splatting (2026)
- F4Splat: Feed-Forward Predictive Densification for Feed-Forward 3D Gaussian Splatting (2026)
- CylinderSplat: 3D Gaussian Splatting with Cylindrical Triplanes for Panoramic Novel View Synthesis (2026)
- UniSem: Generalizable Semantic 3D Reconstruction from Sparse Unposed Images (2026)
- AirSplat: Alignment and Rating for Robust Feed-Forward 3D Gaussian Splatting (2026)
Get this paper in your agent:
hf papers read 2603.25745
Don't have the latest CLI? Install it with:
curl -LsSf https://hf.co/cli/install.sh | bash
Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0