On two ways to use determinantal point processes for Monte Carlo integration
Abstract
Determinantal point processes are used to improve Monte Carlo integration estimators, with one approach achieving faster variance decay rates and another providing unbiased estimates through process adaptation to the integrand.
The standard Monte Carlo estimator I_N^{MC} of $\int f \,\mathrm{d}\omega$ relies on independent samples drawn from ω and has variance of order 1/N. Replacing the independent samples with a determinantal point process (DPP), a repulsive point process, preserves consistency of the estimator, with variance rates that depend on how the DPP is adapted to f and ω. We examine two existing DPP-based estimators: one by Bardenet & Hardy (2020), with a rate of O(N^{-(1+1/d)}) for smooth f but relying on a DPP fixed in advance, and one by Ermakov & Zolotukhin (1960), which is unbiased with variance of order 1/N, like Monte Carlo, but whose DPP is tailored to f. We revisit these estimators, generalize them to continuous settings, and provide sampling algorithms.
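The O(1/N) baseline for the standard estimator can be checked empirically. The sketch below (a minimal illustration, not from the paper; the integrand f(x) = x² and the uniform reference measure on [0, 1] are assumptions for the example) estimates the variance of I_N^{MC} at two sample sizes and confirms that quadrupling N cuts the variance by roughly a factor of four:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(f, n, rng):
    # Standard Monte Carlo estimator I_N^MC: average of f over n i.i.d.
    # samples from the reference measure omega (here uniform on [0, 1]).
    x = rng.random(n)
    return f(x).mean()

f = lambda x: x ** 2   # toy integrand; the true integral over [0, 1] is 1/3
reps = 2000            # number of independent replications of the estimator

# Empirical variance of the estimator at N = 100 and N = 400. The 1/N
# rate predicts a variance ratio close to 4.
var_small = np.var([mc_estimate(f, 100, rng) for _ in range(reps)])
var_large = np.var([mc_estimate(f, 400, rng) for _ in range(reps)])
print(var_small / var_large)
```

The DPP-based estimators replace the i.i.d. draw `rng.random(n)` with a repulsive joint sample, which is what changes this decay rate.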