Papers
arxiv:2604.19698

On two ways to use determinantal point processes for Monte Carlo integration

Published on Apr 21
Abstract

Determinantal point processes are used to improve Monte Carlo integration estimators: one approach achieves a faster variance decay rate, while another provides unbiased estimates by adapting the process to the integrand.

AI-generated summary

The standard Monte Carlo estimator I_N^{MC} of ∫ f dω relies on independent samples drawn from ω and has variance of order 1/N. Replacing the independent samples with a determinantal point process (DPP), a repulsive point process, keeps the estimator consistent, with variance rates that depend on how the DPP is adapted to f and ω. We examine two existing DPP-based estimators. The first, by Bardenet & Hardy (2020), achieves a rate of O(N^{-(1+1/d)}) for smooth f but relies on a fixed DPP. The second, by Ermakov & Zolotukhin (1960), is unbiased with variance of order 1/N, like standard Monte Carlo, but its DPP must be tailored to f. We revisit these estimators, generalize them to continuous settings, and provide sampling algorithms.
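For reference, the baseline the paper improves on can be sketched in a few lines. This is a minimal illustration (not from the paper) of the standard estimator I_N^{MC} = (1/N) Σ_i f(X_i) with X_i i.i.d. from ω, here taken to be the uniform measure on [0, 1]; the empirical variance over independent replications shrinks at the 1/N rate the summary cites.

```python
import random

def mc_estimate(f, n, seed=0):
    """Standard Monte Carlo estimate of the integral of f against
    the uniform measure on [0, 1], using n i.i.d. samples."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def estimator_variance(f, n, reps=200):
    """Empirical variance of the estimator across independent runs."""
    vals = [mc_estimate(f, n, seed=r) for r in range(reps)]
    mean = sum(vals) / reps
    return sum((v - mean) ** 2 for v in vals) / reps

f = lambda x: x * x  # ∫₀¹ x² dx = 1/3

# Variance decays at rate 1/N: growing N by 10x shrinks it roughly 10x.
v_small = estimator_variance(f, 100)
v_large = estimator_variance(f, 1000)
```

The DPP-based estimators replace the i.i.d. draws with repulsive (negatively correlated) points, which is what buys the faster O(N^{-(1+1/d)}) rate in the Bardenet & Hardy construction.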

