Abstract
Neural PDE solvers trained with Monte Carlo-based weak supervision achieve improved accuracy, faster training, and reduced memory usage compared to traditional physics-informed methods.
Training neural PDE solvers is often bottlenecked by expensive data generation or by unstable physics-informed neural network (PINN) training, whose optimization landscapes are made challenging by higher-order derivatives. To tackle this issue, we propose an alternative approach that uses Monte Carlo methods to estimate the PDE solution as a stochastic process, providing weak supervision during training. Leveraging the Walk-on-Spheres (WoS) method, we introduce a learning scheme called the Walk-on-Spheres Neural Operator (WoS-NO), which uses weak supervision from WoS to train any given neural operator. We amortize the cost of Monte Carlo walks across the distribution of PDE instances, using stochastic representations from the WoS algorithm to generate cheap, noisy estimates of the PDE solution during training. This is formulated as a data-free, physics-informed objective in which a neural operator is trained to regress against these weak supervisions, allowing the operator to learn a generalized solution map for an entire family of PDEs. This strategy does not require expensive pre-computed datasets, avoids the memory-intensive and unstable higher-order derivatives of standard loss functions, and demonstrates zero-shot generalization to novel PDE parameters and domains. Experiments show that, for the same number of training steps, our method achieves up to 8.75× lower L2-error than standard physics-informed training schemes, up to 6.31× faster training, and up to 2.97× lower GPU memory consumption. We present the code at https://github.com/neuraloperator/WoS-NO.
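To make the weak-supervision signal concrete, below is a minimal sketch (not the paper's implementation) of the classical Walk-on-Spheres estimator for a Laplace problem with Dirichlet boundary data: each walk repeatedly jumps to a uniformly random point on the largest sphere inscribed in the domain, and the boundary value at the exit point is an unbiased estimate of the solution. The unit-disk domain, the boundary data g(x, y) = x (whose harmonic extension is exactly u(x, y) = x), and all parameter values are illustrative assumptions.

```python
import math
import random

def walk_on_spheres(x, y, g, dist_to_boundary, eps=1e-4, n_walks=2000, rng=None):
    """Monte Carlo Walk-on-Spheres estimate of the harmonic function u
    solving Laplace's equation with Dirichlet data g, evaluated at (x, y)."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            # Radius of the largest sphere centered at the current point
            # that stays inside the domain.
            r = dist_to_boundary(px, py)
            if r < eps:
                # Within the eps-shell of the boundary: record g there.
                total += g(px, py)
                break
            # Jump to a uniformly random point on that sphere.
            theta = rng.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(theta)
            py += r * math.sin(theta)
    return total / n_walks

# Unit disk, boundary data g(x, y) = x; the exact solution is u(x, y) = x,
# so the estimate at (0.3, 0.2) should be close to 0.3.
dist = lambda x, y: 1.0 - math.hypot(x, y)
estimate = walk_on_spheres(0.3, 0.2, lambda x, y: x, dist)
```

In the training scheme described above, averages of a few such walks at sampled query points would serve as the cheap, noisy regression targets for the neural operator, rather than as a standalone solver.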
Community
Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- Stabilizing Physics-Informed Consistency Models via Structure-Preserving Training (2026)
- Ambient Physics: Training Neural PDE Solvers with Partial Observations (2026)
- Learning Physical Operators using Neural Operators (2026)
- Physics-Informed Laplace Neural Operator for Solving Partial Differential Equations (2026)
- Neural Hodge Corrective Solvers: A Hybrid Iterative-Neural Framework (2026)
- Learning Neural Operators from Partial Observations via Latent Autoregressive Modeling (2026)
- Test-time Generalization for Physics through Neural Operator Splitting (2026)