Transform-Invariant Generative Ray Path Sampling for Efficient Radio Propagation Modeling
Abstract
A machine learning framework using generative flow networks with experience replay, uniform exploration, and physics-based masking enables fast and accurate radio propagation path sampling with significant computational speedup.
Ray tracing has become a standard for accurate radio propagation modeling, but suffers from exponential computational complexity, as the number of candidate paths scales with the number of objects raised to the power of the interaction order. This bottleneck limits its use in large-scale or real-time applications, forcing traditional tools to rely on heuristics to reduce the number of path candidates at the cost of potentially reduced accuracy. To overcome this limitation, we propose a comprehensive machine-learning-assisted framework that replaces exhaustive path searching with intelligent sampling via Generative Flow Networks. Applying such generative models to this domain presents significant challenges, particularly sparse rewards due to the rarity of valid paths, which can lead to convergence failures and trivial solutions when evaluating high-order interactions in complex environments. To ensure robust learning and efficient exploration, our framework incorporates three key architectural components. First, we implement an experience replay buffer to capture and retain rare valid paths. Second, we adopt a uniform exploratory policy to improve generalization and prevent the model from overfitting to simple geometries. Third, we apply a physics-based action masking strategy that filters out physically impossible paths before the model even considers them. As demonstrated in our experimental validation, the proposed model achieves substantial speedups over exhaustive search -- up to 10x faster on GPU and 1000x faster on CPU -- while maintaining high coverage accuracy and successfully uncovering complex propagation paths. The complete source code, tests, and tutorial are available at https://github.com/jeertmans/sampling-paths.
Community
Hi everyone!
I have just submitted my new journal paper on using Generative Flow Networks (GFlowNets) to speed up radio propagation modeling. Don't hesitate to check out the paper or the tutorial notebook!
The problem and our solution
Traditional point-to-point ray tracing suffers from exponential computational complexity, scaling with the number of objects raised to the power of the interaction order. To address this bottleneck, we frame path finding as a sequential decision process and train a generative model to intelligently sample valid ray paths instead of relying on an exhaustive search.
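To make the "sequential decision process" framing concrete, here is a minimal, hypothetical sketch: a path candidate is a sequence of object (surface) indices, built one action at a time up to the desired interaction order. The `sample_path` function and its uniform fallback policy are illustrative stand-ins, not the paper's actual trained GFlowNet policy.

```python
# Hypothetical sketch: ray path finding as a sequential decision process.
# A path candidate is a sequence of object indices (the surfaces a ray
# interacts with); a policy builds it one action at a time.

import random

def sample_path(num_objects, order, policy=None):
    """Sample one candidate path of a given interaction order.

    `policy` maps (partial path, allowed choices) to the next object;
    here we default to uniform sampling as a stand-in for a trained
    generative policy.
    """
    path = []
    for _ in range(order):
        # A simple constraint: a ray cannot interact with the same
        # object twice in a row.
        choices = [o for o in range(num_objects) if not path or o != path[-1]]
        if policy is not None:
            path.append(policy(path, choices))
        else:
            path.append(random.choice(choices))
    return path

random.seed(0)
candidate = sample_path(num_objects=5, order=3)
```

In the real framework, the uniform fallback is replaced by the learned GFlowNet policy, and each completed sequence is then checked for geometric validity.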
This work extends the work I presented at ICMLCN 2025, with substantially improved results and more detail. Specifically, the proposed model achieves speedups of up to 10x on GPU and 1000x on CPU while maintaining high coverage accuracy!
Improvements from previous model
While working on this project, I read a lot about reinforcement learning and GFlowNets. Applying GFlowNets here meant traversing a tree rather than a generic directed graph, so a number of standard solutions were not applicable. However, a few adaptations led to positive outcomes:
- Sparse Rewards: Finding valid geometric paths is rare, leading to a massive sparse-reward issue and model collapse. After exploring goal-oriented RL with no success, I solved this by introducing a replay buffer dedicated to successful experiences, which captures and stores the rare valid paths.
- Exploration: Using a uniform exploratory policy (ε-greedy) turned out to slightly improve performance on higher-order paths (i.e., deeper trees).
- Action Masking: I applied a physics-based action masking strategy to filter out physically impossible paths before the model even considers them, drastically pruning the search space.
- Muon Optimizer: Finally, I recently tried the Muon optimizer instead of the Adam optimizer I had always used, and noticed much better training performance and convergence speed.
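The successful-experience replay buffer above can be sketched in a few lines. This is a hypothetical, minimal version: the class name, capacity, and reward threshold are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of a successful-experience replay buffer:
# valid (positive-reward) paths are rare, so every one we find is kept
# and mixed back into later training batches.

import random
from collections import deque

class SuccessReplayBuffer:
    def __init__(self, capacity=1024):
        # Bounded storage; oldest successes are evicted first.
        self.buffer = deque(maxlen=capacity)

    def add(self, path, reward):
        # Only store trajectories that actually reached a valid path.
        if reward > 0:
            self.buffer.append((tuple(path), reward))

    def sample(self, k):
        # Replay up to k stored successes alongside fresh on-policy samples.
        k = min(k, len(self.buffer))
        return random.sample(self.buffer, k)

buf = SuccessReplayBuffer()
buf.add([0, 2, 1], reward=1.0)   # valid path: kept
buf.add([1, 1, 3], reward=0.0)   # invalid path: discarded
```

Mixing replayed successes into each batch keeps the positive-reward signal present even when fresh rollouts rarely find a valid path.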
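Physics-based action masking can likewise be sketched as a small logit filter applied before the policy samples its next action. The `facing` visibility table and function names here are illustrative assumptions standing in for the real geometric tests.

```python
# Hypothetical sketch of physics-based action masking: before the policy
# scores the next interaction, actions that cannot yield a physically
# valid ray (e.g. re-hitting the same surface, or an unreachable surface)
# are masked out by setting their logit to -inf.

import math

def mask_logits(logits, path, facing):
    """Return logits with physically impossible next objects masked.

    `facing[i][j]` is True when a ray leaving object i can reach object j;
    this table is an illustrative stand-in for the real visibility test.
    """
    masked = list(logits)
    for obj in range(len(logits)):
        repeat = path and obj == path[-1]          # same surface twice in a row
        unreachable = path and not facing[path[-1]][obj]
        if repeat or unreachable:
            masked[obj] = -math.inf                # zero probability after softmax
    return masked

facing = [[False, True, True],
          [True, False, False],
          [True, False, False]]
logits = mask_logits([0.3, 0.1, 0.5], path=[0], facing=facing)
```

Because masked actions get zero probability after the softmax, the model never wastes samples on paths that would fail the geometric validity check anyway, which drastically prunes the search space.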
ML framework and hardware
Everything was built using the JAX ecosystem (Equinox, Optax, and my own library DiffeRT). Sadly, sharing code isn't super common in my specific research community, but I strongly believe open-sourcing research data can only benefit everyone. As a result, I put a lot of effort into making the code clean and well-documented.
I'm not an ML expert but a telecom researcher, and I performed these experiments entirely on my own using a single NVIDIA RTX 3070. FYI, training the three models (as shown in the tutorial) takes about 3 hours on my computer. It might not be ready to completely replace exhaustive ray tracing just yet, but the results are really promising.
I'm very happy to receive questions, comments, or criticisms about this work. I hope you like it! :-)