I Know What I Don't Know: Latent Posterior Factor Models for Multi-Evidence Probabilistic Reasoning
Abstract
The Latent Posterior Factors (LPF) framework combines variational autoencoder latent posteriors with sum-product network inference to enable probabilistic reasoning over unstructured evidence while maintaining calibrated uncertainty estimates.
Real-world decision-making, from tax compliance assessment to medical diagnosis, requires aggregating multiple noisy and potentially contradictory evidence sources. Existing approaches either lack explicit uncertainty quantification (neural aggregation methods) or rely on manually engineered discrete predicates (probabilistic logic frameworks), limiting scalability to unstructured data. We introduce Latent Posterior Factors (LPF), a framework that transforms Variational Autoencoder (VAE) latent posteriors into soft likelihood factors for Sum-Product Network (SPN) inference, enabling tractable probabilistic reasoning over unstructured evidence while preserving calibrated uncertainty estimates. We instantiate LPF as LPF-SPN (structured factor-based inference) and LPF-Learned (end-to-end learned aggregation), enabling a principled comparison between explicit probabilistic reasoning and learned aggregation under a shared uncertainty representation. Across eight domains (seven synthetic and the FEVER benchmark), LPF-SPN achieves high accuracy (up to 97.8%), low calibration error (ECE 1.4%), and strong probabilistic fit, substantially outperforming evidential deep learning, LLM, and graph-based baselines across 15 random seeds. Contributions: (1) a framework bridging latent uncertainty representations with structured probabilistic reasoning; (2) dual architectures enabling controlled comparison of reasoning paradigms; (3) a reproducible training methodology with seed selection; (4) evaluation against EDL, BERT, R-GCN, and large language model baselines; (5) cross-domain validation; (6) formal guarantees in a companion paper.
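The core mechanism described above, turning a VAE latent posterior q(z|x) = N(μ, σ²) into a soft likelihood factor and combining independent evidence factors at an SPN product node, can be sketched in a few lines. This is a toy illustration under invented assumptions, not the paper's implementation: the one-dimensional latent, the per-class Gaussian means, and the likelihood widths are all made up for the example.

```python
import numpy as np

def soft_likelihood_factor(mu, sigma, class_means, class_sigma=1.0):
    """Convert a VAE latent posterior q(z|x) = N(mu, sigma^2) into a soft
    likelihood factor over discrete classes. Integrating a Gaussian class
    likelihood N(z; m_c, class_sigma^2) against the posterior has the
    closed form N(mu; m_c, sigma^2 + class_sigma^2)."""
    var = sigma**2 + class_sigma**2
    return np.exp(-(mu - class_means)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def spn_product_node(factors, prior):
    """Minimal sum-product combination: a product node multiplies the
    independent evidence factors, then normalization over classes plays
    the role of the root sum node."""
    joint = prior * np.prod(factors, axis=0)  # product node
    return joint / joint.sum()                # normalized posterior

# Binary claim; hypothetical latent class means at -1 and +1, uniform prior.
class_means = np.array([-1.0, 1.0])
prior = np.array([0.5, 0.5])

# Evidence 1: confident (small sigma) near +1.
f1 = soft_likelihood_factor(mu=0.9, sigma=0.2, class_means=class_means)
# Evidence 2: vague (large sigma) leaning toward -1.
f2 = soft_likelihood_factor(mu=-0.3, sigma=2.0, class_means=class_means)

posterior = spn_product_node(np.stack([f1, f2]), prior)
```

Note how the factor widths carry the "I know what I don't know" behavior: the vague evidence (σ = 2.0) contributes a nearly flat factor, so the confident evidence dominates the combined posterior instead of being averaged away.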
Community
This paper introduces LPFs. Your insights and advice are welcome.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Theoretical Foundations of Latent Posterior Factors: Formal Guarantees for Multi-Evidence Reasoning (2026)
- UAT-LITE: Inference-Time Uncertainty-Aware Attention for Pretrained Transformers (2026)
- On Multi-Step Theorem Prediction via Non-Parametric Structural Priors (2026)
- Density-Informed Pseudo-Counts for Calibrated Evidential Deep Learning (2026)
- GTS: Inference-Time Scaling of Latent Reasoning with a Learnable Gaussian Thought Sampler (2026)
- Confidence over Time: Confidence Calibration with Temporal Logic for Large Language Model Reasoning (2026)
- From Entropy to Calibrated Uncertainty: Training Language Models to Reason About Uncertainty (2026)