SelaVPR++

SelaVPR++ introduces a parameter-, memory-, and time-efficient PEFT method for seamlessly adapting foundation models to visual place recognition (VPR). It also proposes a novel two-stage paradigm that uses compact binary features for fast candidate retrieval and robust floating-point features for re-ranking, significantly improving retrieval speed. Beyond its high efficiency, SelaVPR++ also outperforms previous state-of-the-art methods on several VPR benchmarks.
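The two-stage paradigm can be sketched in a few lines: stage one filters the database with cheap Hamming-distance matching on binary codes, and stage two re-ranks only the surviving candidates with floating-point descriptors. The following is a minimal illustrative sketch, not the paper's implementation; the feature dimensions, candidate count `k`, and the random data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: binary codes (bit-packed) and L2-normalized float descriptors.
n_db, dim_bin, dim_float = 1000, 512, 256
db_float = rng.standard_normal((n_db, dim_float)).astype(np.float32)
db_float /= np.linalg.norm(db_float, axis=1, keepdims=True)
db_bits = rng.integers(0, 2, (n_db, dim_bin)).astype(np.uint8)
db_packed = np.packbits(db_bits, axis=1)

# Query: for the demo, reuse database entry 42 so the true match is known.
q_float = db_float[42]
q_packed = np.packbits(db_bits[42])

# Stage 1: fast candidate retrieval via Hamming distance on binary codes.
hamming = np.unpackbits(db_packed ^ q_packed, axis=1).sum(axis=1)
k = 20
candidates = np.argsort(hamming)[:k]

# Stage 2: re-rank only the k candidates with cosine similarity on float features.
scores = db_float[candidates] @ q_float
ranked = candidates[np.argsort(-scores)]
print(int(ranked[0]))  # index of the best match
```

Because the binary stage compares bit-packed codes with XOR and popcount, it scans the whole database far faster than full float similarity, which is then computed for only `k` candidates.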

Paper: SelaVPR++: Towards Seamless Adaptation of Foundation Models for Efficient Place Recognition (Accepted by IEEE T-PAMI 2025)

GitHub: Lu-Feng/SelaVPRplusplus

Citation

@ARTICLE{selavprpp,
  author={Lu, Feng and Jin, Tong and Lan, Xiangyuan and Zhang, Lijun and Liu, Yunpeng and Wang, Yaowei and Yuan, Chun},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={SelaVPR++: Towards Seamless Adaptation of Foundation Models for Efficient Place Recognition},
  year={2025},
  volume={},
  number={},
  pages={1-18},
  doi={10.1109/TPAMI.2025.3629287}}