LoRWeB Model
Hila Manor¹,² · Rinon Gal² · Haggai Maron¹,² · Tomer Michaeli¹ · Gal Chechik²,³
¹Technion - Israel Institute of Technology · ²NVIDIA · ³Bar-Ilan University
Given a prompt and an image triplet {a, a', b} that visually describe a desired transformation, LoRWeB dynamically constructs a single LoRA from a learnable basis of LoRA modules, and produces an editing result b' that applies the same analogy to the new image.
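As a rough illustration of the idea of spanning an edit space with a weight basis, the sketch below mixes a small basis of low-rank (LoRA) weight pairs into a single update using a coefficient vector. All names, dimensions, and the mixing rule here are illustrative assumptions, not the paper's actual architecture; in LoRWeB the coefficients would be predicted from the {a, a', b} triplet.

```python
import numpy as np

# Hypothetical sketch: mix a basis of K LoRA modules into one weight update.
# Each basis element is a low-rank pair (A_k, B_k); `coeffs` stands in for
# the coefficients that, in the paper, are derived from the image triplet.
rng = np.random.default_rng(0)

d_out, d_in, rank, K = 8, 8, 2, 4  # toy dimensions (assumptions)
basis = [
    (rng.standard_normal((d_out, rank)), rng.standard_normal((rank, d_in)))
    for _ in range(K)
]
coeffs = rng.standard_normal(K)  # stand-in for learned/predicted coefficients

# Weighted sum of the low-rank updates yields one combined weight delta.
delta_W = sum(c * (A @ B) for c, (A, B) in zip(coeffs, basis))

W = rng.standard_normal((d_out, d_in))  # frozen base weight (toy)
W_edited = W + delta_W                  # apply the combined LoRA update
print(W_edited.shape)                   # prints (8, 8)
```

How the basis elements are actually combined (e.g. mixing the A and B factors directly versus summing full deltas, as done here) affects the rank of the result; see the paper and repo for the real formulation.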
ℹ️ Additional Information
This model is a reproduction of the original model from the paper. It was trained from scratch using Technion resources, which may introduce differences from the results reported in the paper. See the samples directory for examples of this model's outputs on the {a, a', b} triplets from the teaser figure.
Please see the GitHub Repo for the full model card and further details.
📖 Citation
If you use this model in your research, please cite:
@article{manor2026lorweb,
  title={Spanning the Visual Analogy Space with a Weight Basis of LoRAs},
  author={Manor, Hila and Gal, Rinon and Maron, Haggai and Michaeli, Tomer and Chechik, Gal},
  journal={arXiv preprint arXiv:2602.15727},
  year={2026}
}
Model tree for hilamanor/lorweb
Base model: black-forest-labs/FLUX.1-Kontext-dev