Abstract
A multi-modal language model-based agent generates vector sketches incrementally using part-level annotations and process-reward reinforcement learning with visual feedback.
We develop a method for producing vector sketches one part at a time. To do this, we train a multi-modal language model-based agent using a novel multi-turn process-reward reinforcement learning scheme following supervised fine-tuning. Our approach is enabled by a new dataset we call ControlSketch-Part, which contains rich part-level annotations for sketches, obtained with a novel, generic automatic annotation pipeline that segments vector sketches into semantic parts and assigns paths to parts through a structured multi-stage labeling process. Our results indicate that incorporating structured part-level data and providing the agent with visual feedback throughout the generation process enables interpretable, controllable, and locally editable text-to-vector sketch generation.
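To make the part-by-part generation loop described above concrete, the following is a minimal illustrative sketch, not the paper's implementation: the agent emits SVG paths for one semantic part per turn, the partial sketch is rendered as visual feedback for the next turn, and a process reward is computed at each step. All names here (generate_part, render_svg, process_reward, SketchState) are hypothetical placeholders.

```python
# Illustrative sketch only: a multi-turn, part-by-part SVG generation loop with
# per-step (process) rewards and rendered visual feedback. The function names and
# data structures are assumptions for exposition, not the paper's actual API.

from dataclasses import dataclass, field


@dataclass
class SketchState:
    """Accumulates SVG paths, grouped by semantic part, across turns."""
    parts: dict[str, list[str]] = field(default_factory=dict)

    def add_part(self, name: str, paths: list[str]) -> None:
        self.parts[name] = paths

    def to_svg(self) -> str:
        body = "".join(
            f'<g id="{name}">' + "".join(f'<path d="{d}"/>' for d in paths) + "</g>"
            for name, paths in self.parts.items()
        )
        return f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 256">{body}</svg>'


def generate_part(prompt: str, part_name: str, feedback_image: bytes | None) -> list[str]:
    # Placeholder for the multi-modal LM agent: given the text prompt, the target
    # part, and a rendering of the sketch so far, it would emit SVG path data.
    return ["M10,10 L50,50"]  # dummy path


def render_svg(svg: str) -> bytes:
    # Placeholder for a rasterizer (e.g. cairosvg or resvg) that produces the
    # visual feedback image passed back to the agent on the next turn.
    return svg.encode()


def process_reward(svg: str, prompt: str, part_name: str) -> float:
    # Placeholder per-step reward, e.g. an image-text similarity score computed
    # on the rendering of the partially completed sketch.
    return 0.0


def generate_sketch(prompt: str, part_plan: list[str]) -> tuple[SketchState, list[float]]:
    """One episode: add parts one at a time, collecting a process reward per turn."""
    state, rewards, feedback = SketchState(), [], None
    for part_name in part_plan:
        paths = generate_part(prompt, part_name, feedback)
        state.add_part(part_name, paths)
        svg = state.to_svg()
        feedback = render_svg(svg)  # visual feedback for the next turn
        rewards.append(process_reward(svg, prompt, part_name))
    return state, rewards


if __name__ == "__main__":
    sketch, step_rewards = generate_sketch("a cat sitting", ["head", "body", "tail"])
    print(sketch.to_svg())
    print(step_rewards)
```

In a process-reward RL setup of this kind, the per-turn rewards in `step_rewards` would supervise each intermediate part rather than only the finished sketch, which is what makes local, part-level control learnable.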
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- IntroSVG: Learning from Rendering Feedback for Text-to-SVG Generation via an Introspective Generator-Critic Framework (2026)
- GenAgent: Scaling Text-to-Image Generation via Agentic Multimodal Reasoning (2026)
- Towards Unified Multimodal Interleaved Generation via Group Relative Policy Optimization (2026)
- Recurrent Reasoning with Vision-Language Models for Estimating Long-Horizon Embodied Task Progress (2026)
- RetouchIQ: MLLM Agents for Instruction-Based Image Retouching with Generalist Reward (2026)
- ProRAG: Process-Supervised Reinforcement Learning for Retrieval-Augmented Generation (2026)
- Code2World: A GUI World Model via Renderable Code Generation (2026)