Instructions for using PromptEnhancer/PromptEnhancer-32B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use PromptEnhancer/PromptEnhancer-32B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="PromptEnhancer/PromptEnhancer-32B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("PromptEnhancer/PromptEnhancer-32B")
model = AutoModelForImageTextToText.from_pretrained("PromptEnhancer/PromptEnhancer-32B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use PromptEnhancer/PromptEnhancer-32B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "PromptEnhancer/PromptEnhancer-32B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PromptEnhancer/PromptEnhancer-32B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/PromptEnhancer/PromptEnhancer-32B
```
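The curl request above can also be issued from Python. A minimal sketch using only the standard library, assuming the vLLM server started above is listening on localhost:8000 (swap in your own host and port):

```python
import json
import urllib.request

def build_chat_request(model: str, user_prompt: str) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
    }

payload = build_chat_request(
    "PromptEnhancer/PromptEnhancer-32B",
    "What is the capital of France?",
)
body = json.dumps(payload).encode("utf-8")
print(body.decode("utf-8"))

# POST to the running vLLM server (uncomment once the server is up):
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

The same request body works unchanged against the SGLang endpoint below, since both expose the OpenAI-compatible chat completions API.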
- SGLang
How to use PromptEnhancer/PromptEnhancer-32B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "PromptEnhancer/PromptEnhancer-32B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PromptEnhancer/PromptEnhancer-32B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "PromptEnhancer/PromptEnhancer-32B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PromptEnhancer/PromptEnhancer-32B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use PromptEnhancer/PromptEnhancer-32B with Docker Model Runner:
```shell
docker model run hf.co/PromptEnhancer/PromptEnhancer-32B
```
PromptEnhancerV2 (32B)
PromptEnhancerV2 is a multimodal language model fine-tuned for text-to-image prompt enhancement and rewriting. It restructures user input prompts while preserving the original intent, producing clearer, layered, and logically consistent prompts suitable for downstream image generation tasks.
Model Details
Model Description
PromptEnhancerV2 is a specialized text-to-image prompt rewriting model that employs chain-of-thought reasoning to enhance user prompts.
- Model type: Vision-Language Model for Prompt Enhancement
- Language(s) (NLP): Chinese (zh), English (en)
- License: Apache-2.0
- Finetuned from model: Qwen/Qwen2.5-VL-32B-Instruct
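Because the model rewrites prompts via chain-of-thought reasoning, its raw output may contain a reasoning trace alongside the final rewritten prompt. A hypothetical post-processing sketch, assuming the reasoning is wrapped in `<think>...</think>` delimiters (the actual output format is defined by the repository's inference code, not documented here):

```python
import re

def extract_enhanced_prompt(raw_output: str) -> str:
    """Strip a <think>...</think> reasoning block, if present,
    and return the remaining text as the enhanced prompt."""
    cleaned = re.sub(r"<think>.*?</think>", "", raw_output, flags=re.DOTALL)
    return cleaned.strip()

raw = "<think>The user wants a stylized portrait...</think>\nA Korean-style illustration portrait of a girl..."
print(extract_enhanced_prompt(raw))
# → "A Korean-style illustration portrait of a girl..."
```

In practice the repository's `PromptEnhancerV2.predict` wrapper (shown below under "How to Get Started") handles any such post-processing internally; this sketch is only for working with raw generations.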
Model Sources
- Repository: https://github.com/ximinng/PromptEnhancer
- Paper: https://arxiv.org/abs/2509.04545
- Homepage: https://hunyuan-promptenhancer.github.io/
How to Get Started with the Model
- 1. Clone the repository:

```shell
git clone https://github.com/ximinng/PromptEnhancer.git
cd PromptEnhancer
pip install -r requirements.txt
```
- 2. Download the model:

```shell
huggingface-cli download PromptEnhancer/PromptEnhancer-32B --local-dir ./models/promptenhancer-32b
```
- 3. Use the model:
```python
from inference.prompt_enhancer_v2 import PromptEnhancerV2

# Initialize the model
models_root_path = "./models/promptenhancer-32b"
enhancer = PromptEnhancerV2(models_root_path=models_root_path, device_map="auto")

# Enhance a prompt (Chinese or English)
# The example below reads: "Korean-style illustration girl portrait,
# pink-purple short hair + translucent blush, side-lit rendering."
user_prompt = "韩系插画风女生头像,粉紫色短发+透明感腮红,侧光渲染。"
enhanced_prompt = enhancer.predict(
    prompt_cot=user_prompt,
    device="cuda",
)
print("Enhanced:", enhanced_prompt)
```
Evaluation
The model is evaluated on the T2I-Keypoints-Eval dataset, which contains diverse text-to-image prompts across various categories and languages.
Citation
If you find this model useful, please consider citing:
BibTeX:
```bibtex
@article{promptenhancer,
  title={PromptEnhancer: A Simple Approach to Enhance Text-to-Image Models via Chain-of-Thought Prompt Rewriting},
  author={Wang, Linqing and Xing, Ximing and Cheng, Yiji and Zhao, Zhiyuan and Li, Donghao and Hang, Tiankai and Li, Zhenxi and Tao, Jiale and Wang, QiXun and Li, Ruihuang and Chen, Comi and Li, Xin and Wu, Mingrui and Deng, Xinchi and Gu, Shuyang and Wang, Chunyu and Lu, Qinglin},
  journal={arXiv preprint arXiv:2509.04545},
  year={2025}
}
```