Instructions to use questcoast/clone-wars-diffusion-v1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use questcoast/clone-wars-diffusion-v1 with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "questcoast/clone-wars-diffusion-v1",
    torch_dtype=torch.bfloat16,
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Training new characters?
Hey, this is awesome! If possible, I would like to train some additional characters, such as Kit Fisto, but I am fairly new to SD. Are you able to share how you went about training this model?
Sorry for the late response. You can get the general idea from my Reddit comment:
I used this repository, and mostly followed these instructions.
But in the concepts JSON, each concept's instance_prompt matches its class_prompt (e.g. both are "clone wars style", "anakin skywalker", etc.), and instance_data_dir matches class_data_dir, with 20 images per concept.
Regularization images weren't actually used at all. But I wanted more images for the style, so I split the style images across several directories: there are 10 concepts with the same "clone wars style" prompt but different data directories.
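To illustrate the layout described above, two entries of such a concepts JSON might look like this (a sketch only: the field names follow the common multi-concept DreamBooth scripts that read a concepts list, and the directory paths here are made up):

```json
[
  {
    "instance_prompt": "clone wars style",
    "class_prompt": "clone wars style",
    "instance_data_dir": "data/clone_wars_style_1",
    "class_data_dir": "data/clone_wars_style_1"
  },
  {
    "instance_prompt": "anakin skywalker",
    "class_prompt": "anakin skywalker",
    "instance_data_dir": "data/anakin_skywalker",
    "class_data_dir": "data/anakin_skywalker"
  }
]
```

Note that each instance_prompt equals its class_prompt and each instance_data_dir equals its class_data_dir, as described; the style concept would then be repeated across 10 entries pointing at different directories.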
Then, the model was trained with these parameters:
learning_rate=1e-6
lr_scheduler="polynomial"
max_train_steps=20000
The results were fine, but I continued training with learning_rate=1e-7 and lr_scheduler="constant" for 20k more steps. You should save a checkpoint every N-thousand steps, check the results, and pick the best one.
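Putting those parameters together, the launch command might look roughly like this (a sketch only: the script name and the --concepts_list and --save_interval flags follow the widely used ShivamShrirao diffusers DreamBooth fork and may differ in the repository actually used; the base model path is an assumption):

```
# Base model path is an assumption; flag names may differ in other scripts.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --concepts_list="concepts_list.json" \
  --output_dir="output" \
  --learning_rate=1e-6 \
  --lr_scheduler="polynomial" \
  --max_train_steps=20000 \
  --save_interval=2000
```

For the second stage, the same command would be re-run from the last checkpoint with --learning_rate=1e-7 and --lr_scheduler="constant".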