---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: Qilex/private_guys
metrics: []
---
|
|
# VirtualPetDiffusion2
|
|
## Model description
|
|
This diffusion model was trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on a dataset of roughly 8,000 virtual pet thumbnail images.
|
|
## Intended uses & limitations
|
|
This model can be used to generate small (128x128) virtual pet-like thumbnails.
The pets are generally somewhat abstract.
|
|
#### How to use
|
|
```python
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("Qilex/VirtualPetDiffusion2")

# The pipeline output exposes the generated PIL images via .images
image = pipeline().images[0]

# display() only works in Jupyter/IPython; use image.save("pet.png") elsewhere
display(image)
```
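If a GPU is available, moving the pipeline onto it with `pipeline.to("cuda")` before sampling speeds up generation considerably; on CPU the reverse diffusion loop can be slow.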
|
|
## Training data
|
|
This model was trained on roughly 8,000 virtual pet thumbnail images (80x80 pixels).
The data was augmented with random flips, rotations, and perspective warps (via torchvision transforms) to prevent some of the issues seen in the first VirtualPetDiffusion.
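As a rough illustration, the augmentation described above could look like the following torchvision pipeline. The specific parameter values (rotation range, distortion scale) and the upscale to the model's 128x128 resolution are assumptions for this sketch, not the recorded training settings.

```python
import torchvision.transforms as T

# Illustrative augmentation pipeline; parameter values are assumptions,
# not the settings actually used to train VirtualPetDiffusion2.
augment = T.Compose([
    T.Resize((128, 128)),                 # assumed upscale from the 80x80 source thumbnails
    T.RandomHorizontalFlip(p=0.5),        # random flips
    T.RandomRotation(degrees=15),         # random rotations
    T.RandomPerspective(distortion_scale=0.2, p=0.5),  # random perspective warps
    T.ToTensor(),
    T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),     # map RGB pixels to [-1, 1]
])
```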
|
|
### Training hyperparameters
|
|
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW (betas, weight_decay, and epsilon not recorded)
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: no
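For orientation, here is a minimal sketch of how these settings might map onto a PyTorch/Diffusers training setup. The UNet below is a stand-in (the actual architecture is not documented in this card), and since the scheduler is listed as None while lr_warmup_steps is 500, the sketch assumes a constant schedule with warmup.

```python
import torch
from diffusers import UNet2DModel
from diffusers.optimization import get_constant_schedule_with_warmup

# Stand-in UNet; the real model's architecture is not recorded in this card.
model = UNet2DModel(sample_size=128)

# learning_rate: 0.0001; AdamW hyperparameters beyond lr were not recorded,
# so PyTorch defaults are assumed here.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# lr_warmup_steps: 500 with no decaying scheduler -> constant-with-warmup.
lr_scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=500)
```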
|
|
### Training results
|
|
📈 [TensorBoard logs](https://huggingface.co/Qilex/VirtualPetDiffusion2/tensorboard?#scalars)
|
|
|
|