---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
datasets:
- ShinnosukeU/kanji_diffusion_dataset
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - ShinnosukeU/kanji_vae_decoder_only
| |
This pipeline was finetuned from **CompVis/stable-diffusion-v1-4** on the **ShinnosukeU/kanji_diffusion_dataset** dataset.
|
|
|
|
## Training info

These are the key hyperparameters used during training:

* Epochs: 100
* Learning rate: 1.2e-06
* Batch size: 2
* Gradient accumulation steps: 4
* Image resolution: 128
* Mixed precision: None
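Because gradients are accumulated over several micro-batches, the effective batch size per optimizer update differs from the per-device batch size. A quick check of the numbers above:

```python
# Effective batch size per optimizer update: each update accumulates
# gradients over 4 micro-batches of 2 images each.
batch_size = 2
gradient_accumulation_steps = 4
effective_batch_size = batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 8
```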
|
|
|
|
More information on all the CLI arguments and the training environment is available on the [`wandb` run page](https://wandb.ai/shinnosukeu/vae-fine-tune/runs/9bt51ib7).
|