---
datasets:
- ChristophSchuhmann/LAION-5B-EN-Aesthetics-Subset_above_6
---

Image Mixer is a model that lets you combine the concepts, styles, and compositions from multiple images (and text prompts too) and generate new images.
It was trained by [Justin Pinkney](https://www.justinpinkney.com) at [Lambda Labs](https://lambdalabs.com/).
## Training details
This model is a fine-tuned version of [Stable Diffusion Image Variations](https://huggingface.co/lambdalabs/sd-image-variations-diffusers).
It has been trained to accept multiple CLIP embeddings concatenated along the sequence dimension (as opposed to the single embedding used by the original model).
During training, up to 5 crops of each training image are taken, CLIP embeddings are extracted from them, and these are concatenated and used as the conditioning for the model.
At inference time, CLIP embeddings from multiple images can be combined to generate images that are influenced by all of the inputs.
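The conditioning scheme described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the embedding shape `(1, 1, 768)` and the number of inputs are assumptions for the example, and the real pre-processing comes from the repo's own CLIP setup.

```python
import torch

# Hypothetical projected CLIP image embeddings, one per input image.
# Shape is (batch, sequence, dim); the true dim depends on the CLIP model used.
embeddings = [torch.randn(1, 1, 768) for _ in range(3)]

# Concatenate along the sequence dimension (dim=1) to form the conditioning,
# instead of the single embedding used by the original Image Variations model.
conditioning = torch.cat(embeddings, dim=1)

print(conditioning.shape)  # (1, 3, 768)
```

The rest of the model is unchanged: the concatenated tensor simply takes the place of the single-embedding conditioning in cross-attention.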
Training was done at 640x640 resolution on a subset of LAION Improved Aesthetics, using 8xA100 GPUs on [Lambda GPU Cloud](https://cloud.lambdalabs.com).
## Usage
The model is available as a demo on [Hugging Face Spaces](https://huggingface.co/spaces/lambdalabs/image-mixer-demo), or can be run locally as follows:
```bash
git clone https://github.com/justinpinkney/stable-diffusion.git
cd stable-diffusion
git checkout 1c8a598f312e54f614d1b9675db0e66382f7e23c
python -m venv .venv --prompt sd
. .venv/bin/activate
pip install -U pip
pip install -r requirements.txt
python scripts/gradio_image_mixer.py
```
Then navigate to the Gradio demo link printed in the terminal.
For details on how to use the model outside the app, refer to the [`run` function](https://github.com/justinpinkney/stable-diffusion/blob/c1963a36a4f8ce23784c8247fa1af0e34e02b766/scripts/gradio_image_mixer.py#L79) in `gradio_image_mixer.py`.