Instructions to use ali-vilab/MS-Image2Video with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - OpenCLIP

How to use ali-vilab/MS-Image2Video with OpenCLIP:

```python
import open_clip

model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:ali-vilab/MS-Image2Video')
tokenizer = open_clip.get_tokenizer('hf-hub:ali-vilab/MS-Image2Video')
```

- Notebooks
  - Google Colab
  - Kaggle
Possible to change seed? I2v comes out the same
#11
by charlesai - opened
Running image-to-video produces the same output every time. Is it possible to have it run with a random seed?
Hi, you can set the seed in the configuration.json file.
There are two ways to do it.
- One way is to modify the "seed" parameter in the configuration.json file.
- The second way is to directly modify the logic that sets the seed from within the codebase (https://github.com/modelscope/modelscope/blob/master/modelscope/models/multi_modal/image_to_video/image_to_video_model.py#L88).
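As a sketch of the first option, the "seed" entry in configuration.json can be updated programmatically before each run. The file path and the exact nesting of the "seed" key below are assumptions; inspect your local configuration.json to confirm where the parameter lives.

```python
import json
import random

# Hypothetical path; adjust to wherever your model checkout keeps the file.
config_path = "configuration.json"

# Write a minimal stand-in config for demonstration purposes only.
with open(config_path, "w") as f:
    json.dump({"model": {"seed": 8888}}, f)

# Load the config, overwrite the seed with a fresh random value, save it back.
with open(config_path) as f:
    config = json.load(f)

# Assumed key layout ("model" -> "seed"); confirm against your actual file.
config["model"]["seed"] = random.randint(0, 2**31 - 1)

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```

Running this before each inference would give the pipeline a different seed every time, assuming the pipeline re-reads configuration.json on startup.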
I have printed the seed out, and it shows a different value in every run, but I still get exactly the same result. Any idea?
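The follow-up above went unanswered in the thread. One possible (unconfirmed) explanation for this symptom is that the seed being printed is not the one feeding the generator that actually samples the frames, for example if the sampling code constructs its own generator with a hard-coded seed. A minimal stdlib sketch of that failure mode, with a hypothetical `sampler` standing in for the pipeline's internal generator:

```python
import random

# First run: the globally set (and printed) seed is 1111...
random.seed(1111)
sampler = random.Random(42)   # hypothetical internal generator, hard-coded seed
out_a = sampler.random()

# Second run: a different global seed is set and printed...
random.seed(2222)
sampler = random.Random(42)   # ...but the internal generator is re-created identically
out_b = sampler.random()

# ...so the sampled output is byte-for-byte the same despite different printed seeds.
assert out_a == out_b
```

If this is the cause, the fix is to make sure the configured seed is the one passed to the generator used at sampling time (e.g. at the line linked above), not just logged.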