How to use ali-vilab/MS-Image2Video with OpenCLIP:
```python
import open_clip

model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:ali-vilab/MS-Image2Video')
tokenizer = open_clip.get_tokenizer('hf-hub:ali-vilab/MS-Image2Video')
```
Please help me learn how to train my own model on a collection of our videos.