Instructions for using linyq/kiwi-edit-5b-instruct-only-diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use linyq/kiwi-edit-5b-instruct-only-diffusers with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "linyq/kiwi-edit-5b-instruct-only-diffusers",
    torch_dtype=torch.bfloat16,
)
# Switch "cuda" to "mps" for Apple devices
pipe.to("cuda")

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Notebooks
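The snippet above notes that Apple devices should use `"mps"` instead of `"cuda"`. A small helper can pick the backend automatically; this is a hedged sketch (the `pick_device` name is my own, not part of this model's API), assuming only that PyTorch is installed:

```python
import torch


def pick_device() -> str:
    """Return the best available torch device string: cuda, mps, or cpu."""
    if torch.cuda.is_available():
        return "cuda"
    # The MPS backend is available on Apple Silicon with recent PyTorch builds
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"
```

You could then call `pipe.to(pick_device())` in place of the hard-coded `pipe.to("cuda")`.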
- Google Colab
- Kaggle
Add model card for Kiwi-Edit
#1
by nielsr - opened
Hi! I'm Niels from the Hugging Face community science team. I noticed that this repository doesn't have a model card yet.
A model card is essential for helping the community discover and understand your work. I've opened this PR to add a README.md that includes:
- Links to your paper, project page, and GitHub repository.
- Proper metadata for `pipeline_tag` (image-to-video) and `library_name` (diffusers).
- A brief description of the Kiwi-Edit framework.
- Installation and sample usage instructions for the Diffusers environment.
- Citation information.
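On the Hub, the metadata mentioned above is declared as YAML front matter at the top of README.md. A minimal sketch, using only the field values stated in the list above:

```yaml
---
pipeline_tag: image-to-video
library_name: diffusers
---
```

This front matter is what the Hub reads to enable the "Use in Diffusers" button referenced below.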
This will ensure the "Edit model card" and "Use in Diffusers" buttons appear correctly on the Hub.
linyq changed pull request status to merged