WARNING 02-08 03:03:20 [envs.py:235] Flash Attention library "flash_attn" not found, using pytorch attention implementation
================================================================================
CONFIGURATION PARAMETERS:
================================================================================
cfg_scale_text : 5.0
data_root : data_inference/wan_i2v/
dit_root : ./weights/Wan2.1-I2V-14B-480P/
extra_module_root : weights/Stable-Video-Infinity/version-1.0/svi-shot.safetensors
lora_alpha : 1.0
max_prompts_per_sample : None
max_width : 832
num_clips : 10
num_motion_frames : 1
num_persistent_param_in_dit : 6000000000
num_steps : 50
output : videos/svi_shot/
prompt_path : /mnt/vita/scratch/vita-students/users/wuli/code/DigitalHuman/VBench/20260207_test/sample1/prompt.txt
prompt_prefix : none
prompt_repeat_times : 1
ref_image_path : /mnt/vita/scratch/vita-students/users/wuli/code/DigitalHuman/VBench/20260207_test/sample1/train_000001.jpg
ref_pad_cfg : False
ref_pad_num : -1
repeat_first_clip : False
seed_times : 42
test_samples : None
tile_size : [30, 52]
tile_stride : [15, 26]
tiled : False
train_architecture : lora
use_first_aug : False
use_first_prompt_only : True
================================================================================
Total number of cfg parameters: 27
================================================================================
Using direct paths for reference image and prompt file
Reference image: /mnt/vita/scratch/vita-students/users/wuli/code/DigitalHuman/VBench/20260207_test/sample1/train_000001.jpg
Prompt file: /mnt/vita/scratch/vita-students/users/wuli/code/DigitalHuman/VBench/20260207_test/sample1/prompt.txt
Generated 1 test scenario with 1 prompt
Loading models from: ./weights/Wan2.1-I2V-14B-480P/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth
model_name: wan_video_image_encoder model_class: WanImageEncoder
The following models are loaded: ['wan_video_image_encoder'].
Loading models from: ['./weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00001-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00002-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00003-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00004-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00005-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00006-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00007-of-00007.safetensors']
model_name: wan_video_dit model_class: WanModel
This model is initialized with extra kwargs: {'has_image_input': True, 'patch_size': [1, 2, 2], 'in_dim': 36, 'dim': 5120, 'ffn_dim': 13824, 'freq_dim': 256, 'text_dim': 4096, 'out_dim': 16, 'num_heads': 40, 'num_layers': 40, 'eps': 1e-06}
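The kwargs above pin down the transformer's scale. As a rough sanity check (assuming a standard block layout of self-attention, cross-attention, and a two-linear FFN, and ignoring embeddings, norms, and conditioning projections — an assumption, not the actual WanModel source), the attention and FFN weights alone roughly reproduce the "14B" in the checkpoint name:

```python
# Back-of-envelope DiT parameter count from the logged init kwargs.
dim, ffn_dim, num_layers = 5120, 13824, 40

# Per layer: QKV+O for self-attn (4*d^2) and cross-attn (4*d^2),
# plus a two-linear FFN (2 * d * ffn_dim). Biases/norms omitted.
per_layer = 8 * dim**2 + 2 * dim * ffn_dim
total = num_layers * per_layer
print(f"{total / 1e9:.2f}B")  # close to the "14B" in Wan2.1-I2V-14B-480P
```

This deliberately skips the patch/text/time embeddings and the I2V image-conditioning projections, so the true count is somewhat higher; the point is only that the logged dims are consistent with a ~14B model.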
The following models are loaded: ['wan_video_dit'].
Loading models from: ./weights/Wan2.1-I2V-14B-480P/models_t5_umt5-xxl-enc-bf16.pth
model_name: wan_video_text_encoder model_class: WanTextEncoder
The following models are loaded: ['wan_video_text_encoder'].
Loading models from: ./weights/Wan2.1-I2V-14B-480P/Wan2.1_VAE.pth
model_name: wan_video_vae model_class: WanVideoVAE
The following models are loaded: ['wan_video_vae'].
Loading LoRA models from file: weights/Stable-Video-Infinity/version-1.0/svi-shot.safetensors
Adding LoRA to wan_video_dit (['./weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00001-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00002-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00003-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00004-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00005-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00006-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00007-of-00007.safetensors']).
400 tensors are updated.
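The "400 tensors are updated" line reflects LoRA weight pairs being folded into the base DiT weights. A minimal sketch of the standard merge, W' = W + alpha * (B @ A), with the run's lora_alpha = 1.0 (the shapes below are hypothetical, chosen only for illustration — this is not the pipeline's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 8            # hypothetical shapes for illustration
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((rank, d_in))    # LoRA down-projection
B = np.zeros((d_out, rank))              # LoRA up-projection (zero-init at train start)
lora_alpha = 1.0                         # matches lora_alpha in the config above

# Standard LoRA merge: add the scaled low-rank update into the base weight.
W_merged = W + lora_alpha * (B @ A)
print(np.allclose(W_merged, W))          # True here only because B is all-zero
```

With a trained adapter, B is nonzero and the merge changes one tensor per targeted weight, which is how a single .safetensors adapter file can touch 400 tensors of the 14B base model.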
Using wan_video_text_encoder from ./weights/Wan2.1-I2V-14B-480P/models_t5_umt5-xxl-enc-bf16.pth.
Using wan_video_dit from ['./weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00001-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00002-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00003-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00004-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00005-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00006-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00007-of-00007.safetensors'].
Using wan_video_vae from ./weights/Wan2.1-I2V-14B-480P/Wan2.1_VAE.pth.
Using wan_video_image_encoder from ./weights/Wan2.1-I2V-14B-480P/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth.
####################################################################################################
STARTING SAMPLE 1/1: train_000001
####################################################################################################
Reference image: /mnt/vita/scratch/vita-students/users/wuli/code/DigitalHuman/VBench/20260207_test/sample1/train_000001.jpg
Available prompts: 1
Video dimensions: 832x528
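The 832x528 resolution is not arbitrary: Wan2.1's VAE compresses each frame spatially by 8x, and the DiT then patchifies by 2 (the spatial entries of patch_size [1, 2, 2] in the kwargs above), so both sides must be divisible by 16. A quick check of the resulting per-frame token grid:

```python
height, width = 528, 832   # "Video dimensions: 832x528" from the log
vae_stride = 8             # Wan2.1 VAE spatial compression factor
patch = 2                  # spatial patch size from the DiT kwargs [1, 2, 2]

tokens_h = height // vae_stride // patch   # 528 -> 66 latents -> 33 tokens
tokens_w = width // vae_stride // patch    # 832 -> 104 latents -> 52 tokens
print(tokens_h, tokens_w, tokens_h * tokens_w)  # 33 52 1716
```

So each frame contributes a 33x52 grid (1716 tokens) to the sequence; the temporal axis adds more tokens per latent frame but uses patch size 1.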
Processing train_000001 with 1 prompt
Generating 10 clips using the first prompt repeatedly
Created output directory for sample: videos/svi_shot/train_000001_20260208_030504
================================================================================
PROCESSING SAMPLE: train_000001
CHUNK: 1/10
PROMPT: An Amtrak train, numbered 146, travels along a set of tracks under a clear blue sky with scattered clouds, surrounded by a forested landscape.
NOTE: Using first prompt only (use_first_prompt_only=True)
================================================================================
Starting video generation...
0%| | 0/50 [00:00<?, ?it/s]
2%|█ | 1/50 [00:09<07:33, 9.26s/it]
4%|█ | 2/50 [00:17<06:49, 8.54s/it]
6%|█ | 3/50 [00:25<06:30, 8.30s/it]
8%|█ | 4/50 [00:33<06:16, 8.19s/it]
10%|█ | 5/50 [00:41<06:06, 8.13s/it]
12%|██ | 6/50 [00:49<05:56, 8.11s/it]
14%|██ | 7/50 [00:57<05:48, 8.10s/it]
16%|██ | 8/50 [01:05<05:39, 8.10s/it]
18%|██ | 9/50 [01:13<05:31, 8.09s/it]
20%|██ | 10/50 [01:21<05:23, 8.10s/it]
22%|███ | 11/50 [01:29<05:15, 8.10s/it]
24%|███ | 12/50 [01:38<05:08, 8.11s/it]
26%|███ | 13/50 [01:46<05:00, 8.11s/it]
28%|███ | 14/50 [01:54<04:52, 8.11s/it]
30%|███ | 15/50 [02:02<04:44, 8.12s/it]
32%|████ | 16/50 [02:10<04:36, 8.12s/it]
34%|████ | 17/50 [02:18<04:27, 8.12s/it]
36%|████ | 18/50 [02:26<04:19, 8.12s/it]
38%|████ | 19/50 [02:34<04:11, 8.12s/it]
40%|████ | 20/50 [02:43<04:03, 8.13s/it]
42%|█████ | 21/50 [02:51<03:55, 8.13s/it]
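The per-step rate settles around 8.1 s/it. A back-of-envelope runtime estimate for the whole run, from the logged rate and config (this ignores VAE decode, clip stitching, and per-clip setup, so it is a lower bound):

```python
sec_per_it = 8.13   # steady-state rate from the progress log above
steps = 50          # num_steps from the config
clips = 10          # num_clips from the config

per_clip = sec_per_it * steps   # ~406 s of denoising per clip
total = per_clip * clips        # ~4065 s across all 10 clips
print(f"{per_clip / 60:.0f} min/clip, {total / 60:.0f} min total")
```

So the sampling alone for this one reference image runs on the order of an hour, which is worth knowing before scaling up num_clips or num_steps.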