Few-Shot-Based Modular Image-to-Video Adapter for Diffusion Models

Zhenhao Li*1, Shaohan Yi*2, Zheng Liu†1, Leonartinus Gao†3, Minh Ngoc Le†4, Ambrose Ling4, Zhuoran Wang2, Md Amirul Islam1, Zhixiang Chi1, Yuanhao Yu1
1Huawei Technologies of Canada, 2University of Waterloo, 3University of British Columbia, 4University of Toronto
* Equal contribution. † Corresponding author. Shaohan, Leonartinus, Minh, Ambrose, and Zhuoran did this work during internships at Huawei.

Abstract

Diffusion models (DMs) have recently achieved impressive photorealism in image and video generation. However, their application to image animation remains limited, even when they are trained on large-scale datasets. Two primary challenges contribute to this: first, the high dimensionality of video signals makes training data effectively scarce, causing DMs to favor memorization over prompt compliance when generating motion; second, DMs struggle to generalize to motion patterns absent from the training set, and fine-tuning them to learn such patterns, especially from limited data, remains under-explored. To address these limitations, we propose the Modular Image-to-Video Adapter (MIVA), a lightweight sub-network attachable to a pre-trained DM; each MIVA is designed to capture a single motion pattern, and the approach scales via parallelization. A MIVA can be trained efficiently on approximately ten samples using a single consumer-grade GPU. At inference time, users specify motion by selecting one or more MIVAs, eliminating the need for prompt engineering. Extensive experiments demonstrate that MIVA enables more precise motion control while matching, or even surpassing, the generation quality of models trained on significantly larger datasets.
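For intuition, the snippet below gives a simplified, PyTorch-style illustration of the modular-adapter idea: per-motion adapters attached to a frozen layer of a pre-trained DM and composed at inference simply by naming them. The low-rank residual design, the names `MotionAdapter` and `AdaptedLinear`, and all dimensions and weights are illustrative placeholders rather than the exact MIVA implementation.

```python
import torch
import torch.nn as nn


class MotionAdapter(nn.Module):
    """Hypothetical low-rank adapter capturing one motion pattern."""

    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # zero-init: adapter starts as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


class AdaptedLinear(nn.Module):
    """A frozen base layer plus a weighted sum of user-selected adapters."""

    def __init__(self, base: nn.Linear, adapters: dict):
        super().__init__()
        self.base = base.requires_grad_(False)  # pre-trained weights stay frozen
        self.adapters = nn.ModuleDict(adapters)

    def forward(self, x: torch.Tensor, selection: dict) -> torch.Tensor:
        # `selection` maps adapter name -> user-chosen weight,
        # e.g. {"waterfall": 1.0} or {"waterfall": 1.0, "birds": 0.6}
        out = self.base(x)
        for name, weight in selection.items():
            out = out + weight * self.adapters[name](x)
        return out


# Attach two independently trained adapters to one frozen layer and
# compose them at inference by name, with no prompt engineering.
layer = AdaptedLinear(
    nn.Linear(320, 320),
    {"waterfall": MotionAdapter(320), "birds": MotionAdapter(320)},
)
features = torch.randn(2, 77, 320)  # dummy intermediate features
combined = layer(features, {"waterfall": 1.0, "birds": 0.6})
```

Because each adapter is trained independently on its own handful of clips, adding a new motion pattern never requires retraining the base DM or the other adapters.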

1-1. Single-motion-pattern: Our Benchmark Dataset (Sec. 4.2.1)

Our benchmark evaluation datasets (single-motion-pattern and multi-motion-pattern) will be made publicly available soon.

Birds flying (B)

Fireworks (F)

Guitar playing (G)

Horse running (H)

Helicopter (P)

Raining (R)

Turning to smile (S)

Waterfall (W)

1-2. Single-motion-pattern: LAMP Benchmark

Additionally, we present animation results on the synthesized-image benchmark dataset used in the LAMP paper. The dataset is available at https://github.com/RQ-Wu/LAMP/tree/master/benchmark.

Birds flying (B)

Fireworks (F)

Guitar playing (G)

Horse running (H)

Helicopter (P)

Raining (R)

Turning to smile (S)

Waterfall (W)

2-1. Multi-motion-pattern: Our Benchmark (Sec. 4.2.2)

Guitar & Smile

Waterfall & Horse

Waterfall & Birds

Waterfall & Clouds

Birds & Clouds

Smile & Clouds

2-2. Multi-motion-pattern: Impact of MIVA Weights (Appendix D.1)

The video below showcases the effect of adjusting the MIVA weights to further control the intensity of each constituent motion pattern.

Each row corresponds to a specific pair of MIVAs being combined, and each column corresponds to a different weight combination; a minimal sketch of this weighted combination follows.
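As a rough illustration of how these weights act, the snippet below sweeps a hypothetical pair of adapter residuals over several weight settings; the tensor shapes and weight values are placeholders, not the settings used in our experiments.

```python
import torch

# Stand-in residuals for two hypothetical MIVAs; in the real model these
# would come from adapter branches inside the diffusion network.
base = torch.randn(2, 77, 320)
delta_waterfall = torch.randn(2, 77, 320)
delta_birds = torch.randn(2, 77, 320)

# Each (w_a, w_b) pair corresponds to one column of the grid above;
# raising a weight strengthens that motion pattern's contribution.
for w_a, w_b in [(1.0, 0.25), (1.0, 0.5), (1.0, 1.0), (0.5, 1.0)]:
    out = base + w_a * delta_waterfall + w_b * delta_birds
```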

2-3. Camera Movement Results (Appendix D.2)

Pan right

Zoom out

3. Ablation Study: MIVA vs. M-MIVA (Sec. 4.3)

Birds flying

Fireworks

Horse running

4. Wan-based MIVA (Sec. 4.4, Appendix D.3)