Diffusion models (DMs) have recently achieved impressive photorealism in image and video generation. However, their application to image animation remains limited, even when they are trained on large-scale datasets. Two primary challenges contribute to this: first, training data is scarce relative to the high dimensionality of video signals, causing DMs to favor memorization over prompt compliance when generating motion; second, DMs struggle to generalize to novel motion patterns absent from the training set, and fine-tuning them to learn such patterns, especially from limited training data, remains under-explored. To address these limitations, we propose the Modular Image-to-Video Adapter (MIVA), a lightweight sub-network attachable to a pre-trained DM; each MIVA is designed to capture a single motion pattern, and multiple MIVAs can be scaled via parallelization. A MIVA can be trained efficiently on approximately ten samples using a single consumer-grade GPU. At inference time, users specify motion by selecting one or more MIVAs, eliminating the need for prompt engineering. Extensive experiments demonstrate that MIVA enables more precise motion control while matching, or even surpassing, the generation quality of models trained on significantly larger datasets.
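To make the modular-adapter idea concrete, below is a minimal PyTorch sketch of one plausible realization: a small zero-initialized bottleneck adapter attached as a residual branch to a frozen block of a pre-trained DM, with per-adapter weights selecting and scaling the motion patterns at inference. The class names, bottleneck design, and injection point are illustrative assumptions for exposition, not the actual MIVA architecture.

```python
# Illustrative sketch only: a lightweight residual adapter on a frozen backbone
# block, with per-adapter weights. Names and design choices are hypothetical.
import torch
import torch.nn as nn


class MotionAdapter(nn.Module):
    """Small bottleneck sub-network meant to capture a single motion pattern."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # zero residual at init: backbone behavior is preserved
        nn.init.zeros_(self.up.bias)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.down(h)))


class AdaptedBlock(nn.Module):
    """Wraps a frozen backbone block and adds weighted adapter residuals."""

    def __init__(self, block: nn.Module, dim: int, num_adapters: int = 1):
        super().__init__()
        self.block = block
        for p in self.block.parameters():  # only the adapters are trainable
            p.requires_grad_(False)
        self.adapters = nn.ModuleList(MotionAdapter(dim) for _ in range(num_adapters))

    def forward(self, h: torch.Tensor, weights=None) -> torch.Tensor:
        h = self.block(h)
        if weights is None:
            weights = [1.0] * len(self.adapters)
        for w, adapter in zip(weights, self.adapters):
            h = h + w * adapter(h)  # each adapter contributes an independently scaled residual
        return h


if __name__ == "__main__":
    backbone_block = nn.Sequential(nn.LayerNorm(320), nn.Linear(320, 320))
    block = AdaptedBlock(backbone_block, dim=320, num_adapters=2)
    x = torch.randn(2, 16, 320)            # (batch, tokens, channels)
    out = block(x, weights=[0.7, 0.3])     # per-adapter intensity weights
    print(out.shape)                       # torch.Size([2, 16, 320])
```

The zero-initialized up-projection is a common adapter trick: each new sub-network starts as an identity mapping, so attaching it does not perturb the pre-trained DM before training, and adapters trained in isolation can later be combined additively.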
Our benchmark evaluation datasets (single-motion-pattern and multi-motion-pattern) will be made publicly available soon.
Additionally, we present animation results on the benchmark dataset of synthesized images used in the LAMP paper. The dataset is available at https://github.com/RQ-Wu/LAMP/tree/master/benchmark.
The video below showcases the effect of adjusting the MIVA weights to further control the intensity of each constituent motion pattern.
Each row corresponds to a specific pair of MIVAs being combined, and each column corresponds to a different weight combination.
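For readers who want to reproduce such a grid, the tiny sketch below sweeps weight combinations for a pair of MIVAs. The `sample_video` callable stands in for whatever image-to-video sampling routine attaches the selected adapters; it is purely hypothetical and not part of this page.

```python
# Hypothetical sweep over (w1, w2) weight combinations for a pair of MIVAs,
# mirroring the rows/columns layout of the video above.
from itertools import product
from typing import Callable, Dict, Sequence, Tuple


def sweep_weight_grid(
    image,
    miva_pair: Sequence,
    sample_video: Callable,                      # assumed sampler: (image, mivas, weights) -> clip
    levels: Sequence[float] = (0.0, 0.5, 1.0),
) -> Dict[Tuple[float, float], object]:
    """Render one clip per weight combination for the given pair of adapters."""
    return {
        (w1, w2): sample_video(image, miva_pair, weights=(w1, w2))
        for w1, w2 in product(levels, levels)
    }
```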