Stable Video Diffusion Image-to-Video Model
Stable Video Diffusion (SVD) Image-to-Video is a latent diffusion model trained to generate short video clips conditioned on a single input image. This model was trained to generate 25 frames at a resolution of 576x1024 given a context frame of the same size, and was finetuned from SVD Image-to-Video [14 frames]. The widely used f8-decoder was also finetuned for temporal consistency; for convenience, the model is additionally provided with the standard frame-wise decoder.
https://stability.ai/news/stable-video-diffusion-open-ai-video-model
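For anyone wanting to try it locally, here is a minimal sketch of inference using Hugging Face diffusers' StableVideoDiffusionPipeline. The checkpoint name, input file, and parameter values are assumptions based on the public release, not something spelled out in the announcement:

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the 25-frame SVD-XT checkpoint in half precision (assumed checkpoint name).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# The model expects a 1024x576 conditioning frame; "context_frame.png" is a hypothetical input.
image = load_image("context_frame.png")
image = image.resize((1024, 576))

# decode_chunk_size trades VRAM for decoding speed; lower it if you run out of memory.
generator = torch.manual_seed(42)
frames = pipe(image, num_frames=25, decode_chunk_size=8, generator=generator).frames[0]

export_to_video(frames, "generated.mp4", fps=7)
```

This is only a sketch of how the pipeline is typically driven; check the model card for the exact recommended settings.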
Oh wow, I know the results are probably cherry-picked, but this still seems like such a step up.
I can make stuff like that with Deforum. The real-deal video models have all been either terrible resolution or locked behind a paid API :(
So here's hoping.