Giving machines the ability to imagine possible new objects or scenes from linguistic descriptions and produce their realistic renderings is arguably one of the most challenging problems in computer vision. Recent advances in deep generative models have led to new approaches that give promising results towards this goal. In this paper, we introduce a new method called DiCoMoGAN for manipulating videos with natural language, aiming to perform local and semantic edits on a video clip to alter the appearance of an object of interest. Our GAN architecture allows for better utilization of multiple observations by disentangling content and motion to enable controllable semantic edits. To this end, we introduce two tightly coupled networks: (i) a representation network for constructing a concise understanding of motion dynamics and temporally invariant content, and (ii) a translation network that exploits the extracted latent content representation to actuate the manipulation according to the target description. Our qualitative and quantitative evaluations demonstrate that DiCoMoGAN significantly outperforms existing frame-based methods, producing temporally coherent and semantically more meaningful results.
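To make the two-network design concrete, below is a minimal, illustrative PyTorch sketch, not the authors' code. All module names, layer choices, and dimensions (e.g., a linear stand-in for the CNN frame encoder, a GRU for temporal aggregation, and the latent sizes) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class RepresentationNetwork(nn.Module):
    """Encodes a frame sequence into a temporally invariant content code
    and a per-frame dynamic (motion) code. Toy stand-in, not the paper's architecture."""
    def __init__(self, frame_dim=256, static_dim=64, dynamic_dim=16):
        super().__init__()
        self.frame_encoder = nn.Linear(frame_dim, 128)   # stand-in for a CNN frame encoder
        self.temporal = nn.GRU(128, 128, batch_first=True)
        self.to_static = nn.Linear(128, static_dim)      # clip-level content code
        self.to_dynamic = nn.Linear(128, dynamic_dim)    # frame-level motion code

    def forward(self, frames):                           # frames: (B, T, frame_dim)
        h = torch.relu(self.frame_encoder(frames))
        out, last = self.temporal(h)
        static = self.to_static(last[-1])                # (B, static_dim), one per clip
        dynamic = self.to_dynamic(out)                   # (B, T, dynamic_dim), one per frame
        return static, dynamic

class TranslationNetwork(nn.Module):
    """Edits a frame conditioned on the content code and a target text embedding."""
    def __init__(self, frame_dim=256, static_dim=64, text_dim=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(frame_dim + static_dim + text_dim, 256),
            nn.ReLU(),
            nn.Linear(256, frame_dim),
        )

    def forward(self, frame, static, text_emb):          # each input: (B, dim)
        return self.fuse(torch.cat([frame, static, text_emb], dim=-1))

# Toy usage: edit an 8-frame clip according to a target description embedding.
rep_net, trans_net = RepresentationNetwork(), TranslationNetwork()
frames, text_emb = torch.randn(2, 8, 256), torch.randn(2, 32)
static, dynamic = rep_net(frames)
edited = torch.stack([trans_net(frames[:, t], static, text_emb) for t in range(8)], dim=1)
print(edited.shape)  # torch.Size([2, 8, 256])
```

The point of the sketch is the division of labor: the representation network summarizes the clip into content and motion codes, while the translation network edits each frame using only the content code and the target text, leaving motion untouched.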
Our goal is to perform seamless and semantically meaningful edits on each video frame. In doing so, we need to keep the identity, motion dynamics, and description-irrelevant regions intact.
DiCoMoGAN learns latent variables depicting highly interpretable concepts, decomposed into text-relevant, text-irrelevant static, and dynamic features. Note that wall and floor colors are not mentioned in the descriptions during training.
A key advantage of DiCoMoGAN is its use of latent ODEs, which allow us to interpolate in-between frames over time.
Spatiotemporal Sampling by Neural ODE
Here, we interpolate 256 frames between the first (t=0.0) and last (t=1.0) frames of the input video using the latent ODE.
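The following sketch shows how such spatiotemporal sampling could look in code. It is illustrative only, not the authors' implementation: it assumes the `torchdiffeq` package for `odeint` and a toy MLP in place of the learned latent dynamics and frame decoder.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint   # pip install torchdiffeq

class LatentDynamics(nn.Module):
    """dz/dt parameterized by a small MLP (toy stand-in for the learned dynamics)."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, z):                 # odeint expects f(t, z)
        return self.net(z)

dynamics = LatentDynamics()
z0 = torch.randn(1, 16)                      # dynamic latent inferred for the first frame (t=0.0)
times = torch.linspace(0.0, 1.0, 256)        # 256 evenly spaced query times up to the last frame (t=1.0)
z_traj = odeint(dynamics, z0, times)         # (256, 1, 16): one latent per interpolated frame
print(z_traj.shape)
# In the full model, each z_traj[i] would be decoded into a frame, yielding a
# temporally smooth interpolation between the first and last observed frames.
```

Because the ODE solver can be queried at arbitrary time points, the same mechanism supports sampling the clip at any temporal resolution, not just the frame times seen during training.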
We collected the Fashion Videos dataset from raw videos on the website of an online clothing retailer by searching for products in the cardigans, dresses, jackets, jeans, jumpsuits, shorts, skirts, tops, and trousers categories. There are 3178 video clips (approximately 109K distinct frames), which we split into 2579 clips for training and 598 for testing.
Please do not hesitate to send us an e-mail to access the Fashion Videos dataset.
@inproceedings{Karacan_2022_BMVC,
  author    = {Levent Karacan and Tolga Kerimoğlu and İsmail Ata İnan and Tolga Birdal and Erkut Erdem and Aykut Erdem},
  title     = {Disentangling Content and Motion for Text-Based Neural Video Manipulation},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2022}
}