SonicDiffusion: Audio-Driven Image Generation and Editing with Pretrained Diffusion Models

Burak Can Biner 1,2, Farrin Sofian 1,3, Umur Berkay Karakaş 1,2, Duygu Ceylan 4, Aykut Erdem 1,2, Erkut Erdem 1,5

1 KUIS AI Center, 2 Koç University, 3 University of California, Irvine, 4 Adobe Research, 5 Hacettepe University
Abstract

We are witnessing a revolution in conditional image synthesis with the recent success of large-scale text-to-image generation methods. This success also opens up new opportunities for controlling the generation and editing process with multi-modal input. While spatial control using cues such as depth, sketches, and other images has attracted a lot of research, we argue that another equally effective modality is audio, since sound and sight are two main components of human perception. Hence, we propose a method to enable audio conditioning in large-scale image diffusion models. Our method first maps features obtained from audio clips to tokens that can be injected into the diffusion model in a fashion similar to text tokens. We introduce additional audio-image cross-attention layers, which we finetune while freezing the weights of the original layers of the diffusion model. In addition to audio-conditioned image generation, our method can also be used in conjunction with diffusion-based editing methods to enable audio-conditioned image editing. We demonstrate our method on a wide range of audio and image datasets. We perform extensive comparisons with recent methods and show favorable performance.
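The core mechanism described above, projecting audio features to token embeddings and injecting them through new audio-image cross-attention layers while the pretrained weights stay frozen, can be sketched as follows. This is a minimal PyTorch illustration assuming a Stable-Diffusion-style U-Net with 768-dimensional conditioning tokens; the class names, layer sizes, and the choice of audio encoder are our assumptions, not the released implementation.

```python
# Minimal sketch of the two trainable components: an audio-to-token mapper and
# an extra cross-attention layer. Names and dimensions are illustrative.
import torch
import torch.nn as nn

class AudioTokenMapper(nn.Module):
    """Maps a pooled audio-encoder feature to a sequence of tokens living in
    the same embedding space as the diffusion model's text tokens."""
    def __init__(self, audio_dim=768, token_dim=768, num_tokens=77):
        super().__init__()
        self.num_tokens, self.token_dim = num_tokens, token_dim
        self.proj = nn.Sequential(
            nn.Linear(audio_dim, token_dim * 2),
            nn.GELU(),
            nn.Linear(token_dim * 2, num_tokens * token_dim),
        )

    def forward(self, audio_feat):             # audio_feat: (B, audio_dim)
        tokens = self.proj(audio_feat)         # (B, num_tokens * token_dim)
        return tokens.view(-1, self.num_tokens, self.token_dim)

class AudioCrossAttention(nn.Module):
    """Additional cross-attention layer: image latents attend to audio tokens.
    Placed alongside each frozen text cross-attention block of the U-Net."""
    def __init__(self, latent_dim, token_dim=768, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(latent_dim, num_heads,
                                          kdim=token_dim, vdim=token_dim,
                                          batch_first=True)
        self.norm = nn.LayerNorm(latent_dim)

    def forward(self, latents, audio_tokens):  # latents: (B, N, latent_dim)
        out, _ = self.attn(self.norm(latents), audio_tokens, audio_tokens)
        return latents + out                   # residual keeps the frozen path intact

# Training idea: freeze the pretrained U-Net and optimize only the new modules.
# for p in unet.parameters():
#     p.requires_grad_(False)
# trainable = list(mapper.parameters()) + list(audio_attn_layers.parameters())
```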
[Figure: Greatest Hits]
6. Volume Adjustment

SonicDiffusion can generate images that reflect the volume (intensity) of the input audio clip. We show some examples below.
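As a rough illustration of how volume could enter the pipeline, the sketch below derives a loudness factor from a clip's RMS energy and uses it to scale the audio conditioning tokens. The function name, the reference level, and the scaling strategy itself are hypothetical, chosen only to illustrate volume-dependent conditioning, and do not describe the paper's implementation.

```python
# Hypothetical sketch: scale audio conditioning tokens by clip loudness.
import numpy as np
import librosa

def loudness_scale(wav_path, ref_rms=0.1):
    """Return a factor in [0, 1] derived from the clip's RMS energy."""
    y, sr = librosa.load(wav_path, sr=None)
    rms = float(np.sqrt(np.mean(y ** 2)))   # clip-level RMS energy
    return min(rms / ref_rms, 1.0)          # normalize against a reference level

# audio_tokens = mapper(audio_features)               # tokens from the earlier sketch
# audio_tokens = loudness_scale("clip.wav") * audio_tokens
```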