Sequential Compositional Generalization in Multimodal Models (NAACL 2024)

Can multimodality help sequential models to compositionally generalize?

CompAct (Compositional Activities) is a comprehensive benchmark for assessing the compositional generalization abilities of sequential multimodal models. CompAct is a carefully constructed, perceptually grounded dataset set within the rich backdrop of egocentric kitchen activity videos. Each instance in our dataset combines raw video footage, naturally occurring audio, and crowd-sourced step-by-step descriptions. More importantly, our setup ensures that individual concepts are consistently distributed across the training and evaluation sets, while their compositions are novel in the evaluation set. We conduct a comprehensive assessment of several unimodal and multimodal models.
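To make the split criterion concrete, the following is a minimal Python sketch (not the authors' released code; treating each word of an utterance as an atomic concept is a simplifying assumption) of how one might verify that a train/evaluation split keeps atoms shared but compositions novel:

def atoms(utterance: str) -> set[str]:
    # Simplified assumption: each word is an atomic concept (e.g., verb, noun).
    return set(utterance.lower().split())

def compositions(utterances: list[str]) -> set[str]:
    # A composition is the full utterance, i.e., a particular combination of atoms.
    return set(u.lower() for u in utterances)

def check_compositional_split(train: list[str], evaluation: list[str]) -> bool:
    train_atoms = set().union(*(atoms(u) for u in train))
    eval_atoms = set().union(*(atoms(u) for u in evaluation))
    # 1) Atoms are consistently distributed: no unseen concept at evaluation time.
    atoms_covered = eval_atoms <= train_atoms
    # 2) Compositions are novel: no evaluation utterance was seen during training.
    novel_compositions = compositions(evaluation).isdisjoint(compositions(train))
    return atoms_covered and novel_compositions

if __name__ == "__main__":
    train = ["pick up chopping board", "wash plate", "pour pasta"]
    evaluation = ["pick up plate", "wash chopping board"]
    print(check_compositional_split(train, evaluation))  # True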

Paper

For more details about the benchmark and experiments, please read our paper. If you find CompAct beneficial for your research, please cite it:
@inproceedings{yagcioglu2024compact,
    title={Sequential Compositional Generalization in Multimodal Models},
    author={Semih Yagcioglu and Osman Batur Ince and Aykut Erdem and Erkut Erdem and Desmond Elliott and Deniz Yuret},
    year={2024},
    booktitle={Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
}

Examples

We share a few example instances from the dataset below. Each instance consists of a sequence of image-text-audio triplets: the first three steps are the input utterances and the final step is the target. The target prediction is a textual utterance; the target step's image and audio are not used as input and are shown only to provide context.


Example 1. Inputs: pick up chopping board → scrape pepper into pan → put down chopping board. Target: pick up pepper

Example 2. Inputs: put red chilli → open tap → wash plate. Target: put plate

Example 3. Inputs: open container → stir pasta → pick up pasta. Target: pour pasta
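
For readers who want to work with such instances programmatically, here is a minimal Python sketch of how one instance could be represented. The field names and file paths are hypothetical illustrations, not the released data loader:

from dataclasses import dataclass

@dataclass
class Step:
    video_clip: str   # path to the raw video segment for this step (hypothetical)
    audio_clip: str   # path to the naturally occurring audio (hypothetical)
    utterance: str    # crowd-sourced step description, e.g. "wash plate"

@dataclass
class Instance:
    context: list[Step]  # the first three steps, used as input
    target: str          # textual utterance of the final step, to be predicted

example = Instance(
    context=[
        Step("clips/0001_a.mp4", "clips/0001_a.wav", "put red chilli"),
        Step("clips/0001_b.mp4", "clips/0001_b.wav", "open tap"),
        Step("clips/0001_c.mp4", "clips/0001_c.wav", "wash plate"),
    ],
    target="put plate",
)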

Authors

Semih Yagcioglu
Osman Batur Ince
Aykut Erdem
Erkut Erdem
Desmond Elliott
Deniz Yuret

Contact

For further information, please send an email to Semih Yagcioglu. The website is heavily inspired by the HellaSwag website.