Telling Left from Right: Learning Spatial Correspondence of Sight and Sound
CVPR 2020 (Oral Presentation)
Karren Yang

MIT

Bryan Russell

Adobe Research

Justin Salamon

Adobe Research

Paper

Abstract

Self-supervised audio-visual learning aims to capture useful representations of video by leveraging correspondences between visual and audio inputs. Existing approaches have focused primarily on matching semantic information between the sensory streams. We propose a novel self-supervised task to leverage an orthogonal principle: matching spatial information in the audio stream to the positions of sound sources in the visual stream. Our approach is simple yet effective. We train a model to determine whether the left and right audio channels have been flipped, forcing it to reason about spatial localization across the visual and audio streams. To train and evaluate our model, we introduce a large-scale video dataset, YouTube-ASMR-300K, with spatial audio comprising over 900 hours of footage. We demonstrate that understanding spatial correspondence enables models to perform better on three audio-visual tasks, achieving quantitative gains over supervised and self-supervised baselines that do not leverage spatial audio cues. We also show how to extend our self-supervised approach to 360 degree videos with ambisonic audio.
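To make the pretext task concrete, below is a minimal sketch of the left/right flip objective in PyTorch. The function name, tensor layout, and the training note are illustrative assumptions, not the authors' implementation.

import random
import torch

def make_flip_example(stereo_audio: torch.Tensor, flip_prob: float = 0.5):
    """Randomly swap the left/right channels of a stereo clip.

    stereo_audio: tensor of shape (2, num_samples); row 0 = left, row 1 = right.
    Returns the (possibly flipped) audio and a binary label
    (1 if the channels were swapped, 0 otherwise).
    """
    flipped = random.random() < flip_prob
    if flipped:
        stereo_audio = torch.flip(stereo_audio, dims=[0])  # swap channel order
    return stereo_audio, int(flipped)

# Training idea (illustrative, not the authors' code): feed the video frames
# together with the possibly-flipped audio into an audio-visual network and
# train it with binary cross-entropy to predict the flip label, which forces
# the model to spatially localize sound sources in the visual stream.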

Read the Full Paper   (CVF) (arXiv)


YouTube-ASMR-300K Dataset

Description

We introduce a new large-scale dataset of ASMR videos with stereo audio, collected from YouTube. ASMR (autonomous sensory meridian response) videos are readily available online and typically feature an individual actor, or "ASMRtist", making different sounds while facing a camera set up with stereo/binaural or paired microphones. The audio in these videos contains binaural cues, so there is a strong correspondence between the visual stream and the spatial audio cues.

Examples of ASMR Videos

Download the videos (code on GitHub)
Download the video URLs
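If you start from the URL list, one possible way to fetch the clips is with the yt-dlp Python package. The file name and one-URL-per-line format below are assumptions; the download code linked above is the authoritative pipeline.

from yt_dlp import YoutubeDL

# Hypothetical URL file: one YouTube URL per line.
with open("asmr_video_urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

ydl_opts = {
    "format": "bestvideo+bestaudio/best",  # keep the original stereo audio track
    "outtmpl": "videos/%(id)s.%(ext)s",    # save each clip under its YouTube ID
}
with YoutubeDL(ydl_opts) as ydl:
    ydl.download(urls)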

CVPR Talk


Download the slides
Try the upmixing demo model (code on GitHub)

Teaser Image (Remote CVPR Poster)

We use self-supervision to train a model to establish spatial correspondence between sight and sound, and we introduce a video dataset with spatial audio.

Poster

Team

Karren Yang

MIT

Contact   |   Website

Bryan Russell

Adobe Research

Contact   |   Website

Justin Salamon

Adobe Research

Contact   |   Website