Karren D. Yang

I am an AI/ML Researcher at Apple. My research focuses on machine learning, with an emphasis on multimodal learning. Currently, I work on controllable generative models for 3D computer vision and speech/audio.

Previously, I was a Ph.D. student at the Laboratory for Information & Decision Systems at MIT working with Caroline Uhler. During my Ph.D., I interned at Apple, Niantic Labs, Meta Reality Labs, Bosch Center for AI, and Adobe Research.

Email  /  CV  /  Google Scholar

Publications
Research Topics: multimodal learning / audio-visual learning / computational biology / generative modeling / optimal transport / causal inference
Camera Pose Estimation and Localization with Active Audio Sensing
Karren Yang, Clément Godard, Eric Brachmann, Michael Firman
ECCV, 2022

We use audio sensing to improve the performance of visual localization methods on three tasks: relative pose estimation, place recognition, and absolute pose regression.

pdf | abstract

Audio-Visual Speech Codecs: Rethinking Audio-Visual Speech Enhancement by Re-Synthesis
Karren Yang, Dejan Markovic, Steven Krenn, Vasu Agrawal, Alexander Richard
CVPR, 2022   (Oral Presentation)

We perform audio-visual speech enhancement by using audio-visual speech cues to generate the codes of a neural speech codec, enabling efficient synthesis of clean, realistic speech from noisy signals.

pdf | abstract | video | dataset

Defending Multimodal Fusion Models against Single-Source Adversaries
Karren Yang, Wan-Yi Lin, Manash Barman, Filipe Condessa, Zico Kolter
CVPR, 2021

We study the robustness of multimodal models on three tasks (action recognition, object detection, and sentiment analysis) and develop a robust fusion strategy that protects against worst-case errors caused by an adversarial attack on a single modality.

pdf | abstract

Mol2Image: Improved Conditional Flow Models for Molecule-to-Image Synthesis
Karren Yang, Samuel Goldman, Wengong Jin, Alex Lu, Regina Barzilay, Tommi Jaakkola, Caroline Uhler
CVPR, 2021

We build a molecule-to-image synthesis model that predicts the biological effects of molecular treatments on cell microscopy images.

pdf | abstract | code

Telling Left from Right: Learning Spatial Correspondence of Sight and Sound
Karren Yang, Justin Salamon, Bryan Russell
CVPR, 2020   (Oral Presentation)

We leverage spatial correspondence between audio and vision in videos for self-supervised representation learning and apply the learned representations to three downstream tasks: sound localization, audio spatialization, and audio-visual sound separation.

pdf | abstract | project website | dataset | demo code

Multi-domain translation between single-cell imaging and sequencing data using autoencoders
Karren Dai Yang*, Anastasiya Belyaeva*, Saradha Venkatachalapathy, Karthik Damodaran, Abigail Katcoff, Adityanarayanan Radhakrishnan, G. V. Shivashankar, Caroline Uhler
Nature Communications 12, 31 (2021)

We propose a framework for integrating and translating between different modalities of single-cell biological data by using autoencoders to map to a shared latent space.

pdf | abstract | code

Causal network models of SARS-CoV-2 expression and aging to identify candidates for drug repurposing
Anastasiya Belyaeva, Louis Cammarata, Adityanarayanan Radhakrishnan, Chandler Squires, Karren Yang, G. V. Shivashankar, Caroline Uhler
Nature Communications 12, 1024 (2021)

We integrate transcriptomic, proteomic, and structural data to identify candidate drugs and targets that affect the SARS-CoV-2 and aging pathways.

pdf | abstract

Optimal Transport using GANs for Lineage Tracing
Neha Prasad, Karren Yang, Caroline Uhler
ICML Workshop on Computational Biology, 2020   (Oral Spotlight)

We propose a novel approach to computational lineage tracing that combines supervised learning with optimal transport based on generative adversarial networks.

pdf | abstract | code

Scalable Unbalanced Optimal Transport using Generative Adversarial Networks
Karren Yang, Caroline Uhler
ICLR, 2019

We align and translate between datasets by performing unbalanced optimal transport with generative adversarial networks.

pdf | abstract | code

Predicting cell lineages using autoencoders and optimal transport
Karren Yang, Karthik Damodaran, Saradha Venkatachalapathy, Ali C. Soylemezoglu, G. V. Shivashankar, Caroline Uhler
PLOS Computational Biology 16(4): e1007828 (2020)

We combine autoencoding and optimal transport to align biological imaging datasets collected from different time points.

pdf | abstract | code

Multi-Domain Translation by Learning Uncoupled Autoencoders
Karren Yang, Caroline Uhler
ICML Workshop on Computational Biology, 2019   (Oral Spotlight)

We train domain-specific autoencoders to map different data modalities to the same latent space and translate between them.

pdf | abstract

ABCD-Strategy: Budgeted Experimental Design for Targeted Causal Structure Discovery
Raj Agrawal, Chandler Squires, Karren Yang, Karthik Shanmugam, Caroline Uhler
AISTATS, 2019

We propose a budgeted experimental design strategy for targeted causal structure discovery.

pdf | abstract | code

Characterizing and Learning Equivalence Classes of Causal DAGs under Interventions
Karren Yang, Abigail Katcoff, Caroline Uhler
ICML, 2018   (Oral Presentation)

We characterize interventional Markov equivalence classes of DAGs that can be identified under soft interventions and propose the first provably consistent algorithm for learning DAGs in this setting.

pdf | abstract

Memorization in Overparameterized Autoencoders
Adityanarayanan Radhakrishnan, Karren Yang, Mikhail Belkin, Caroline Uhler
arXiv, 2018

We show that overparameterized autoencoders are biased towards learning step functions around training examples.

pdf | abstract

Permutation-based Causal Inference Algorithms with Interventions
Yuhao Wang, Liam Solus, Karren Yang, Caroline Uhler
NeurIPS, 2017   (Oral Presentation)

We present two provably consistent algorithms for learning DAGs from observational and (hard) interventional data.

pdf | abstract


Modified version of template from here and here.