9:00 am - 9:15 am  Welcome to BayLearn 2020, BayLearn Organizers: Jerremy Holland, Jean-François Paiement, Sudarshan Lamkhede, Alice Xiang

9:15 am - 10:00 am  Keynote 1: Timnit Gebru

10:00 am - 10:15 am  Q&A
10:15 am - 10:30 am  BREAK
10:30 am - 11:00 am  Keynote 2: Sandrine Dudoit
11:00 am - 11:15 am  Q&A
11:15 am - 11:55 am  Keynote 3: Chelsea Finn
11:55 am - 12:10 pm  Q&A
12:10 pm - 1:00 pm  LUNCH BREAK
1:00 pm - 1:30 pm  Keynote 4: Susan Athey
1:30 pm - 1:45 pm  Q&A
1:45 pm - 2:00 pm  BREAK

2:00 pm - 3:00 pm  Poster Session I
ROOM # 1
Cluster 1: Fairness, Explainable ML, Privacy, and Robustness 

Neural Additive Models: Interpretable Machine Learning with Neural Nets

siVAE: interpreting latent dimensions within variational autoencoders

Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs

Synthetic Health Data for Fostering Reproducibility of Private Research Studies

Adversarial Learning for Debiasing Knowledge Base Embeddings

Robustness Analysis of Deep Learning via Implicit Models
ROOM # 2
Cluster 2: Computer Vision 

Protecting Against Image Translation Deepfakes by Leaking Universal Perturbations from Black-Box Neural Networks

Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics

CoCon: Cooperative-Contrastive Learning

Can Neural Networks Learn Non-Verbal Reasoning?

Modality-Agnostic Attention Fusion for visual search with text feedback
ROOM # 3
Cluster 3: Deep Learning 

Revisiting Spatial Invariance with Low-Rank Local Connectivity

What is being transferred in transfer learning?

Neural Anisotropy Directions

Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration

What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation

ROOM # 4
Cluster 4: ML Methods and Tools 

Bandit-based Monte Carlo Optimization for Nearest Neighbors

LassoNet: A Neural Network with Feature Sparsity

Temperature check: theory and practice for training models with softmax-cross-entropy losses

Meta-Learning Requires Meta-Augmentation

Energy-based View of Retrosynthesis
ROOM # 5
Cluster 5: Reinforcement Learning 

Learning to grow: control of materials self-assembly using evolutionary reinforcement learning

Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization

Provably Efficient Policy Optimization via Thompson Sampling

Uncovering Task Clusters in Multi-Task Reinforcement Learning

Curriculum and Decentralized Learning in Google Research Football
3:00 pm - 4:00 pm  Poster Session II
ROOM # 5
Cluster 6: Bayesian Learning and Uncertainty 

Autofocused oracles for design

Exact posteriors of wide Bayesian neural networks

Deep Ensembles: a loss landscape perspective

Active Online Domain Adaptation

TSGLR: an Adaptive Thompson Sampling for the Switching Multi-Armed Bandit Problem
ROOM # 2
Cluster 7: Computer Vision and Robotics

Interpretable Planning-Aware Representations for Multi-Agent Trajectory Forecasting 

Learning Mixed-Integer Convex Optimization Strategies for Robot Planning and Control

Beyond Supervision for Monocular Depth Estimation

A Synthetic Data Petri Dish for Studying Mode Collapse in GANs

Attention-Sampling Graph Convolutional Networks

Towards Learning Robots Which Adapt On The Fly
ROOM # 4
Cluster 8: Deep ML and other topics 

Simultaneous Learning of the Inputs and Parameters in Neural Collaborative Filtering

Ads Clickthrough Rate Prediction Models For Multi-Datasource Tasks

Neural Interventional GRU-ODEs
ROOM # 3
Cluster 9: Optimization 

VP-FO: A Variable Projection Method for Training Neural Networks

Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning

Neural Representations in Hybrid Recommender Systems: Prediction versus Regularization

ECLIPSE: An Extreme-Scale Linear Program Solver for Web-Applications
ROOM # 1
Cluster 10: Reinforcement Learning

Safety Aware Reinforcement Learning (SARL)

Meta Attention Networks: Meta Learning Attention to Modulate Information Between Sparsely Interacting Recurrent Modules

Batch Reinforcement Learning Through Continuation Method

See, Hear, Explore: Curiosity via Audio-Visual Association 
4:00 pm - 5:00 pm  Poster Session III
ROOM # 4
Cluster 11: Natural Language Processing 

Automated Utterance Generation

Entity Skeletons as Intermediate Representations for Visual Storytelling

Learning to reason by learning on rationales

MUFASA: Multimodal Fusion Architecture Search for Electronic Health Records

VirAAL: Virtual Adversarial Active Learning

ChemBERTa: Utilizing Transformer-Based Attention for Understanding Chemistry
ROOM # 5
Cluster 12: On-Device ML and Human-Computer Interaction 

GANs for Continuous Path Keyboard Input Modeling

Architecture Compression

A flexible, extensible software framework for model compression based on the LC algorithm

Rotation-Invariant Gait Identification with Quaternion Convolutional Neural Networks
ROOM # 2
Cluster 13: Large-Scale Learning

Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization

Self-supervised Learning for Deep Models in Recommendations

Learning Multi-granular Quantized Embeddings for Large-Vocab Categorical Features in Recommender Systems

Distributed Sketching Methods for Privacy Preserving Regression

Hamming Space Locality Preserving Neural Hashing for Similarity Search
ROOM # 3
Cluster 14: Optimization 

Exact Polynomial-time Convex Optimization Formulations for Two-Layer ReLU Networks

DisARM: An Antithetic Gradient Estimator for Binary Latent Variables

Boosted Sparse Oblique Decision Trees

Whitening and second order optimization both destroy information about the dataset, and can make generalization impossible