2017 Poster Sessions: Not Just a Black Box: Interpretable Deep Learning for Genomics and Beyond

Student Name: Avanti Shrikumar
Advisor: Anshul Kundaje
Deep learning models give state-of-the-art results on diverse problems, but their lack of interpretability is a major limitation. Consider a model trained to predict which DNA mutations cause disease. If the model performs well, it has likely identified patterns that a biologist would like to understand, but this is difficult if the model is a black box. Here, we present two algorithms that significantly improve upon previous approaches to interpretability. The first assigns importance scores to individual inputs for a given prediction. The second uses these importance scores to identify recurring patterns of interest. We present case studies where our algorithms lead to novel biological findings missed by previous approaches, demonstrating the potential of interpretable deep learning in genomics and beyond.
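To give a flavor of what "assigning importance scores to individual inputs" means, here is a minimal illustrative sketch in Python. It is not the poster's algorithm: it uses a simple gradient-times-input score on a hypothetical linear scorer over one-hot-encoded DNA, which is a common baseline the poster's methods improve upon. All function names and weights below are invented for illustration.

```python
import numpy as np

def one_hot(seq):
    """One-hot encode a DNA string as a (length, 4) array (order A, C, G, T)."""
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        x[i, mapping[base]] = 1.0
    return x

def importance_scores(x, w):
    """Gradient * input for a linear scorer f(x) = sum(w * x).

    For a linear model, the gradient of the output with respect to each
    input element is simply w, so each position's contribution is the
    elementwise product w * x summed over the base (channel) axis.
    """
    return (w * x).sum(axis=1)

rng = np.random.default_rng(0)
seq = "ACGTAC"
x = one_hot(seq)
w = rng.normal(size=x.shape)  # hypothetical "learned" weights
scores = importance_scores(x, w)
# scores[i] indicates how much position i of the sequence contributed
# to the model's output for this particular prediction.
```

Scores like these, computed per input and per prediction, are the raw material that the poster's second algorithm then mines for recurring patterns (e.g., sequence motifs).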

Avanti Shrikumar is a Ph.D. student in the Department of Computer Science at Stanford, advised by Professor Anshul Kundaje. Her research is on algorithms to make deep learning models more interpretable, with a focus on applications in regulatory genomics. Avanti has a Bachelor's in Computer Science with Molecular Biology from MIT and was a software engineer on the Healthcare team at Palantir Technologies before starting her Ph.D. She is a recipient of the HHMI International Student Research Fellowship, the Stanford Bio-X Fellowship, and the Microsoft Women's Fellowship.