Chris Manning: 2015 AI Workshop


Thursday April 30, 2015
Location: Fisher Conference Center, Arrillaga Alumni Center

"Deep Learning for Natural Language Understanding"


Distributed representations of human language content and structure had a brief boom in the 1980s, but it quickly faded, and the past 15 years have been dominated by categorical representations of language, albeit with probabilities or weights placed over elements of these categorical representations. However, the last five years have seen a resurgence, with highly successful use of distributed representations in the context of "neural" or "deep learning" models. One great success has been distributed word representations, and I will look at some of our recent work on better understanding word representations and how they can still be thought of as global matrix factorizations. A key challenge in language is then how to deal with the hierarchical structure of sentences and the determination of their meaning through the compositionality of language. I will show how a dependency parser can gain in accuracy and speed by using distributed representations of not only words but also part-of-speech tags and dependency labels, and how tree-structured recursive neural networks can give state-of-the-art results for meaning-related tasks. Joint work with Danqi Chen, Jeffrey Pennington, Richard Socher, and Kai Sheng Tai.
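The abstract's claim that distributed word representations "can still be thought of as global matrix factorizations" can be illustrated with a toy sketch: build a word co-occurrence count matrix from a corpus and take a truncated SVD, keeping the top singular directions as word vectors. This is only a minimal stand-in for the learned representations discussed in the talk (e.g. GloVe); the tiny corpus and the dimensionality are invented for the example.

```python
# Toy illustration: word vectors via global matrix factorization.
# A truncated SVD of a raw co-occurrence matrix stands in for the
# learned word representations discussed in the talk; the corpus
# and vector dimensionality here are made up for the example.
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Build the vocabulary and a symmetric co-occurrence matrix (window = 1).
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                C[index[w], index[sent[j]]] += 1

# Truncated SVD: keep the top-k singular directions as word vectors.
k = 2
U, S, _ = np.linalg.svd(C)
vectors = U[:, :k] * S[:k]  # each row is a k-dimensional word vector

def similarity(a, b):
    """Cosine similarity between the vectors for words a and b."""
    va, vb = vectors[index[a]], vectors[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

Words appearing in similar contexts end up with nearby vectors; the same factorization view is what connects count-based and neural embedding methods.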


Christopher Manning is a professor of computer science and linguistics at Stanford University. His research goal is computers that can intelligently process, understand, and generate human language material. Manning concentrates on machine learning approaches to computational linguistic problems, including syntactic parsing, computational semantics and pragmatics, textual inference, machine translation, and hierarchical deep learning for NLP. He is an ACM Fellow, an AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).