Chris Manning: 2012 Plenary Session


Monday, April 2, 2012
Location: Fisher Conference Center, Arrillaga Alumni Center

"Getting Computers to Understand What They Read (Or Hear)"
11:45am - 12:15pm

Abstract:

Computers routinely process huge amounts of text: search engines, sentiment analysis, topic filtering. Indeed, full-text search engines have transformed access to the world's information. And yet these computers still very much "process" text rather than understand what they read. But new research is now focusing on the goal of Machine Reading -- the autonomous understanding of texts by computers. How can we get simple facts and relations out of texts? How can we show that two pieces of text say the same thing? Over the last five years, a great deal of natural language understanding work has looked at acquiring word and phrase meanings, extracting relations, learning facts of scientific importance, and reasoning about the more general consequences of texts. In particular, there is a new emphasis on methods of learning that require less hand-labeled training data. But we are still a fair distance from, say, a computer that can read a biology textbook and then understand the field. This talk will survey some of what has been achieved, and what the goals should be for the next decade if we are to build intelligent machine readers with the knowledge acquisition capacity of people.


Bio:

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. He received his Ph.D. from Stanford in 1995 and held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is a Fellow of AAAI and of the Association for Computational Linguistics. Manning has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schütze, 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work has concentrated on probabilistic approaches to NLP problems and computational semantics, including statistical parsing, robust textual inference, machine translation, large-scale joint inference for NLP, computational pragmatics, and hierarchical deep learning for NLP.