Invited Talks: On Squashing Computational Linguistics and Translating from Multiple Modalities to Text and Back

We’re pleased to have two fabulous orators from our community headlining ACL 2017. Noah Smith will be speaking about the current challenges of representation learning, and Mirella Lapata about multi-modal translation (with text, of course).

– Regina and Min

Squashing Computational Linguistics

Noah Smith (University of Washington)
Tuesday morning plenary session, 9:00-10:10


The computational linguistics and natural language processing community is experiencing an episode of deep fascination with representation learning.  Like many other presenters at this conference, I will describe new ways to use representation learning in models of natural language.  Noting that a data-driven model always assumes a theory (not necessarily a good one), I will argue for the benefits of language-appropriate inductive bias for representation-learning-infused models of language.  Such bias often comes in the form of assumptions baked into a model, constraints on an inference algorithm, or linguistic analysis applied to data.  Indeed, many decades of research in linguistics (including computational linguistics) put our community in a strong position to identify promising inductive biases.  The new models, in turn, may allow us to explore previously unavailable forms of bias, and to produce findings of interest to linguistics.  I will focus on new models of documents and of sentential semantic structures, and I will emphasize abstract, reusable components and their assumptions rather than applications.

Bio: Noah Smith is an Associate Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. Previously, he was an Associate Professor in the School of Computer Science at Carnegie Mellon University. He received his Ph.D. in Computer Science from Johns Hopkins University and his B.S. in Computer Science and B.A. in Linguistics from the University of Maryland. His research spans many topics in natural language processing, machine learning, and computational social science.  He has served on the editorial boards of CL, JAIR, and TACL, as the secretary-treasurer of SIGDAT (2012–2015), and as program co-chair of ACL 2016. Alumni of his research group, Noah’s ARK, are international leaders in NLP in academia and industry. Smith’s work has been recognized with a UW Innovation award, a Finmeccanica career development chair at CMU, an NSF CAREER award, a Hertz Foundation graduate fellowship, numerous best paper nominations and awards, and coverage by NPR, BBC, CBC, the New York Times, the Washington Post, and Time.

Translating from Multiple Modalities to Text and Back

Mirella Lapata (University of Edinburgh)
Wednesday morning plenary session, 9:00-10:10

Recent years have witnessed the development of a wide range of computational tools that process and generate natural language text. Many of these have become familiar to mainstream computer users in the form of web search, question answering, sentiment analysis, and notably machine translation. The accessibility of the web could be further enhanced with applications that not only translate between different languages (e.g., from English to French) but also within the same language, between different modalities, or different data formats. The web is rife with non-linguistic data (e.g., video, images, source code) that cannot be indexed or searched since most retrieval tools operate over textual data.

In this talk I will argue that in order to render electronic data more accessible to individuals and computers alike, new types of translation models need to be developed. I will focus on three examples: text simplification, source code generation, and movie summarization. I will illustrate how recent advances in deep learning can be extended in order to induce general representations for different modalities and learn how to translate between these and natural language.

Bio: Mirella Lapata is professor of natural language processing in the School of Informatics at the University of Edinburgh. Her research focuses on getting computers to understand, reason with, and generate natural language. She serves as an associate editor of the Journal of Artificial Intelligence Research and has served on the editorial boards of Transactions of the ACL and Computational Linguistics. She was the first recipient of the Karen Spärck Jones award of the British Computer Society, recognizing key contributions to NLP and information retrieval. She has received two EMNLP best paper awards and currently holds a prestigious Consolidator Grant from the European Research Council.