Tutorial Descriptions (Sunday, 30 July)

Tutorials are a key part of the ACL tradition: they help new members of the community learn about core techniques in computational linguistics and natural language processing, and keep current practitioners up to date with the rapid developments in the field.

The tutorial chairs, Jordan and Maja, are pleased that we have the following tutorials available for registration at ACL 2017. Tutorials often sell out, so please register for them soon!

Morning

T1  Natural Language Processing for Precision Medicine
Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih

T2  Multimodal Machine Learning
Louis-Philippe Morency and Tadas Baltrusaitis

T3  Deep Learning for Semantic Composition
Xiaodan Zhu and Edward Grefenstette

Afternoon

T4  Deep Learning for Dialogue Systems
Yun-Nung Chen, Asli Celikyilmaz, and Dilek Hakkani-Tur

T5  Beyond Words: Deep Learning for Multi-word Expressions and Collocations
Valia Kordoni

T6  Making Better Use of the Crowd
Jennifer Wortman Vaughan


Tutorial 1 (Morning): Natural Language Processing for Precision Medicine
Hoifung Poon, Chris Quirk, Kristina Toutanova and Wen-tau Yih

We will introduce precision medicine and showcase the vast opportunities for NLP in this burgeoning field with great societal impact. We will review pressing NLP problems, state-of-the-art methods, and important applications, as well as datasets, medical resources, and practical issues. The tutorial will provide an accessible overview of biomedicine, and does not presume knowledge of biology or healthcare. The ultimate goal is to reduce the entry barrier for NLP researchers to contribute to this exciting domain.


Tutorial 2 (Morning): Multimodal Machine Learning
Louis-Philippe Morency and Tadas Baltrusaitis

Multimodal machine learning is a vibrant multi-disciplinary research field which addresses some of the original goals of artificial intelligence by integrating and modeling multiple communicative modalities, including linguistic, acoustic and visual messages. With the initial research on audio-visual speech recognition and more recently with image and video captioning projects, this research field brings some unique challenges for multimodal researchers given the heterogeneity of the data and the contingency often found between modalities.

This tutorial builds upon a recent course taught at Carnegie Mellon University during the Spring 2016 semester (CMU course 11-777) and two tutorials presented at CVPR 2016 and ICMI 2016. The present tutorial will review fundamental concepts of machine learning and deep neural networks before describing the five main challenges in multimodal machine learning: (1) multimodal representation learning, (2) translation & mapping, (3) modality alignment, (4) multimodal fusion and (5) co-learning. The tutorial will also present state-of-the-art algorithms recently proposed for multimodal applications such as image captioning, video description and visual question answering. We will also discuss the current and upcoming challenges.


Tutorial 3 (Morning): Deep Learning for Semantic Composition
Xiaodan Zhu and Edward Grefenstette

Learning representations to model the meaning of text has been a core problem in NLP. The last several years have seen extensive interest in distributional approaches, in which text spans of different granularities are encoded as vectors of numerical values. When properly learned, such representations have been shown to achieve state-of-the-art performance on a wide range of NLP problems.

In this tutorial, we will cover the fundamentals and the state-of-the-art research on neural network-based modeling for semantic composition, which aims to learn distributed representation for different granularities of text, e.g., phrases, sentences, or even documents, from their sub-component meaning representation, e.g., word embedding.
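As a point of reference for what "composing" word embeddings means, here is a minimal sketch of the simplest composition baseline, vector averaging. The vocabulary and embedding values are made up for illustration; real systems use pretrained embeddings with hundreds of dimensions, and the neural approaches covered in the tutorial learn a parameterized composition function instead.

```python
# Toy word embeddings (hypothetical 4-dimensional vectors).
embeddings = {
    "deep":     [0.2, 0.8, 0.2, 0.4],
    "learning": [0.3, 0.7, 0.4, 0.5],
}

def compose_average(phrase):
    """Baseline composition: the phrase vector is the element-wise
    mean of its word vectors. Neural models (e.g., recurrent or
    tree-structured networks) learn this mapping end-to-end instead."""
    vectors = [embeddings[w] for w in phrase.split()]
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

print(compose_average("deep learning"))  # a single 4-dimensional phrase vector
```

Averaging ignores word order ("dog bites man" and "man bites dog" get the same vector), which is exactly the limitation that motivates the learned composition functions the tutorial covers.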


Tutorial 4 (Afternoon): Deep Learning for Dialogue Systems
Yun-Nung Chen, Asli Celikyilmaz and Dilek Hakkani-Tur

Over the past decade, goal-oriented spoken dialogue systems have become the most prominent component of today’s virtual personal assistants. Classic dialogue systems have rather complex and/or modular pipelines. The advance of deep learning technologies has recently given rise to applications of neural models to dialogue modeling. However, successfully applying deep learning based approaches to a dialogue system remains challenging. Hence, this tutorial provides an overview of dialogue system development, describes the most recent research on building dialogue systems, and summarizes the open challenges, so that researchers can study potential improvements to state-of-the-art dialogue systems. The tutorial material is available at http://deepdialogue.miulab.tw.


Tutorial 5 (Afternoon): Beyond Words: Deep Learning for Multi-word Expressions and Collocations
Valia Kordoni

Deep learning has recently shown much promise for NLP applications. Traditionally, in most NLP approaches, documents or sentences are represented by a sparse bag-of-words representation. There is now a lot of work which goes beyond this by adopting a distributed representation of words, by constructing a so-called “neural embedding” or vector space representation of each word or document. The aim of this tutorial is to go beyond the learning of word vectors and present methods for learning vector representations for Multiword Expressions and bilingual phrase pairs, all of which are useful for various NLP applications.
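The contrast drawn above between a sparse bag-of-words representation and a dense "neural embedding" can be sketched in a few lines. The vocabulary and embedding values below are invented for illustration only; in practice the dense vectors would come from a trained model such as word2vec or GloVe.

```python
# A tiny fixed vocabulary for the sparse representation.
vocab = ["crowd", "deep", "expression", "learning", "word"]

def bag_of_words(sentence):
    """Sparse representation: one count per vocabulary word.
    Most entries are zero, and word identity carries no similarity."""
    tokens = sentence.lower().split()
    return [tokens.count(w) for w in vocab]

# Dense representation: each word maps to a low-dimensional vector,
# so related words can end up close together in the vector space.
embedding = {
    "deep":     [0.31, -0.12, 0.58],
    "learning": [0.27, -0.05, 0.61],
}

print(bag_of_words("deep learning for deep expression"))
```

The tutorial's point is that multiword expressions need their own vectors: neither counting words nor averaging their embeddings captures the non-compositional meaning of an expression like "kick the bucket".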

This tutorial aims to provide attendees with a clear notion of the linguistic and distributional characteristics of Multiword Expressions (MWEs), their relevance for the intersection of deep learning and natural language processing, what methods and resources are available to support their use, and what more could be done in the future. Our target audience are researchers and practitioners in machine learning, parsing (syntactic and semantic) and language technology, not necessarily experts in MWEs, who are interested in tasks that involve or could benefit from considering MWEs as a pervasive phenomenon in human language and communication.


Tutorial 6 (Afternoon): Making Better Use of the Crowd
Jennifer Wortman Vaughan

Over the last decade, crowdsourcing has been used to harness the power of human computation to solve tasks that are notoriously difficult to solve with computers alone, such as determining whether or not an image contains a tree, rating the relevance of a website, or verifying the phone number of a business.

The natural language processing community was early to embrace crowdsourcing as a tool for quickly and inexpensively obtaining annotated data to train NLP systems. Once this data is collected, it can be handed off to algorithms that learn to perform basic NLP tasks such as translation or parsing.

Usually this handoff is where interaction with the crowd ends. The crowd provides the data, but the ultimate goal is to eventually take humans out of the loop. Are there better ways to make use of the crowd?

In this tutorial, I will begin with a showcase of innovative uses of crowdsourcing that go beyond data collection and annotation. I will discuss applications to natural language processing and machine learning, hybrid intelligence or “human in the loop” AI systems that leverage the complementary strengths of humans and machines in order to achieve more than either could achieve alone, and large scale studies of human behavior online.

I will then spend the majority of the tutorial diving into recent research aimed at understanding who crowdworkers are, how they behave, and what this should teach us about best practices for interacting with the crowd.

I’ll start by debunking the common myth among researchers that crowdsourcing platforms are riddled with bad actors out to scam requesters. In particular, I’ll describe the results of a research study that showed that crowdworkers on the whole are basically honest.

I’ll talk about experiments that have explored how to boost the quality and quantity of crowdwork by appealing to both well-designed monetary incentives (such as performance-based payments) and intrinsic sources of motivation (such as piqued curiosity or a sense of doing meaningful work).

I’ll then discuss recent research—both qualitative and quantitative—that has opened up the black box of crowdsourcing to uncover that crowdworkers are not independent contractors, but rather a network with a rich communication structure.

Taken as a whole, this research has a lot to teach us about how to most effectively interact with the crowd. Throughout the tutorial I’ll discuss best practices for engaging with crowdworkers that are rarely mentioned in the literature but make a huge difference in whether or not your research studies will succeed. (Here are a few hints: Be respectful. Be responsive. Be clear.)