Keynote & Invited Speakers

Academic Keynote Speaker

An NLP Framework for Interpreting Implicit and Explicit Opinions in Text and Dialog

Jan Wiebe

Professor, Department of Computer Science
Co-director, Intelligent Systems Program
University of Pittsburgh, USA

While previous sentiment analysis research has concentrated on the interpretation of explicitly stated opinions and attitudes, this work addresses the computational study of a type of opinion implicature (i.e., opinion-oriented inference) in text and dialog.  This talk will describe a framework for representing and analyzing opinion implicatures which promises to contribute to deeper automatic interpretation of subjective language.  In the course of understanding implicatures, the system recognizes implicit sentiments (and beliefs) toward various
events and entities in the sentence, often attributed to different sources (holders) and of mixed polarities; thus, it produces a richer interpretation than is typical in opinion analysis.

Janyce Wiebe is Professor of Computer Science and Co-Director of the Intelligent Systems Program at the University of Pittsburgh.  Her research with students and colleagues has been in discourse processing, pragmatics, and word-sense disambiguation.  A major concentration of her research is “subjectivity analysis”, recognizing and interpreting expressions of opinions and sentiments in text, to support NLP applications such as question answering, information extraction, text categorization, and summarization.  Her professional roles have included ACL Program Co-Chair, NAACL Program Chair, NAACL Executive Board member, Transactions of the ACL Action Editor, Computational Linguistics and Language Resources and Evaluation Editorial Board member, AAAI Workshop Co-Chair, ACM Special Interest Group on Artificial Intelligence (SIGART) Vice-Chair, and ACM-SIGART/AAAI Doctoral Consortium Chair.  She received her PhD in Computer Science from the State University of New York at Buffalo, and later was a Post-Doctoral Fellow at the University of Toronto.

Industry Keynote Speaker

Theory versus practice in data science

Charles Elkan

Amazon Fellow,
Amazon.com
Seattle, WA, USA
&
Professor, Department of Computer Science and Engineering
University of California, San Diego, USA (on leave)

In this talk I will discuss examples of how Amazon serves customers and improves efficiency using learning algorithms applied to large-scale datasets. I’ll then discuss the Amazon approach to projects in data science, which is based on applying tenets that are beneficial to follow outside the company as well as inside it. Last but not least, I will discuss which learning algorithms tend to be most successful in practice, and I will explain some unsolved issues that arise repeatedly across applications and should be the topic of more research in the academic community. Note: All information in the talk will already be publicly available, and any opinions expressed will be strictly personal.

Charles Elkan is a professor of computer science at the University of California, San Diego, currently on leave to work as Amazon Fellow and leader of the machine learning organization for Amazon in Seattle and Silicon Valley. In the past, he has been a visiting associate professor at Harvard and a researcher at MIT. His published research has been mainly in machine learning, data science, and computational biology. The MEME algorithm that he developed with Ph.D. students has been used in over 3000 published research projects in biology and computer science. He is fortunate to have had inspiring undergraduate and graduate students who now hold leadership positions, such as vice president at Google.

Invited Speakers

How to Explore to Maximize Future Return

Csaba Szepesvari
Department of Computing Science,
University of Alberta, Edmonton, Canada

With access to huge-scale distributed systems and more data than ever before, systems that learn to make good predictions break yesterday’s records on a daily basis. Although prediction problems are important, predicting what to do poses its own challenges, which call for specialized solution methods. In this talk, by means of some examples based on recent work on reinforcement learning, I will illustrate the unique opportunities and challenges that arise when a system must learn to make good decisions to maximize long-term return.
In particular, I will start by demonstrating that passive data collection inevitably leads to catastrophic data sparsity in sequential decision-making problems (no amount of data is big enough!), while clever algorithms, tailored to this setting, can escape data sparsity, learning essentially arbitrarily faster than is possible under passive data collection. I will also describe current attempts to scale up such clever algorithms to work on large-scale problems. Amongst the possible approaches, I will discuss the role of sparsity in addressing this challenge in the practical, yet mathematically elegant, setting of “linear bandits”. Interestingly, while sparsity allows one to deal with huge dimensionality in a seamless fashion in the related linear prediction problem, the status of this question in the bandit setting is much less understood.
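
As a toy illustration of the gap between passive data collection and exploration tailored to decision making, the sketch below (ours, not from the talk) compares uniform random arm sampling with the classical UCB1 strategy on a small Bernoulli bandit; the arm means, horizon, and choice of UCB1 as the “clever algorithm” are assumptions made purely for illustration.

```python
# Passive (uniform) sampling vs. optimistic exploration (UCB1) on a toy bandit.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.52, 0.9])  # hypothetical arm reward probabilities
T = 5000                                 # horizon

def run(policy):
    counts = np.zeros(len(means))
    sums = np.zeros(len(means))
    regret = 0.0
    for t in range(1, T + 1):
        arm = policy(t, counts, sums)
        reward = rng.random() < means[arm]   # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
        regret += means.max() - means[arm]   # expected regret of this choice
    return regret

def passive(t, counts, sums):
    # Passive data collection: every arm gets roughly the same budget.
    return int(rng.integers(len(means)))

def ucb1(t, counts, sums):
    # Play each arm once, then pick the arm with the highest optimistic estimate.
    if t <= len(means):
        return t - 1
    bonus = np.sqrt(2.0 * np.log(t) / counts)
    return int(np.argmax(sums / counts + bonus))

print("regret, passive:", run(passive))  # grows linearly with T
print("regret, UCB1:   ", run(ucb1))     # grows only logarithmically with T
```

The point of the comparison is the one made in the abstract: with passive sampling the regret grows in proportion to the horizon, whereas an algorithm that adapts its data collection to what it has learned pays only a logarithmic price for exploration.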

Csaba Szepesvari is a Professor at the Department of Computing Science of the University of Alberta and a Principal Investigator of the Alberta Innovates Center for Machine Learning. He received his PhD from the University of Szeged, Hungary in 1999. The coauthor of a book on nonlinear approximate adaptive controllers and the author of a book on Reinforcement Learning, he has published about 150 journal and conference papers. He is an Action Editor of the Journal of Machine Learning Research and the Machine Learning Journal. His research interests include reinforcement learning, statistical learning theory and online learning.

Detecting Locations from Twitter Messages

Diana Inkpen
School of Electrical Engineering and Computer Science,
University of Ottawa, Canada

This talk will focus on machine learning methods for detecting locations from Twitter messages. There are two types of locations that we are interested in: location entities mentioned in the text of each message and the physical locations of the users. For the first type of locations, we detected expressions that denote locations and we classified them into names of cities, provinces/states, and countries. We approached the task in a novel way, consisting of two stages. In the first stage, we trained Conditional Random Field models with various sets of features. We collected and annotated our own dataset for training and testing. In the second stage, we resolved cases in which more than one place with the same name exists, by applying a set of heuristics. For the second type of locations, we put together all the tweets written by a user, in order to predict his/her physical location. Only a few users declare their locations in their Twitter profiles, but this is sufficient to automatically produce training and test data for our classifiers. We experimented with two existing datasets collected from users located in the U.S. We propose a deep learning architecture for solving the task, because deep learning has been shown to work well for other natural language processing tasks, and because standard classifiers had already been tested for the user location task. We designed a model that predicts the U.S. region of the user and his/her U.S. state, and another model that predicts the longitude and latitude of the user’s location. We found that stacked denoising auto-encoders are well suited for this task, with results comparable to the state-of-the-art.
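
To make the first stage of the approach concrete, here is a minimal sketch of tagging location expressions with a Conditional Random Field. The toolkit (sklearn_crfsuite), the feature set, the label scheme, and the toy examples are our assumptions for illustration; the abstract does not specify these details.

```python
# Sketch: CRF tagging of location mentions in tweet-like text (toy data).
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features; a real system would use richer feature sets."""
    word = tokens[i]
    return {
        "lower": word.lower(),
        "is_title": word.istitle(),
        "is_upper": word.isupper(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# Tiny hand-made training set; the actual work uses an annotated tweet corpus.
sentences = [
    ["Loving", "the", "weather", "in", "Ottawa", "today"],
    ["Flying", "from", "Toronto", "to", "New", "York"],
]
labels = [
    ["O", "O", "O", "O", "B-CITY", "O"],
    ["O", "O", "B-CITY", "O", "B-CITY", "I-CITY"],
]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)

test = ["Back", "home", "in", "Montreal"]
print(crf.predict([[token_features(test, i) for i in range(len(test))]]))
```

The second stage described in the abstract, disambiguating places that share a name, would then operate on the predicted spans; a heuristic such as preferring the most populous or contextually consistent candidate is one plausible choice, though the exact rules used in the talk are not specified here.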

Diana Inkpen is a Professor at the University of Ottawa, in the School of Electrical Engineering and Computer Science. Her research is in applications of Computational Linguistics and Text Mining. She has organized seven international workshops and was a program co-chair for the AI 2012 conference. She serves on the program committees of many conferences and is an associate editor of the Computational Intelligence and Natural Language Engineering journals. She published a book on Natural Language Processing for Social Media (Morgan and Claypool Publishers, Synthesis Lectures on Human Language Technologies), 8 book chapters, more than 25 journal articles, and more than 90 conference papers. She has received many research grants, including ones involving intensive industrial collaborations.