Invited Speakers

Peter Clark

Allen Institute for AI


Systematic reasoning with language models

Abstract & Bio

Jia Deng

Princeton


Learning symbolic rules for reasoning

Abstract & Bio

Greg Durrett

University of Texas at Austin


Why natural language is the right vehicle for complex reasoning

Abstract & Bio

Yolanda Gil

USC


Extracting knowledge from text about models and workflows: transparency, reproducibility, and automation in science

Abstract & Bio

Hanna Hajishirzi

University of Washington


Knowledge-rich, robust neural text comprehension and reasoning

Abstract & Bio

Tim Kraska

MIT


Citizen data scientists: how to empower your workforce to make data-driven decisions

Abstract & Bio

Monica Lam

Stanford


Scaling the world wide voice web with open standards and pretrained semantic parsers

Abstract & Bio

Percy Liang

Stanford


Knowledge, Language Models, and Adaptation

Abstract & Bio

Devi Parikh

Georgia Tech and Facebook AI Research


Vision & language

Abstract & Bio

Sujith Ravi

SliceX AI


Large-Scale Deep Learning with Structure

Abstract & Bio

Siva Reddy

McGill


Unlikelihood-training and back-training for robust natural language understanding

Abstract & Bio

Dafna Shahaf

Hebrew University


Accelerating innovation through analogy mining

Abstract & Bio

David Sontag

MIT


Learning health knowledge bases

Abstract & Bio




Speaker abstracts and bios


Peter Clark

Allen Institute for AI

Systematic reasoning with language models

While language models are rich, latent “knowledge bases” with remarkable question-answering capabilities, they still struggle to explain how their knowledge justifies those answers, and they can make opaque, catastrophic mistakes. To alleviate this, I will describe new work on coercing language models to produce answers supported by a faithful chain of reasoning that describes how their knowledge justifies an answer. In the style of fast/slow thinking, conjectured answers suggest which chains of reasoning to build, and chains of reasoning suggest which answers to trust. The resulting reasoning-supported answers can then be inspected, debugged, and corrected by the user, offering new opportunities for meaningful, interactive problem-solving dialogs in future systems.
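
As a rough sketch of that fast/slow loop, the control flow might look like the following; the lm_propose_answers, lm_build_chain, and lm_score_chain helpers are hypothetical stand-ins for language-model calls, not the actual system:

```python
# Minimal sketch of the fast/slow loop described above. The three lm_* helpers
# are hypothetical stand-ins for language-model calls, not AI2's implementation.

def answer_with_reasoning(question, lm_propose_answers, lm_build_chain, lm_score_chain):
    """Return the answer whose supporting chain of reasoning scores highest."""
    candidates = lm_propose_answers(question)        # fast: conjecture candidate answers
    scored = []
    for answer in candidates:
        chain = lm_build_chain(question, answer)     # slow: build a chain justifying it
        scored.append((lm_score_chain(chain), answer, chain))
    best = max(scored, key=lambda t: t[0])           # trust the best-supported answer
    return best[1], best[2]                          # the chain is inspectable and debuggable
```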

Bio

Peter Clark (peterc@allenai.org) is a Senior Research Manager at the Allen Institute for AI (AI2) and leads the Aristo Project. His work focuses on natural language processing, machine reasoning, and world knowledge, and the interplay between these three areas.

Back


Jia Deng

Princeton

Learning symbolic rules for reasoning

Symbolic reasoning, i.e., rule-based symbol manipulation, is a hallmark of human intelligence. However, rule-based systems have had limited success competing with learning-based systems outside formalized domains such as automated theorem proving. One hypothesis is that this is due to the manual construction of rules in past attempts. In this talk, I will present a method for automatically learning rules from data. The approach can express both formal logic and natural language sentences, and it can induce rules from training data consisting of questions and answers, with or without intermediate reasoning steps. It performs well on multiple reasoning benchmarks; it learns compact models with much less data and produces not only answers but also checkable proofs.
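
To illustrate the kind of rule-based symbol manipulation involved, here is a toy forward-chaining step with a single hand-written rule; the talk's method induces such rules from data rather than writing them by hand:

```python
# Toy forward chaining with one hand-written rule. The talk's method learns
# such rules automatically; this only illustrates the inference step itself.

facts = {("parent", "ann", "bob"), ("parent", "bob", "carol")}

def apply_grandparent_rule(facts):
    """parent(X, Y) and parent(Y, Z)  ->  grandparent(X, Z)"""
    derived = set()
    for (r1, x, y) in facts:
        for (r2, y2, z) in facts:
            if r1 == r2 == "parent" and y == y2:
                derived.add(("grandparent", x, z))
    return derived

print(apply_grandparent_rule(facts))  # {('grandparent', 'ann', 'carol')}
# Chaining such rule applications step by step is what yields checkable proofs.
```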

Bio

Jia Deng is an Assistant Professor of Computer Science at Princeton University. His research focus is on computer vision and machine learning. He received his Ph.D. from Princeton University and his B.Eng. from Tsinghua University, both in computer science. He is a recipient of the Sloan Research Fellowship, the NSF CAREER award, the ONR Young Investigator award, an ICCV Marr Prize, and two ECCV Best Paper Awards.

Back


Greg Durrett

University of Texas at Austin

Why natural language is the right vehicle for complex reasoning

Despite their widespread success, end-to-end transformer models consistently fall short in settings involving complex reasoning. Transformers trained on question answering (QA) tasks that seemingly require multiple steps of reasoning often achieve high performance by taking “reasoning shortcuts.” We still do not have models that robustly combine many pieces of information in a logically consistent way. In this talk, I argue that a very attractive solution to this problem is within our grasp: doing multi-step reasoning directly in natural language. Text is flexible and expressive, capturing all of the semantics we need to represent intermediate states of a reasoning process. Working with text allows us to interface with knowledge in pre-trained models and in resources like Wikipedia. And finally, text is easily interpretable and auditable by users. I will describe two pieces of work that manipulate language to do inference. First, transformation of question-answer pairs and evidence sentences allows us to seamlessly move between QA and natural language inference (NLI) settings, advancing both the calibration of QA models and the capabilities of NLI systems. Second, we show how synthetically constructed data can allow us to build a deduction engine in natural language, which is a powerful building block for putting together natural language “proofs” of claims. I’ll conclude by suggesting how these will eventually yield systems that can tackle complex, long-horizon reasoning problems.
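
As a rough sketch of the QA-to-NLI conversion described above, the following toy code rewrites a question-answer pair as a declarative hypothesis that an off-the-shelf NLI model could check against the evidence; the rewrite rule is a crude placeholder, not the learned converter from the actual work:

```python
# Toy QA-to-NLI conversion: turn (question, candidate answer) into a declarative
# hypothesis. A real system uses a learned converter; this regex-free rule is a
# crude placeholder for illustration only.

def qa_to_hypothesis(question, answer):
    """Naively turn 'Who wrote Hamlet?' + 'Shakespeare' into a declarative claim."""
    q = question.rstrip("?")
    if q.lower().startswith("who "):
        return f"{answer} {q[4:]}."    # 'Shakespeare wrote Hamlet.'
    return f"The answer to '{question}' is {answer}."

hypothesis = qa_to_hypothesis("Who wrote Hamlet?", "Shakespeare")
print(hypothesis)
# An NLI model would then score entailment with premise = evidence sentences and
# hypothesis = the generated claim, giving a calibrated verification signal.
```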

Bio

Greg Durrett is an assistant professor of Computer Science at UT Austin. His current research focuses on making natural language processing systems more interpretable, controllable, and generalizable, spanning application domains including question answering, textual reasoning, summarization, and information extraction. He completed his Ph.D. at UC Berkeley in 2016 where he was advised by Dan Klein.

Back


Yolanda Gil

USC

Extracting knowledge from text about models and workflows: transparency, reproducibility, and automation in science

Scientific publications often describe in a methods section all the steps involved in obtaining a new finding. Studies have shown that those sections are highly incomplete and ambiguous, putting transparency and reproducibility out of reach for readers. This is particularly challenging in computational modeling, since many real-world problems require integrating diverse models from different disciplines, and such integrations unfortunately remain one-off efforts. We have been examining computational models through the lens of transparency and reproducibility. We have gathered requirements on the information that scientists need, analyzed sources of information beyond papers, including code repositories and community boards, and started to use text extraction techniques to mine model metadata and create structured model catalogs with rich knowledge about scientific models. Many challenges remain, including extracting abstractions, code sequences and workflows, and training/calibration procedures for models. If we succeed, this work will open the door to AI systems that can automate important aspects of science and accelerate discoveries.
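
To make the extraction step concrete, here is a toy sketch that maps a methods-section sentence to fields of a structured catalog record; the patterns and field names are invented for illustration, and real extraction in this line of work relies on trained models rather than regexes:

```python
# Toy sketch: map one methods-section sentence to structured catalog fields.
# The regexes and the field names are illustrative assumptions only.

import re

def extract_metadata(sentence):
    """Pull simple model-catalog fields out of a methods-section sentence."""
    fields = {}
    m = re.search(r"implemented in (\w+)", sentence)
    if m:
        fields["language"] = m.group(1)
    m = re.search(r"calibrated against ([\w\s]+?) observations", sentence)
    if m:
        fields["calibration_data"] = m.group(1)
    return fields

print(extract_metadata(
    "The hydrology model, implemented in Python, was calibrated against streamflow observations."))
# {'language': 'Python', 'calibration_data': 'streamflow'}
```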

Bio

Dr. Yolanda Gil is Director of New Initiatives in AI and Data Science in USC’s Viterbi School of Engineering, and Research Professor in Computer Science and in Spatial Sciences. She is also Director of Data Science programs and of the USC Center for Knowledge-Powered Interdisciplinary Data Science. She received her M.S. and Ph.D. degrees in Computer Science from Carnegie Mellon University, with a focus on artificial intelligence. Dr. Gil collaborates with scientists in several domains on developing AI scientists. She is a Fellow of the Association for Computing Machinery (ACM), the American Association for the Advancement of Science (AAAS), and the Institute of Electrical and Electronics Engineers (IEEE). She is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), and served as its 24th President.

Back


Hanna Hajishirzi

University of Washington

Knowledge-rich, robust neural text comprehension and reasoning

Enormous amounts of ever-changing knowledge are available online in diverse textual styles and formats. Recent advances in deep learning algorithms and large-scale datasets are spurring progress in many Natural Language Processing (NLP) tasks, including question answering. Nevertheless, these models cannot scale up when task-annotated training data are scarce. This talk presents how to build robust models for textual comprehension and reasoning, and how to systematically evaluate them. First, I present general-purpose models for known tasks such as question answering, in English and in multiple other languages, that are robust to small domain shifts. Second, I discuss neuro-symbolic approaches that extend modern deep learning algorithms to elicit knowledge from structured data and language models, achieving strong performance on several NLP tasks. Finally, I present a benchmark to evaluate whether NLP models can perform NLP tasks only by observing task definitions.

Bio

Hanna Hajishirzi is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a Research Fellow at the Allen Institute for AI. Her research spans different areas in NLP and AI, focusing on developing machine learning algorithms that represent, comprehend, and reason about diverse forms of data at large scale. Applications for these algorithms include question answering, reading comprehension, representation learning, green AI, knowledge extraction, and conversational dialogue. Honors include the NSF CAREER Award, a Sloan Fellowship, an Allen Distinguished Investigator Award, the Intel Rising Star Award, multiple best paper and honorable mention awards, and several industry research faculty awards. Hanna received her PhD from the University of Illinois and spent a year as a postdoc at Disney Research and CMU.

Back


Tim Kraska

MIT

Citizen data scientists: how to empower your workforce to make data-driven decisions

Citizen data scientists (CDS) can significantly boost a firm’s business value and analytics maturity. Thus, it comes as no surprise that nowadays almost every company is looking into new tools to empower their workforce to make data-driven decisions and improve their business processes. But according to Gartner, most CDS tools are focused on automated machine learning and ignore that data, people, and processes play a similarly important role. Citizen data scientists need an environment that allows them to work together with subject-matter experts and data scientists to collaboratively explore a problem across the entire model development lifecycle; automated ML tools impact only a small portion of that lifecycle. In this talk, I will present our findings on why many current initiatives to increase the number of citizen data scientists fail, explain why current tools are still inefficient at fully supporting citizen data scientists, and outline how Visual Data Computing, a technique we developed as part of the DARPA D3M program and recently commercialized by Einblick Analytics, addresses many of these challenges in a completely new way by providing the first truly collaborative analytics platform, one that combines aspects of a workflow engine (like Alteryx) and a visualization tool (like Tableau) with an infinite collaborative canvas (like Miro or Figma).

Bio

Tim Kraska is an Associate Professor of Electrical Engineering and Computer Science in MIT’s Computer Science and Artificial Intelligence Laboratory, co-director of the Data Systems and AI Lab at MIT (DSAIL@CSAIL), and co-founder of Einblick Analytics. Currently, his research focuses on building systems for machine learning, and using machine learning for systems. Before joining MIT, Tim was an Assistant Professor at Brown, spent time at Google Brain, and was a postdoc in the AMPLab at UC Berkeley after receiving his PhD from ETH Zurich. Tim is a 2017 Alfred P. Sloan Research Fellow in computer science and has received several awards, including the VLDB Early Career Research Contribution Award, the VMware Systems Research Award, the university-wide Early Career Research Achievement Award at Brown University, and an NSF CAREER Award, as well as several best paper and demo awards at VLDB, SIGMOD, and ICDE.

Back


Monica Lam

Stanford

Scaling the world wide voice web with open standards and pretrained semantic parsers

We envision that everybody in the future can simply use their native language to retrieve information and to conduct business naturally on the web. To reach such scale, the community needs open standards to foster worldwide collaboration. We contribute to this standardization effort with an initial proposal of (1) a formal, executable representation for the meaning in dialogues, and (2) a communication protocol for interoperating virtual assistants. Using such a representation and leveraging pretrained language models, we can train a sample-efficient task-oriented semantic parser given the database schemas and API signatures of a domain. Making such domain-specific parsers openly available can lead to a proliferation of conversational agents in practice.
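
As a toy illustration of what a formal, executable dialogue representation could look like, here is a sketch in which an utterance is parsed into a typed API call; the APICall schema, the restaurant domain, and toy_parse are all invented for illustration and are not the proposal's actual format:

```python
# Toy sketch of an executable meaning representation: the semantic parser maps an
# utterance to a typed API call that any standards-compliant assistant could run.
# The schema and the parser below are invented placeholders.

from dataclasses import dataclass

@dataclass
class APICall:
    domain: str      # e.g., "restaurants"
    function: str    # an API signature from the domain's schema
    args: dict

def toy_parse(utterance: str) -> APICall:
    # stand-in for a pretrained semantic parser trained from schemas + API signatures
    return APICall("restaurants", "search", {"cuisine": "thai", "near": "campus"})

call = toy_parse("find me a thai place near campus")
print(call)  # executable, and exchangeable between assistants via a shared protocol
```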

Bio

Dr. Monica Lam has been a Professor of Computer Science at Stanford University since 1988, and is the Faculty Director of the Stanford Open Virtual Assistant Laboratory. She leads the Genie open virtual assistant project, which aims to advance and democratize voice assistant technology, keep the voice web open, and protect the privacy of consumers. Prof. Lam is a member of the National Academy of Engineering and an ACM Fellow. She has won numerous best paper awards, and has published over 150 papers on many topics: natural language processing, machine learning, HCI, compilers, computer architecture, operating systems, and high-performance computing. She is a co-author of the ‘Dragon Book’, the definitive text on compiler technology. She received a B.Sc. from the University of British Columbia (1980) and a Ph.D. from Carnegie Mellon University (1987).

Back


Percy Liang

Stanford

Knowledge, Language Models, and Adaptation

I will start by reflecting on different ways of storing ‘knowledge’ over the years (structured databases, raw text, and language models) and their implications for downstream applications. I will then focus on language models, which have a lot of raw potential, but rely on adaptation (prompting or fine-tuning) to be able to ‘extract the knowledge’ and use it productively. I will show that standard fine-tuning of all the parameters can ‘destroy the knowledge’ in the language model, and I will then introduce prefix-tuning and composed fine-tuning, which allow us to preserve as much of the language model as possible, leading to improved generalization.
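
As a rough sketch of the prefix-tuning idea (freeze every pretrained weight, train only a short sequence of continuous prefix vectors), here is a minimal illustration with a toy stand-in model; the shapes and the single-layer "LM" are assumptions, and the actual method conditions every transformer layer rather than just the input:

```python
# Minimal prefix-tuning sketch: all "language model" parameters are frozen and
# only the prefix vectors are trainable. The toy single-layer model and shapes
# are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

d_model, prefix_len, vocab = 64, 5, 100
lm = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)  # stand-in LM body
embed = nn.Embedding(vocab, d_model)
for p in list(lm.parameters()) + list(embed.parameters()):
    p.requires_grad = False                      # preserve the pretrained knowledge

prefix = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)  # the only trained weights

def forward(token_ids):                          # token_ids: (batch, seq)
    x = embed(token_ids)                         # (batch, seq, d_model)
    pre = prefix.unsqueeze(0).expand(x.size(0), -1, -1)
    return lm(torch.cat([pre, x], dim=1))        # the prefix steers the frozen model

out = forward(torch.randint(0, vocab, (2, 10)))
print(out.shape)  # torch.Size([2, 15, 64]); only `prefix` receives gradients
```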

Bio

Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and multiple paper awards at ACL, EMNLP, ICML, and COLT.

Back


Devi Parikh

Georgia Tech and Facebook AI Research

Vision & language

This talk will be about multimodal AI – about research problems at the intersection of computer vision and natural language processing. I will take a step back from the fast pace at which the research landscape tends to move, and give a bit of a historical overview of approaches in vision & language, share where we currently stand, and describe what I believe are exciting open challenges going forward.

Bio

Devi Parikh is a Research Director at Facebook AI Research (FAIR), and an Associate Professor in the School of Interactive Computing at Georgia Tech. Her research interests are in computer vision, natural language processing, embodied AI, human-AI collaboration, and AI for creativity. She is a recipient of an NSF CAREER award, an IJCAI Computers and Thought award, a Sloan Research Fellowship, an Office of Naval Research (ONR) Young Investigator Program (YIP) award, an Army Research Office (ARO) Young Investigator Program (YIP) award, a Sigma Xi Young Faculty Award at Georgia Tech, an Allen Distinguished Investigator Award in Artificial Intelligence from the Paul G. Allen Family Foundation, four Google Faculty Research Awards, an Amazon Academic Research Award, a Lockheed Martin Inspirational Young Faculty Award at Georgia Tech, an Outstanding New Assistant Professor award from the College of Engineering at Virginia Tech, a Rowan University Medal of Excellence for Alumni Achievement, Rowan University’s 40 under 40 recognition, inclusion in Forbes’ list of 20 “Incredible Women Advancing A.I. Research”, and a Marr Best Paper Prize awarded at the International Conference on Computer Vision (ICCV).

Back


Sujith Ravi

SliceX AI

Large-Scale Deep Learning with Structure

Deep learning advances have enabled us to build high-capacity intelligent systems capable of perceiving and understanding the real world from text, speech and images. Yet, building real-world, scalable intelligent systems from “scratch” remains a daunting challenge, as it requires us to deal with ambiguity and data sparsity and to solve complex language, vision, dialog, and generation problems. In this talk, I will formalize some of the challenges involved in machine learning at scale. I will then introduce and describe our powerful neural graph learning framework, a precursor to the widely popular GNNs, which tackles these challenges by combining the power of deep learning with graphs that model the structure inherent in language and visual data. Our neural graph learning approach has been successfully used to power real-world applications at industry scale for response generation, image recognition and multimodal experiences. Finally, I will highlight our recent work on applying this framework to NLP tasks like knowledge graph reasoning and multi-document abstractive summarization.
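
As a rough sketch of the neural graph learning objective, the following combines a standard supervised loss with a graph term that pulls the hidden representations of neighboring examples together; the model interface and the weighting are assumptions for illustration:

```python
# Sketch of a graph-regularized training loss: supervised term plus a penalty on
# the distance between embeddings of graph-adjacent examples. The model interface
# (returning logits and embeddings) and alpha are illustrative assumptions.

import torch
import torch.nn.functional as F

def neural_graph_loss(model, x, y, edges, alpha=0.1):
    """edges: list of (i, j) index pairs over the batch, taken from the data graph."""
    logits, hidden = model(x)                        # assumed: model returns both
    supervised = F.cross_entropy(logits, y)
    if edges:
        i = torch.tensor([e[0] for e in edges])
        j = torch.tensor([e[1] for e in edges])
        graph = (hidden[i] - hidden[j]).pow(2).sum(dim=1).mean()
    else:
        graph = torch.tensor(0.0)
    return supervised + alpha * graph                # structure regularizes the network
```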

Bio

Dr. Sujith Ravi is the Founder & CEO of SliceX AI. Previously, he was the Director of Amazon Alexa AI, where he led efforts to build the future of multimodal conversational AI experiences at scale. Prior to that, he led and managed multiple ML and NLP teams and efforts in Google AI. He founded and headed Google’s large-scale graph-based semi-supervised learning platform, its deep learning platform for structured and unstructured data, and its on-device machine learning efforts for products used by billions of people in Search, Ads, Assistant, Gmail, Photos, Android, Cloud and YouTube. These technologies power conversational AI (e.g., Smart Reply), Web and Image Search; on-device predictions in Android and Assistant; and ML platforms like Neural Structured Learning in TensorFlow, Learn2Compress as a Google Cloud service, and TensorFlow Lite for edge devices. Dr. Ravi has authored over 100 scientific publications and patents in top-tier machine learning and natural language processing venues. His work has been featured in the press, including Wired, Forbes, Forrester, New York Times, TechCrunch, VentureBeat, Engadget, and New Scientist, and won the SIGDIAL Best Paper Award in 2019 and the ACM SIGKDD Best Research Paper Award in 2014. For multiple years, he was a mentor for Google Launchpad startups. Dr. Ravi was the Co-Chair (AI and deep learning) for the 2019 National Academy of Engineering (NAE) Frontiers of Engineering symposium. He was also a Co-Chair for ML workshops at ACL 2021, EMNLP 2020, ICML 2019, NAACL 2019, and NeurIPS 2018, and regularly serves as Senior/Area Chair and PC member of top-tier machine learning and natural language processing conferences such as NeurIPS, ICML, ACL, NAACL, AAAI, EMNLP, COLING, KDD, and WSDM.

Back


Siva Reddy

McGill

Unlikelihood-training and back-training for robust natural language understanding

Language models are known to be good at generalization and memorization. These abilities mean that a language model can be used directly as a knowledge base; e.g., a language model could easily fill the blank in “The capital of Canada is BLANK” with Ottawa, even if the exact construction is never seen during training, a task that requires both generalization and memorization. But we also observe that complex phenomena such as negation are commonly ignored by language models; e.g., the model would still predict Ottawa as the answer to “The capital of Canada is not BLANK”. I will introduce a new training procedure and objective called “unlikelihood training with reference” in order to build language models that understand negation without explicitly training on factual knowledge. In the second part of the talk, I will show that the pretrain-and-fine-tune paradigm breaks down in the out-of-distribution setting. For example, question answering and generation models trained on Natural Questions do not generalize to other domains such as education or biomedicine. I will introduce a new technique called back-training that exploits unsupervised data in the target domains much more efficiently than self-training.
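
To give a flavor of the unlikelihood objective (the "with reference" variant in the talk is more involved), here is a toy illustration on made-up next-token probabilities: the model is rewarded for Ottawa after the affirmative prompt and penalized for it after the negated one:

```python
# Toy unlikelihood term: maximize the likelihood of the correct continuation
# while minimizing the likelihood of the same token after the negated prompt.
# The probabilities below are made up; real training differentiates through an LM.

import torch

def likelihood_loss(probs, gold):       # probs: (vocab,) next-token distribution
    return -torch.log(probs[gold])      # standard MLE term

def unlikelihood_loss(probs, banned):   # penalize putting mass on `banned`
    return -torch.log(1.0 - probs[banned])

# toy distribution over a 3-word vocab ["Ottawa", "Toronto", "Paris"]
p_pos = torch.tensor([0.7, 0.2, 0.1])   # after "The capital of Canada is"
p_neg = torch.tensor([0.6, 0.3, 0.1])   # after "The capital of Canada is not"
loss = likelihood_loss(p_pos, 0) + unlikelihood_loss(p_neg, 0)
print(loss)  # training pushes p_neg[0] (Ottawa after the negated prompt) down
```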

Bio

Siva Reddy is an Assistant Professor in the School of Computer Science and Linguistics at McGill University. He is a Facebook CIFAR AI Chair and a core faculty member of Mila Quebec AI Institute. Before McGill, he was a postdoctoral researcher at Stanford University. He received his PhD from the University of Edinburgh in 2017, where he was a Google PhD Fellow. His research focuses on representation learning for language that facilitates systematic generalization and conversational models. He received the 2020 VentureBeat AI Innovation Award in NLP.

Back


Dafna Shahaf

Hebrew University

Accelerating innovation through analogy mining

The availability of large idea repositories (e.g., the U.S. patent database) could significantly accelerate innovation and discovery by providing people with inspiration from solutions to analogous problems. However, finding useful analogies in these large, messy, real-world repositories remains a persistent challenge for both human and automated methods. In this work we explore the viability and value of learning to find analogies. In ideation experiments, analogies retrieved by our models significantly increased people’s likelihood of generating creative ideas.
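
One way to picture analogy retrieval as the abstract frames it: represent each idea by separate purpose and mechanism vectors, then rank candidates that match on purpose but differ on mechanism. The vectors and the scoring rule below are invented for illustration, not the learned representations from the actual work:

```python
# Toy analogy retrieval: score corpus items by purpose similarity minus
# mechanism similarity, so same-goal/different-means ideas rank highest.
# The vectors here are made up; the actual work learns them from repositories.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_analogies(query, corpus, k=1):
    scored = [(cosine(query["purpose"], d["purpose"])
               - cosine(query["mechanism"], d["mechanism"]), d["name"])
              for d in corpus]
    return sorted(scored, reverse=True)[:k]

query = {"name": "q", "purpose": np.array([1.0, 0.0]), "mechanism": np.array([0.0, 1.0])}
corpus = [{"name": "same purpose, new mechanism", "purpose": np.array([0.9, 0.1]),
           "mechanism": np.array([1.0, 0.0])},
          {"name": "near-duplicate", "purpose": np.array([1.0, 0.0]),
           "mechanism": np.array([0.0, 1.0])}]
print(find_analogies(query, corpus))  # prefers the distant-mechanism match
```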

Bio

Dafna Shahaf is an Associate Professor in computer science at the Hebrew University of Jerusalem. Dafna’s research focuses on helping people make sense of massive amounts of data, with a special emphasis on unlocking the potential of the many digital traces left by human activity to contribute to our understanding (and computer emulation) of human capacities such as humor and creativity. She received her PhD from Carnegie Mellon University, and was a postdoctoral fellow at Stanford University and at Microsoft Research. Prof. Shahaf has won multiple awards, including best research paper awards at KDD 2010 and KDD 2017, an ERC starting grant, a Microsoft Research Fellowship, a Siebel Scholarship, a Magic Grant for innovative ideas, and the Wolf Foundation’s Krill Award, as well as MIT Tech Review’s “Most thought-provoking paper of the week”.

Back


David Sontag

MIT

Learning health knowledge bases

How can we use AI to help with the medical diagnosis of rare conditions from symptoms? How can we organize a patient’s longitudinal health record into problem-oriented views? Surface subtle medical errors? Summarize long clinical documents? I will argue that developing algorithms for these tasks that are safe, robust, and easy for clinicians and patients to use will require building on top of large health knowledge bases of clinical entities, relations, and their grounding in human physiology. I will then describe our work over the past several years developing methods for learning health knowledge bases directly from the health data of millions of patients, and finish with a series of challenges that the field must solve for us to ultimately achieve this vision.
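
As a toy illustration of one ingredient in learning a knowledge base from patient data, the sketch below scores condition-symptom links by pointwise mutual information over co-occurrences in synthetic records; both the records and the choice of PMI are illustrative assumptions, not the talk's method:

```python
# Toy sketch: score condition-symptom links by how much more often they co-occur
# in patient records than chance predicts (pointwise mutual information). The
# records and the PMI scoring choice are illustrative assumptions only.

import math
from collections import Counter
from itertools import combinations

records = [{"flu", "fever", "cough"}, {"flu", "fever"}, {"migraine", "headache"},
           {"flu", "cough"}, {"migraine", "headache", "nausea"}]

n = len(records)
single = Counter(c for r in records for c in r)
pair = Counter(frozenset(p) for r in records for p in combinations(sorted(r), 2))

def pmi(a, b):
    return math.log((pair[frozenset((a, b))] / n) /
                    ((single[a] / n) * (single[b] / n)))

print(round(pmi("flu", "fever"), 2), round(pmi("migraine", "headache"), 2))
# high-PMI pairs are candidate edges for a condition-symptom knowledge base
```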

Bio

David Sontag is an Associate Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT, and member of the Institute for Medical Engineering and Science (IMES) and the Computer Science and Artificial Intelligence Laboratory (CSAIL). Prior to joining MIT, Dr. Sontag was an Assistant Professor in Computer Science and Data Science at New York University from 2011 to 2016, and a postdoctoral researcher at Microsoft Research New England. Dr. Sontag received the Sprowls award for outstanding doctoral thesis in Computer Science at MIT in 2010, best paper awards at the conferences Empirical Methods in Natural Language Processing (EMNLP), Uncertainty in Artificial Intelligence (UAI), and Neural Information Processing Systems (NeurIPS), faculty awards from Google, Facebook, and Adobe, and a National Science Foundation Early Career Award. Dr. Sontag received a B.A. from the University of California, Berkeley.

Back