Colloquia

Colloquia 2021-2022

Thursday, April 28, 2022 - 11:00am - Virtual

Speaker: Peter Jansen, Ph.D.

Title: "ScienceWorld: Is your Agent Smarter than a 5th Grader?"

Abstract: Question answering models have rapidly increased their ability to answer natural language questions in recent years, due in large part to large pre-trained neural network models called language models.  These language models have felled many benchmarks, including recently achieving an "A" grade on answering standardized multiple-choice elementary science exams.  But how much do these language models truly know about elementary science, and how robust is their knowledge?  In this work, we present ScienceWorld, a new benchmark to test agents' scientific reasoning abilities.  ScienceWorld is an interactive text game environment that tasks agents with performing 30 tasks drawn from the elementary science curriculum, like melting ice, building simple electrical circuits, using pollinators to help grow fruits, or understanding dominant versus recessive genetic traits.  We show that current state-of-the-art language models that can easily answer elementary science questions, such as whether a metal fork is conductive or not, struggle when tasked with conducting an experiment to test this in a grounded, interactive environment, even with substantial training data.  This raises the question of whether current models are simply retrieving answers to questions by observing a large number of similar input examples, or whether they have learned to reason about concepts in a reusable manner.  We hypothesize that agents need to be grounded in interactive environments to achieve such reasoning abilities.  Our experiments provide empirical evidence supporting this hypothesis -- showing that a 1.5-million-parameter agent trained interactively for 100k steps outperforms an 11-billion-parameter model statically trained for scientific question answering and reasoning via millions of expert demonstrations.

Bio: Peter Jansen is an Assistant Professor in the School of Information at the University of Arizona.  He conducts research in natural language processing, primarily centered around question answering, automated inference, and generating human-readable explanations for an agent's reasoning.  His work is supported by the National Science Foundation and the Allen Institute for Artificial Intelligence.

Faculty Host: Dr. Mihai Surdeanu

 

Thursday, March 24, 2022 - 11:00am - Virtual

Speaker: El Kindi Rezig, Ph.D.

Title: "Data Preparation: The Biggest Roadblock in Data Science"

Abstract: When building machine learning (ML) models, data scientists face a significant hurdle: data preparation. ML models are exactly as good as the data we train them on. Unfortunately, data preparation is tedious and laborious because it often requires human judgment on how to proceed. In fact, data scientists spend at least 80% of their time locating the datasets they want to analyze, integrating them, and cleaning the result.

In this talk, I will present my key contributions in data preparation for data science, which address the following problems: (1) data discovery: how to discover data of interest from a large collection of heterogeneous tables (e.g., data lakes); (2) error detection: how to find errors in the input and intermediate data in complex data workflows; and (3) data repairing: how to repair data errors with minimal human intervention. The developed systems are specifically designed to support data science development, which poses particular requirements such as interactivity and modularity. The talk will feature demonstrations of data preparation systems as well as discussions of the algorithms and techniques we developed to enable data preparation at scale.

Bio: El Kindi Rezig is a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) of MIT where he works under the supervision of Michael Stonebraker. He earned his Ph.D. in Computer Science from Purdue University under the supervision of Walid Aref and Mourad Ouzzani. His research interests revolve around data management in general and data preparation for data science in particular. He has developed systems in collaboration with several organizations including Intel, Massachusetts General Hospital, and the U.S. Air Force.

Faculty Host: Dr. Rick Snodgrass

 

Tuesday, March 15, 2022 - 11:00am - Virtual

Speaker: Jiaqi Ma, Ph.D. Candidate

Title: "Towards Trustworthy Machine Learning on Graph Data"

Abstract: Machine learning on graph data (a.k.a. graph machine learning) has attracted tremendous attention from both academia and industry, with many successful applications ranging from social recommendation to traffic forecasting, including high-stakes scenarios. However, despite the huge empirical success in common cases, popular graph machine learning models often have degraded performance under certain conditions. Given the complexity and diversity of real-world graph data, it is crucial to understand and optimize model behavior in specific contexts.

In this talk, I will introduce my recent work on analyzing the robustness and fairness of graph neural networks (GNNs). In the first part of the talk, I will show that existing GNNs could suffer from model misspecification due to an implicit conditional independence assumption. This observation motivates our design of a copula-based learning framework that improves upon many existing GNNs. In the second part of the talk, I will go beyond average model performance and investigate the fairness of GNNs. Through a generalization analysis on GNNs, I will show that there is a predictable disparity in GNN performance among different subgroups of test nodes. I will also discuss potential mitigation strategies.

Bio: Jiaqi Ma is a PhD candidate in the School of Information at the University of Michigan. His research interests lie in machine learning and data mining. He has worked on graph machine learning, multi-task learning, learning-to-rank, and recommender systems during his PhD studies and his internships at Google Brain. His work has been published in top AI journals and conferences, including JMLR, ICLR, NeurIPS, KDD, WWW, and AISTATS. Prior to UMich, he received his B.Eng. degree from Tsinghua University.

Faculty Host: Dr. Chicheng Zhang

 

Tuesday, March 1, 2022 - 11:00am - Virtual

Speaker: Zeyu Ding, Ph.D. Candidate

Title: "Verification and Optimization for Differentially Private Data Analysis"

Abstract: Data gathering is easier and more pervasive today than at any previous point in history. While the accelerating growth of data has empowered many research and real-world applications, recent incidents of data leakage and abuse have raised public concerns about privacy. Differential privacy provides a promising way to release analyses of sensitive data in a privacy-preserving manner. It has become mainstream in many research communities and has been deployed in practice in the private sector and in some government agencies. However, designing differentially private algorithms is notoriously difficult and error-prone. Significant errors appear even in peer-reviewed papers and systems. Also, asking for a little more privacy usually comes at a cost - often to the accuracy of some analysis. Balancing individuals' privacy and data utilization has become a vital yet challenging task for both researchers and data analysts. In this talk, I will present our work on automatically proving or disproving that algorithms are differentially private and on optimizing their privacy/utility tradeoffs. Our work facilitates the adoption of differential privacy by increasing the utility of these algorithms and making them easier to deploy for non-experts.

Bio: Zeyu Ding is a doctoral candidate in Computer Science at Penn State University, advised by Prof. Daniel Kifer and Prof. Danfeng Zhang. His research interests lie at the intersection of security, privacy, formal methods, and machine learning. He obtained a doctoral degree in Mathematics from SUNY in 2015. His work has been published at top venues and has won one best paper award (CCS 2018), two best paper runner-up awards (CCS 2020, 2021), and the Caspar Bowden PET Award runner-up (2019). He received the graduate student research award in 2019 and the teaching award in 2021 from Penn State University.

Faculty Host: Dr. Josh Levine

 

Tuesday, February 22, 2022 - 11:00am - Virtual

Speaker: Jin Sun, Ph.D.

Title: "Visual Analysis of People in Context"

Abstract: Every day, billions of images uploaded to photo-sharing websites feature people - selfies, family gatherings, parades, picnics, social events, etc. Thanks to recent advances in deep neural networks, we have automated tools that can reliably understand basic elements of these photos, such as classifying the image category. However, developing a complex understanding of scenes and the rich interaction between people and their environment remains a challenging task.

In this talk, I will discuss People Analysis in Context (PAC) problems: reasoning about and understanding people in complex environments from visual data. In this setting, people must be considered jointly with their environment and not as isolated subjects. I define "context" broadly as all factors in a scene other than the person of interest: for example, these may include a picnic table, traffic signals, or a friend waiting across the street. Solving PAC problems has applications in many tasks that can benefit from human behavior analysis, such as autonomous driving, healthcare, and urban planning. I will talk about my research on PAC, including detecting existing and missing objects using context, with an application to accessibility assessment; learning to predict walkable regions in images; and understanding how contextual factors shape people's appearance. Finally, I will discuss the practical impact of PAC algorithms and how they could be useful tools in building better, safer, and more livable cities.

Bio: Jin Sun is a Postdoctoral Associate in the Department of Computer Science at Cornell University and Cornell Tech advised by Noah Snavely. Jin's research interest is in developing computer vision methods for a complex understanding of objects and scenes. He has proposed algorithms to learn contextual information from visual data with limited supervision. He is also interested in applying computer vision to improve people's quality of life. Jin received his Ph.D. in computer science from the University of Maryland, where he was advised by David W. Jacobs. His work has been published at top computer vision conferences such as CVPR, ICCV, and ECCV, selected as "Notable Books and Articles" in the 19th Annual ACM Best of Computing 2014, and nominated for the best paper award at CVPR 2020.

Faculty Host: Dr. Kobus Barnard

 

Thursday, February 20, 2021 - 11:00am - Virtual

Speaker: Malavika Samak, Ph.D.

Title: "Synthesizing Verified Adapters for Object-Oriented Programs"

Abstract: Software libraries play a critical role in the software development process. They expose APIs that provide useful functionality and create abstractions that enable developers to focus on the core application logic, leading to modular software development. Several factors influence optimal library utilization - (a) awareness of the most appropriate libraries, (b) the ability to reason about a library across various dimensions that include correctness, security, performance, and memory usage, and (c) the ease of incorporating a library to serve the functional requirements of the application. 

In this talk, I present our recent work on searching for replacement classes and automatically synthesizing verified adapters. These adapters are drop-in replacements as they exhibit equivalent program behavior under all contexts. Our experiments demonstrate that the approach can identify suitable replacement classes from a corpus of 600K Java classes and can synthesize complex verified adapters.

Bio: Malavika Samak is a Postdoctoral Associate at CSAIL, MIT, advised by Prof. Martin Rinard. Her goal is to design approaches to discover, reason, customize, and adapt code to build defect-free software systems efficiently. Her research interests are in static and dynamic program analysis, program synthesis, and verification. She designed techniques for synthesizing multithreaded tests for detecting concurrency bugs as part of her doctoral dissertation. She holds a Ph.D. in Computer Science from the Indian Institute of Science (IISc), Bangalore, and is a recipient of a Google Ph.D. fellowship. 

Faculty Host: Dr. Beichuan Zhang

 

Tuesday, February 18, 2022 - 11:00am - Virtual

Speaker: Tom McCoy, Ph.D. Candidate

Title: "Opening the Black Box of Deep Learning: Representations, Inductive Biases, and Robustness"

Abstract: Natural language processing has been revolutionized by neural networks, which perform impressively well in applications such as machine translation and question answering. Despite their success, neural networks still have some substantial shortcomings: their internal workings are poorly understood, and they are notoriously brittle, failing on example types that are rare in their training data. In this talk, I will discuss approaches for addressing these shortcomings. First, I will argue for a new evaluation paradigm based on targeted, hypothesis-driven tests that better illuminate what models have learned; using this paradigm, I will show that even state-of-the-art models sometimes fail to recognize the hierarchical structure of language (e.g., wrongly concluding that "The book on the table is blue" implies "The table is blue"). Second, I will show how these behavioral failings can be explained through analysis of models' inductive biases and internal representations, focusing on the puzzle of how neural networks represent discrete symbolic structure in continuous vector space. I will close by showing how insights from these analyses can be used to make models more robust through approaches based on meta-learning, structured architectures, and data augmentation.

Bio: Tom McCoy is a PhD candidate in the Department of Cognitive Science at Johns Hopkins University. His research combines natural language processing, cognitive science, and machine learning to study how we can achieve robust generalization in models of language, as this remains one of the main areas where current AI systems fall short. In particular, he focuses on inductive biases and representations of linguistic structure, since these are two of the major components that determine how learners generalize to novel types of input.

Faculty Host: Dr. Mihai Surdeanu

 

Thursday, December 9, 2021 - 11:00am - Virtual

Speaker: Kathryn Cunningham, Ph.D.

Title: "Diversifying Pathways to Computer Science Education"

Abstract: Computer science departments are faced with skyrocketing enrollments as more students recognize the opportunities that computing provides. Surprisingly, this growth is fueled more by the increase in non-majors than by the increase in computer science majors. As computing becomes a more universal subject, like English or Mathematics, we must investigate whether current instructional approaches work equally well for everyone, or present barriers for some more than others. How can we address the learning needs of students who are increasingly diverse in terms of goals, interests, and prior experience? In this talk, I will present three examples of computing education innovations from my research that draw on program comprehension, educational psychology, and the study of higher education. Through my work, I have developed new approaches to support the success of a variety of undergraduates as they study computing, including female non-majors, traditional computer science majors, and low-income community college transfer students.

Bio: Dr. Kathryn Cunningham is a Postdoctoral Scholar and CIFellow at Northwestern University. Her passion is addressing inequities in computing education by diversifying and improving the way we support students at the undergraduate level. She received her PhD from the University of Michigan in Information, her MSc from Georgia Tech in Computer Science (Human-centered Computing), and her BS from the University of Arizona in Computer Science and Molecular and Cellular Biology.  

Faculty Host: Dr. Dave Lowenthal

 

Tuesday, November 2, 2021 - 11:00am - Virtual

Speaker: Eduardo Blanco, Ph.D.

Title: "Towards Deeper Natural Language Understanding"

Abstract: Extracting meaning from text is key to natural language understanding and many end-user applications. Natural language is notoriously ambiguous, and humans intuitively understand many nuances in meaning as well as implicit inferences. In this talk, I will present models that enable intelligent systems to better understand natural language. First, I will present our work on extracting implicit positive meaning hidden in sentences containing negation. I will discuss approaches to pinpoint the few elements that are actually negated and strategies to generate plausible affirmative interpretations. Second, I will briefly present some of our ongoing work on extracting other semantic representations.

Bio: Eduardo Blanco is an Associate Professor in the School of Computing and Augmented Intelligence at Arizona State University. He conducts research primarily in natural language processing with a focus on computational semantics, including semantic relation extraction and intricate linguistic phenomena such as negation, modality, and uncertainty. He is interested in both fundamental research and applications in the social sciences, medicine, and robotics among others. His work is supported by the National Science Foundation, the National Geospatial-Intelligence Agency, the Patient-Centered Outcomes Research Institute, and generous gifts from industry. Blanco is a recipient of the Bloomberg Data Science Research Grant and the National Science Foundation CAREER Award.

Faculty Host: Dr. Mihai Surdeanu