Invited talks

1. Actionable Visual Interpretation and Diagnosis for Deep Learning Models

Liang Gou
Bosch Research North America, Sunnyvale, CA

Abstract:
There is an increasing tension between the interpretability and the predictive power of machine learning models, especially deep learning models. Much visual analytics research has attempted to mitigate this tension by interpreting and diagnosing deep learning models (e.g., CNNs, RNNs, and GANs) across various domains (e.g., computer vision, natural language, and robotics). Here, one pressing need is to generate actionable insights that improve models with human knowledge.
In this talk, I present two research works on generating and injecting insights for model improvement via visual interpretation and diagnosis:
a) By injecting human insights into the model space, DQNVis helps domain experts understand and improve deep Q-learning networks for Atari games;
b) Through insight injection into the data space, VALTD assists model developers in assessing, understanding, and improving the accuracy and robustness of traffic light detectors in autonomous driving.
Hopefully, these two works capture a sliver of the ways of generating and applying actionable insights for model improvement via visual analytics and human-in-the-loop approaches.

Bio: Liang Gou is a Principal Research Scientist and Senior Research Manager at the Research and Technology Center North America, Robert Bosch LLC. His research interests lie in the fields of visual analytics, deep learning, and human-computer interaction. He leads a Visual Analytics & eXplainable AI group that shapes the future industrial AI experience for Robert Bosch products and services by combining cutting-edge technologies in machine learning, data analysis, and interactive visualization. Prior to joining Bosch Research, Liang was a Principal Research Scientist at Visa Research and a Research Staff Member at the IBM Almaden Research Center. He received his Ph.D. in Information Science and Technology from Penn State University. He has received multiple best paper awards and honorable mentions at IEEE VIS and IEEE PacificVis. He was the Papers Co-Chair for the IEEE Symposium on Visualization in Data Science in 2020 and has been the EiC assistant of ACM TiiS since 2018.


2. AI Meets Visualization in Healthcare

Bum Chul Kwon
IBM Research, Cambridge, MA

Abstract:
Artificial intelligence (AI) techniques provide great opportunities for improving healthcare research and clinical practice. With recent advancements and ever-increasing clinical data, researchers have demonstrated successful applications of AI techniques in predicting patients' diagnoses, unexpected readmissions, and mortality. However, adoption of these techniques in clinical use has been limited because of their black-box nature. Despite growing research on explainable AI methods, healthcare professionals may still find it difficult to understand and use these techniques without visual aids. Visual analytics can help clinical experts gain transparency and trust when using AI techniques to analyze healthcare data. This talk provides a brief overview of these problems and describes the potential role of visual analytics in healthcare research and clinical practice through two case studies from previous research.

Bio: Bum Chul Kwon (goes by “BC”) is a Research Staff Member at IBM Research, where he is a member of the Center for Computational Health. His research goal is to enhance users' abilities to derive knowledge from data and to make informed decisions using interactive visualization systems powered by AI. His work has been published at premier venues in visualization and human-computer interaction, such as IEEE InfoVis, IEEE VAST, IEEE TVCG, and ACM SIGCHI. He has also served as an associate chair for the ACM CHI Papers Program Committee, a publicity chair and program committee member for IEEE VIS, and a general chair for the Visual Analytics in Health Care workshop. Prior to joining IBM Research, he worked as a postdoctoral researcher at the University of Konstanz, Germany. He earned his Ph.D. and M.S. in Industrial Engineering from Purdue University, West Lafayette, Indiana, and his B.S. from the University of Virginia, Charlottesville, Virginia.


3. From Data to Decisions: A Mixed Path of Data Visualization and Machine Learning

Qianwen Wang
Harvard Medical School, Boston, MA

Abstract:
We can treat data visualization and machine learning as different paths from data to decisions, both proceeding by understanding patterns in the data. Ideally, machine learning involves no human intervention, while data visualization is all about human interaction and perception. In practice, however, they cannot be entirely separated: when we use data visualization, we can still benefit from automating certain steps, and when we use machine learning, human intervention is in fact inevitable.
This talk first discusses how data visualization can assist the steps of a machine learning pipeline where human intervention is needed, including model development, model evaluation, and model application. It then extends an existing visualization model and describes how the needs of visualization can be met by machine learning techniques. Six key visualization processes that can benefit from ML techniques are identified and mapped onto the main machine learning tasks, aligning the capabilities of machine learning with the needs of visualization. The talk ends with a discussion of how to better combine data visualization and machine learning methods based on the analysis context (i.e., the type of tasks and the amount of information).

Bio: Qianwen Wang is a postdoctoral researcher at Harvard University. Her research interests include data visualization, explainable machine learning, and narrative visualization. Her work strives to facilitate communication between humans and machine learning models through interactive visualization. She obtained her Ph.D. from the Hong Kong University of Science and Technology and her B.S. degree from Xi’an Jiaotong University. Please refer to http://wangqianwen0418.github.io for more details.


Paper presentations

4. USEVis: Visual Analytics of Attention-based Neural Embedding in Information Retrieval

Authors:

Xiaonan Ji Washington University in St. Louis, United States; The Ohio State University, United States
Yamei Tu The Ohio State University, United States
Wenbin He The Ohio State University, United States
Junpeng Wang The Ohio State University, United States
Han-Wei Shen The Ohio State University, United States
Po-Yin Yen Washington University in St. Louis, United States

Abstract:
Neural attention-based encoders, which effectively attend sentence tokens to their associated context without being restricted by long-range distance or dependency, have demonstrated outstanding performance in embedding sentences into meaningful representations (embeddings). The Universal Sentence Encoder (USE) is one of the most well-recognized deep neural network (DNN) based solutions; it is built on an attention-driven transformer architecture and has been pre-trained on a large number of sentences from the Internet. Although USE has been widely used in many downstream applications, including information retrieval (IR), interpreting its complicated internal working mechanism remains challenging. In this work, we present a visual analytics solution to address this challenge. Specifically, focusing on the semantics and syntactics (concepts and relations) that are critical to clinical domain IR, we designed and developed a visual analytics system, USEVis. The system investigates the power of USE in effectively extracting sentences' semantics and syntactics by exploring and interpreting how linguistic properties are captured by attentions. Furthermore, by thoroughly examining and comparing the inherent patterns of these attentions, we are able to exploit attentions to retrieve sentences/documents that have similar semantics or are closely related to a given clinical problem in IR. By collaborating with domain experts, we demonstrate use cases with inspiring findings that validate the contribution of our work and the effectiveness of our system.
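For readers unfamiliar with USE, the sketch below shows only the baseline retrieval setting this paper builds on: embedding sentences with the publicly released Universal Sentence Encoder from TensorFlow Hub and ranking candidates by cosine similarity to a query. This is not USEVis itself; the attention-level analysis that is the paper's contribution is not exposed by this high-level API, and the clinical sentences and query here are invented examples.

```python
# Minimal USE-based sentence retrieval sketch (a baseline illustration,
# not the USEVis attention analysis). Requires: tensorflow, tensorflow_hub.
import numpy as np
import tensorflow_hub as hub

# Public USE model from TensorFlow Hub; returns a 512-d embedding per sentence.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

# Toy clinical corpus and query (made-up examples).
corpus = [
    "Patient presents with acute chest pain and shortness of breath.",
    "Administer aspirin for suspected myocardial infarction.",
    "Routine follow-up scheduled for blood pressure monitoring.",
]
query = "treatment for heart attack"

corpus_vecs = np.asarray(embed(corpus))    # shape (3, 512)
query_vec = np.asarray(embed([query]))[0]  # shape (512,)

# Rank corpus sentences by cosine similarity to the query.
sims = corpus_vecs @ query_vec / (
    np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(query_vec)
)
for sim, sent in sorted(zip(sims, corpus), reverse=True):
    print(f"{sim:.3f}  {sent}")
```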


5. Bridging Cognitive Gaps Between User and Model in Interactive Dimension Reduction

Authors:

Ming Wang Virginia Polytechnic Institute and State University, United States
John Wenskovitch Virginia Polytechnic Institute and State University, United States; Pacific Northwest National Laboratory, United States
Leanna House Virginia Polytechnic Institute and State University, United States
Nicholas Polys Virginia Polytechnic Institute and State University, United States
Chris North Virginia Polytechnic Institute and State University, United States

Abstract:
Interactive machine learning (ML) systems are difficult to design because of the "Two Black Boxes" problem that exists at the interface between human and machine. Many algorithms used in interactive ML systems are black boxes presented to users, while human cognition represents a second black box that can be difficult for the algorithm to interpret. These black boxes create cognitive gaps between the user and the interactive ML model. In this paper, we identify several cognitive gaps that exist in a previously developed interactive visual analytics (VA) system, Andromeda, and that are also representative of common problems in other VA systems. Our goal with this work is to open both black boxes and bridge these cognitive gaps by making usability improvements to the original Andromeda system. These include designing new visual features to help people better understand how Andromeda processes and interacts with data, as well as improving the underlying algorithm so that the system can better implement the intent of the user during the data exploration process. We evaluate our designs through both qualitative and quantitative analysis, and the results confirm that the improved Andromeda system outperforms the original version in a series of high-dimensional data analysis tasks.
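Andromeda is detailed in the paper rather than in this abstract; as a hedged illustration of the interaction loop in Andromeda-style weighted-MDS systems, the sketch below recomputes a 2D layout after a user edits per-dimension weights. The weighted Euclidean distance and the use of scikit-learn's MDS are assumptions made for illustration, not the paper's implementation.

```python
# Sketch of "parametric interaction" in a weighted-MDS system: the user
# adjusts per-dimension weights and the 2D projection is recomputed.
# (An illustration under stated assumptions, not Andromeda's own code.)
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))  # 30 high-dimensional points, 5 dimensions

def wmds_layout(X, weights):
    """Project X to 2D via MDS over weighted Euclidean distances."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # keep weight vectors on a common scale
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((w * diff**2).sum(axis=-1))  # pairwise weighted distances
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(D)

layout_a = wmds_layout(X, [1, 1, 1, 1, 1])  # equal weights
layout_b = wmds_layout(X, [5, 1, 1, 1, 1])  # user up-weights dimension 0
```

The inverse direction, inferring new dimension weights from points the user has dragged, is how such systems "implement the intent of the user"; it requires solving for weights that best explain the user's layout and is omitted here.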


6. Visualization of Topic Transitions in SNSs Using Document Embedding and Dimensionality Reduction

Authors:

Tiandong Xiao Nihon University, Japan
Yosuke Onoue Nihon University, Japan

Abstract:
Social networking services (SNSs) have become a main avenue where people voice their thoughts. Accordingly, we can explore people's thoughts by analyzing topics on SNSs. When do topics change? Do they ever come back? What do people mainly talk about? In this study, we design and propose a novel visual analytics system to answer these questions. We abstract the topic of each unit of time as a point in a two-dimensional space through document embedding and dimensionality reduction techniques, and we provide supplementary charts that represent the words appearing at a certain time and the time-series change of word occurrence over the entire period. We employ a novel text visualization technique, called semantic-preserving word bubbles, to visualize the words at a certain time. In addition, we demonstrate the effectiveness of our system using Twitter data about early COVID-19 trends in Japan. We propose our system to help users explore and understand transitions in posted content on SNSs.
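As a rough illustration of the pipeline this abstract describes (one point per unit of time via document embedding plus dimensionality reduction), the sketch below uses TF-IDF vectors and PCA as stand-ins; the paper's actual embedding and reduction choices may differ, and the toy documents are invented.

```python
# Sketch of the embed-then-reduce pipeline: each time unit's aggregated
# posts become one 2D point, and the sequence of points traces the topic
# transition. TF-IDF and PCA are stand-in techniques for illustration.
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

# One aggregated document per unit of time (e.g., all posts from one day).
docs_per_day = [
    "new virus reported overseas travel advisory",
    "first domestic case confirmed masks selling out",
    "school closures announced events cancelled",
    "state of emergency declared stay home requests",
]

vecs = TfidfVectorizer().fit_transform(docs_per_day)  # (days, vocab) sparse
points = PCA(n_components=2).fit_transform(vecs.toarray())

# Printing the trajectory; a real system would draw it as a connected path.
for day, (x, y) in enumerate(points):
    print(f"day {day}: ({x:+.2f}, {y:+.2f})")
```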