Abstract
Inherent uncertainties arising from different root causes have emerged as serious hurdles to finding effective solutions for real-world problems. Failing to account for these diverse causes of uncertainty raises critical safety concerns, since misinterpreting uncertainty can lead to high-risk outcomes (e.g., misdetection or misclassification of an object by an autonomous vehicle). Graph neural networks (GNNs) have received tremendous attention in the data science community. Despite their superior performance in classification and regression tasks, they do not account for the various types of uncertainty in their decision process. In this talk, I will present a general approach to quantifying the inherent uncertainties of GNNs that arise from different root causes in training data, such as vacuity (i.e., uncertainty due to a lack of evidence) and dissonance (i.e., uncertainty due to conflicting evidence).
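To make the two uncertainty types concrete, the following is a minimal sketch (not taken from the talk) of how vacuity and dissonance are commonly computed in the subjective-logic framework, assuming a model outputs non-negative class evidence e_k per node; the function name and example inputs are illustrative, and the talk's exact formulation may differ.

    # Sketch: vacuity and dissonance from per-class evidence (subjective-logic style).
    import numpy as np

    def uncertainties_from_evidence(evidence):
        """Return (vacuity, dissonance) for one prediction.

        evidence : non-negative array of shape (K,), evidence e_k per class.
        """
        evidence = np.asarray(evidence, dtype=float)
        K = evidence.shape[0]
        S = evidence.sum() + K          # Dirichlet strength, with alpha_k = e_k + 1
        belief = evidence / S           # belief mass b_k per class
        vacuity = K / S                 # uncertainty from a lack of evidence

        # Dissonance: uncertainty from conflicting (mutually balanced) evidence.
        dissonance = 0.0
        for k in range(K):
            others = np.delete(belief, k)
            denom = others.sum()
            if denom == 0:
                continue
            # Balance term: 1 when b_j and b_k are equal, 0 when one dominates.
            bal = np.where(others + belief[k] > 0,
                           1.0 - np.abs(others - belief[k]) / (others + belief[k]),
                           0.0)
            dissonance += belief[k] * (others * bal).sum() / denom
        return vacuity, dissonance

    # Little evidence overall -> vacuity dominates; strong but conflicting
    # evidence -> dissonance dominates.
    print(uncertainties_from_evidence([0.1, 0.1, 0.1]))
    print(uncertainties_from_evidence([20.0, 20.0, 0.1]))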
Speaker Bio
Dr. Feng Chen is currently an Associate Professor in the Department of Computer Science at UT Dallas. He received his Ph.D. in Computer Science from Virginia Tech in 2012. Dr. Chen’s research interests include large-scale data mining, network mining, and machine learning, with a focus on event and pattern detection in massive, complex networks. His current research includes applications in disease outbreak detection in disease surveillance networks, societal event detection/forecasting in social networks, cyber attack detection in computer networks, and subnetwork marker detection in biological networks, among others. His research has been funded by NSF, NIH, ARO, IARPA, and the U.S. Department of Transportation, and he has published over 100 journal and conference papers in data science and machine learning. Dr. Chen received an NSF CAREER Award in 2018 for his research on “Complex Pattern Discovery in Big Attributed Networks”.