Explainability in Anomaly Detection for Disease Outbreak Surveillance
Announced:
2025-04-09
Description
The goal of this project is to enhance the explainability of anomaly detection models used in disease outbreak surveillance. Specifically, you will employ and develop explainable AI (XAI) techniques to improve the transparency of data-driven systems that identify clusters of unusual health counseling contacts. To achieve this, you will first implement anomaly detection models and then apply a range of explainability techniques, including pre-model methods (such as feature selection and transformation), in-model approaches (such as interpretable models like decision trees and subspace anomaly detection), and post-hoc techniques (such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME)) that provide localized insights on time series data. You can then explore combinations of these strategies to build a comprehensive framework for enhancing transparency in anomaly detection. By integrating explainability into these models, we aim to help healthcare professionals better understand, trust, and act on the results of automated surveillance systems. This project offers students the opportunity to work at the intersection of machine learning, public health, and AI interpretability, ultimately improving how we monitor and respond to emerging disease threats with more reliable and explainable systems.
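
To give a flavor of the post-hoc strand, the sketch below pairs an anomaly detector with a local explanation. It is only a minimal illustration under assumptions not specified in the project description: synthetic daily contact counts stand in for the real surveillance data, scikit-learn's IsolationForest is used as the detector, and SHAP's TreeExplainer provides per-day feature attributions. The actual data, models, and XAI methods used in the project may differ.

```python
# Minimal sketch: explain anomaly scores on a simulated time series of
# health-counseling contact counts. Assumptions (not from the project text):
# IsolationForest as the detector, SHAP TreeExplainer for post-hoc attribution.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
import shap

rng = np.random.default_rng(0)

# Simulate two years of daily contact counts with a weekday effect and an
# injected outbreak-like spike.
days = pd.date_range("2023-01-01", periods=730, freq="D")
counts = rng.poisson(lam=50 + 10 * (days.dayofweek < 5), size=len(days)).astype(float)
counts[400:407] += 60  # simulated outbreak week

# Pre-model step: simple time-series features (lagged counts, day of week).
df = pd.DataFrame({"count": counts}, index=days)
for lag in (1, 7):
    df[f"lag_{lag}"] = df["count"].shift(lag)
df["dayofweek"] = df.index.dayofweek
df = df.dropna()
X = df[["count", "lag_1", "lag_7", "dayofweek"]]

# In-model step: fit the anomaly detector.
model = IsolationForest(random_state=0).fit(X)
scores = model.decision_function(X)  # lower = more anomalous

# Post-hoc step: SHAP values indicate which features drive each day's score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Inspect the most anomalous day and its local explanation.
i = int(np.argmin(scores))
print("Most anomalous day:", X.index[i].date())
print(dict(zip(X.columns, np.round(shap_values[i], 3))))
```

In the project itself, such local attributions would be evaluated for whether they actually help healthcare professionals interpret and trust the flagged clusters, and could be combined with the pre-model and in-model strategies mentioned above.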
Contacts: Amir Aminifar (EIT)