Name

Alexander Jung

Assistant Professor for Machine Learning @ Aalto University

Short bio

Alexander Jung obtained a PhD (sub auspiciis Praesidentis) in statistical signal processing from TU Vienna in 2012. He held postdoctoral positions at ETH Zurich and TU Vienna before joining Aalto University as Assistant Professor for Machine Learning in 2015. His research focuses on understanding the fundamental limits of, and developing efficient methods for, machine learning problems arising in various application domains. His work has been recognized with a Best Student Paper Award at IEEE ICASSP 2011 and an Amazon Web Services Machine Learning Award in 2018, and he co-authored a paper that was a finalist for the Best Student Paper Award at Asilomar 2017. At Aalto University, he has redesigned the main course on Machine Learning and developed a new online course, “Machine Learning with Python”.
He was selected as Teacher of the Year 2018 by the Department of Computer Science at Aalto University, and he chairs the Signal Processing and Circuits & Systems Chapter of the IEEE Finland Section.

Schedule

Explainability and Ethics

July 17, 14:05

Talk

An Information-Theoretic Approach to Personalized Explanations of Machine Learning

Description

Automated decision making is used routinely throughout our everyday lives. Recommender systems decide which jobs, movies, or other user profiles might be interesting to us. Spell checkers help us make good use of language. Fraud detection systems decide whether credit card transactions should be verified more closely. Many of these decision-making systems use machine learning methods that fit complex models to massive datasets. The successful deployment of machine learning (ML) methods in many (critical) application domains crucially depends on their explainability. Indeed, humans have a strong desire for explanations that resolve the uncertainty about experienced phenomena, such as the predictions and decisions obtained from ML methods. Explainable ML is challenging because explanations must be tailored (personalized) to individual users with varying backgrounds. Some users might have received university-level education in ML, while others might have no formal training in linear algebra. Linear regression with few features might be perfectly interpretable to the first group but considered a black box by the latter. We propose a simple probabilistic model for predictions and user knowledge. This model allows us to study explainable ML using information theory. Explaining is considered here as the task of reducing the “surprise” incurred by a prediction. We quantify the effect of an explanation by the conditional mutual information between the explanation and the prediction, given the user background.
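
The key quantity in the abstract, the conditional mutual information between the explanation and the prediction given the user background, can be illustrated with a minimal sketch. The snippet below is not taken from the talk; it assumes a hypothetical discrete joint distribution p(u, e, y) over user background U, explanation E, and prediction Y, and evaluates I(E; Y | U) by direct summation.

import numpy as np

# Hypothetical toy distribution (not from the talk): a joint table p[u, e, y]
# over user background U, explanation E and prediction Y.
rng = np.random.default_rng(0)
p = rng.random((2, 3, 4))   # shape: (|U|, |E|, |Y|)
p /= p.sum()                # normalise to a valid joint distribution

def conditional_mutual_information(p):
    """I(E; Y | U) in nats for a joint distribution p[u, e, y]."""
    p_u  = p.sum(axis=(1, 2), keepdims=True)   # p(u)
    p_ue = p.sum(axis=2, keepdims=True)        # p(u, e)
    p_uy = p.sum(axis=1, keepdims=True)        # p(u, y)
    # I(E; Y | U) = sum_{u,e,y} p(u,e,y) * log( p(u) p(u,e,y) / (p(u,e) p(u,y)) )
    ratio = p_u * p / (p_ue * p_uy)
    return float(np.sum(p * np.log(ratio)))

print(conditional_mutual_information(p))

A larger value of I(E; Y | U) means that, averaged over user backgrounds, the explanation resolves more of the remaining uncertainty (“surprise”) about the prediction.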