Nura Kawa
Nura Kawa is a Research Scientist at neurocat (Berlin, Germany), a startup focused on the development of safe and secure AI systems. Her current research focus is the adversarial robustness of deep neural networks. She is also interested in privacy-preserving machine learning and explainable AI. Nura holds an MSc in Statistics and Data Science from KU Leuven (Leuven, Belgium) and a BA in Statistics from UC Berkeley (Berkeley, USA).
Sessions
Explainable Artificial Intelligence (XAI) is crucial for the development of responsible, trustworthy AI. Machine learning models such as deep neural networks can perform highly complex computational tasks at scale, but they do not reveal their decision-making process. This becomes problematic when such models are used to make high-stakes decisions, such as medical diagnoses, which require clear explanations in order to be trusted.
This talk discusses Explainable AI using examples of interest to both machine learning practitioners and non-technical audiences. The talk is not very technical; it does not focus on how to apply a particular method to a given model. Rather, it discusses the problem of explainability as a whole, namely: what the Explainability Problem is and why it must be solved, how recent academic literature addresses it, and how it will evolve with new legislation.
To get the most from this talk, the audience should have some familiarity with standard machine learning algorithms. However, no technical background is needed to grasp the key takeaways: the necessity of explainability in machine learning, the challenges of developing explainability methods, and the impact of XAI on businesses, practitioners, and end-users.