| Time | Title | Author(s) | Duration |
| --- | --- | --- | --- |
| 2.30-2.35 | Welcome and Introduction | Christos Diou & Vasilis Gkolemis | 5 minutes |
| 2.35-2.50 | Short introduction on the intersection of uncertainty and explainability in machine learning | Christos Diou | 15 minutes |
| 2.50-3.10 | Using Stochastic Methods to Setup High Precision Experiments | Kristina Veljković | 17' presentation + 3' questions |
| 3.10-3.30 | Using Part-based Representations for Explainable Deep Reinforcement Learning | Manos Kirtas | 17' presentation + 3' questions |
| 3.30-4.00 | Explaining an image classifier with a GAN conditioned by uncertainty | Adrien Le Coz | 7 minutes |
| | Identifying Trends in Feature Attributions during Training of Neural Networks | Elena Terzieva | 7 minutes |
| | Relation of Activity and Confidence when Training Deep Neural Networks | Valerie Krug | 7 minutes |
| | Temperature scaling for reliable uncertainty estimation: Application to automatic music genre classification | Hanna Lukashevich | 7 minutes |
| 4.00-4.30 | Coffee Break | | 30 minutes |
| 4.30-4.50 | Explainable Learning with Hierarchical Online Deterministic Annealing | Christos Mavridis | 17' presentation + 3' questions |
| 4.50-5.10 | Regionally Additive Models: Explainable-by-design models minimizing feature interactions | Vasilis Gkolemis | 17' presentation + 3' questions |
| 5.10-5.45 | FALE: Fairness aware ALE plots for auditing bias in subgroups | Giorgos Giannopoulos | 7 minutes |
| | Improving the Validity of Decision Trees as Explanations | Jiří Němeček | 7 minutes |
| | Towards Explainability in Monocular Depth Estimation | Vasileios Arampatzakis | 7 minutes |
| | Explaining uncertainty in AI for clinical decision support systems | Elisabeth Heremans | 7 minutes |
| | Designing a Method to Identify Explainability Requirements in Cancer Research | Didier Domínguez | 7 minutes |
| 5.45-6.00 | Poster session (poster dimensions: 75×200 cm, double-sided) | | 15 minutes |