Call for Papers (Deadline Extended to 23 June 2023, AoE)

We invite submissions to our workshop from researchers interested in Interpretable Machine Learning (IML) and/or Uncertainty Quantification. The scope covers all types of data (tabular, text, images, etc.) and all types of ML models. We particularly encourage interdisciplinary work that lies at the intersection of explainability and uncertainty.

The workshop’s topics of interest include (but are not limited to) methods and applications in:

  • Intersection of Explainability and Uncertainty:
    • Explainability methods that produce uncertain explanations
    • Explainability for probabilistic ML models
    • Explainability for Bayesian models
    • Explainability for ensemble methods
    • Identifying and explaining the sources of uncertainty
    • Interpretable-by-design models that incorporate uncertainty
  • Explainability:
    • XAI and IML (Interpretable Machine Learning)
    • Counterfactual Explanations
    • Global and Local Explainability Techniques
    • Interpretable-by-design models
    • Adversarial Attacks on explainability methods
    • Stability of explainability methods
    • AI model robustness and explainability
    • Explaining trade-offs between objectives, such as effectiveness, bias, and uncertainty
    • Explainability and privacy
    • Explainability and fairness

Submission Instructions:

  • Full Papers: Suitable for novel contributions related to Explainability and/or Uncertainty. This can include a novel method or new insights into these two fields that would be valuable for the community. Please note that the page limit for full papers is 14 pages, excluding references.

  • Extended Abstracts: Suitable for discussing novel ideas related to Explainability and/or Uncertainty. This can include open research challenges or industrial applications to foster discussion among panelists and facilitate future collaborations. The page limit for the extended abstract is 2-4 pages, excluding references.

  • Abstracts of already published work: Suitable for discussing previously published work related to Explainability and/or Uncertainty. The page limit for these abstracts is 2 pages, excluding references.


Instructions to authors:

  • To submit your papers, please use this link: https://cmt3.research.microsoft.com/ECMLPKDDworkshop2023/Track/3/Submission/Create
  • Post-workshop proceedings will be published by Springer in the Communications in Computer and Information Science (CCIS) series. The proceedings will be organized by focused scope and may be indexed by Web of Science (WoS). Authors can choose to opt in or opt out of inclusion in the proceedings.
  • Papers must be written in English and formatted in accordance with the Springer Lecture Notes in Computer Science (LNCS) format, using this template.
  • Up to 10 MB of additional material (e.g. proofs, audio, images, video, data, or source code) can be uploaded with your submission. The reviewers and the program committee reserve the right to judge the submission solely on the basis of the main paper; looking at any additional material is at the discretion of the reviewers and is not required.
  • At least one author of each accepted paper is required to attend the workshop. For accepted papers, we plan to hold regular talks and additional poster presentations to foster further discussion, depending on local venue capacity.