How to Evaluate Explainability? – A Case for Three Criteria
The increasing complexity of software systems and the influence of software-supported decisions in our society have sparked the need for software that is safe, reliable, and fair. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) that has a significant impact on system quality. To develop explainable systems, however, we need to understand when a system satisfies this NFR, which requires appropriate evaluation methods. Yet the field is crowded with evaluation methods, and there is no consensus on which are the "right" ones. Worse, there is not even agreement on which criteria should be evaluated. In this vision paper, we provide a multidisciplinary motivation for three such quality criteria concerning the information that systems should provide: comprehensibility, fidelity, and assessability. Our aim is to fuel the discussion regarding these criteria, such that adequate evaluation methods for them will be conceived.