Do ML Experts Discuss Explainability for AI Systems? A discussion case in the industry for a domain-specific solution

02/27/2020
by   Juliana Jansen Ferreira, et al.

The application of Artificial Intelligence (AI) tools in different domains is becoming mandatory for companies wishing to excel in their industries. One major challenge for the successful application of AI is combining machine learning (ML) expertise with domain knowledge to obtain the best results from AI tools. Domain specialists understand the data and how it can impact their decisions. ML experts can apply AI-based tools to large amounts of data and generate insights for domain experts. But without a deep understanding of the data, ML experts cannot tune their models to achieve optimal results for a specific domain. Therefore, domain experts are key users of ML tools, and the explainability of those AI tools becomes an essential feature in that context. There have been many research efforts on AI explainability for different contexts, users, and goals. In this position paper, we discuss interesting findings about how ML experts express concerns about AI explainability while defining the features of an ML tool to be developed for a specific domain. We analyze data from two brainstorming sessions held to discuss the functionalities of an ML tool to support geoscientists (domain experts) in analyzing seismic data (domain-specific data) with ML resources.
