The Role of Fuzzy Sets and Systems in Explainable Artificial Intelligence Applications

University of Santiago de Compostela, Spain

Quoting from the 2016 challenge posed by the US Defense Advanced Research Projects Agency (DARPA), "Even though current artificial intelligence (AI) systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans." In addition, since most AI applications interact with humans, ethical and legal issues become essential. For instance, explanation is highlighted in the ACM Code of Ethics as a basic principle in the search for "Algorithmic Transparency and Accountability". Moreover, the new European General Data Protection Regulation (GDPR), which takes effect in May 2018, grants European citizens a "Right to Explanation", no matter whether decisions are made by humans or by AI agents. Thus, people and companies demand a new generation of eXplainable AI (XAI) systems, i.e., AI systems ready to explain their automatic decisions in a human-like fashion. Such systems are expected to interact naturally with humans, providing comprehensible explanations of the decisions they make automatically.

In this talk, I will explain how, building on my previous background as a designer of interpretable fuzzy systems, I have moved a step forward in the generation of explainable AI systems. Firstly, I will briefly review current trends in XAI. Then, I will sketch how certain computational intelligence techniques, namely interpretable fuzzy systems, are ready to play a key role in the development of XAI systems. Next, I will introduce a preliminary approach for building explainable fuzzy systems, based on combining interpretable fuzzy systems with natural language generation systems; a minimal sketch of this combination is given below. Afterwards, I will discuss several application examples. Finally, the talk will end with some conclusions and my roadmap for the near future.
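To illustrate the idea of pairing an interpretable fuzzy system with natural language generation, the following is a minimal sketch in plain Python. All specifics (the linguistic terms, the single Mamdani-style rule, and the template-based verbalization) are illustrative assumptions for this sketch, not the system presented in the talk.

```python
# Minimal sketch: an interpretable fuzzy rule fires on an input, and a
# template-based natural language generator verbalizes why. All terms,
# rules, and templates below are illustrative assumptions.

def triangular(a, b, c):
    """Return a triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Linguistic variable "temperature" partitioned into interpretable terms.
TEMPERATURE_TERMS = {
    "cold": triangular(-10.0, 0.0, 12.0),
    "warm": triangular(8.0, 18.0, 28.0),
    "hot":  triangular(24.0, 35.0, 45.0),
}

# One Mamdani-style rule: IF temperature IS hot THEN fan_speed IS high.
RULE = {"antecedent": ("temperature", "hot"),
        "consequent": ("fan_speed", "high")}

def explain(x):
    """Fire the rule on input x and verbalize the decision in natural language."""
    variable, term = RULE["antecedent"]
    strength = TEMPERATURE_TERMS[term](x)
    out_var, out_term = RULE["consequent"]
    if strength == 0.0:
        return f"No decision: {variable}={x} is not '{term}' at all."
    # Map the firing strength to a linguistic hedge for the explanation.
    hedge = ("definitely" if strength > 0.8
             else "fairly" if strength > 0.4
             else "slightly")
    return (f"Because {variable}={x} is {hedge} '{term}' "
            f"(membership {strength:.2f}), {out_var} was set to '{out_term}'.")

print(explain(30.0))
# -> "Because temperature=30.0 is fairly 'hot' (membership 0.55),
#     fan_speed was set to 'high'."
```

The design choice here mirrors the combination described above: the fuzzy partition keeps the model interpretable (each term has a clear linguistic meaning), while the generation step turns rule firings into human-readable justifications rather than exposing raw membership values alone.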