A Novel Approach to Explainable AI using Formal Concept Lattice
Bhaskaran Venkatsubramaniam1, Pallav Kumar Baruah2

1Bhaskaran Venkatsubramaniam*, Department of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, Muddenahalli (Karnataka), India. 
2Prof. Pallav Kumar Baruah, Department of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, Puttaparthi (Andhra Pradesh), India.
Manuscript received on 26 May 2022. | Revised Manuscript received on 02 June 2022. | Manuscript published on 30 June 2022. | PP: 36-48 | Volume-11 Issue-7, June 2022. | Retrieval Number: 100.1/ijitee.G99920611722 | DOI: 10.35940/ijitee.G9992.0611722
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Current approaches in explainable AI either use an interpretable model to approximate a black-box model or use gradient techniques to determine the salient parts of the input. While such approaches provide intuition about the black-box model, the primary purpose of an explanation is to be exact at an individual instance as well as from a global perspective, which is difficult to achieve with model-based approximations or salient parts. Traditional, deterministic approaches satisfy this primary purpose of explainability, being exact both at an individual instance and globally, but pose a challenge in scaling to large amounts of data. In this work, we propose a novel, deterministic approach to explainability for classification problems using a formal concept lattice, which yields accurate explanations both globally and locally, including the generation of similar and contrastive examples around an instance. The technique consists of preliminary lattice construction, synthetic data generation using implications from the preliminary lattice, and construction of the actual lattice, which is then used to generate local, global, similar and contrastive explanations. Its credibility is established using sanity tests such as implementation invariance, input transformation invariance, model parameter randomization sensitivity, and model-outcome relationship randomization sensitivity. Explanations from the lattice are compared to those of a white-box model to establish its trustworthiness.
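The abstract's core building block, a formal concept lattice, can be illustrated with a minimal sketch. The following is not the paper's implementation; it is a naive enumeration of the formal concepts of a small, hypothetical binary context (objects and their attributes), using the standard Formal Concept Analysis derivation operators. The context and all names are illustrative assumptions.

```python
from itertools import combinations

# Hypothetical toy context: each object maps to its set of attributes.
context = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"b", "c"},
}

def extent(attrs):
    """Derivation: objects possessing every attribute in `attrs`."""
    return {o for o, atts in context.items() if attrs <= atts}

def intent(objs):
    """Derivation: attributes shared by every object in `objs`."""
    all_attrs = set().union(*context.values())
    if not objs:
        return all_attrs  # by convention, the empty object set has full intent
    return {a for a in all_attrs if all(a in context[o] for o in objs)}

def concepts():
    """Enumerate all formal concepts (extent, intent) by closing every
    subset of objects. Exponential, so only suitable for toy contexts;
    scalable algorithms (e.g. NextClosure) exist for real data."""
    objs = list(context)
    seen, result = set(), []
    for r in range(len(objs) + 1):
        for combo in combinations(objs, r):
            ext = extent(intent(set(combo)))  # closure of the object subset
            if frozenset(ext) not in seen:
                seen.add(frozenset(ext))
                result.append((ext, intent(ext)))
    return result
```

Ordering these concepts by inclusion of their extents yields the concept lattice; implications between attribute sets (used in the paper's synthetic data generation step) can then be read off the lattice structure.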
Keywords: Explainable AI, Deterministic methods for XAI, Concept Lattice, Formal Concept Analysis, Lattice explanation for black box models, Lattice for XAI, XAI.
Scope of the Article: Artificial Intelligence