BRIDGING THE INTERPRETABILITY GAP: EXPLORING EXPLAINABLE AI IN DATA ANALYTICS

Authors

  • K Aishwarya Pill, Product Manager, SiriusMindshare, United States
  • Prakash Somasundaram, Lead Software Engineer, Alteryx, Inc., United States

Keywords:

Accuracy, AI Models, Artificial Intelligence, Data Analytics, Decision Making, Explainable AI, Interpretability

Abstract

This research paper explores the nexus between data analytics and artificial intelligence (AI), with an emphasis on Explainable AI (XAI) as a means of closing the interpretability gap. At a time when sophisticated AI models frequently operate as "black boxes," understanding how they reach decisions is critical. The paper addresses the challenge of balancing the accuracy of AI models against the interpretability stakeholders need in order to trust and comprehend the insights derived from data analytics. The study surveys a range of approaches and techniques used in Explainable AI, showing how interpretability can be incorporated into complex AI models. By analyzing the evolving landscape of explainability techniques, the paper evaluates their effectiveness in making AI-driven data analytics more transparent and interpretable. The ethical implications of deploying AI models are also examined, with a focus on transparency, accountability, and user trust. The work lays the groundwork for future developments by offering insights into the changing field of Explainable AI and its role in resolving interpretability issues. In summary, the study positions Explainable AI as the bridge that preserves accurate outcomes while fostering a deeper understanding of how decisions are made in data analytics.
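To make the "black box" idea concrete: many model-agnostic explanation techniques (such as the LIME- and SHAP-style methods cited below) probe an opaque model by perturbing its inputs and observing how the prediction changes. The following is a minimal, simplified sketch of that perturbation idea; the model, feature names, and function names here are illustrative stand-ins invented for this example, not methods or data from the paper.

```python
# Minimal sketch of a model-agnostic local explanation:
# perturb each input feature of an opaque model and record
# how much the prediction shifts per unit of perturbation.

def black_box_model(features):
    """Stand-in for an opaque predictor (e.g. a risk score)."""
    income, debt, age = features
    return 0.5 * income - 0.8 * debt + 0.1 * age

def local_sensitivity(model, instance, eps=1.0):
    """Score each feature by the prediction change under a small perturbation."""
    base = model(instance)
    scores = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] += eps  # nudge one feature, hold the rest fixed
        scores.append((model(perturbed) - base) / eps)
    return scores

explanation = local_sensitivity(black_box_model, [50.0, 20.0, 35.0])
print(explanation)  # roughly [0.5, -0.8, 0.1]: debt pushes the score down most
```

Production techniques such as SHAP refine this naive sensitivity probe with principled sampling and game-theoretic attribution, but the underlying contract is the same: the explainer needs only query access to the model, not its internals.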

References

Feiyu Xu, Hans Uszkoreit, Yangzhou Du, Wei Fan, Dongyan Zhao, and Jun Zhu, "Explainable AI: A brief survey on history, research areas, approaches and challenges," in: Natural Language Processing and Chinese Computing (NLPCC 2019), Lecture Notes in Computer Science, Springer, 2019. https://doi.org/10.1007/978-3-030-32236-6_51.

F.K. Došilović, M. Brčić, N. Hlupić, "Explainable artificial intelligence: A survey," in: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 0210–0215, May 2018. https://doi.org/10.23919/MIPRO.2018.8400040.

B. Goodman, S. Flaxman, "European Union regulations on algorithmic decision-making and a 'right to explanation'," AI Magazine, vol. 38, no. 3, pp. 50–57, 2017. https://doi.org/10.1609/aimag.v38i3.2741.

Z.C. Lipton, "The mythos of model interpretability," ACM Queue, vol. 16, no. 3, 2018. https://doi.org/10.1145/3233231.

Parisineni Sai Ram Aditya, Mayukha Pal, "Enhancing trust and interpretability of complex machine learning models using local interpretable model-agnostic shap explanations," International Journal of Data Science and Analytics, October 2023. https://doi.org/10.1007/s41060-023-00458-w.

Published

2024-01-17

How to Cite

BRIDGING THE INTERPRETABILITY GAP: EXPLORING EXPLAINABLE AI IN DATA ANALYTICS. (2024). INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE & MACHINE LEARNING (IJAIML), 3(01), 13-18. https://mylib.in/index.php/IJAIML/article/view/IJAIML_03_01_002