
International Journal of Multidisciplinary Research and Growth Evaluation

ISSN: (Print) | 2582-7138 (Online) | Impact Factor: 9.54 | Open Access

The Role of Explainable AI in Enhancing Trust and Transparency in Supply Chain Risk Mitigation


Abstract

The increasing complexity of global supply chains, combined with the opacity of traditional AI models, has created a need for Explainable Artificial Intelligence (XAI) to improve trust and transparency in risk mitigation. This systematic review investigates the effect of XAI techniques on stakeholder trust and decision-making transparency in supply chain risk management. A rigorous selection process drawing on Scopus, Web of Science, IEEE Xplore, and other databases identified 21 studies from an initial 160 records. Studies were included if they applied XAI, empirically or conceptually, in a supply chain risk context. The findings show that post-hoc explanation methods such as SHAP, LIME, and counterfactual explanations are widely used in transportation, supplier assessment, cyber resilience, and fraud detection applications. Integrating XAI consistently improves the interpretability of AI decision outcomes, builds stakeholder trust, and supports compliance with regulatory and governance standards. However, many applications remain at the concept or prototype stage, with few real-world longitudinal studies. The synthesized findings highlight the need for industry-specific XAI frameworks that are scalable, adaptive, and sensitive to the varied interpretability requirements of supply chain stakeholders. Limitations include the qualitative nature of the synthesis, variation in study designs, and limited empirical validation of long-term trust effects. Future research should pursue longitudinal case studies, scalability testing across complex global supply chains, and further work on human-AI collaboration dynamics to establish trustworthy and explainable AI systems in supply chain management.

How to Cite This Article

Josephine Akubilla, Oluwafunmilayo Ifeoluwa Somoye, Fatomi Abiodun, Oreoluwa Abimbola Serifat (2025). The Role of Explainable AI in Enhancing Trust and Transparency in Supply Chain Risk Mitigation. International Journal of Multidisciplinary Research and Growth Evaluation (IJMRGE), 6(3), 367-377. DOI: https://doi.org/10.54660/.IJMRGE.2025.6.3.367-377
