The importance of Explainable Artificial Intelligence (XAI) is growing rapidly across manufacturing industries, as organisations focus on developing automated products effectively while reducing implementation costs. This special issue centres on solving problems related to the applications of XAI in several domains, such as the Banking and Finance Sector (BFS), image classification, feature extraction, and more. Addressing real-time problems in industrial and business-management processes requires a decision-making approach that is both perceptible and reproducible. Integrating XAI into real-world applications will open new perspectives on future demands, including appropriate learning formalisms, interpretation and explanation techniques, their metrics, and the corresponding assessment options.

Although AI is already applied in current manufacturing processes, trustworthy results cannot always be obtained at critical moments. XAI therefore offers manufacturing industries an opportunity to meet these expectations through distinct algorithms, such as machine learning, sub-optimal routing, and non-linear methods. XAI can be characterised by parameters such as accuracy (exactitude), fairness (impartiality rate), and transparency (limpidity ratio), on the basis of which decisions can be processed to build confidence in primary areas related to BFS. With XAI, developers can verify at the early stages whether a designed system functions according to its specifications, and compliance with regulatory standards can also be tested. Furthermore, in commercial sectors XAI enables troubleshooting by analysing the behaviour of deployed models, including automated processes.
Moreover, XAI significantly increases the speed of evaluation compared with conventional AI, displaying model behaviour in terms of positive and negative precision values. Interaction between humans and machines becomes possible through exportable documentation, and interpretability provides opportunities to correct biases present in machine-learning models trained on real datasets. In addition, adversarial predictions can enhance the robustness of XAI, and expressive variables that relate inputs to decision variables can be processed through model reasoning.
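To make the interpretability ideas above concrete, the following is a minimal illustrative sketch (not part of the call itself) of permutation feature importance, a simple model-agnostic explanation technique: the drop in accuracy after shuffling a feature column indicates how much the model relies on that feature. The model and data here are hypothetical stand-ins.

```python
import random

def accuracy(model, X, y):
    """Fraction of examples the model classifies correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Drop in accuracy after shuffling one feature column."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    column = [row[feature] for row in X]
    rng.shuffle(column)
    # Rebuild the dataset with the shuffled column in place.
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, column)]
    return baseline - accuracy(model, X_perm, y)

# Toy model: predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
model = lambda x: int(x[0] > 0.5)
X = [[0.1, 0.9], [0.9, 0.2], [0.3, 0.8], [0.7, 0.1]]
y = [0, 1, 0, 1]

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
# imp1 is exactly 0: the ignored feature contributes nothing to the decision.
```

Such explanations are one way the "transparency" parameter mentioned above can be quantified for a trained model.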
Objectives
The major objectives of the special issue are as follows:
Topics of interest include but are not limited to:
IMPORTANT DATES
Submissions Open: June 20, 2025
Submission Deadline: August 20, 2025
First Review Due: November 30, 2025
Revision Due: January 31, 2026
2nd Reviews Due: February 28, 2026
Final Manuscript Due: March 31, 2026
GUEST EDITORS
Lead Guest Editor:
Dr Shitharth Selvarajan
Lecturer in Cyber Security & Digital Forensics
School of Built Environment, Engineering and Computing, Leeds Beckett University, LS1 3HE Leeds, U.K.
Email: s.selvarajan@leedsbeckett.ac.uk
Co-Guest Editor:
Dr Farrukh Saleem
School of Built Environment, Engineering and Computing, Leeds Beckett University, LS1 3HE Leeds, U.K.
Email: f.saleem@leedsbeckett.ac.uk