Explainable Artificial Intelligence with Single Layer Feedforward Neural Network and Improved Crowned Porcupine Optimization Algorithm for Classification Problems

Authors

  • S. Caxton Emerald, Department of Computer Science, Pondicherry University, Puducherry, India
  • T. Vengattaraman, Department of Computer Science, Pondicherry University, Puducherry, India
Volume: 15 | Issue: 2 | Pages: 21593-21598 | April 2025 | https://doi.org/10.48084/etasr.10070

Abstract

The increasing occurrence of network intrusions calls for advanced Artificial Intelligence (AI) techniques to tackle classification challenges in Intrusion Detection Systems (IDSs). However, the complex decision-making processes of AI models often prevent human security professionals from fully understanding their behavior. Explainable AI (XAI) enhances trust in IDSs by providing transparency and assisting professionals in interpreting data and reasoning. This study explores AI techniques that improve both accuracy and interpretability, strengthening trust management in cybersecurity: integrating high performance with explainability improves decision-making and builds confidence in automated systems for classifying network intrusions. This study presents an Explainable Artificial Intelligence Kernel Extreme Learning Machine Improved with the Crowned Porcupine Optimization Algorithm (XAIKELM-ICPOA) approach. Initially, the proposed XAIKELM-ICPOA method preprocesses the data using min-max scaling to ensure uniformity and improve model performance. Next, the Kernel Extreme Learning Machine (KELM) model is employed for classification. The Improved Crowned Porcupine Optimization (ICPO) method is used to optimize the KELM hyperparameters, improving classification performance. Finally, SHapley Additive exPlanations (SHAP) is employed as an XAI technique to provide insights into feature contributions and decision-making processes. The XAIKELM-ICPOA method was evaluated on the NSL-KDD dataset, achieving an accuracy of 96.82%.
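The first two stages of the pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it applies standard min-max scaling, x' = (x − min)/(max − min), and a basic RBF-kernel KELM whose output weights follow the usual closed form β = (I/C + K)⁻¹T; the class name `KELM`, the toy data, and the hyperparameter values are assumptions for demonstration only (no ICPO tuning or SHAP step is shown).

```python
import numpy as np

def min_max_scale(X):
    # Rescale each feature to [0, 1]: x' = (x - min) / (max - min).
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / np.where(mx - mn == 0, 1, mx - mn)

def rbf_kernel(A, B, gamma):
    # K[i, j] = exp(-gamma * ||a_i - b_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class KELM:
    """Basic Kernel Extreme Learning Machine (RBF kernel)."""

    def __init__(self, C=1.0, gamma=1.0):
        self.C, self.gamma = C, gamma  # regularization strength, RBF width

    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]          # one-hot target matrix
        K = rbf_kernel(X, X, self.gamma)
        # Closed-form output weights: beta = (I/C + K)^-1 T
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, X):
        # Class scores for new samples; argmax gives the predicted label.
        return rbf_kernel(X, self.X, self.gamma) @ self.beta

# Toy two-class demo on hypothetical data (stand-in for NSL-KDD features).
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
model = KELM(C=10.0, gamma=2.0).fit(min_max_scale(X), y)
pred = model.predict(min_max_scale(X)).argmax(axis=1)
```

In the full method, C and gamma are the hyperparameters that ICPO would search over, and a SHAP explainer would then be applied to `model.predict` to attribute each classification to individual input features.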

Keywords:

explainable artificial intelligence, kernel extreme learning machine, intrusion detection, crowned porcupine optimization, min-max scaling




How to Cite

Emerald, S. C. and Vengattaraman, T. 2025. Explainable Artificial Intelligence with Single Layer Feedforward Neural Network and Improved Crowned Porcupine Optimization Algorithm for Classification Problems. Engineering, Technology & Applied Science Research. 15, 2 (Apr. 2025), 21593–21598. DOI: https://doi.org/10.48084/etasr.10070.
