Abstract:
Machine Learning (ML)-based Intrusion Detection Systems (IDS) are integral to securing modern IoT networks, yet they often suffer from a lack of transparency, functioning as "black boxes" with opaque decision-making processes. This study proposes enhancing IDS through the integration of Explainable Artificial Intelligence (XAI), aiming to improve the interpretability and trustworthiness of the ML models used for intrusion detection. Using the UNSW-NB15 dataset, which includes diverse attack types, we developed and evaluated several ML models, namely Decision Trees, Multi-Layer Perceptron (MLP), XGBoost, Random Forest, CatBoost, Logistic Regression, and Gaussian Naive Bayes, and incorporated XAI techniques such as LIME, SHAP, and ELI5 to explain their predictions. The study found that integrating XAI significantly enhances the transparency of these models without compromising their predictive performance. Among the evaluated models, XGBoost and MLP demonstrated superior accuracy while providing valuable insights into feature importance and decision processes. This improved interpretability allows human analysts to better understand and trust the IDS, facilitating more effective responses to potential security threats. The findings offer practical implications for the design and deployment of IDS in IoT networks. By bridging the gap between high accuracy and explainability, this study contributes to the growing body of work in explainable cybersecurity and presents a path forward for the development of more interpretable and reliable ML-based security solutions. Future work could explore further advancements in XAI techniques and their application to more complex datasets and network environments.
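To make the described workflow concrete, the following is a minimal sketch (not the authors' exact pipeline) of how one of the evaluated classifiers, XGBoost, could be trained on tabular UNSW-NB15 flow features and explained with SHAP. The CSV path, column names, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only: train an XGBoost classifier on preprocessed
# UNSW-NB15 flow features and explain its predictions with SHAP.
# The file name, "label" column, and hyperparameters are assumptions.
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Hypothetical preprocessed dataset with numeric features and a binary "label"
df = pd.read_csv("unsw_nb15_preprocessed.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
model.fit(X_train, y_train)

# TreeExplainer computes per-feature SHAP values for each prediction,
# indicating which features pushed the model toward "attack" or "normal".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: mean absolute SHAP value per feature (feature importance)
shap.summary_plot(shap_values, X_test, plot_type="bar")

# Local view: contribution of each feature to a single flagged flow
shap.force_plot(explainer.expected_value, shap_values[0, :], X_test.iloc[0, :])
```

The same trained model can be passed to LIME or inspected with ELI5 for complementary local and global explanations; the choice of explainer does not affect the classifier's predictive performance.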