OpenHub Repository

Improving Intrusion Detection Systems with Machine Learning and Interpretable Models: A focus on explainable artificial intelligence (XAI) for model transparency and interpretability

dc.contributor.author Mohale, Vincent
dc.date.accessioned 2025-08-11T10:59:14Z
dc.date.available 2025-08-11T10:59:14Z
dc.date.issued 2024
dc.identifier.uri http://hdl.handle.net/20.500.12821/574
dc.description.abstract Machine Learning (ML)-based Intrusion Detection Systems (IDS) are integral to securing modern IoT networks, yet they often suffer from a lack of transparency, functioning as "black boxes" with opaque decision-making processes. This study proposes enhancing IDS through the integration of Explainable Artificial Intelligence (XAI), aiming to improve the interpretability and trustworthiness of ML models used for intrusion detection. Using the UNSW-NB15 dataset, which includes diverse attack types, we developed and evaluated several ML models: Decision Trees, Multi-Layer Perceptron (MLP), XGBoost, Random Forest, CatBoost, Logistic Regression, and Gaussian Naive Bayes, and incorporated XAI techniques such as LIME, SHAP, and ELI5 to explain their predictions. The study found that integrating XAI significantly enhances the transparency of these models without compromising their predictive performance. Among the evaluated models, XGBoost and MLP in particular demonstrated superior accuracy while providing valuable insights into feature importance and decision processes. This improved interpretability allows human analysts to better understand and trust the IDS, facilitating more effective responses to potential security threats. The findings of this research offer practical implications for enhancing the design and deployment of IDS in IoT networks. By bridging the gap between high accuracy and explainability, this study contributes to the growing body of work in explainable cybersecurity, presenting a path forward for the development of more interpretable and reliable ML-based security solutions. Future work could explore further advancements in XAI techniques and their application to more complex datasets and network environments. en_US
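
The workflow the abstract describes (train a classifier on UNSW-NB15 features, then attribute its predictions with an XAI technique such as SHAP) can be illustrated with a minimal sketch. The snippet below is not taken from the thesis: the data is a synthetic stand-in for the preprocessed UNSW-NB15 features, and the model settings are illustrative defaults.

```python
# Minimal sketch of the XAI-for-IDS workflow described in the abstract
# (not the thesis's actual pipeline). Synthetic data stands in for the
# preprocessed UNSW-NB15 features; labels mark "attack" (1) vs "normal" (0).
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))           # placeholder for UNSW-NB15 features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # placeholder attack/normal labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# TreeExplainer computes per-feature SHAP attributions for each prediction,
# showing which features pushed a given flow toward "attack" or "normal".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```

The same pattern extends to the other explainers named in the abstract: LIME's LimeTabularExplainer yields local, model-agnostic explanations for individual predictions, while ELI5's explain_weights summarizes global feature weights.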
dc.language.iso en en_US
dc.publisher Sol Plaatje University en_US
dc.subject Intrusion Detection Systems (IDS) en_US
dc.subject Explainable Artificial Intelligence en_US
dc.subject Model interpretability en_US
dc.subject SHAP en_US
dc.subject ELI5 en_US
dc.subject LIME en_US
dc.subject Machine Learning (ML) en_US
dc.subject Computer Science, special computer methods en_US
dc.subject Artificial intelligence, genetic algorithms en_US
dc.subject Data encryption, malware en_US
dc.subject Data security en_US
dc.subject Computer network security, access control applications en_US
dc.subject Cybersecurity en_US
dc.title Improving Intrusion Detection Systems with Machine Learning and Interpretable Models: A focus on explainable artificial intelligence (XAI) for model transparency and interpretability en_US
dc.type Thesis en_US

