- Title:
- Explainable deep neural network-based analysis on intrusion-detection systems
- Authors:
-
Pande, Sagar Dhanraj
Khamparia, Aditya
- Links:
- https://bibliotekanauki.pl/articles/27312883.pdf
- Publication date:
- 2023
- Publisher:
- Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie. Wydawnictwo AGH
- Topics:
-
IDS
deep neural network
explainable AI
NSL-KDD
local explainability
global explainability
- Description:
- Research on intrusion-detection systems (IDSs) has been increasing in recent years. In particular, this research widely applies machine-learning concepts, which have proven effective for IDSs; deep neural network-based models in particular have improved detection rates. At the same time, these models are becoming very complex, and users are unable to trace the reasons behind the decisions that are made; this indicates the need to identify the explanations behind those decisions in order to ensure the interpretability of the framed model. In this respect, this article proposes a model that can explain the predictions it produces. The proposed framework combines a conventional deep neural network-based IDS with interpretability of the model's predictions. It employs Shapley additive explanations (SHAP), which combines local and global explainability to enhance the interpretation of IDS decisions. The proposed model was implemented on popular data sets (NSL-KDD and UNSW-NB15), and the performance of the framework was evaluated by accuracy, achieving 99.99% and 99.96%, respectively. The framework can identify the top-4 features using local explainability and the top-20 features using global explainability.
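The local explanations referenced in the abstract rest on Shapley values: each feature's attribution is its average marginal contribution to the model output over all feature coalitions. The following is a minimal self-contained sketch of that computation by exact enumeration over a hypothetical toy "IDS score" function; the feature names, scoring function, and baseline are illustrative assumptions, not the paper's NSL-KDD pipeline (which uses a trained deep neural network and the SHAP library).

```python
from itertools import combinations
from math import factorial

# Illustrative traffic features (hypothetical, not the paper's feature set)
FEATURES = ["duration", "src_bytes", "failed_logins"]

def model(x):
    # Stand-in linear scoring function; the paper uses a trained DNN here.
    return 0.2 * x["duration"] + 0.5 * x["src_bytes"] + 2.0 * x["failed_logins"]

# Reference input representing "typical" traffic; absent features are
# replaced by these baseline values when a coalition excludes them.
BASELINE = {"duration": 1.0, "src_bytes": 1.0, "failed_logins": 0.0}

def shapley_values(x):
    """Exact Shapley values: weighted average of each feature's marginal
    contribution over all subsets of the remaining features."""
    n = len(FEATURES)
    phi = {}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: x[g] if (g in subset or g == f) else BASELINE[g]
                          for g in FEATURES}
                without_f = {g: x[g] if g in subset else BASELINE[g]
                             for g in FEATURES}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

# Local explanation for a single (hypothetical) connection record:
sample = {"duration": 3.0, "src_bytes": 10.0, "failed_logins": 4.0}
phi = shapley_values(sample)
print(phi)  # attributions sum to model(sample) - model(BASELINE)
```

Ranking features by |phi| for one sample gives a local explanation (the paper's top-4 features); averaging |phi| across many samples gives a global ranking (the paper's top-20 features). The SHAP library computes approximations of these values efficiently for deep networks, where exact enumeration is infeasible.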
- Source:
-
Computer Science; 2023, 24 (1); 97--111
1508-2806
2300-7036
- Appears in:
- Computer Science
- Content provider:
- Biblioteka Nauki