Introduction
The cybersecurity community is slowly leveraging machine learning (ML) to combat ever-evolving threats. One of the biggest drivers for the successful adoption of these models is how well domain experts and users can understand and trust their functionality. Despite the growing popularity of machine learning models in cybersecurity applications (e.g., intrusion detection systems (IDS)), most models are perceived as a black box. As these black-box models are employed to make meaningful predictions, the stakeholders' demand for transparency and explainability increases. Explanations supporting the output of ML models are crucial in cybersecurity, where experts require far more information from the model than a simple binary output for their analysis.