
Explainable AI for Malware Hunting

Mohd Saqib
6 min read · Nov 16, 2024

There are many compelling examples of AI applied effectively to real-world data, especially in images, natural language processing (NLP), and other multimedia tasks. When it comes to binary data, however, the field remains largely unexplored. In this article, we'll dive into the potential of Explainable AI (XAI) for analyzing malicious binaries and the insights it can offer for cybersecurity.


Typically, malware analysis works with binary files because source code is rarely available. Most XAI algorithms are designed for NLP or image data, not for binary data. Even if we adapt these algorithms to highlight a malicious byte sequence within a large binary, the result often provides little actionable insight.
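To make this concrete, here is a minimal sketch of one way such an adaptation might look: occlusion-based attribution over raw bytes, where we zero out windows of the binary and measure how much a classifier's score drops. Everything here is hypothetical, not the author's method: `toy_score` is a stand-in for a real malware classifier (it just counts occurrences of a two-byte pattern), and the window size is arbitrary.

```python
import numpy as np

def toy_score(data: np.ndarray) -> float:
    # Hypothetical stand-in for a malware classifier's score:
    # counts occurrences of a "suspicious" two-byte pattern.
    pattern = (0x4D, 0x5A)  # 'MZ', chosen arbitrarily for illustration
    hits = 0
    for i in range(len(data) - 1):
        if data[i] == pattern[0] and data[i + 1] == pattern[1]:
            hits += 1
    return float(hits)

def occlusion_attribution(data: np.ndarray, score_fn, window: int = 4) -> np.ndarray:
    """Attribute importance to byte regions by occlusion: zero out each
    window of bytes and record how much the score drops."""
    base = score_fn(data)
    attributions = np.zeros(len(data))
    for start in range(0, len(data), window):
        occluded = data.copy()
        occluded[start:start + window] = 0
        attributions[start:start + window] = base - score_fn(occluded)
    return attributions
```

A region whose occlusion causes a large score drop is flagged as important. As the article notes, though, "byte 8-11 mattered" is rarely a meaningful clue for an analyst on its own, which is exactly why binary-specific explainability is needed.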


So, we need to customize our explainability methods to provide meaningful clues to malware analysts and reduce their workload. Malware analysis using XAI can generally be broken down into static, dynamic, and hybrid approaches.
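As one illustration of the static side of that breakdown, the sketch below extracts a classic static feature, a normalized byte-value histogram, and explains a linear classifier's decision by its per-feature contributions (weight times feature value). Both the feature choice and the weights are assumptions for illustration; a real pipeline would use a trained model and richer features.

```python
import numpy as np

def byte_histogram(data: np.ndarray) -> np.ndarray:
    """Static feature vector: normalized frequency of each byte value 0-255."""
    hist = np.bincount(data, minlength=256).astype(float)
    return hist / max(len(data), 1)

def explain_linear(features: np.ndarray, weights: np.ndarray, top_k: int = 3):
    """For a (hypothetical) linear classifier, the contribution of feature i
    is weight_i * feature_i; return the top-k features by absolute contribution."""
    contrib = weights * features
    top = np.argsort(-np.abs(contrib))[:top_k]
    return [(int(i), float(contrib[i])) for i in top]
```

For a linear model these contributions are the explanation, which is why interpretable-by-design models are a common baseline before reaching for post-hoc XAI methods.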


