Deep learning (DL) is a powerful technique for many real-time applications, but it is vulnerable to adversarial attacks. Herein, we consider DL-based modulation classification, with the objective of creating DL models that are robust against such attacks. Specifically, we introduce three defense techniques: i) randomized smoothing, ii) hybrid projected gradient descent (PGD) adversarial training, and iii) fast adversarial training, and evaluate them under both white-box (WB) and black-box (BB) attacks. We show that the proposed fast adversarial training is more robust and computationally efficient than the other two techniques, and can produce models that are highly robust to practical (BB) attacks.
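The core idea of fast adversarial training is to replace the multi-step PGD inner loop with a single random-start FGSM step per training example, which is what makes it computationally cheap. A minimal sketch of that loop is below, using a toy logistic-regression classifier on synthetic two-class data as a hypothetical stand-in for the DL modulation classifier; all hyperparameters (`eps`, `alpha`, `lr`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary dataset: two Gaussian blobs (a hypothetical stand-in for
# real I/Q signal features used in modulation classification).
n, d = 200, 8
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grads(w, x, yi):
    """Gradients of the logistic loss w.r.t. the weights and the input."""
    p = sigmoid(x @ w)
    return (p - yi) * x, (p - yi) * w

eps, alpha, lr = 0.3, 0.375, 0.1   # illustrative hyperparameters
w = np.zeros(d)

for epoch in range(50):
    for i in range(n):
        x, yi = X[i], y[i]
        # Fast adversarial training: random start inside the eps-ball,
        # then ONE gradient-sign (FGSM) step instead of a PGD loop.
        delta = rng.uniform(-eps, eps, d)
        _, gx = grads(w, x + delta, yi)
        delta = np.clip(delta + alpha * np.sign(gx), -eps, eps)
        # Update the model on the adversarially perturbed input.
        gw, _ = grads(w, x + delta, yi)
        w -= lr * gw

# Clean accuracy of the adversarially trained model.
acc = np.mean((sigmoid(X @ w) > 0.5) == y)
```

The single perturbation step is why the method is faster than hybrid PGD adversarial training, whose inner loop repeats the sign-step many times per example.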
Funding: Security-Link; Start-Up Research Grant of IIT Guwahati