TY - GEN
T1 - Enhanced CNN Security based on Adversarial FGSM Attack Learning
T2 - 20th International Multi-Conference on Systems, Signals and Devices, SSD 2023
AU - Khriji, Lazhar
AU - Messaoud, Seifeddine
AU - Bouaafia, Soulef
AU - Ammari, Ahmed Chiheb
AU - Machhout, Mohsen
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Convolutional Neural Networks (CNNs) have grown in popularity for clinical image processing applications such as COVID-19 and cancer detection. Recent studies, however, show that adversarial attacks with small, imperceptible perturbations can compromise deep learning systems in healthcare. This raises safety concerns about deploying these technologies in healthcare settings. In this study, we first review the approaches used to defend against adversarial attacks on medical imaging. We then investigate the resilience of pre-trained CNN architectures, as well as of LeNet5 and MobileNetV1 models, against Fast Gradient Sign Method (FGSM) attacks on a chest X-ray dataset for a medical healthcare application. We find that pre-trained CNN models are far more sensitive to adversarial attacks than the other models, owing to key feature discrepancies between them and regular models. Finally, we propose to improve the security of CNN models through adversarial training. According to the numerical results, models with lower computational complexity and fewer layers are more robust against adversarial attacks than the larger models commonly used in medical healthcare systems.
AB - Convolutional Neural Networks (CNNs) have grown in popularity for clinical image processing applications such as COVID-19 and cancer detection. Recent studies, however, show that adversarial attacks with small, imperceptible perturbations can compromise deep learning systems in healthcare. This raises safety concerns about deploying these technologies in healthcare settings. In this study, we first review the approaches used to defend against adversarial attacks on medical imaging. We then investigate the resilience of pre-trained CNN architectures, as well as of LeNet5 and MobileNetV1 models, against Fast Gradient Sign Method (FGSM) attacks on a chest X-ray dataset for a medical healthcare application. We find that pre-trained CNN models are far more sensitive to adversarial attacks than the other models, owing to key feature discrepancies between them and regular models. Finally, we propose to improve the security of CNN models through adversarial training. According to the numerical results, models with lower computational complexity and fewer layers are more robust against adversarial attacks than the larger models commonly used in medical healthcare systems.
KW - Adversarial Attacks
KW - CNNs
KW - Medical Data
KW - Security and Privacy
UR - http://www.scopus.com/inward/record.url?scp=85185830335&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85185830335&partnerID=8YFLogxK
U2 - 10.1109/SSD58187.2023.10411241
DO - 10.1109/SSD58187.2023.10411241
M3 - Conference contribution
AN - SCOPUS:85185830335
T3 - 2023 20th International Multi-Conference on Systems, Signals and Devices, SSD 2023
SP - 360
EP - 365
BT - 2023 20th International Multi-Conference on Systems, Signals and Devices, SSD 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 20 February 2023 through 23 February 2023
ER -