

Acceleration of Adversarially Robust Models Based on Gradient Reuse

JIAN Yukun

(School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan 232001, China)

Abstract: Deep neural networks are vulnerable to adversarial examples, and adversarial training is an effective method for improving a network's adversarial robustness. However, a model with good adversarial robustness requires a larger capacity than a naturally trained model, which increases its storage and computational costs. This paper applies an acceleration scheme for adversarial training to the compression of adversarially robust models, thereby speeding up the compression process. Tests on the MNIST dataset show that the proposed approach achieves a certain improvement in the compression and acceleration of adversarially robust models.

Keywords: deep learning; adversarial example; model compression
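
For context, the gradient-reuse idea behind this kind of acceleration follows the "free" adversarial training of Shafahi et al.: the backward pass of each minibatch yields gradients with respect to both the weights and the input, so the input gradient can update the adversarial perturbation while the weight gradient updates the model, at roughly the cost of natural training. Below is a minimal PyTorch sketch of that idea, not the paper's actual implementation; the names model, loader, optimizer, epsilon, and m_replays are illustrative assumptions.

# A minimal sketch of gradient-reuse ("free") adversarial training in PyTorch.
# `model`, `loader`, `optimizer`, `epsilon`, `m_replays` are placeholders,
# not the implementation used in this paper.
import torch
import torch.nn.functional as F

def free_adversarial_train(model, loader, optimizer, epsilon=0.3, m_replays=4, device="cpu"):
    # Each minibatch is replayed m_replays times. A single backward pass
    # yields gradients w.r.t. both the weights and the input, so the input
    # gradient is reused to update the perturbation "for free".
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # Perturbation tensor (the original algorithm carries it across
        # minibatches; it is re-initialized per batch here for simplicity).
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(m_replays):
            loss = F.cross_entropy(model(x + delta), y)
            optimizer.zero_grad()
            loss.backward()  # one pass: weight gradients + input gradient
            with torch.no_grad():
                # FGSM-style step on the perturbation, reusing delta.grad
                # (clamping to the valid pixel range is omitted for brevity).
                delta += epsilon * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
            optimizer.step()  # weight update from the same backward pass

Since every minibatch is replayed m_replays times, the number of training epochs is usually divided by m_replays so that the total cost stays close to that of natural training.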



DOI:10.19850/j.cnki.2096-4706.2022.012.023


CLC Number: TP391.4                                       Document Code: A                                Article ID: 2096-4706(2022)12-0089-03






About the author: JIAN Yukun (1999—), male, Han nationality, from Fuyang, Anhui; master's degree candidate; research interest: adversarial robustness in deep learning.