
Computer Technology, 2022, Issue 13




DOI:10.19850/j.cnki.2096-4706.2022.013.020


CLC number: TP391.4                                       Document code: A                                  Article ID: 2096-4706(2022)13-0082-05


Study on the Identification of COVID-19 by Lightweight Network MHNet

HOU Linshuo, WANG Yin, LONG Qihang, LI Yuxiang, MA Shukang

(College of Mechatronics and Information Engineering, China University of Mining and Technology-Beijing, Beijing 100083, China)

Abstract: To diagnose COVID-19 patients quickly and accurately, this paper builds a new lightweight network, MHNet, by drawing on the MobileNetV2 architecture, incorporating an attention network, improving the loss function, and following established design guidelines for efficient CNN architectures. Experiments on the public COVIDx CXR-2 dataset show that the network achieves an accuracy of 92%, recall of 99%, specificity of 85%, precision of 86.84%, F1 score of 92.52%, a model size of 3.91 MB, a CPU single-image inference time of 59.51 ms, and a GPU single-image inference time of 17.66 ms. Compared with other conventional networks, MHNet achieves a higher diagnostic rate and better diagnostic performance for patients infected with COVID-19.

Keywords: COVID-19; ECA-Net; Focal Loss; efficient CNN network design guidelines
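The keywords mention Focal Loss among the improvements to the loss function. As a hedged illustration only (the abstract does not give the paper's exact formulation), a minimal sketch of the standard binary focal loss of Lin et al. might look like:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single example (illustrative sketch).

    p: predicted probability of the positive class, in (0, 1).
    y: ground-truth label, 0 or 1.
    The (1 - p_t)**gamma factor down-weights easy, well-classified
    examples so training focuses on hard ones, which helps with the
    class imbalance typical of medical screening datasets.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 the modulating factor disappears and the expression reduces to alpha-weighted cross-entropy; the default gamma = 2 and alpha = 0.25 follow the original paper, not necessarily the settings used by MHNet.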




About the authors: HOU Linshuo (1998—), male, Han nationality, born in Nanyang, Henan, master's degree candidate, research interest: image processing; WANG Yin (1998—), male, Han nationality, born in Lianyungang, Jiangsu, master's degree candidate, research interest: image processing; LONG Qihang (1996—), male, Han nationality, born in Xiangyang, Hubei, master's degree candidate, research interest: image processing; LI Yuxiang (1998—), male, Han nationality, born in Zhuzhou, Hunan, master's degree candidate, research interest: image processing; MA Shukang (1998—), male, Han nationality, born in Linyi, Shandong, master's degree candidate, research interest: image processing.