Citation: ZHAO Yuanyuan, ZHU Jun, XIE Yakun, LI Weilian, GUO Yukun. A Real-Time Video Flame Detection Algorithm Based on Improved Yolo-v3[J]. Geomatics and Information Science of Wuhan University, 2021, 46(3): 326-334. DOI: 10.13203/j.whugis20190440

A Real-Time Video Flame Detection Algorithm Based on Improved Yolo-v3

Funds: 

National Natural Science Foundation of China 41871289

Scientific Research Project of Sichuan Provincial Department of Natural Resources KJ-2020-4

Sichuan Youth Science and Technology Innovation Team 2020JDTD0003

More Information
  • Author Bio:

    ZHAO Yuanyuan, postgraduate, specializes in virtual geographic environment and disaster scenario modeling. E-mail: 3011441848@qq.com

  • Corresponding author:

    ZHU Jun, PhD, professor. E-mail: vgezj@163.com

  • Received Date: August 04, 2020
  • Published Date: March 04, 2021
  •   Objectives  To address the low accuracy and slow speed of existing video-based flame detection methods, we propose a real-time flame detection algorithm based on an improved Yolo-v3 that achieves efficient, real-time detection of flames in video.
      Methods  First, in the feature extraction stage, we improve the multi-scale detection network: a new detection scale is added and multi-scale features are further fused, which strengthens the network's ability to learn shallow image information and enables accurate identification of small flames. Second, in the target detection stage, an improved K-means clustering algorithm is used to optimize the multi-scale prior boxes so that they adapt to the changing posture and shape of flames. Finally, after detecting flames with the improved Yolo-v3, the distinctive flicker characteristics of flames are used to re-check the video and eliminate false detections from the results, which further improves detection accuracy (illustrative code sketches of the anchor clustering and the flicker check follow the abstract).
      Results  To verify the effectiveness of the method, video flame detection is evaluated in terms of both accuracy and speed, and the results are compared with state-of-the-art flame detection methods from recent years. The average accuracy of our method reaches 98.5%, the false detection rate is as low as 2.3%, and the average detection speed is 52 frames/s, so the method performs better in both accuracy and speed.
      Conclusions  The effectiveness of the method is demonstrated through multiple sets of experiments. Compared with existing flame detection methods, our method can be applied more effectively to video flame detection.
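      The abstract mentions an improved K-means clustering of the prior boxes but does not detail the improvement. As a rough sketch only, and not the authors' implementation, the following Python snippet shows the widely used 1 - IoU distance K-means for deriving anchor sizes from ground-truth box dimensions; the function names, the median update rule, and the example data are illustrative assumptions.

          import numpy as np

          def iou_wh(boxes, anchors):
              # IoU between (w, h) pairs, assuming boxes and anchors share a top-left corner
              w = np.minimum(boxes[:, None, 0], anchors[None, :, 0])
              h = np.minimum(boxes[:, None, 1], anchors[None, :, 1])
              inter = w * h
              union = boxes[:, 0:1] * boxes[:, 1:2] + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
              return inter / union

          def kmeans_anchors(boxes, k=9, iters=100, seed=0):
              # Cluster ground-truth (w, h) pairs with the 1 - IoU distance to obtain anchor priors
              rng = np.random.default_rng(seed)
              anchors = boxes[rng.choice(len(boxes), k, replace=False)]
              for _ in range(iters):
                  assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # nearest anchor = highest IoU
                  new = np.array([np.median(boxes[assign == i], axis=0) if np.any(assign == i)
                                  else anchors[i] for i in range(k)])
                  if np.allclose(new, anchors):
                      break
                  anchors = new
              return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]  # small to large by area

          # Example with hypothetical box sizes:
          # kmeans_anchors(np.array([[12., 18.], [30., 45.], [60., 80.]]), k=3)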
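      The flicker-based re-check is likewise only summarized in the abstract. The sketch below, under assumed names and thresholds, illustrates one common formulation: score each detected box by the frame-to-frame brightness variation of its region over a short temporal window and discard boxes that barely change, since genuine flames flicker while static distractors (e.g., red objects, lamps) do not.

          import numpy as np

          def flicker_score(frames, box):
              # frames: list of grayscale frames (H x W arrays) covering a short temporal window
              # box: (x1, y1, x2, y2) detection rectangle in pixel coordinates
              x1, y1, x2, y2 = box
              means = np.array([f[y1:y2, x1:x2].mean() for f in frames])
              return np.abs(np.diff(means)).mean()  # mean frame-to-frame brightness change

          def filter_detections(frames, boxes, threshold=2.0):
              # threshold is illustrative; genuine flames show noticeably larger variation
              return [b for b in boxes if flicker_score(frames, b) > threshold]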