ZHU Zhihao, LI Jiatian, GAO Peng, A Xiaohui, YAN Ling, WANG Wentao. Prototype of Directional Observation Camera Considering Coaxial Constraint of Short Baselines[J]. Geomatics and Information Science of Wuhan University, 2022, 47(5): 769-779. DOI: 10.13203/j.whugis20190466
Prototype of Directional Observation Camera Considering Coaxial Constraint of Short Baselines

Funds: The National Natural Science Foundation of China (No. 41561082)

More Information
  • Author Bio:

    ZHU Zhihao, master, specializes in photogrammetry, machine vision, and pattern recognition. E-mail: zhihao19960403@foxmail.com

  • Corresponding author:

    LI Jiatian, PhD, professor. E-mail: ljtwcx@163.com

  • Received Date: August 05, 2020
  • Published Date: May 04, 2022
  •   Objectives  To address the insufficient resolution of targets of interest in a large field of view camera, a coaxial constraint model of short baselines is proposed and a master-slave camera prototype is designed. A large field of view camera monitors the entire field of view, while an active camera carries out directional high-definition observation.
      Methods  The yaw and pitch angles of the active camera are solved from the image point of the target's center in the large field of view camera image. First, the mapping relationship between the large field of view camera and the active camera is constructed, and the external parameter matrix of the camera is simplified by the coaxial constraint model of short baselines. The initial control parameters of the active camera are then solved through trigonometric functions. Second, the pitch-angle compensation values for close-range scenes are calculated according to the installation position of the prototype.
      Results  Experimental results show that: (1) Compensating the control parameters of the active camera in close-range scenes improves control accuracy to a certain extent, and the compensation effect is better in the central area. (2) In close-range scenes where the distance between the observation target and the large field of view camera is 3-10 m, the prototype can actively observe the details of the target. The error between the actual and ideal positions of the target in the active camera image is less than 30 pixels, and the horizontal error is negligible. (3) In outdoor long-distance scenes, the prototype can actively and accurately observe target details within the effective distance. As the depth of the observed target increases, the error decreases gradually. In the range of 40-50 m, the systematic error caused by servo control accuracy and baseline length becomes the dominant error, and the overall error remains at about 6 pixels. (4) The time for the prototype to calculate the control parameters of the active camera for a single target point is within 0.2 ms, which meets the real-time requirement. (5) The algorithm solves the control parameters of the active camera without scene dependence, controlling the rotation of the active camera with high precision and strong adaptability in different scenes. (6) When the large field of view camera observes a distant target at low resolution, the target cannot be located accurately in the image. Improving the resolution of the large field of view camera helps to improve the accuracy of the active camera control parameters and the effective observation distance.
      Conclusions  Directional high-definition observation of regions of interest within a large field of view can be realized with the prototype. The camera part is modularized, and only one offline calibration is needed to adapt to different scenes and target depths, enabling accurate and fast target observation. Compared with other methods, it offers higher accuracy and timeliness with better applicability. Additionally, the accuracy and the observation distance can be increased by appropriately improving the resolution of the large field of view camera.
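Under the coaxial short-baseline constraint described in Methods, the rotation between the two cameras is treated as identity and the baseline as negligible at range, so the active camera's yaw and pitch reduce to trigonometric functions of the target's image point in the wide-angle camera, plus a pitch compensation when the target is close. The sketch below illustrates this idea; the function and parameter names are hypothetical, and the paper's exact compensation model may differ:

```python
import math

def control_angles(u, v, fx, fy, cx, cy, baseline=0.0, depth=None):
    """Approximate yaw and pitch (radians) of the active camera for a
    target imaged at pixel (u, v) in the large field of view camera.

    (fx, fy) are the focal lengths in pixels and (cx, cy) the principal
    point of the wide-angle camera. Under the coaxial short-baseline
    assumption, the viewing ray through (u, v) directly gives the
    control angles.
    """
    yaw = math.atan2(u - cx, fx)    # horizontal angle off the optical axis
    pitch = math.atan2(v - cy, fy)  # vertical angle off the optical axis

    # Close-range compensation: the (vertical) baseline between the two
    # cameras is no longer negligible relative to the target depth, so
    # the pitch is corrected by the angle the baseline subtends at the
    # target.
    if depth is not None and depth > 0:
        pitch -= math.atan2(baseline, depth)
    return yaw, pitch
```

A target at the principal point yields zero yaw and pitch; the per-target cost is a handful of arithmetic operations, consistent with the sub-millisecond timing reported in the Results.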
