Citation: YU Wangsheng, TIAN Xiaohua, HOU Zhiqiang, CHEN Xiaoping. Robust Visual Tracking Based on OLDM and Bayesian Estimation[J]. Geomatics and Information Science of Wuhan University, 2015, 40(11): 1539-1544. DOI: 10.13203/j.whugis20130535

Robust Visual Tracking Based on OLDM and Bayesian Estimation

Abstract: Constructing an appearance model of the tracked object is a key problem that affects visual tracking performance. To address it, we propose an online learning discriminative model (OLDM) and combine it with Bayesian estimation to achieve robust visual tracking. First, the initial tracking region is segmented and its samples are labeled; clustering these training samples yields a discriminative model of the object. Then, this model is used to compute a likelihood map over the predicted tracking region in the current frame. Finally, the object state is determined by maximum a posteriori estimation within the Bayesian framework, and the discriminative model is learned and updated online. Because the appearance model is refreshed through online learning, the algorithm adapts well to large variations in object appearance. Experimental results show that the proposed algorithm copes effectively with complex appearance changes and remains robust to scale change, illumination variation, occlusion, and non-rigid deformation; both qualitative and quantitative comparisons indicate that its tracking accuracy and stability improve on current state-of-the-art approaches.
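The abstract outlines a concrete pipeline: label and cluster samples from the initial region to build a discriminative model, score a likelihood map over the predicted region, select the object state by MAP estimation, and update the model online. The sketch below illustrates that loop in Python/NumPy under stated assumptions only, not the paper's actual design: raw RGB pixels as features, plain k-means as the clustering step, and a translation-only Gaussian motion prior. All function names (build_oldm, likelihood_map, track_frame, update_model) are hypothetical.

```python
# Illustrative sketch (not the authors' code) of an OLDM-style tracking loop.
# Assumptions: RGB-pixel features, plain k-means clustering, Gaussian
# translation-only motion prior. The paper's actual design may differ.
import numpy as np


def build_oldm(frame, box, n_clusters=8, n_iters=10, seed=0):
    """Cluster labeled foreground/background pixels into a discriminative model.

    Returns (centers, fg_weight): color cluster centers and, per cluster, the
    fraction of its samples drawn from inside the object box.
    """
    x, y, w, h = box
    fg = frame[y:y + h, x:x + w].reshape(-1, 3).astype(float)
    # Coarse background sampling: the image rows above and below the box.
    bg = np.vstack([frame[:y].reshape(-1, 3),
                    frame[y + h:].reshape(-1, 3)]).astype(float)
    fg = fg[::max(1, len(fg) // 2000)]   # subsample to keep the toy k-means cheap
    bg = bg[::max(1, len(bg) // 2000)]
    samples = np.vstack([fg, bg])
    labels = np.concatenate([np.ones(len(fg)), np.zeros(len(bg))])

    rng = np.random.default_rng(seed)
    centers = samples[rng.choice(len(samples), n_clusters, replace=False)]
    for _ in range(n_iters):  # plain k-means iterations
        assign = np.argmin(((samples[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([samples[assign == k].mean(0) if np.any(assign == k)
                            else centers[k] for k in range(n_clusters)])
    fg_weight = np.array([labels[assign == k].mean() if np.any(assign == k) else 0.5
                          for k in range(n_clusters)])
    return centers, fg_weight


def likelihood_map(frame, box, model):
    """Per-pixel object likelihood inside a candidate region."""
    centers, fg_weight = model
    x, y, w, h = box
    region = frame[y:y + h, x:x + w].reshape(-1, 3).astype(float)
    assign = np.argmin(((region[:, None, :] - centers) ** 2).sum(-1), axis=1)
    return fg_weight[assign].reshape(h, w)


def track_frame(frame, prev_box, model, search=15, step=3, sigma=10.0):
    """MAP estimate over translated candidate boxes with a Gaussian motion prior."""
    x0, y0, w, h = prev_box
    best_post, best_box = -np.inf, prev_box
    for dx in range(-search, search + 1, step):
        for dy in range(-search, search + 1, step):
            x, y = x0 + dx, y0 + dy
            if x < 0 or y < 0 or x + w > frame.shape[1] or y + h > frame.shape[0]:
                continue
            lik = likelihood_map(frame, (x, y, w, h), model).mean()
            prior = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
            post = lik * prior  # posterior score = likelihood x motion prior (unnormalized)
            if post > best_post:
                best_post, best_box = post, (x, y, w, h)
    return best_box


def update_model(frame, box, model, rate=0.1):
    """Online update: blend the old model with one rebuilt at the new state.

    Crude for illustration: assumes cluster k roughly corresponds across updates.
    """
    new_centers, new_fg = build_oldm(frame, box)
    centers, fg = model
    return (1 - rate) * centers + rate * new_centers, (1 - rate) * fg + rate * new_fg
```

A driver loop would call track_frame on each new frame and then update_model with the returned box, e.g. box = track_frame(frame, box, model) followed by model = update_model(frame, box, model), mirroring the estimate-then-update order described in the abstract.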

     
