
JRM Vol.24 No.3 pp. 531-539
doi: 10.20965/jrm.2012.p0531
(2012)

Paper:

Vision-Based Object Tracking by Multi-Robots

Takayuki Umeda, Kosuke Sekiyama, and Toshio Fukuda

Department of Micro System Engineering, Nagoya University, 1 Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan

Received: September 30, 2011
Accepted: April 19, 2012
Published: June 20, 2012

Keywords:
object tracking, multi-robot, feature evaluation, feature selection, autonomous landmark generation
Abstract
This paper proposes cooperative visual object tracking by a multi-robot system, in which robust cognitive sharing between robots is essential. Robots identify the object of interest using various types of information from the image recognition field. However, the information most effective for recognizing an object accurately is that which distinguishes the object from its surrounding environment. We therefore propose two evaluation criteria, called ambiguity and stationarity, for selecting the best information. Although robots attempt to select the best available feature for recognition, recognition will fail if the background scene contains features very similar to those of the object of concern. To solve this problem, we introduce a scheme in which robots share the relations between the object of interest and landmarks whose information is generated autonomously. The experimental results show the effectiveness of the proposed multi-robot cognitive sharing.
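The abstract does not give the formulas behind the two criteria, but the idea of scoring candidate features by low ambiguity (how easily a feature is confused with the background) and high stationarity (how stable the feature is across frames) can be illustrated with a minimal sketch. All function names, thresholds, and the combined score below are illustrative assumptions, not the paper's actual method:

```python
# Hypothetical sketch of ambiguity/stationarity-based feature selection.
# Features are simplified to scalar values; the real system would use
# image descriptors (color, texture, keypoints, etc.).

def ambiguity(feature, background_features):
    """Fraction of background features closely resembling this feature.
    Lower is better: the feature stands out from its surroundings."""
    if not background_features:
        return 0.0
    similar = sum(1 for b in background_features if abs(feature - b) < 0.1)
    return similar / len(background_features)

def stationarity(history):
    """Inverse of the feature's variance over recent frames.
    Higher is better: the feature is stable over time."""
    mean = sum(history) / len(history)
    var = sum((v - mean) ** 2 for v in history) / len(history)
    return 1.0 / (1.0 + var)

def select_best_feature(candidates, background, histories):
    """Return the index of the candidate with the best combined score:
    stable over time AND dissimilar from the background."""
    def score(i):
        return stationarity(histories[i]) * (1.0 - ambiguity(candidates[i], background))
    return max(range(len(candidates)), key=score)
```

Under this sketch, a feature that is perfectly stable but indistinguishable from the background scores zero, which mirrors the failure case the abstract describes when the background contains features very similar to the object.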
Cite this article as:
T. Umeda, K. Sekiyama, and T. Fukuda, “Vision-Based Object Tracking by Multi-Robots,” J. Robot. Mechatron., Vol.24 No.3, pp. 531-539, 2012.
