Humanoid Localization on the RoboCup Field using Corner Intersection and Geometric Distance Estimation
DOI: https://doi.org/10.9781/ijimai.2019.04.001

Keywords: Visual Intelligence, Robotics, Localization, Analytic Geometry, Single Camera

Abstract
On the humanoid competition field, identifying landmarks for localizing robots in a dynamic environment is of crucial importance. Conventionally, state-of-the-art humanoid vision systems relied on the poles at the middle of the field as indicators for generating landmarks. However, under the recent RoboCup rules, the middle pole has been removed to deliberately give humanoid vision systems less prior information to strategize with on the field. Because previous localization methods used the middle poles as landmarks, robot localization must now combine accurate corner detection and distance estimation to locate the goalposts. State-of-the-art corner detection algorithms such as the Harris corner detector and the mean projection transform are excessively sensitive to image noise and suffer from long processing times. Moreover, despite the prevalence of robot motor logs and fish-eye lens calibration in humanoid localization, current distance estimation techniques remain highly dependent on multiple poles as vision landmarks and are prone to large localization errors. We therefore propose a novel localization method consisting of a corner extraction algorithm, namely the contour intersection algorithm (CIA), and a distance estimation algorithm, namely analytic geometric estimation (AGE), for efficiently identifying salient goalposts. First, the CIA, which intersects linear contours using a projection matrix, extracts the corners of a goalpost after an adaptive binarization step. These extracted corner features are then fed into the AGE algorithm, which estimates the real-world distance using analytic geometry. In our experiments, the proposed localization vision system and the state-of-the-art method yielded estimation errors of approximately 3-4 and 7-23 centimeters, respectively.
This demonstrates that the proposed localization algorithm outperforms the other methods, rendering it more effective for indoor localization tasks that precede further actions such as attack or defense strategies.
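The two core geometric ideas in the abstract can be sketched briefly. Corner extraction by contour intersection reduces, at its simplest, to intersecting two fitted contour lines; single-camera distance estimation with a calibrated camera reduces, under a pinhole model, to a similar-triangles relation. This is a minimal illustrative sketch, not the paper's actual CIA/AGE implementation (which uses a projection matrix and adaptive binarization); the function names and the pinhole simplification are assumptions made for illustration.

```python
def line_intersection(p1, p2, p3, p4):
    """Intersect the line through p1, p2 with the line through p3, p4.

    Each p is an (x, y) image coordinate, e.g. endpoints of two fitted
    goalpost contour lines. Returns the corner point, or None if parallel.
    """
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    x4, y4 = p4
    # Cramer's rule on the 2x2 linear system for the intersection point.
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        return None  # parallel (or coincident) contour lines: no corner
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    x = (a * (x3 - x4) - (x1 - x2) * b) / denom
    y = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return (x, y)


def pinhole_distance(object_height_m, object_height_px, focal_length_px):
    """Similar-triangles range estimate for a calibrated pinhole camera.

    Z = f * H / h, with f the focal length in pixels, H the known
    real-world height (e.g. of a goalpost) and h its height in pixels.
    """
    return focal_length_px * object_height_m / object_height_px
```

For example, the diagonals of a unit square intersect at its center, and a 1.8 m goalpost spanning 120 px under a 600 px focal length is estimated at 9 m. The analytic-geometry approach in the paper refines this basic relation using the extracted corner coordinates rather than a raw pixel height.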