ANFIS-Based Visual Pose Estimation of Uncertain Robotic Arm Using Two Uncalibrated Cameras
International Journal of Wireless Communications and Mobile Computing
Volume 6, Issue 1, March 2018, Pages: 20-30
Received: Jan. 2, 2018;
Accepted: Jan. 17, 2018;
Published: Feb. 6, 2018
Aung Myat San, Department of Mechatronic Engineering, Mandalay Technological University, Mandalay, Myanmar
Wut Yi Win, Department of Mechatronic Engineering, Mandalay Technological University, Mandalay, Myanmar
Saint Saint Pyone, Department of Mechatronic Engineering, Mandalay Technological University, Mandalay, Myanmar
This paper describes a new approach to visual pose estimation of an uncertain robotic manipulator using ANFIS (Adaptive Neuro-Fuzzy Inference System) and two uncalibrated cameras. The main emphasis of this work is on estimating the positioning accuracy and repeatability of a low-cost robotic arm with unknown parameters under an uncalibrated vision system. The vision system comprises two cameras, installed above the robot and to its lateral side, respectively. These cameras require no calibration and can therefore be installed in any position and orientation, with the sole condition that the end-effector of the robot always remain visible. A red feature point is fixed to the end of the third link of the robotic arm. In this study, image data captured by the two fixed cameras serve as sensor feedback for position tracking of the uncertain robotic arm. The LabVolt R5150 manipulator in our laboratory is used as a case study. The visual estimation system is trained using ANFIS with the subtractive clustering method in MATLAB, where the robot, feature point, and cameras are simulated as physical models. To obtain the data required for ANFIS training, the manipulator was maneuvered throughout its workspace using forward kinematics, and the image coordinates of the feature point were acquired with the two cameras. Simulation experiments show that the location of the robotic arm can be learned by ANFIS from two uncalibrated cameras, eliminating the computational complexity and calibration requirements of multi-view geometry. Judged by Mean Square Error (MSE), Root Mean Square Error (RMSE), error mean, and standard deviation of error, the proposed approach performs well enough to serve as visual feedback for an uncertain robotic manipulator. Furthermore, compared with conventional techniques, the proposed approach using ANFIS and an uncalibrated vision system offers greater flexibility, ease of use, and computational simplicity.
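The training-data pipeline described above (sweep the joint space with forward kinematics, project the feature point into both camera images, and later score the estimator with MSE/RMSE/mean/standard-deviation statistics) can be sketched as follows. The paper works in MATLAB with the LabVolt R5150 model; this is a minimal Python illustration in which the link lengths, camera placements, and focal lengths are hypothetical placeholders, not the paper's values, and a simple pinhole model stands in for the simulated cameras.

```python
import math

def fk(theta1, theta2, theta3, l1=0.3, l2=0.25, l3=0.2):
    """Forward kinematics of a simple articulated 3-link arm (hypothetical
    geometry, not the LabVolt R5150's actual DH parameters)."""
    r = l2 * math.cos(theta2) + l3 * math.cos(theta2 + theta3)
    x = r * math.cos(theta1)
    y = r * math.sin(theta1)
    z = l1 + l2 * math.sin(theta2) + l3 * math.sin(theta2 + theta3)
    return x, y, z

def project_top(x, y, z, f=800.0, cam_h=1.5):
    """Top camera looking straight down the z-axis (assumed pinhole model)."""
    d = cam_h - z                      # depth from camera to feature point
    return f * x / d, f * y / d

def project_side(x, y, z, f=800.0, cam_d=1.5):
    """Lateral camera looking along the y-axis (assumed pinhole model)."""
    d = cam_d - y
    return f * x / d, f * z / d

# Sweep the joint space, as the paper sweeps the workspace via forward
# kinematics, collecting (u1, v1, u2, v2) -> (x, y, z) training pairs that an
# ANFIS model would then be trained on.
dataset = []
steps = 5
for i in range(steps):
    for j in range(steps):
        for k in range(steps):
            t1 = -0.5 + i * 0.25       # joint angles in radians (hypothetical ranges)
            t2 = 0.2 + j * 0.2
            t3 = -0.8 + k * 0.2
            p = fk(t1, t2, t3)
            u1, v1 = project_top(*p)
            u2, v2 = project_side(*p)
            dataset.append(((u1, v1, u2, v2), p))

def error_stats(errors):
    """The performance measures cited in the abstract: error mean, MSE, RMSE,
    and standard deviation of the per-sample estimation errors."""
    n = len(errors)
    mean = sum(errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    return mean, mse, rmse, std

print(len(dataset))  # 5**3 = 125 training samples
```

In the paper the mapping from the four image coordinates to the Cartesian position is learned by ANFIS with subtractive clustering (MATLAB's neuro-fuzzy tooling); the sketch only generates the input/output pairs and the scoring statistics, since no standard ANFIS implementation exists in the Python standard library.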