Optimization Algorithms Incorporated Fuzzy Q-Learning for Solving Mobile Robot Control Problems
American Journal of Software Engineering and Applications
Volume 5, Issue 3-1, May 2016, Pages: 25-29
Received: Sep. 14, 2016;
Accepted: Sep. 23, 2016;
Published: Aug. 21, 2017
Sima Saeed, Department of Computer Engineering, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
Aliakbar Niknafs, Department of Computer Engineering, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
Designing fuzzy controllers with evolutionary algorithms and reinforcement learning is an important subject in robot control. In this article, several methods for solving reinforcement fuzzy control problems are studied. All of these methods combine Fuzzy Q-Learning with an optimization algorithm: Ant Colony Optimization, Bee Colony Optimization, or Artificial Bee Colony optimization. Comparing these algorithms on the Truck Backer-Upper problem, a benchmark reinforcement fuzzy control problem, shows that the Artificial Bee Colony optimization algorithm is the most effective when combined with Fuzzy Q-Learning.
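To make the underlying technique concrete, the following is a minimal sketch of the Fuzzy Q-Learning scheme the abstract builds on: each fuzzy rule keeps q-values over a discrete set of candidate consequent actions, the global control action blends each rule's choice weighted by its normalized firing strength, and credit from the temporal-difference error is distributed back to the rules in proportion to those strengths. The rule count, candidate actions, and learning parameters here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative Fuzzy Q-Learning sketch (assumed form): each rule i keeps
# q-values over candidate consequent actions; the global action is the
# firing-strength-weighted blend of each rule's epsilon-greedy choice.
rng = np.random.default_rng(0)
n_rules, n_actions = 4, 3
actions = np.array([-1.0, 0.0, 1.0])   # candidate consequents (illustrative)
q = np.zeros((n_rules, n_actions))     # per-rule q-table
alpha, gamma, eps = 0.1, 0.9, 0.1      # assumed learning parameters

def select(q_row):
    """Epsilon-greedy choice of a candidate-action index for one rule."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(q_row))

def fql_step(phi, phi_next, reward):
    """One Fuzzy Q-Learning update.
    phi, phi_next: normalized rule firing strengths (each sums to 1)."""
    chosen = np.array([select(q[i]) for i in range(n_rules)])
    u = float(phi @ actions[chosen])                       # blended global action
    q_sa = float(phi @ q[np.arange(n_rules), chosen])      # Q of action taken
    q_next = float(phi_next @ q.max(axis=1))               # greedy value at s'
    td = reward + gamma * q_next - q_sa                    # TD error
    q[np.arange(n_rules), chosen] += alpha * td * phi      # credit by firing strength
    return u

# Toy usage: uniform firing strengths, a single update step.
phi = np.full(n_rules, 1.0 / n_rules)
u = fql_step(phi, phi, reward=1.0)
```

The paper's contribution then replaces or augments this value-learning loop with a population-based search (ant colony, bee colony, or artificial bee colony) over the rule consequents.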
Mobile Robot, Fuzzy Q-Learning, Ant Colony Optimization-Fuzzy Q-Learning, Bee Colony Optimization-Fuzzy Q-Learning, Artificial Bee Colony-Fuzzy Q-Learning
To cite this article
Sima Saeed, Aliakbar Niknafs. Optimization Algorithms Incorporated Fuzzy Q-Learning for Solving Mobile Robot Control Problems. American Journal of Software Engineering and Applications. Special Issue: Advances in Computer Science and Information Technology in Developing Countries. Vol. 5, No. 3-1, 2016, pp. 25-29.
Copyright © 2016. The authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.