Minimization of Unconstrained Nonpolynomial Large-Scale Optimization Problems Using Conjugate Gradient Method Via Exact Line Search
American Journal of Mechanical and Materials Engineering
Volume 1, Issue 1, March 2017, Pages: 10-14
Received: Feb. 28, 2017; Accepted: Mar. 22, 2017; Published: Apr. 7, 2017
Authors
Adam Ajimoti, Department of Mathematics, University of Ilorin, Ilorin, Nigeria
Onah David Ogwumu, Department of Mathematics, Federal University Wukari, Wukari, Nigeria
Abstract
The nonlinear conjugate gradient method is an effective iterative method for solving large-scale optimization problems using the iterative scheme x^(k+1) = x^(k) + α_k d^(k), where x^(k+1) is the new iterative point, x^(k) is the current iterative point, α_k is the step size and d^(k) is the descent direction. In this research work, we employ an exact line search to compute the step size in the iterative scheme above. The line search technique gave good results when applied to some non-polynomial unconstrained optimization problems.
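To make the iterative scheme above concrete, the sketch below implements a nonlinear conjugate gradient iteration with a Fletcher-Reeves update, in which the step size α_k is obtained by numerically minimizing f(x^(k) + α d^(k)) along the current direction. The test function, the Fletcher-Reeves coefficient and the line-search bracket are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=200):
    """Nonlinear conjugate gradient (Fletcher-Reeves update) with the
    step size found by a numerical 'exact' line search along d_k."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                          # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Exact line search: minimize phi(alpha) = f(x + alpha * d) over alpha.
        # The bracket [0, 10] is an assumed bound for this illustration.
        phi = lambda alpha: f(x + alpha * d)
        alpha = minimize_scalar(phi, bounds=(0.0, 10.0), method="bounded").x
        x = x + alpha * d           # x_{k+1} = x_k + alpha_k * d_k
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d              # next conjugate direction
        g = g_new
    return x

# Illustrative non-polynomial test function (not one of the paper's problems)
f = lambda x: np.exp(x[0] - 1.0) + np.exp(1.0 - x[1]) + (x[0] - x[1]) ** 2
grad = lambda x: np.array([np.exp(x[0] - 1.0) + 2.0 * (x[0] - x[1]),
                           -np.exp(1.0 - x[1]) - 2.0 * (x[0] - x[1])])

print(conjugate_gradient(f, grad, x0=[0.0, 0.0]))
```

Other choices of the coefficient β_k (Polak-Ribiere, Dai-Yuan, Hestenes-Stiefel) drop into the same loop unchanged; only the line marked "Fletcher-Reeves coefficient" would differ.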
Keywords
Iterative Point, Nonpolynomial, Unconstrained Optimization, Conjugate Gradient Method, Descent Direction, Exact Line Search, Iterative Scheme
To cite this article
Adam Ajimoti, Onah David Ogwumu, Minimization of Unconstrained Nonpolynomial Large-Scale Optimization Problems Using Conjugate Gradient Method Via Exact Line Search, American Journal of Mechanical and Materials Engineering. Vol. 1, No. 1, 2017, pp. 10-14. doi: 10.11648/j.ajmme.20170101.13
Copyright
Copyright © 2017. Authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.