Minimization of Unconstrained Nonpolynomial Large-Scale Optimization Problems Using Conjugate Gradient Method Via Exact Line Search
American Journal of Mechanical and Materials Engineering
Volume 1, Issue 1, March 2017, Pages: 10-14
Received: Feb. 28, 2017; Accepted: Mar. 22, 2017; Published: Apr. 7, 2017
Adam Ajimoti, Department of Mathematics, University of Ilorin, Ilorin, Nigeria
Onah David Ogwumu, Department of Mathematics, Federal University Wukari, Wukari, Nigeria
The nonlinear conjugate gradient method is an effective iterative method for solving large-scale optimization problems using the iterative scheme x(k+1) = x(k) + αk d(k), where x(k+1) is the new iterate, x(k) is the current iterate, αk is the step size, and d(k) is the descent direction. In this work, we employ the technique of exact line search to compute the step size in the scheme above. The line search technique gave good results when applied to several non-polynomial unconstrained optimization problems.
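The scheme described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: it specializes the conjugate gradient iteration to a quadratic objective f(x) = ½xᵀAx − bᵀx, because for a quadratic the exact line-search step size has the closed form αk = −(gkᵀdk)/(dkᵀA dk). The Fletcher–Reeves choice of the direction-update coefficient βk is used here as one common option; the matrix A, vector b, and tolerances are illustrative assumptions.

```python
import numpy as np

def cg_exact_line_search(A, b, x0, tol=1e-10, max_iter=100):
    """Conjugate gradient iteration x(k+1) = x(k) + alpha_k d(k)
    for the quadratic f(x) = 0.5 x^T A x - b^T x (A symmetric positive
    definite), where exact line search gives alpha_k in closed form."""
    x = x0.astype(float)
    g = A @ x - b                          # gradient of f at x
    d = -g                                 # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:        # stop when the gradient vanishes
            break
        alpha = -(g @ d) / (d @ (A @ d))   # exact line search along d
        x = x + alpha * d                  # iterative scheme x(k+1) = x(k) + alpha_k d(k)
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d              # new conjugate direction
        g = g_new
    return x

# Illustrative problem: the minimizer of f solves A x = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = cg_exact_line_search(A, b, np.zeros(2))
```

For a general non-polynomial objective, as studied in the paper, the exact step size is instead obtained by minimizing φ(α) = f(x(k) + α d(k)) over α at each iteration.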
Keywords: Iterative Point, Nonpolynomial, Unconstrained Optimization, Conjugate Gradient Method, Descent Direction, Exact Line Search, Iterative Scheme
To cite this article
Adam Ajimoti, Onah David Ogwumu, Minimization of Unconstrained Nonpolynomial Large-Scale Optimization Problems Using Conjugate Gradient Method Via Exact Line Search, American Journal of Mechanical and Materials Engineering. Vol. 1, No. 1, 2017, pp. 10-14. doi: 10.11648/j.ajmme.20170101.13
Copyright © 2017 Authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.