Conjugate Direction Methods in Optimization

Author: M.R. Hestenes

Publisher: Springer Science & Business Media

Category: Science

Page: 325

Shortly after the end of World War II, high-speed digital computing machines were being developed. It was clear that the mathematical aspects of computation needed to be reexamined in order to make efficient use of high-speed digital computers for mathematical computations. Accordingly, under the leadership of Mina Rees, John Curtiss, and others, an Institute for Numerical Analysis was set up at the University of California at Los Angeles under the sponsorship of the National Bureau of Standards. A similar institute was formed at the National Bureau of Standards in Washington, D.C. In 1949 J. Barkley Rosser became Director of the group at UCLA for a period of two years. During this period we organized a seminar on the study of solutions of simultaneous linear equations and on the determination of eigenvalues. G. Forsythe, W. Karush, C. Lanczos, T. Motzkin, L. J. Paige, and others attended this seminar. We discovered, for example, that even Gaussian elimination was not well understood from a machine point of view and that no effective machine-oriented elimination algorithm had been developed. During this period Lanczos developed his three-term relationship and I had the good fortune of suggesting the method of conjugate gradients. We discovered afterward that the basic ideas underlying the two procedures are essentially the same. The concept of conjugacy was not new to me. In a joint paper with G. D. ...
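
The "method of conjugate gradients" mentioned here is easiest to state in its original linear form. Below is a minimal Python sketch of the Hestenes-Stiefel iteration for a symmetric positive definite system Ax = b; the variable names, stopping tolerance, and test matrix are illustrative choices of ours, not material from the book.

```python
# Minimal sketch of linear conjugate gradients (Hestenes-Stiefel, 1952)
# for A x = b with A symmetric positive definite.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    n = b.size
    x = np.zeros(n)
    r = b - A @ x          # residual; equals minus the gradient of the quadratic
    p = r.copy()           # first search direction: steepest descent
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # exact minimizer along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # new direction, A-conjugate to the old ones
        rs_old = rs_new
    return x

# Example: a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))        # approx [0.0909, 0.6364]
```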

Nonlinear Conjugate Gradient Methods for Unconstrained Optimization

Author: Neculai Andrei

Publisher: Springer Nature

Category: Mathematics

Page: 498

Two approaches are known for solving large-scale unconstrained optimization problems: the limited-memory quasi-Newton method (truncated Newton method) and the conjugate gradient method. This is the first book to detail conjugate gradient methods, showing their properties and convergence characteristics as well as their performance in solving large-scale unconstrained optimization problems and applications. Comparisons to the limited-memory and truncated Newton methods are also discussed. Topics studied in detail include: linear conjugate gradient methods, standard conjugate gradient methods, acceleration of conjugate gradient methods, hybrid variants, modifications of the standard scheme, memoryless BFGS-preconditioned methods, and three-term methods. Other conjugate gradient methods, such as those that cluster the eigenvalues or minimize the condition number of the iteration matrix, are also treated. For each method, the convergence analysis, the computational performance, and comparisons with other conjugate gradient methods are given. The theory behind the conjugate gradient algorithms, presented as a methodology, is developed with a clear, rigorous, and friendly exposition; the reader will gain an understanding of their properties and convergence and will learn to develop and prove the convergence of his or her own methods. Numerous numerical studies are supplied, with comparisons and comments on the behavior of conjugate gradient algorithms for solving a collection of 800 unconstrained optimization problems of different structures and complexities, with the number of variables in the range [1000, 10000]. The book is addressed to all those interested in developing and using new advanced techniques for solving complex unconstrained optimization problems. Mathematical programming researchers, theoreticians and practitioners in operations research, practitioners in engineering and industry, as well as graduate students in mathematics and Ph.D. and master's students in mathematical programming, will find plenty of information and practical applications for solving large-scale unconstrained optimization problems by conjugate gradient methods.
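
As a concrete instance of the "standard conjugate gradient methods" the blurb refers to, here is a hedged Python sketch of a nonlinear CG iteration with the Polak-Ribiere+ beta and a simple backtracking line search. The constants, restart rule, and test function are our own illustrative choices, not the book's code.

```python
# Sketch of a standard nonlinear conjugate gradient method (Polak-Ribiere+).
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)                    # backtracking Armijo line search
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere+ formula
        d = -g_new + beta * d
        if g_new @ d >= 0:                   # safeguard: restart on non-descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Example: the Rosenbrock function, minimizer at (1, 1)
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, [-1.2, 1.0]))    # should approach [1, 1]
```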

Convergence Rates for Hestenes' Gram-Schmidt Conjugate Direction Method Without Derivatives in Numerical Optimization

Author: Raihen Md Nurul

Category: Conjugate direction methods

Page: 37

In this paper we study convergence rates, using the quotient convergence factors and root convergence factors described by Ortega and Rheinboldt, for Hestenes' Gram-Schmidt conjugate direction method without derivatives. We study this computationally and analytically by comparing this conjugate direction method for minimizing a nonquadratic function f with Newton's method for solving ∇f = 0. This method of Hestenes differs from those of Smith, Powell, and Zangwill in its mathematical development. All of these ideas have already been developed, building on the 1952 paper of Hestenes and Stiefel, the 1980 book Conjugate Direction Methods in Optimization by Hestenes, and the 1975 paper by Dennemeyer and Mookini. The primary purpose of this work is to provide analytical and computational details of what has been done before.
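
The quotient (Q) and root (R) convergence factors of Ortega and Rheinboldt that the abstract cites can be estimated numerically from the iterates of any method. The Python sketch below shows the idea for the first-order factors Q1 and R1; note that the true definitions involve a limsup, which the maximum over a finite run only approximates, and the error sequence here is synthetic.

```python
# Empirical estimates of Ortega-Rheinboldt Q1 and R1 convergence factors.
import numpy as np

def convergence_factors(errors):
    """errors[k] = ||x_k - x*||; returns estimated Q1 and R1 factors."""
    e = np.asarray(errors, dtype=float)
    q1 = np.max(e[1:] / e[:-1])        # max of successive quotients (approximates limsup)
    ks = np.arange(1, e.size + 1)
    r1 = np.max(e ** (1.0 / ks))       # max of k-th roots (approximates limsup)
    return q1, r1

# Example: a linearly convergent sequence e_k = 0.5**k
errors = [0.5 ** k for k in range(1, 12)]
print(convergence_factors(errors))     # Q1 and R1 both near 0.5
```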

Conjugate Gradient Algorithms in Nonconvex Optimization

Author: Radoslaw Pytlak

Publisher: Springer Science & Business Media

Category: Mathematics

Page: 478

This book details algorithms for large-scale unconstrained and bound constrained optimization. It shows optimization techniques from a conjugate gradient algorithm perspective, as well as the methods of shortest residuals developed by the author.

Numerical Optimization

Author: Jorge Nocedal

Publisher: Springer Science & Business Media

Category: Mathematics

Page: 636

The new edition of this book presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on methods best suited to practical problems. This edition has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are widely used in practice and are the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience.

Optimization Theory and Methods

Nonlinear Programming

Author: Wenyu Sun

Publisher: Springer Science & Business Media

Category: Mathematics

Page: 688

Optimization Theory and Methods can be used as a textbook for an optimization course for graduate and senior undergraduate students. It is the result of the author's teaching and research over the past decade. It describes optimization theory and several powerful methods. For most methods, the book discusses the motivating idea, studies the derivation, establishes global and local convergence, describes the algorithmic steps, and discusses numerical performance.

Introduction to Unconstrained Optimization with R

Author: Shashi Kant Mishra

Publisher: Springer Nature

Category: Mathematics

Page: 304

This book discusses unconstrained optimization with R—a free, open-source computing environment, which works on several platforms, including Windows, Linux, and macOS. The book highlights methods such as the steepest descent method, Newton method, conjugate direction method, conjugate gradient methods, quasi-Newton methods, rank one correction formula, DFP method, BFGS method and their algorithms, convergence analysis, and proofs. Each method is accompanied by worked examples and R scripts. To help readers apply these methods in real-world situations, the book features a set of exercises at the end of each chapter. Primarily intended for graduate students of applied mathematics, operations research and statistics, it is also useful for students of mathematics, engineering, management, economics, and agriculture.
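
The book's worked examples are in R; as a language-neutral illustration of one item on its list, here is a small Python sketch of the BFGS inverse-Hessian update, with a crude damped iteration on a quadratic as a sanity check. The step rule, constants, and test problem are our own illustrative choices, not the book's scripts.

```python
# Sketch of the BFGS inverse-Hessian update H -> H', given step s and
# gradient change y; H' satisfies the secant condition H' y = s.
import numpy as np

def bfgs_update(H, s, y):
    rho = 1.0 / (y @ s)
    I = np.eye(H.shape[0])
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Sanity check on f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
x, H = np.zeros(2), np.eye(2)
for _ in range(20):
    g = A @ x - b
    s = 0.5 * (-H @ g)               # damped quasi-Newton step (illustrative)
    x_new = x + s
    y = (A @ x_new - b) - g          # gradient change along the step
    if y @ s > 1e-12:                # curvature condition before updating
        H = bfgs_update(H, s, y)
    x = x_new
print(x, np.linalg.solve(A, b))      # x should approach the true solution
```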

Practical Methods of Optimization

Author: R. Fletcher

Publisher: John Wiley & Sons

Category: Mathematics

Page: 456

Fully describes the optimization methods that are currently most valuable in solving real-life problems. Since optimization has applications in almost every branch of science and technology, the text emphasizes their practical aspects in conjunction with the heuristics useful in making them perform more reliably and efficiently. To this end, it presents comparative numerical studies to give readers a feel for possible applications and to illustrate the problems in assessing evidence. It also provides the theoretical background that gives insight into how methods are derived. This edition offers revised coverage of basic theory and standard techniques, with updated discussions of line search methods, Newton and quasi-Newton methods, and conjugate direction methods, as well as a comprehensive treatment of restricted step or trust region methods not commonly found in the literature. It also includes recent developments in hybrid methods for nonlinear least squares; an extended discussion of linear programming, with new methods for stable updating of LU factors; and a completely new section on network programming. Chapters include computer subroutines, worked examples, and study questions.
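
A common building block of the restricted step (trust region) methods mentioned above is the Cauchy point: the minimizer of the local quadratic model along the steepest descent direction, restricted to the trust radius. The Python sketch below implements this generic textbook construction, not Fletcher's specific algorithm; the names and test data are illustrative.

```python
# Cauchy point for the model m(p) = g.p + 0.5 p.B.p subject to ||p|| <= delta.
import numpy as np

def cauchy_point(g, B, delta):
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg <= 0:                           # model decreases all the way: go to boundary
        tau = 1.0
    else:                                  # interior minimizer along -g, capped at boundary
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

# Example: one step on a quadratic model
g = np.array([1.0, 2.0])
B = np.array([[2.0, 0.0], [0.0, 10.0]])
print(cauchy_point(g, B, delta=0.1))       # step of length at most 0.1
```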

Conjugate Gradient Algorithms and Finite Element Methods

Author: Michal Krizek

Publisher: Springer Science & Business Media

Category: Computers

Page: 382

The position taken in this collection of pedagogically written essays is that conjugate gradient algorithms and finite element methods complement each other extremely well. Via their combination, practitioners have been able to solve complicated direct and inverse multidimensional problems modeled by ordinary or partial differential equations and inequalities, not necessarily linear, with optimal control and optimal design among these problems. The aim of this book is to present both methods in the context of complicated problems modeled by linear and nonlinear partial differential equations, and to provide an in-depth discussion of their implementation. The authors show that conjugate gradient methods and finite element methods apply to the solution of real-life problems. They address graduate students as well as experts in scientific computing.
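
As a minimal illustration of the combination the essays advocate, the Python sketch below assembles the tridiagonal stiffness matrix of the 1D Poisson problem -u'' = 1 on [0,1] (linear elements, zero boundary values) and solves it with SciPy's conjugate gradient routine. The mesh size and load are illustrative, and the final comparison uses the known exact solution u(x) = x(1-x)/2.

```python
# Linear finite elements for -u'' = 1 on [0,1], solved with conjugate gradients.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 99                                    # interior nodes, mesh width h = 1/(n+1)
h = 1.0 / (n + 1)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h   # stiffness matrix
b = h * np.ones(n)                        # load vector for f = 1
u, info = cg(A, b)                        # CG solve; info = 0 signals convergence
x = np.linspace(h, 1 - h, n)
print(info, np.max(np.abs(u - 0.5 * x * (1 - x))))   # gap reflects solver tolerance
```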

Sparse matrix methods in optimization

Author: Stanford University. Systems Optimization Laboratory

Page: 34

Optimization algorithms typically require the solution of many systems of linear equations B_k y_k = b_k. When large numbers of variables or constraints are present, these linear systems can account for much of the total computation time. Both direct and iterative equation solvers are needed in practice. Unfortunately, most off-the-shelf solvers are designed for single systems, whereas optimization problems give rise to hundreds or thousands of systems. To avoid refactorization, or to speed the convergence of an iterative method, it is essential to exploit the fact that B_k is related to B_{k-1}. The authors review various sparse matrices that arise in optimization and discuss the compromises that are currently being made in dealing with them. Since significant advances continue to be made with single-system solvers, they give special attention to methods that allow such solvers to be used repeatedly on a sequence of modified systems (e.g., the product-form update; use of the Schur complement). The speed of factorizing a matrix then becomes relatively less important than the efficiency of subsequent solves with very many right-hand sides. At the same time, it is hoped that future improvements to linear-equation software will be oriented more specifically to the case of related matrices B_k.
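
The point about factorizing once and solving often is easy to demonstrate with modern sparse software; the report predates these tools, so the Python sketch below only illustrates the principle, with an arbitrary test matrix and random right-hand sides.

```python
# Factor a sparse matrix once, then reuse the LU factors for many solves.
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu

n = 1000
B = csc_matrix(diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n)))
lu = splu(B)                         # factorize once: the expensive step
rhs = np.random.rand(n, 50)          # fifty right-hand sides b_k
Y = lu.solve(rhs)                    # cheap repeated solves reuse the factors
print(np.max(np.abs(B @ Y - rhs)))   # residuals should be near machine precision
```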

Riemannian Optimization and Its Applications

Author: Hiroyuki Sato

Publisher: Springer Nature

Category: Technology & Engineering

Page: 129

This brief describes the basics of Riemannian optimization—optimization on Riemannian manifolds—introduces algorithms for Riemannian optimization problems, discusses the theoretical properties of these algorithms, and suggests possible applications of Riemannian optimization to problems in other fields. To provide the reader with a smooth introduction to Riemannian optimization, brief reviews of mathematical optimization in Euclidean spaces and Riemannian geometry are included. Riemannian optimization is then introduced by merging these concepts. In particular, the Euclidean and Riemannian conjugate gradient methods are discussed in detail. A brief review of recent developments in Riemannian optimization is also provided. Riemannian optimization methods are applicable to many problems in various fields. This brief discusses some important applications including the eigenvalue and singular value decompositions in numerical linear algebra, optimal model reduction in control engineering, and canonical correlation analysis in statistics.
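
For the eigenvalue application mentioned above, the simplest Riemannian method is steepest descent on the unit sphere for the Rayleigh quotient f(x) = x^T A x, whose minimizer is an eigenvector of the smallest eigenvalue. The Python sketch below uses projection onto the tangent space and renormalization as the retraction; the step size, iteration count, and test matrix are our own illustrative choices, not the brief's algorithms.

```python
# Riemannian steepest descent on the unit sphere for the Rayleigh quotient.
import numpy as np

def rayleigh_descent(A, steps=500, t=0.1):
    n = A.shape[0]
    x = np.random.default_rng(0).standard_normal(n)
    x /= np.linalg.norm(x)                     # start on the sphere
    for _ in range(steps):
        egrad = 2 * A @ x                      # Euclidean gradient of x^T A x
        rgrad = egrad - (x @ egrad) * x        # project onto the tangent space
        x = x - t * rgrad                      # step in the tangent direction
        x /= np.linalg.norm(x)                 # retraction: renormalize to the sphere
    return x, x @ A @ x

A = np.diag([1.0, 2.0, 5.0])
x, lam = rayleigh_descent(A)
print(lam)    # should approach the smallest eigenvalue, 1.0
```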

Engineering Optimization

Methods and Applications

Author: A. Ravindran

Publisher: John Wiley & Sons Incorporated

Category: Technology & Engineering

Page: 667

The classic introduction to engineering optimization theory and practice, now expanded and updated. Engineering optimization helps engineers zero in on the most effective, efficient solutions to problems. This text provides a practical, real-world understanding of engineering optimization. Rather than belaboring underlying proofs and mathematical derivations, it emphasizes optimization methodology, focusing on techniques and stratagems relevant to engineering applications in design, operations, and analysis. It surveys diverse optimization methods, ranging from those applicable to the minimization of a single-variable function to those most suitable for large-scale, nonlinear constrained problems. New material includes duality theory, interior point methods for solving LP problems, the generalized Lagrange multiplier method, generalizations of convex functions, and goal programming for solving multi-objective optimization problems. A practical, hands-on reference and text, Engineering Optimization, Second Edition covers:

* Practical issues, such as model formulation, implementation, starting point generation, and more
* Current, state-of-the-art optimization software
* Three engineering case studies plus numerous examples from chemical, industrial, and mechanical engineering
* Both classical methods and new techniques, such as successive quadratic programming, interior point methods, and goal programming

Excellent for self-study and as a reference for engineering professionals, this Second Edition is also ideal for senior and graduate courses on engineering optimization, including television and online instruction, as well as for in-plant training.