<title>Multimin Algorithms - GNU Scientific Library -- Reference Manual</title>
<meta http-equiv="Content-Type" content="text/html">
<meta name="description" content="GNU Scientific Library -- Reference Manual">
<meta name="generator" content="makeinfo 4.8">
<link title="Top" rel="start" href="index.html#Top">
<link rel="up" href="Multidimensional-Minimization.html#Multidimensional-Minimization" title="Multidimensional Minimization">
<link rel="prev" href="Multimin-Stopping-Criteria.html#Multimin-Stopping-Criteria" title="Multimin Stopping Criteria">
<link rel="next" href="Multimin-Examples.html#Multimin-Examples" title="Multimin Examples">
<link href="http://www.gnu.org/software/texinfo/" rel="generator-home" title="Texinfo Homepage">
<!--
Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 The GSL Team.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``GNU General Public License'' and ``Free Software
Needs Free Documentation'', the Front-Cover text being ``A GNU Manual'',
and with the Back-Cover Text being (a) (see below).  A copy of the
license is included in the section entitled ``GNU Free Documentation
License''.

(a) The Back-Cover Text is: ``You have freedom to copy and modify this
GNU Manual, like GNU software.''-->
<meta http-equiv="Content-Style-Type" content="text/css">
<style type="text/css"><!--
pre.display { font-family:inherit }
pre.format { font-family:inherit }
pre.smalldisplay { font-family:inherit; font-size:smaller }
pre.smallformat { font-family:inherit; font-size:smaller }
pre.smallexample { font-size:smaller }
pre.smalllisp { font-size:smaller }
span.sc { font-variant:small-caps }
span.roman { font-family:serif; font-weight:normal; }
span.sansserif { font-family:sans-serif; font-weight:normal; }
--></style>
<a name="Multimin-Algorithms"></a>
Next: <a rel="next" accesskey="n" href="Multimin-Examples.html#Multimin-Examples">Multimin Examples</a>,
Previous: <a rel="previous" accesskey="p" href="Multimin-Stopping-Criteria.html#Multimin-Stopping-Criteria">Multimin Stopping Criteria</a>,
Up: <a rel="up" accesskey="u" href="Multidimensional-Minimization.html#Multidimensional-Minimization">Multidimensional Minimization</a>
<h3 class="section">35.7 Algorithms</h3>

<p>There are several minimization methods available. The best choice of
algorithm depends on the problem. All of the algorithms use the value
of the function and its gradient at each evaluation point, except for
the Simplex algorithm, which uses function values only.
— Minimizer: <b>gsl_multimin_fdfminimizer_conjugate_fr</b><var><a name="index-gsl_005fmultimin_005ffdfminimizer_005fconjugate_005ffr-2240"></a></var><br>
<blockquote><p><a name="index-Fletcher_002dReeves-conjugate-gradient-algorithm_002c-minimization-2241"></a><a name="index-Conjugate-gradient-algorithm_002c-minimization-2242"></a><a name="index-minimization_002c-conjugate-gradient-algorithm-2243"></a>This is the Fletcher-Reeves conjugate gradient algorithm. The conjugate
gradient algorithm proceeds as a succession of line minimizations. The
sequence of search directions is used to build up an approximation to the
curvature of the function in the neighborhood of the minimum.

<p>An initial search direction <var>p</var> is chosen using the gradient, and line
minimization is carried out in that direction. The accuracy of the line
minimization is specified by the parameter <var>tol</var>. The minimum
along this line occurs when the function gradient <var>g</var> and the search direction
<var>p</var> are orthogonal. The line minimization terminates when
<!-- {$p\cdot g < tol |p| |g|$} -->
dot(p,g) < tol |p| |g|. The
search direction is updated using the Fletcher-Reeves formula
p' = g' - \beta g where \beta=-|g'|^2/|g|^2, and
the line minimization is then repeated for the new search
direction.
</p></blockquote></div>
— Minimizer: <b>gsl_multimin_fdfminimizer_conjugate_pr</b><var><a name="index-gsl_005fmultimin_005ffdfminimizer_005fconjugate_005fpr-2244"></a></var><br>
<blockquote><p><a name="index-Polak_002dRibiere-algorithm_002c-minimization-2245"></a><a name="index-minimization_002c-Polak_002dRibiere-algorithm-2246"></a>This is the Polak-Ribiere conjugate gradient algorithm. It is similar
to the Fletcher-Reeves method, differing only in the choice of the
coefficient \beta. Both methods work well when the evaluation
point is close enough to the minimum of the objective function that it
is well approximated by a quadratic hypersurface.
</p></blockquote></div>
— Minimizer: <b>gsl_multimin_fdfminimizer_vector_bfgs</b><var><a name="index-gsl_005fmultimin_005ffdfminimizer_005fvector_005fbfgs-2247"></a></var><br>
<blockquote><p><a name="index-BFGS-conjugate-gradient-algorithm_002c-minimization-2248"></a><a name="index-minimization_002c-BFGS-conjugate-gradient-algorithm-2249"></a>This is the vector Broyden-Fletcher-Goldfarb-Shanno (BFGS) conjugate gradient
algorithm. It is a quasi-Newton method which builds up an approximation
to the second derivatives of the function f using the difference
between successive gradient vectors. By combining the first and second
derivatives the algorithm is able to take Newton-type steps towards the
function minimum, assuming quadratic behavior in that region.
</p></blockquote></div>
— Minimizer: <b>gsl_multimin_fdfminimizer_steepest_descent</b><var><a name="index-gsl_005fmultimin_005ffdfminimizer_005fsteepest_005fdescent-2250"></a></var><br>
<blockquote><p><a name="index-steepest-descent-algorithm_002c-minimization-2251"></a><a name="index-minimization_002c-steepest-descent-algorithm-2252"></a>The steepest descent algorithm follows the downhill gradient of the
function at each step. When a downhill step is successful the step-size
is increased by a factor of two. If the downhill step leads to a higher
function value then the algorithm backtracks and the step size is
decreased using the parameter <var>tol</var>. A suitable value of <var>tol</var>
for most applications is 0.1. The steepest descent method is
inefficient and is included only for demonstration purposes.
</p></blockquote></div>
— Minimizer: <b>gsl_multimin_fminimizer_nmsimplex</b><var><a name="index-gsl_005fmultimin_005ffminimizer_005fnmsimplex-2253"></a></var><br>
<blockquote><p><a name="index-Nelder_002dMead-simplex-algorithm-for-minimization-2254"></a><a name="index-simplex-algorithm_002c-minimization-2255"></a><a name="index-minimization_002c-simplex-algorithm-2256"></a>This is the Simplex algorithm of Nelder and Mead. It constructs
n vectors p_i from the
starting vector <var>x</var> and the vector <var>step_size</var> as follows:
p_0 = <var>x</var> and p_i = <var>x</var> + s_i e_i for i = 1, ..., n, where
e_i is the i-th unit vector and s_i is the i-th component of <var>step_size</var>.
These vectors form the n+1 vertices of a simplex in n
dimensions. On each iteration the algorithm tries to improve
the parameter vector p_i corresponding to the highest
function value by simple geometrical transformations. These
are reflection, reflection followed by expansion, contraction and multiple
contraction. Using these transformations the simplex moves through
the parameter space towards the minimum, where it contracts itself.

<p>After each iteration, the best vertex is returned. Note that due to
the nature of the algorithm not every step improves the current
best parameter vector. Usually several iterations are required.

<p>The routine calculates the minimizer-specific characteristic size as the
average distance from the geometrical center of the simplex to all its
vertices. This size can be used as a stopping criterion, as the simplex
contracts itself near the minimum. The size is returned by the function
<code>gsl_multimin_fminimizer_size</code>.
</p></blockquote></div>