Objective Function Is Returning Undefined Values at Initial Point: fsolve Cannot Continue

fsolve

Solve a system of nonlinear equations

    F(x) = 0

for x, where x is a vector and F(x) is a function that returns a vector value.

Syntax

    x = fsolve(fun,x0)
    x = fsolve(fun,x0,options)
    x = fsolve(fun,x0,options,P1,P2, ... )
    [x,fval] = fsolve(...)
    [x,fval,exitflag] = fsolve(...)
    [x,fval,exitflag,output] = fsolve(...)
    [x,fval,exitflag,output,jacobian] = fsolve(...)

Description

fsolve finds a root (zero) of a system of nonlinear equations.

x = fsolve(fun,x0) starts at x0 and tries to solve the equations described in fun.

x = fsolve(fun,x0,options) minimizes with the optimization parameters specified in the structure options.

x = fsolve(fun,x0,options,P1,P2,...) passes the problem-dependent parameters P1, P2, etc., directly to the function fun. Pass an empty matrix for options to use the default values for options.
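For example, suppose the system depends on an additional scalar parameter a (the function name paramfun and the parameter a are illustrative, not part of the toolbox). Write the function to accept the extra argument

    function F = paramfun(x,a)
    F = [2*x(1) - x(2) - a*exp(-x(1));
         -x(1) + 2*x(2) - a*exp(-x(2))];

and pass the parameter through fsolve, using an empty matrix for options:

    a = 1;                              % value of the extra parameter
    x = fsolve(@paramfun,[-5;-5],[],a)  % [] selects the default options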

[x,fval] = fsolve(fun,x0) returns the value of the objective function fun at the solution x.

[x,fval,exitflag] = fsolve(...) returns a value exitflag that describes the exit condition.

[x,fval,exitflag,output] = fsolve(...) returns a structure output that contains information about the optimization.

[x,fval,exitflag,output,jacobian] = fsolve(...) returns the Jacobian of fun at the solution x.

Arguments

Input Arguments. Table 4-1, Input Arguments, contains general descriptions of arguments passed in to fsolve. This section provides function-specific details for fun and options:

fun
The nonlinear system of equations to solve. fun is a function that accepts a vector x and returns a vector F, the nonlinear equations evaluated at x. The function fun can be specified as a function handle.
    x = fsolve(@myfun,x0)                  
where myfun is a MATLAB function such as
    function F = myfun(x)
    F = ...            % Compute function values at x
fun can also be an inline object.
    x = fsolve(inline('sin(x.*x)'),x0);                  
If the Jacobian can also be computed and options.Jacobian is 'on', set by
    options = optimset('Jacobian','on')                  
then the function fun must return, in a second output argument, the Jacobian value J, a matrix, at x. Note that by checking the value of nargout the function can avoid computing J when fun is called with only one output argument (in the case where the optimization algorithm only needs the value of F but not J).
    function [F,J] = myfun(x)
    F = ...          % objective function values at x
    if nargout > 1   % two output arguments
       J = ...       % Jacobian of the function evaluated at x
    end
If fun returns a vector (matrix) of m components and x has length n, where n is the length of x0, then the Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (Note that the Jacobian J is the transpose of the gradient of F.)
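As a concrete sketch (using the two-equation system solved in Example 1 below), a version of myfun that also returns its analytically computed Jacobian could look like this:

    function [F,J] = myfun(x)
    F = [2*x(1) - x(2) - exp(-x(1));        % function values at x
         -x(1) + 2*x(2) - exp(-x(2))];
    if nargout > 1                          % Jacobian requested
       J = [2 + exp(-x(1)),  -1;            % J(i,j) = dF(i)/dx(j)
            -1,               2 + exp(-x(2))];
    end

It would then be called with the Jacobian option turned on:

    options = optimset('Jacobian','on');
    [x,fval] = fsolve(@myfun,[-5;-5],options)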
options
Options provides the function-specific details for the options parameters.

Output Arguments. Table 4-2, Output Arguments, contains general descriptions of arguments returned by fsolve. This section provides function-specific details for exitflag and output:

exitflag
Describes the exit condition:

> 0
The function converged to a solution x.

0
The maximum number of function evaluations or iterations was exceeded.

< 0
The function did not converge to a solution.
output
Structure containing information about the optimization. The fields of the structure are:

iterations
Number of iterations taken.

funcCount
Number of function evaluations.

algorithm
Algorithm used.

cgiterations
Number of PCG iterations (large-scale algorithm only).

stepsize
Final step size taken (medium-scale algorithm only).

firstorderopt
Measure of first-order optimality (large-scale algorithm only).
For large-scale problems, the first-order optimality is the infinity norm of the gradient g = J^T F (see Nonlinear Least Squares).
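In MATLAB terms, this measure corresponds to the following (a sketch; J and F are assumed to hold the Jacobian and function value at the final point):

    g = J'*F;                   % gradient of the sum-of-squares objective
    firstorderopt = norm(g,inf)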

Options

Optimization options parameters used by fsolve. Some parameters apply to all algorithms, some are only relevant when using the large-scale algorithm, and others are only relevant when using the medium-scale algorithm. You can use optimset to set or change the values of these fields in the parameters structure, options. See Table 4-3, Optimization Options Parameters, for detailed information.

We start by describing the LargeScale option, since it states a preference for which algorithm to use. It is only a preference because certain conditions must be met to use the large-scale algorithm. For fsolve, the nonlinear system of equations cannot be underdetermined; that is, the number of equations (the number of elements of F returned by fun) must be at least as great as the length of x. Otherwise, the medium-scale algorithm is used:

LargeScale
Use large-scale algorithm if possible when set to 'on'. Use medium-scale algorithm when set to 'off'.

Medium-Scale and Large-Scale Algorithms. These parameters are used by both the medium-scale and large-scale algorithms:

Diagnostics
Print diagnostic information about the function to be minimized.
Display
Level of display. 'off' displays no output; 'iter' displays output at each iteration; 'final' (default) displays just the final output.
Jacobian
If 'on', fsolve uses a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobMult), for the objective function. If 'off', fsolve approximates the Jacobian using finite differences.
MaxFunEvals
Maximum number of function evaluations allowed.
MaxIter
Maximum number of iterations allowed.
TolFun
Termination tolerance on the function value.
TolX
Termination tolerance on x.

Large-Scale Algorithm Only. These parameters are used only by the large-scale algorithm:

JacobMult
Function handle for Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix products J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form
    W = jmfun(Jinfo,Y,flag,p1,p2,...)                  

where Jinfo and the additional parameters p1,p2,... contain the matrices used to compute J*Y (or J'*Y, or J'*(J*Y)). The first argument Jinfo must be the same as the second argument returned by the objective function fun.
    [F,Jinfo] = fun(x,p1,p2,...)                  
The parameters p1,p2,... are the same additional parameters that are passed to fsolve (and to fun).
    fsolve(fun,...,options,p1,p2,...)                  
Y is a matrix that has the same number of rows as there are dimensions in the problem. flag determines which product to compute. If flag == 0 then W = J'*(J*Y). If flag > 0 then W = J*Y. If flag < 0 then W = J'*Y. In each case, J is not formed explicitly. fsolve uses Jinfo to compute the preconditioner.

    Note 'Jacobian' must be set to 'on' for Jinfo to be passed from fun to jmfun.

See Nonlinear Minimization with a Dense but Structured Hessian and Equality Constraints for a similar example.
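A minimal sketch of such a multiply function is shown below. For illustration only, Jinfo is taken to be the Jacobian matrix itself, which defeats the purpose for genuinely large problems but shows the required interface:

    function W = jmfun(Jinfo,Y,flag)
    % Illustrative sketch: here Jinfo is simply the Jacobian matrix J
    J = Jinfo;
    if flag == 0
       W = J'*(J*Y);
    elseif flag > 0
       W = J*Y;
    else
       W = J'*Y;
    end

It would be used together with an objective function that returns [F,Jinfo] and the options

    options = optimset('Jacobian','on','JacobMult',@jmfun);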
JacobPattern
Sparsity pattern of the Jacobian for finite differencing. If it is not convenient to compute the Jacobian matrix J in fun, fsolve can approximate J via sparse finite differences, provided the structure of J -- i.e., the locations of the nonzeros -- is supplied as the value for JacobPattern. In the worst case, if the structure is unknown, you can set JacobPattern to be a dense matrix and a full finite-difference approximation is computed in each iteration (this is the default if JacobPattern is not set). This can be very expensive for large problems, so it is usually worth the effort to determine the sparsity structure.
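For example, if the Jacobian of an n-equation system is known to be tridiagonal, the pattern might be supplied as follows (a sketch; the size n and the tridiagonal structure are assumptions for illustration):

    n = 1000;
    Jstr = spdiags(ones(n,3),-1:1,n,n);      % sparse matrix of ones on three diagonals
    options = optimset('JacobPattern',Jstr);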
MaxPCGIter
Maximum number of PCG (preconditioned conjugate gradient) iterations (see the Algorithm section below).
PrecondBandWidth
Upper bandwidth of preconditioner for PCG. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations.
TolPCG
Termination tolerance on the PCG iteration.
TypicalX
Typical x values.

Medium-Scale Algorithm Only. These parameters are used only by the medium-scale algorithm:

DerivativeCheck
Compare user-supplied derivatives (Jacobian) to finite-differencing derivatives.
DiffMaxChange
Maximum change in variables for finite-differencing.
DiffMinChange
Minimum change in variables for finite-differencing.
LevenbergMarquardt
Choose the Levenberg-Marquardt algorithm over the Gauss-Newton algorithm.
LineSearchType
Line search algorithm choice.

Examples

Example 1. This example finds a zero of the system of two equations and two unknowns

    2*x(1) - x(2) = exp(-x(1))
    -x(1) + 2*x(2) = exp(-x(2))

Thus we want to solve the following system for x

    2*x(1) - x(2) - exp(-x(1)) = 0
    -x(1) + 2*x(2) - exp(-x(2)) = 0

starting at x0 = [-5 -5].

First, write an M-file that computes F, the values of the equations at x.

    function F = myfun(x)
    F = [2*x(1) - x(2) - exp(-x(1));
         -x(1) + 2*x(2) - exp(-x(2))];

Next, call an optimization routine.

    x0 = [-5; -5];                        % Make a starting guess at the solution
    options = optimset('Display','iter'); % Option to display output
    [x,fval] = fsolve(@myfun,x0,options)  % Call optimizer

After 28 function evaluations, a zero is found.

                                        Norm of      First-order    CG-
    Iteration  Func-count     f(x)         step       optimality  iterations
        1          4        47071.2            1       2.29e+004      0
        2          7        6527.47      1.45207       3.09e+003      1
        3         10        918.372      1.49186         418          1
        4         13         127.74      1.55326        57.3          1
        5         16        14.9153      1.57591        8.26          1
        6         19       0.779051      1.27662        1.14          1
        7         22     0.00372453     0.484658       0.0683         1
        8         25   9.21617e-008    0.0385552      0.000336        1
        9         28   5.66133e-017  0.000193707      8.34e-009       1
    Optimization terminated successfully:
     Relative function value changing by less than OPTIONS.TolFun

    x =
        0.5671
        0.5671

    fval =
      1.0e-008 *
       -0.5320
       -0.5320

Example 2. Find a matrix x that satisfies the equation

    x*x*x = [1,2; 3,4]

starting at the point x = [1,1; 1,1].

First, write an M-file that computes the equations to be solved.

    function F = myfun(x)
    F = x*x*x - [1,2;3,4];

Next, invoke an optimization routine.

    x0 = ones(2,2);                       % Make a starting guess at the solution
    options = optimset('Display','off');  % Turn off Display
    [x,Fval,exitflag] = fsolve(@myfun,x0,options)

The solution is

    x =
       -0.1291    0.8602
        1.2903    1.1612

    Fval =
      1.0e-03 *
        0.1541   -0.1163
        0.0109   -0.0243

    exitflag =
         1

and the residual is close to zero.

    sum(sum(Fval.*Fval))

    ans =
      3.7974e-008

Notes

If the system of equations is linear, then \ (the backslash operator; see help slash) should be used for better speed and accuracy. For example, consider the following linear system of equations:

    3*x(1) + 11*x(2) - 2*x(3) = 7
      x(1) +    x(2) - 2*x(3) = 4
      x(1) -    x(2) +   x(3) = 19

The problem is formulated and solved as

    A = [ 3 11 -2; 1 1 -2; 1 -1 1];
    b = [ 7; 4; 19];
    x = A\b

    x =
       13.2188
       -2.3438
        3.4375

Algorithm

The methods are based on the nonlinear least squares algorithms also used in lsqnonlin. The advantage of using a least squares method is that if the system of equations is never zero due to small inaccuracies, or because it just does not have a zero, the algorithm still returns a point where the residual is small. However, if the Jacobian of the system is singular, the algorithm may converge to a point that is not a solution of the system of equations (see Limitations and Diagnostics below).

Large-Scale Optimization. By default fsolve chooses the large-scale algorithm. The algorithm is a subspace trust region method and is based on the interior-reflective Newton method described in [1],[2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust Region Methods for Nonlinear Minimization, and Preconditioned Conjugate Gradients in the "Large-Scale Algorithms" section.

Medium-Scale Optimization. fsolve with options.LargeScale set to 'off' uses the Gauss-Newton method [3] with line search. Alternatively, a Levenberg-Marquardt method [4], [5], [6] with line search may be selected. The choice of algorithm is made by setting options.LevenbergMarquardt: setting it to 'on' (and options.LargeScale to 'off') selects the Levenberg-Marquardt method.

The default line search algorithm, i.e., options.LineSearchType set to 'quadcubic', is a safeguarded mixed quadratic and cubic polynomial interpolation and extrapolation method. A safeguarded cubic polynomial method can be selected by setting options.LineSearchType to 'cubicpoly'. This method generally requires fewer function evaluations but more gradient evaluations. Thus, if gradients are being supplied and can be calculated inexpensively, the cubic polynomial line search method is preferable. The algorithms used are described fully in Standard Algorithms.
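For example, to select the Levenberg-Marquardt method together with the cubic polynomial line search:

    options = optimset('LargeScale','off', ...
                       'LevenbergMarquardt','on', ...
                       'LineSearchType','cubicpoly');
    x = fsolve(@myfun,x0,options);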

Diagnostics

fsolve may converge to a nonzero point and give this message

    Optimizer is stuck at a minimum that is not a root
    Try again with a new starting guess

In this case, run fsolve again with other starting values.

Limitations

The function to be solved must be continuous. When successful, fsolve only gives one root. fsolve may converge to a nonzero point, in which case, try other starting values.

fsolve only handles real variables. When x has complex variables, the variables must be split into real and imaginary parts.
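As an illustrative sketch, the complex scalar equation z^2 = -1 can be posed as a real two-equation system by writing z = x(1) + i*x(2) and splitting the residual (the function name zsquare is hypothetical):

    function F = zsquare(x)
    z = x(1) + x(2)*sqrt(-1);       % rebuild the complex variable
    G = z^2 + 1;                    % complex residual for z^2 = -1
    F = [real(G); imag(G)];         % split into real and imaginary parts

Calling fsolve(@zsquare,x0) with a real starting vector x0 then returns a point corresponding to one of the roots z = ±i, depending on the starting guess.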

Large-Scale Optimization. Currently, if the analytical Jacobian is provided in fun, the options parameter DerivativeCheck cannot be used with the large-scale method to compare the analytic Jacobian to the finite-difference Jacobian. Instead, use the medium-scale method to check the derivative with options parameter MaxIter set to 0 iterations. Then run the problem again with the large-scale method. See Table 1-4, Large-Scale Problem Coverage and Requirements for more information on what problem formulations are covered and what information must be provided.
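A sketch of that workaround, using the option names documented above:

    % Check the user-supplied Jacobian with the medium-scale method, no iterations
    options = optimset('Jacobian','on','DerivativeCheck','on', ...
                       'LargeScale','off','MaxIter',0);
    fsolve(@myfun,x0,options);

    % Then solve with the large-scale method
    options = optimset('Jacobian','on','LargeScale','on');
    x = fsolve(@myfun,x0,options);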

The preconditioner computation used in the preconditioned conjugate gradient part of the large-scale method forms J^T J (where J is the Jacobian matrix) before computing the preconditioner; therefore, a row of J with many nonzeros, which results in a nearly dense product J^T J, may lead to a costly solution process for large problems.

See Also

@ (function_handle), \, inline, lsqcurvefit, lsqnonlin, optimset

References

[1]  Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.

[2]  Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189-224, 1994.

[3]  Dennis, J. E. Jr., "Nonlinear Least Squares," State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269-312.

[4]  Levenberg, K., "A Method for the Solution of Certain Problems in Least Squares," Quarterly Applied Mathematics 2, pp. 164-168, 1944.

[5]  Marquardt, D., "An Algorithm for Least-squares Estimation of Nonlinear Parameters," SIAM Journal Applied Mathematics, Vol. 11, pp. 431-441, 1963.

[6]  Moré, J.J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," Numerical Analysis, ed. G.A. Watson, Lecture Notes in Mathematics 630, Springer-Verlag, pp. 105-116, 1977.


Source: http://matrix.etseq.urv.es/manuals/matlab/toolbox/optim/fsolve.html
