>> x0 = [-5; -5]
x0 =
    -5
    -5
>> options=optimset('Display','iter')
options =
Display: 'iter'
MaxFunEvals: []
MaxIter: []
TolFun: []
TolX: []
FunValCheck: []
OutputFcn: []
ActiveConstrTol: []
NoStopIfFlatInfeas: []
BranchStrategy: []
DerivativeCheck: []
Diagnostics: []
DiffMaxChange: []
DiffMinChange: []
GoalsExactAchieve: []
GradConstr: []
GradObj: []
Hessian: []
HessMult: []
HessPattern: []
HessUpdate: []
InitialHessType: []
InitialHessMatrix: []
Jacobian: []
JacobMult: []
JacobPattern: []
LargeScale: []
LevenbergMarquardt: []
LineSearchType: []
MaxNodes: []
MaxPCGIter: []
MaxRLPIter: []
MaxSQPIter: []
MaxTime: []
MeritFunction: []
MinAbsMax: []
NodeDisplayInterval: []
NodeSearchStrategy: []
NonlEqnAlgorithm: []
PhaseOneTotalScaling: []
Preconditioner: []
PrecondBandWidth: []
RelLineSrchBnd: []
RelLineSrchBndDuration: []
ShowStatusWindow: []
Simplex: []
TolCon: []
TolPCG: []
TolRLPFun: []
TolXInteger: []
TypicalX: []
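Only 'Display' has been set here, so every other field remains empty ([]) and fsolve falls back to its internal defaults for them. If tighter stopping criteria were wanted, they could be supplied in the same optimset call. The snippet below is a minimal illustration; the tolerance and iteration values are arbitrary and not taken from the session above:

% Illustrative only: combine the iterative display with explicit
% stopping tolerances (values chosen arbitrarily for this example).
options = optimset('Display', 'iter', ...   % print one line per iteration
                   'TolFun',  1e-10,  ...   % tolerance on the function value
                   'TolX',    1e-10,  ...   % tolerance on the step in x
                   'MaxIter', 200);         % cap on the number of iterations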
>> [x,fval]=fsolve(@myfun_eq1,x0,options,-1)
Warning: Default trust-region dogleg method of FSOLVE cannot
handle non-square systems; switching to Gauss-Newton method.
> In fsolve at 232
                                                Directional
 Iteration  Func-count      Residual    Step-size    derivative
     0           3          94142.4
     1          12         0.539601         5.69          4.34
Conditioning of Gradient Poor - Switching To LM method
     2          18     4.73984e-007         1.07      -0.00098      0.5
     3          24     1.22382e-025         1.05    -4.59e-016      0.240855
     4          25      6.5685e-029            1    -2.39e-025      0.117548
Optimization terminated: directional derivative along
search direction less than TolFun and infinity-norm of
gradient less than 10*(TolFun+TolX).
x =
    0.5671
    0.5671
fval =
  8.1046e-015
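The definition of myfun_eq1 does not appear in this listing. The call above uses the older fsolve syntax in which any arguments given after options (here, -1) are forwarded to the user-supplied function. The sketch below only illustrates the required interface; the equations in it are placeholders, not the actual system solved in the session.

function F = myfun_eq1(x, a)
% Hypothetical sketch of the interface expected by the fsolve call above:
% the extra argument supplied after 'options' (here a = -1) is forwarded
% to this function, which must return the residual vector F(x).
% The equations below are placeholders, not the system used in the session.
F = [x(1) + a*exp(-x(1));      % residual of the first equation
     x(2) + a*exp(-x(2))];     % residual of the second equation
end

In current MATLAB the same parameter can be passed with an anonymous function instead of the trailing-argument syntax, e.g. fsolve(@(x) myfun_eq1(x,-1), x0, options).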