### Table of Contents

- Introduction
- The CONOPT Algorithm
- Iteration Output
- Termination Messages
- Preprocessor
- Adjust Initial Point
- Phase 0 - Finding an Initial Feasible Solution
- Transition between SLP and SQP
- Bad Iterations
- Saddle Points and Directions of Negative Curvature
- Alternative Sub-Models
- Scaling
- CNS Models
- Multiple Threads
- Loss of Feasibility
- Stalling
- APPENDIX A - Options

# Introduction

CONOPT4 is an NLP solver derived from CONOPT3, but with many improvements. This documentation is intended to be self-contained, so there will be some repetition of material from the CONOPT3 documentation. In the following we will refer to CONOPT4 simply as CONOPT.

Nonlinear models created with GAMS must be solved with a nonlinear programming (NLP) algorithm. There are many solvers available, and the number is growing. It is almost impossible to predict how easy or difficult it is to solve a particular model with a particular algorithm, especially for NLP models, so GAMS cannot automatically select the best algorithm. The only reliable way to find which solver to use for a particular class of models is to experiment. However, it may be useful to classify different solvers and describe where CONOPT fits into this picture.

The most important distinction between NLP solvers is whether they attempt to find a local or a global solution. Solvers that attempt to find a global solution, called Global Solvers, usually rely on some branch-and-bound technique and usually cannot solve very large models. In contrast, most Local Solvers, which only search for a locally optimal solution, can work with much larger models. If the model has the right mathematical properties, e.g., if it is convex, then Local Solvers will find a global optimum. Unfortunately, the theory for testing whether a general NLP model is convex is not well developed, and the problem is expected to be in the class of hard problems. CONOPT belongs to the class of Local Solvers, and it can be used for very large models.

Within the class of Local Solvers, the main distinction is between solvers based on Active Set methods and Interior Point methods. Active Set solvers are generally good for models with many equality constraints and few bounds. Interior Point solvers are generally good for models with many inequalities and many bounds, where the combinatorial question of which inequalities and bounds are active plays an important role. CONOPT is an Active Set solver.

Finally, within Active Set solvers we distinguish between Feasible Path methods and methods where feasibility is only achieved in the final point, together with optimality. Feasible Path methods iterate through a sequence of feasible points, gradually improving the solution. The assumption is that the feasibility of the intermediate points gives more relevant derivatives and therefore better directions towards the optimal solution, which reduces the number of iterations relative to methods that do not enforce feasibility. On the other hand, each iteration is more expensive because maintaining feasibility comes at a cost. CONOPT is a Feasible Path method, and great care has been taken to find an initial feasible solution and to reduce the cost of maintaining feasibility.

In addition to NLP models, CONOPT has a good algorithm for CNS models (Constrained Nonlinear Systems, or square systems of equations), and it has been used to solve CNS models with many millions of variables and constraints.

This documentation will describe the basic algorithm and how the iteration output can help the user understand the solution progress. It will also describe some options that can be used to control the behavior of the algorithm. It should be emphasized that most users do not need any options. CONOPT has been designed to adjust its behavior based on statistics for the model and information about the progress of the algorithm.

# The CONOPT Algorithm

The algorithm used in CONOPT is based on the GRG algorithm first suggested by Abadie and Carpentier (1969). The actual implementation has many modifications to make it efficient for large models. Here we will just give a short verbal description of the major steps in a generic GRG algorithm applied to the model:

    Min or max  f(x)
    s.t.        g(x) = 0
                lb <= x <= ub

where \(x\) represents the set of variables, \(f\) represents the objective function, \(g\) represents a vector of constraints, and \(lb\) and \(ub\) represent vectors of lower and upper bounds. Inequalities are handled by adding properly bounded slack variables. The key steps in any GRG algorithm are:

1. Initialize and find a feasible solution.
2. Compute the Jacobian of the constraints, \(J\).
3. Select a set of \(n\) basic variables, \(x_b\), from \(x\) such that \(B\), the sub-matrix of basic columns selected from \(J\), is nonsingular. Factorize \(B\). The remaining variables, \(x_n\), are called nonbasic.
4. Solve \(B^{T}u = df/dx_b\) for the multipliers \(u\).
5. Compute the reduced gradient, \(rgra = df/dx - J^{T}u\). \(rgra\) will be zero for the basic variables.
6. If \(rgra\) projected on the bounds is small, then stop. The current point is close to local optimality.
7. Select the set of superbasic variables, \(x_s\), as a subset of the nonbasic variables that have a reduced gradient pointing away from the bounds, and find a search direction, \(d_s\), for the superbasic variables based on \(rgra\) and possibly on second order information.
8. Perform a line search along the direction \(d_s\). For each step, \(x_s\) is changed in the direction \(d_s\) and the basic variables \(x_b\) are subsequently adjusted to satisfy the constraints \(g(x_b,x_s) = 0\) using an iterative Newton-like process with the factored \(B\) from step 3. When feasible, the objective function \(f(x)\) is evaluated and used to adjust the step and/or decide to terminate the search.
9. Go to step 2.
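
The multiplier solve \(B^{T}u = df/dx_b\) and the reduced-gradient computation can be sketched on a toy model (the one-constraint dense setup and all names here are ours for illustration; CONOPT works with large sparse matrices and an LU-factored basis):

```python
# Toy model: min f(x) = x1^2 + x2^2 + x3^2  s.t.  g(x) = x1 + x2 + x3 - 3 = 0.
def f_grad(x):
    return [2.0 * xi for xi in x]          # df/dx

J = [1.0, 1.0, 1.0]                        # Jacobian row of the single constraint g

x = [1.0, 0.5, 1.5]                        # a feasible point: g(x) = 0
grad = f_grad(x)

# Choose x1 as the (single) basic variable, so B = J[0] = 1 is nonsingular.
B = J[0]

# Solve B'u = df/dx_b for the multiplier u (a scalar in this tiny example).
u = grad[0] / B

# Reduced gradient rgra = df/dx - J'u; it is zero for the basic variable.
rgra = [grad[i] - J[i] * u for i in range(3)]
print(rgra)  # -> [0.0, -1.0, 1.0]
```

The nonzero components of `rgra` belong to the nonbasic variables and indicate whether moving them away from their current values can improve the objective.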

The individual steps are of course much more detailed in the practical implementation in CONOPT. Step 1 consists of several pre-processing steps that usually create a smaller and easier model to be optimized. Step 1 also has a special Phase 0 and a scaling procedure as described in the following sections. The work involving the basis \(B\), the LU factorization of \(B\), and all operations involving \(B\) (steps 3, 4, and 8) have been optimized to work even for very large models.

The selection of the search direction and the optimization of step lengths are specialized according to whether the model appears to be almost linear or not. For almost linear models some of the linear algebra work involving the matrices \(J\) and \(B\) is done using cheap LP-type updating techniques, second order information is not relevant in step 7, and the line search in step 8 has been improved by observing that the optimal step almost always will be determined by the first variable that reaches a bound. Similarly, when the model appears to be nonlinear, other aspects are optimized: the sets of basic and superbasic variables will often remain constant over several iterations, and the search direction is improved by using 2^{nd} order information provided by GAMS. The choice between the almost linear and the nonlinear components is made dynamically based on observations of the progress of the algorithm.

Finally, the one-dimensional search and the feasibility-restoring Newton iterations in step 8 have been optimized using quadratic inter- and extrapolations, and they take advantage of linearity in the relevant constraints.

# Iteration Output

When running CONOPT you will get a log-file that will look like this:

```
CONOPT 4         44.2.0 7d8c2acc Aug 17, 2023         WEI x86 64bit/MS Window

    C O N O P T   version 4.31
    Copyright (C) ARKI Consulting and Development A/S
                  Bagsvaerdvej 246 A
                  DK-2880 Bagsvaerd, Denmark

   The user model has 610 constraints and 666 variables
   with 2432 Jacobian elements, 1518 of which are nonlinear.
   The Hessian of the Lagrangian has 306 elements on the diagonal,
   714 elements below the diagonal, and 562 nonlinear variables.

   Iter Phase Ninf     Infeasibility   RGmax    NSB   Step InItr MX OK
      0   0          1.1198119428E+02 (Input point)

   The pre-triangular part of the model has 4 constraints and 11 variables.
   The post-triangular part of the model has 2 constraints and variables.
   There are 425 definitional constraints and defined variables.
   Preprocessed model has 179 constraints and 228 variables
   with 1953 Jacobian elements, 1828 of which are nonlinear.

   Iter Phase Ninf     Infeasibility   RGmax    NSB   Step InItr MX OK
                     9.8752676396E+01 (Full preprocessed model)
                     2.1093011202E-01 (After scaling)
                     2.0060649250E-01 (After adjusting individual variables)
      1   0          1.9323934856E-01                1.0E+00     1 T  T
      2   0          1.9273037524E-01                1.0E+00     1 T  T
      3   0          1.9272947599E-01                1.0E+00     1 T  T
      4   1     1    1.9121491986E-01 2.8E-02    24  6.0E-03    25 F  F
      5   1     1    1.8991312916E-01 7.9E-02     7  5.0E-03     7 F  F
      6   1     1    1.7954812670E-01 8.3E-02    50  7.9E-02     9 T  T
      7   1     1    1.6192877703E-01 1.0E-02    49  9.8E-01    10 F  F

   Iter Phase Ninf     Infeasibility   RGmax    NSB   Step InItr MX OK
      8   1     1    1.5263773874E-01 3.8E-01    15  1.8E-01    15 F  F
      9   1     1    9.4548821232E-02 3.9E-01    49  1.0E+00    22 T  T
     10   1     1    5.5194560481E-02 1.0E-01    40  8.6E-01    17 T  T
     11   1     1    3.8130110738E-02 1.2E-01    40  1.0E+00    16 T  T
     12   1     1    2.1971971978E-02 9.0E-02    29  1.0E+00    14 T  T
     13   1     1    6.3168158469E-05 3.7E-01     7  1.2E+00     8 T  T

 ** Feasible solution. Value of objective =    206.562941019

   Iter Phase Ninf     Objective       RGmax    NSB   Step InItr MX OK
     14   3          3.2026635060E+02 1.2E+02    18  2.4E-01    25 T  T
     15   3          4.0972213088E+02 2.1E+02    43  1.8E-01    25 F  F
     16   3          4.7700306262E+02 1.7E+02    44  1.9E-01    25 F  F
     17   3          4.9785498881E+02 6.9E+02    48  1.3E-01    19 F  F
     18   3          5.3401705259E+02 6.5E+02    48  1.0E+00    21 T  T
     19   3          5.5254890008E+02 8.0E+02    46  8.7E-01    17 T  T
     20   3          5.6360530732E+02 5.5E+02    48  1.0E+00     9 T  T
     21   3          5.7367560270E+02 6.5E+02    47  1.0E+00    11 T  T
     22   3          5.8820821496E+02 6.2E+02    47  3.1E+00     8 F  F
     23   3          5.9375570902E+02 7.6E+02    47  1.0E+00    15 T  T

   Iter Phase Ninf     Objective       RGmax    NSB   Step InItr MX OK
     24   3          6.2588529990E+02 7.6E+02    46  4.7E+00    14 T  T
     25   3          6.4664801061E+02 8.1E+02    47  3.7E+00    13 F  F
     26   3          6.6175989091E+02 7.4E+02    48  2.9E+00    16 F  T
     27   3          6.7789059174E+02 6.8E+02    48  9.3E+00    14 F  F
     28   3          6.9697407332E+02 7.1E+02    48  1.2E+01    19 T  T
     29   3          7.1098057393E+02 7.1E+02    48  1.4E+01    15 T  T
     30   4          8.6895316171E+02 1.9E+03    48  2.4E-01    18 F  T
     31   4          9.5309471994E+02 1.5E+03    49  8.1E-02    47 F  F
     32   4          1.0261710184E+03 2.1E+03    49  8.9E-02    46 F  F
     33   4          1.0583894431E+03 7.8E+02    49  4.2E-02    47 F  F

   Iter Phase Ninf     Objective       RGmax    NSB   Step InItr MX OK
     34   4          1.1225408637E+03 1.2E+04    49  8.8E-02    53 F  F
     35   4          1.1679673256E+03 1.3E+02    49  9.4E-02    47 F  F
     36   4          1.2151335208E+03 9.0E+02    49  1.4E-01    35 F  F
     37   4          1.2291455843E+03 7.3E+01    49  1.2E-01    24 F  F
     38   4          1.2392640422E+03 7.3E+01    49  1.2E-01    14 F  F
     39   4          1.2705894285E+03 7.9E+01    49  6.9E-01    17 F  F
     40   4          1.2785695198E+03 9.9E+01    49  6.0E-01    12 T  T
     41   4          1.2823556479E+03 1.2E+01    48  1.0E+00    12 T  T
     42   4          1.2823990320E+03 1.2E+00    43  1.0E+00    10 F  T
     43   4          1.2823997232E+03 4.8E-02    43  1.0E+00    13 F  T

   Iter Phase Ninf     Objective       RGmax    NSB   Step InItr MX OK
     44   4          1.2823997236E+03 9.2E-04    43  1.0E+00    11 F  T
     45   4          1.2823997236E+03 7.4E-06    43  1.0E+00     8 F  T
     46   4          1.2823997236E+03 1.9E-08    43

 ** Optimal solution. Reduced gradient less than tolerance.
```

The first few lines identify the version of CONOPT. CONOPT is updated at regular intervals, and if you want to report a problem, it is important to identify the version you are using.

The next lines show the size of the model measured in constraints, variables, and Jacobian elements; these numbers are also shown in the GAMS log. Usually, you will also see information about the Hessian (the matrix of 2^{nd} derivatives) of the Lagrangian (a function combining the objective function and the constraints, \(L = f(x) + u^{T}g(x)\) where u is a vector of dual variables). For some very large or dense models and models with few degrees of freedom, 2^{nd} order information is not automatically available and these lines will be missing. You can force 2^{nd} order information to be used with the option Flg_Hessian.

When CONOPT has received the model from GAMS it evaluates the constraints and determines the initial (sum of) Infeasibilities in the input point. It then executes a preprocessor that tries to reduce the size of the model and to find a solution that is closer to feasibility. The information from the preprocessor depends on the model and is described in the section Preprocessor.

After the preprocessor finishes, the model is scaled, and all later reporting of infeasibilities refers to the scaled model.

The remaining part of the log file describes the optimization iterations with the iteration number in the first column. For very small models you may not see all iterations in the iteration log. The options Frq_Log_SlpSqp and Frq_Log_Simple can be used to increase or decrease the log frequency.

The type of each iteration is described by the value of "Phase" in column 2. During Phase 0, 1, and 2 the model is infeasible and the Sum of Infeasibilities in column 4, labeled "Infeasibility", is being minimized. During Phase 1 and 2 the number in column "Ninf" tells how many constraints are still infeasible. These infeasibilities are minimized subject to the feasible constraints remaining feasible, and Ninf will therefore usually not increase. During Phase 3 and 4 the model is feasible and the actual objective function, also shown in column 4 and now labeled "Objective", is either minimized or maximized according to the Solve statement in GAMS.

Phase 0 iterations are Newton-like iterations. The background is described in the section Phase 0 - Finding an Initial Feasible Solution. During Phase 1 and 3 the model behaves almost linearly and CONOPT applies special linear iterations that take advantage of the linearity. These iterations are usually combined into several inner "Sequential Linear Programming" (SLP) iterations, and the number of these inner iterations is shown in the "InItr" column. During Phase 2 and 4 the model behaves more nonlinearly, either in the objective (Phase 4 only) or in the constraints, and most aspects of the iterations are therefore changed: the line search is more elaborate, and CONOPT needs second order information to improve the convergence. CONOPT uses inner "Sequential Quadratic Programming" (SQP) iterations based on exact second derivatives computed by GAMS. The number of these inner iterations is also shown in the "InItr" column.

The algorithm in Phase 1 and 3 is almost the same; the only difference is in the set of constraints (in Phase 3 all constraints and in Phase 1 the feasible constraints) and in the objective function (in Phase 3 the user objective function and in Phase 1 the sum of the residuals in the infeasible constraints). The same applies to Phase 2 vs. Phase 4.

The SLP or SQP iterations will define a search direction on the tangent plane of the nonlinear constraints. CONOPT will follow this direction and use one or more steps to find the best solution in the direction. This process is called the one-dimensional search. Because CONOPT is a Feasible Path Method and the constraints in general are nonlinear, the points on the tangent plane are usually not feasible and CONOPT will therefore adjust some of the variables to make the constraints feasible, before the objective function is evaluated in a feasible point. The feasibility restoring process is based on an adaption of Newton's method where only a subset of the variables, the Basic variables, are adjusted.
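
A minimal sketch of this feasibility-restoring Newton process, with a single basic variable and illustrative function names (CONOPT itself adjusts many basic variables at once using the factored basis \(B\)):

```python
# Sketch: after the superbasic variable xs takes a step, the basic variable
# xb is adjusted so that g(xb, xs) = 0 again, using the derivative dg/dxb
# in place of CONOPT's factored basis.
def restore_feasibility(g, dg_dxb, xb, xs, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        r = g(xb, xs)                 # residual of the constraint
        if abs(r) < tol:
            return xb                 # feasible again; objective can be evaluated
        xb -= r / dg_dxb(xb, xs)      # Newton step on the basic variable
    raise RuntimeError("Newton did not converge; the line search would "
                       "then retry with a shorter step")

# Example constraint g(xb, xs) = xb**2 + xs - 2 = 0 after xs has moved to 1.0:
xb = restore_feasibility(lambda b, s: b * b + s - 2.0,
                         lambda b, s: 2.0 * b, xb=1.5, xs=1.0)
print(round(xb, 6))  # -> 1.0
```

When the Newton iterations only converge for small steps, the line search is cut short, which is reported as OK = F in the iteration log.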

The remaining numbers in the iteration log describe the one-dimensional search. The column "NSB" for Number of SuperBasics defines the degree of freedom or the dimension of the current search space, and "RGmax" measures the largest reduced gradient among the non-optimal variables. RGmax should eventually converge to zero, but convergence will in general not be monotone. The last two columns labeled "MX" and "OK" give information about the one-dimensional search. OK = T means that the one-dimensional search was well-behaved, and the step described in column "Step" is close to the local optimum in the selected search direction. OK = F means that the one-dimensional search was stopped short of an optimal step, usually because the Newton iterations used to find a feasible solution only converge for small steps. MX = T means that the one-dimensional search was terminated because a variable reached a bound or an inequality became binding. This is always combined with OK = T. MX = F means that the step length was determined by the nonlinearity of the objective function.

The step suggested by the SLP or SQP procedure is always 1.0, so the linear or quadratic approximation is good when the optimal step is 1.0. Iterations with step 1.0 are often faster because it is sufficient to evaluate a single step. The last iterations in phase 1, iterations 11 and 12 in the log-file above, are very good, and the last iterations before optimality, iterations 41 to 46 in phase 4, are also very good.

In the initial part of the optimization, when we are far from the final solution, the direction found by the SLP or SQP procedure will often suggest large changes in the variables, and the feasibility-restoring Newton iterations may not converge due to large nonlinearities. We will therefore see more iterations with OK = F and Step < 1 in the beginning, e.g., iterations 15 to 17 in phase 3 and iterations 31 to 39 in phase 4.

# Termination Messages

When CONOPT has finished it will show a termination message describing why it has finished. This section will show most of these messages followed by a short explanation. It will also show the Model Status returned to GAMS in `<model>.ModelStat`, where `<model>` represents the name of the GAMS model. The Solver Status returned in `<model>.SolveStat` will be given if it is different from 1 (Normal Completion). We will in all cases first show the message from CONOPT followed by a short explanation.

The first 4 messages are used for optimal solutions and CONOPT will return ModelStat = 2 (Locally Optimal), except as noted below:

** Optimal solution. There are no superbasic variables.

The solution is a locally optimal corner solution. The solution is determined by constraints only, and both the values of the variables and the value of the objective function are usually very accurate. In some cases, CONOPT can determine that the solution is globally optimal, and it will return ModelStat = 1 (Optimal).

** Optimal solution. Reduced gradient less than tolerance.

The solution is a locally optimal interior solution. The largest component of the reduced gradient is less than the optimality tolerance, `Tol_Optimality`, with default value 1.e-7. The value of the objective function is very accurate while the values of the variables can be less accurate due to a flat objective function in the interior of the feasible area.

** Optimal solution. The error on the optimal objective function value estimated from the reduced gradient and the estimated Hessian is less than the minimal tolerance on the objective.

The solution is a locally optimal interior solution. The largest component of the reduced gradient is larger than the optimality tolerance, `Tol_Optimality`. However, when the reduced gradient is scaled with second order information the solution seems optimal. For this to happen the objective must be large or the reduced objective must have large second derivatives, so it is advisable to scale the model if possible.

** Optimal solution. Convergence too slow. The change in objective has been less than xx.xx for xx consecutive iterations.

CONOPT stops with a solution that seems optimal. The solution process is stopped because of slow progress. The largest component of the reduced gradient is greater than the optimality tolerance, `Tol_Optimality`, but less than `Tol_Optimality` multiplied by the largest Jacobian element divided by 100. The model must have large derivatives, so it is advisable to scale it.

The four messages above all exist in versions where "Optimal" is replaced by "Infeasible" and ModelStat will be 5 (Locally Infeasible) or 4 (Infeasible). The infeasible messages indicate that the Sum of Infeasibility objective function in Phase 1 or 2 is locally minimal, but positive. If the model is convex, it does not have a feasible solution; if the model is non-convex it may have a feasible solution in a different region. See the section on Initial Values in the Tutorial and Examples section of the GAMS User’s Guide for hints on what to do.

** Feasible solution. Convergence too slow. The change in objective has been less than xx.xx for xx consecutive iterations.

** Feasible solution. The tolerances are minimal and there is no change in objective although the reduced gradient is greater than the tolerance.

The two messages above tell that CONOPT stops with a feasible solution. In the first case the solution process is very slow and in the second there is no progress at all and the optimality criteria have not been satisfied. These messages are accompanied by ModelStat = 7 (Feasible Solution) and SolveStat = 4 (Terminated by Solver). The problem can be caused by discontinuities if the model is of type DNLP; in this case you should consider alternative, smooth formulations as discussed in section Reformulating DNLP Models in the Tutorials and Examples. The problem can also be caused by a poorly scaled model. See section Good NLP formulation and in particular the section on Scaling variables and Equations in Tutorials and Examples. Finally, it can be caused by stalling as described in section Stalling. The two messages also exist in a version where "Feasible" is replaced by "Infeasible". ModelStat is in this case 6 (Intermediate Infeasible) and SolveStat is still 4 (Terminated by Solver); these versions tell that CONOPT cannot make progress towards feasibility, but the Sum of Infeasibility objective function does not have a well-defined local minimum.

** Unbounded solution. A variable has reached 'infinity'. Largest legal value (Lim_Variable) is xx.xx

CONOPT considers a solution to be unbounded if a variable exceeds the indicated value of `Lim_Variable` (default 1.e15) and it returns ModelStat = 3 (Unbounded). The check for unboundedness is done at every iteration, which means that CONOPT will stop if an intermediate solution has a variable that is very large, even if none of the variables in the optimal solution have large values. The variable that has reached 'infinity' is shown in the listing file. You should check whether the solution appears unbounded, or the problem is caused by the scaling of the unbounded variable. If the model seems correct you are advised to scale it. There is also a lazy solution: you can increase the largest legal value, `Lim_Variable`, as mentioned in the section on options. However, you will most likely pay through reduced reliability or increased solution times. Unlike LP models, where an unbounded model is recognized by an unbounded ray and the iterations are stopped far from 'infinity', CONOPT will make a line search and move to a region with large values of the variables. This may lead to bad scaling and derived tolerance and round-off problems, including problems of determining whether a solution is feasible or not.

The message above exists in a version where "Unbounded" is replaced by "Infeasible" and ModelStat is 5 (Locally Infeasible). You may also see a message like

** Infeasible solution. A free variable exceeds the allowable range. Current value is xx.xx and current upper bound (Lim_Variable) is xx.xx

These Infeasible messages indicate that some variables become very large before a feasible solution has been found. You should again check whether the problem is caused by the scaling of the unbounded variable. If the model seems correct you should scale it.

** The time limit has been reached.

The time or resource limit defined in GAMS, either by default (usually 1000 seconds) or by `Option ResLim = xx;` or `<model>.ResLim = xx;` statements, has been reached. CONOPT will return with SolveStat = 3 (Resource Interrupt) and ModelStat either 6 (Intermediate Infeasible) or 7 (Feasible Solution).

** The iteration limit has been reached.

The iteration limit defined in GAMS, either by default (usually 2000000000 iterations) or by `Option IterLim = xx;` or `<model>.IterLim = xx;` statements, has been reached. CONOPT will return with SolveStat = 2 (Iteration Interrupt) and ModelStat either 6 (Intermediate Infeasible) or 7 (Feasible Solution).

** Domain error(s) in nonlinear functions.

-or-

** Domain error(s) in nonlinear functions. NaN returned

-or-

** Domain error(s) in nonlinear functions. Residual too large. Check bounds on variables.

The number of function evaluation errors or bad function values has reached the limit defined in GAMS by `Option DomLim = xx;` or `<model>.DomLim = xx;` statements or the default limit of 0 function evaluation errors. CONOPT will return with SolveStat = 5 (Evaluation Error Limit) and ModelStat either 6 (Intermediate Infeasible) or 7 (Feasible Solution).

Many of the nonlinear functions available with GAMS are not defined for all values of their arguments. Log is not defined for negative arguments, Exp overflows for large arguments, and division by zero is illegal. To avoid evaluating functions outside their domain of definition you should add reasonable variable bounds. CONOPT will in return guarantee that the nonlinear functions never are evaluated with variables outside their bounds. For more advice, see details in section Good NLP Formulations in the Tutorials and Examples section of the GAMS User’s Guide.
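
As a small, hypothetical illustration (not CONOPT code) of why bounds help: with a positive lower bound on `x`, corresponding to `x.lo = 1e-6;` in GAMS, `log(x)` can never be evaluated outside its domain, because CONOPT guarantees that nonlinear functions are only evaluated at points within the bounds.

```python
import math

LB = 1e-6  # mimics the GAMS bound x.lo = 1e-6

def log_term(x):
    # CONOPT only evaluates functions at points satisfying the bounds,
    # so with LB > 0 the log is always well defined.
    if x < LB:
        raise ValueError("outside bounds - in GAMS this would be a domain error")
    return math.log(x)

print(log_term(0.5))        # within bounds: well defined
try:
    log_term(-1.0)          # without the bound this would be math.log(-1.0)
except ValueError as err:
    print("rejected:", err)
```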

** An initial derivative is too large (larger than xx.xx).
Scale the variables and/or equations or add bounds.
<var> appearing in <equ>: Initial Jacobian element too large = xx.xx

and

** A derivative is too large (larger than xx.xx).
Scale the variables and/or equations or add bounds.
<var> appearing in <equ>: Jacobian element too large = xx.xx

These two messages appear if a derivative or Jacobian element is very large, either in the initial point or in a later intermediate point. The relevant variable and equation pair(s) will be shown in the listing file and will guide you where to look. A large derivative means that the function changes very rapidly even after a very small change in the variable and it will most likely create numerical problems for many parts of the optimization algorithm. Instead of attempting to solve a model that most likely will fail, CONOPT will stop, and you are advised to adjust the model.

If the offending derivative is associated with a `Log(x)` or `1/x` term you may try to increase the lower bound on `x`. If the offending derivative is associated with an `Exp(x)` term you must decrease the upper bound on `x`. You may also try to scale the model, either manually or using the variable.Scale and/or equation.Scale option in GAMS as described in the section Scaling variables and Equations in the Tutorials and Examples part of the GAMS documentation.

# Preprocessor

The preprocessor in CONOPT3 identifies pre- and post-triangular variables and constraints, and it handles these variables and constraints in a special way to make some internal routines run more efficiently.

CONOPT goes one step further and distinguishes between a 'user model' as defined by the user via the GAMS language, and an 'internal model'. Pre-triangular variables and constraints are simply removed from the user model and are not present in the internal model. Post-triangular variables and constraints are collapsed into a single condensed objective function. And definitional constraints are eliminated. After the internal model has been solved, CONOPT translates the internal solution back into the solution for the user model and reports this solution to the user.

In addition to the simple pre- and post-triangular variables and constraints, the preprocessor in CONOPT looks at more possibilities for simplifying the model. Some of the new features are:

- Fixed variables are removed completely.
- Constraints that represent simple inequalities are identified and changed into simple bounds on the variables and the constraints are removed.
- Simple monotone constraints such as `exp(x) =L= c1` or `log(y) =L= c2` are converted into simple bounds on the variables and then removed.
- Forcing constraints such as `x1 + x2 =L= 0` with `x1.lo = 0` and `x2.lo = 0` are identified, the variables are fixed, and the constraints are removed.
- Redundant constraints such as `x1 + x2 =L= 3` with `x1.lo = 0`, `x1.up = 1`, `x2.lo = 0`, and `x2.up = 1` are identified and removed.
- Linear and monotone constraints are used to compute 'implied bounds' on many variables and these bounds can help CONOPT get a better starting point for finding an initial feasible solution. The implied bounds may also change a non-monotone constraint into a monotone constraint, and they may help identify redundant constraints.
- Some non-monotone constraints such as `sqr(x1) + sqr(x2) =L= 1` can also be used to derive implied bounds (here `-1 < x1 < +1` and `-1 < x2 < +1`) that both can improve the starting point and can be used to determine that other terms are monotone or redundant.
- Constraints with exactly two variables, e.g., simple linear identities such as `x1 =E= a*x2 + b` or simple monotone identities such as `x3 =E= exp(x4)`, are used to move bounds between the two variables, and this may result in more variables being included in the post-triangle.
- Linear constraints that are identical or proportional to others are identified and removed.
- Pairs of constraints that define a lower and an upper bound on the same linear expression or on proportional linear expressions, e.g., `1 =L= x1 + x2` and `2*x1 + 2*x2 =L= 4`, are turned into a single ranged constraint with a double-bounded slack variable.
- Nonlinear constraints that become linear when the pre-triangular variables are fixed are recognized as being linear with the resulting simplifications.
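
The implied-bound computation for a single linear `=L=` constraint can be sketched as follows (our own illustrative code, not CONOPT's implementation): for `sum(i, a(i)*x(i)) =L= b`, the smallest possible contribution of the other variables yields an upper bound on a variable with a positive coefficient.

```python
# Implied upper bound on x[k] (with a[k] > 0) from sum(i, a[i]*x[i]) <= b,
# assuming finite bounds lo/up on the remaining variables.
def implied_upper_bound(a, lo, up, b, k):
    # smallest possible contribution of every variable except x[k]
    rest = sum(a[i] * lo[i] if a[i] > 0 else a[i] * up[i]
               for i in range(len(a)) if i != k)
    return (b - rest) / a[k]

# Example: x1 + x2 <= 3 with 0 <= x1, x2 <= 10 implies x1 <= 3.
a, lo, up = [1.0, 1.0], [0.0, 0.0], [10.0, 10.0]
print(implied_upper_bound(a, lo, up, 3.0, 0))  # -> 3.0
```

With the tighter bounds `x1.up = 1` and `x2.up = 1` from the text, the same formula shows the constraint cannot be binding, which is exactly the redundancy test.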

Some of the new preprocessing steps are useful when solving sub-models in a Branch and Bound environment. A constraint like `x =L= M*y` where `y` is a binary variable fixed at either `0` or `1` is turned into a simple bound on `x`. And a constraint like `sum(i, x(i)) =L= Cap*y` (with `x.lo(i) = 0`) combined with `y` fixed at zero will force all `x(i)` to zero.

The preprocessor also identifies constructs that are easy to make feasible. There are currently two types:

- Penalty terms: A penalty constraint is defined as a constraint of the form `f(x1,x2,..) + p - n =E= 0`, where `p` and `n` are positive variables, and where `p` and `n` only appear in post-triangular constraints or in previously identified penalty constraints. For any feasible values of the x-variables it is easy to find values of `p` and `n` that make the penalty constraint feasible: `p = max(0,-f(x))` and `n = max(0,f(x))`. The definition is easily generalized to constraints where `p` and `n` have coefficients different from one and nonzero bounds; the essence is the presence of two linear unbounded terms of opposite sign.
- Minimax terms: A minimax group is defined as a group of constraints of the form `eq(i).. fi(x1,x2,..) =L= z` where `z` is common to the group and otherwise only appears in post-triangular constraints, and `z` is unbounded from above. For any feasible value of the x-variables it is easy to find a value of `z` that makes the minimax group feasible: `z = smax(i: fi(x))`. The definition is easily generalized to groups of constraints where `z` has coefficients different from one and where the direction of the inequality is reversed.
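
The recipe for making a penalty constraint feasible can be checked directly (illustrative Python, with `fx` standing for the value of `f(x)`):

```python
# Given any value fx of f(x), the choices p = max(0, -fx) and n = max(0, fx)
# are nonnegative and make the penalty constraint f(x) + p - n = 0 hold.
def penalty_values(fx):
    p = max(0.0, -fx)
    n = max(0.0, fx)
    return p, n

for fx in (-2.5, 0.0, 4.0):
    p, n = penalty_values(fx)
    assert p >= 0.0 and n >= 0.0
    print(fx + p - n)   # residual of the penalty constraint: always 0.0
```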

The preprocessor will also recognize definitional constraints: constraints of the form `x =E= f(y)`, where `x` is a free variable or the bounds on `x` cannot be binding, are called definitional constraints and `x` is called a defined variable. If there are many potential defined variables the preprocessor will select a recursive set and logically eliminate them from the internal model: the values of the defined variables are easily derived from the values of all other variables by solving the definitional constraints in their recursive order. These values are then substituted into the remaining constraints before their residuals are computed. The matrix of derivatives of the remaining constraints is computed from the overall matrix of derivatives via an efficient elimination of the triangular definitional constraints.
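
A sketch of how defined variables are derived from the other variables by a forward pass over the definitional constraints in their recursive order (the variable names and constraints are invented for illustration):

```python
# Values of the "other" (non-defined) variables are known first.
other = {"y": 2.0}

# Definitional constraints in recursive order: each defined variable may
# depend on the other variables and on earlier defined variables.
definitions = [
    ("x1", lambda v: v["y"] ** 2),        # x1 =E= sqr(y)
    ("x2", lambda v: v["x1"] + v["y"]),   # x2 =E= x1 + y
]

values = dict(other)
for name, f in definitions:
    values[name] = f(values)              # pure forward substitution, no iteration

print(values["x2"])  # -> 6.0
```

Because the set is recursive (triangular), a single forward substitution suffices; no Newton iterations are needed for these constraints.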

The following extract from the log-file for the otpop.gms model in the GAMS Library shows the main output from the preprocessor:

```
The user model has 77 constraints and 104 variables
with 285 Jacobian elements, 100 of which are nonlinear.
The Hessian of the Lagrangian has 17 elements on the diagonal,
33 elements below the diagonal, and 66 nonlinear variables.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
    0   0        4.0939901439E+03 (Input point)

The pre-triangular part of the model has 32 constraints and 43 variables.
The post-triangular part of the model has 9 constraints and variables.
There are 13 definitional constraints and defined variables.

Preprocessed model has 23 constraints and 39 variables
with 88 Jacobian elements, 25 of which are nonlinear.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.0510968588E+03 (Full preprocessed model)
                   4.5063072017E+01 (After scaling)
```

The first few lines define the size of the user model. After the description of the initial sum of infeasibilities there are three lines with statistics from the preprocessor. Note that the pre-triangular part of the model has more variables than constraints, in this case because there are fixed variables. The post-triangular part will always have the same number of constraints as variables, and the same is true for definitional constraints and defined variables.

After the statistics from the preprocessor the size of the resulting internal model is shown. The number of constraints in the preprocessed model, here 23, is computed from 77 constraints in the user model minus 32 removed in the pre-triangle, 9 removed in the post-triangle, and 13 removed as definitional constraints. Similarly, the number of variables in the preprocessed model, here 39, is computed from 104 variables in the user model minus 43 removed in the pre-triangle, 9 removed in the post-triangle, and 13 removed as defined variables. The Jacobian counts cannot be derived as easily. There is no Hessian information for the internal model; it is costly to compute the Hessian for the internal model, and it will in most cases be very dense, so all use of 2nd order information in the internal model is computed by mapping variables and constraints to the user model and using the Hessian of the user model.
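The bookkeeping described above can be verified with a few lines of arithmetic (the numbers are the otpop.gms statistics quoted in the log extract):

```python
# Size bookkeeping for the otpop.gms example: the preprocessed model is the
# user model minus the pre-triangular, post-triangular, and definitional parts.
user_cons, user_vars = 77, 104
pre_cons, pre_vars = 32, 43   # pre-triangular (extra variables are fixed ones)
post = 9                      # post-triangular: equal counts of cons and vars
defc = 13                     # definitional constraints / defined variables

internal_cons = user_cons - pre_cons - post - defc
internal_vars = user_vars - pre_vars - post - defc
print(internal_cons, internal_vars)   # 23 39
```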

Other examples of output from the preprocessor are explained in the section on Alternative Sub-Models.

There are some options that control the preprocessor:

- You can turn the preprocessor off using the option Flg_Prep = false. Note that CONOPT will still generate an internal model, but it will be very close to the user model.
- You can turn the search for definitional constraints off using option Flg_NoDefc = true. The time used for the search will usually be recovered later by cheaper iterations, except for models that solve in very few iterations.

# Adjust Initial Point

If the preprocessed model is infeasible, CONOPT starts with a new 'Adjust Initial Point' procedure. The procedure reduces the sum of infeasibilities by changing individual variables one at a time. The procedure is very cheap because changing a single variable only involves a small part of the overall model. The procedure will as a by-product produce a large part of a good initial basis and many constraints will become feasible. If the Adjust Initial Point procedure reduces the infeasibilities, the log file will have a line as shown below (from otpop.gms):

```
 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.0510968588E+03 (Full preprocessed model)
                   4.5063072017E+01 (After scaling)
                   2.7782397440E+00 (After adjusting individual variables)
    5   0        1.5284282657E+00               1.0E+00     1 T T
```

As this example shows, the procedure can in some cases reduce the sum of infeasibilities significantly. In rarer cases it will find a feasible solution.

The time used by the Adjust Initial Point procedure can in rare cases, usually for very dense models, be large. If you experience this, you can turn the procedure off with option Flg_AdjIniP = false.

# Phase 0 - Finding an Initial Feasible Solution

CONOPT is a feasible path method, but the initial point defined by the user will not always be feasible, and the Adjust Initial Point procedure may not reduce the infeasibilities to zero.

Phase 0 searches for a feasible solution using a procedure based on Newton's method for solving nonlinear equalities. If a basis has been provided by GAMS (see the GAMS option Bratio for details) then the initial basis will be derived from the GAMS basis, with adjustments to make it non-singular. Otherwise, a set of basic variables is selected with preference for variables away from their bounds. It may not be possible to find a full basis with variables away from their bounds, so the initial basis may also contain some slack variables. The standard Newton's method defines a change direction for these basic variables, but bounds and basic slacks will sometimes prevent this direction from reducing the sum of infeasibilities. CONOPT therefore uses an LP-like pricing step to select a subset of the constraints for which Newton's method will reduce the sum of infeasibilities. Newton's method applied to this subset gives a search direction, and if a full step can be taken without reaching any bounds, the constraints in the subset will usually become feasible rather quickly. If bounds become active, the basis is changed and the process is repeated. The constraints that are not in the Newton subset, usually few, will subsequently be made feasible using a phase 1 procedure.
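The Newton step at the core of this procedure can be sketched as follows. This is a simplified, unconstrained version for illustration only, not CONOPT's implementation; the two-equation system and the starting point are made up:

```python
# Minimal sketch of the Newton idea behind Phase 0: repeatedly solve
# J(x) d = f(x) and set x <- x - d for a square subset of constraints.
# Example subset:  f1 = x^2 + y - 3 = 0,  f2 = x + y - 2 = 0

def newton_step(x, y):
    f1, f2 = x * x + y - 3.0, x + y - 2.0
    # Jacobian [[2x, 1], [1, 1]]; solve the 2x2 system by Cramer's rule
    det = 2.0 * x - 1.0
    dx = (f1 - f2) / det
    dy = (2.0 * x * f2 - f1) / det
    return x - dx, y - dy

x, y = 2.0, 1.0    # starting point with a nonsingular Jacobian
for _ in range(20):
    x, y = newton_step(x, y)
assert abs(x * x + y - 3.0) < 1e-8 and abs(x + y - 2.0) < 1e-8
```

When a full Newton step would push a basic variable past a bound, CONOPT instead changes the basis and repeats, as described above.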

This log file extract showing Phase 0 is from GAMS Library model mathopt3.gms:

```
 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.7198687345E+03 (Full preprocessed model)
                   7.9643177481E+00 (After scaling)
                   1.0113241089E+00 (After adjusting individual variables)
    1   0        8.6918201909E-01               2.5E-01     2 F F
    2   0        4.1584560783E-01               1.0E+00     1 T T
    3  S0        9.1316232150E-01               1.0E+00     1 T T
    4   0        8.4915160660E-01               2.5E-01     1 F F
    5  S0        6.2619863264E-01               1.0E+00     1 T T
    6   0        6.1975571905E-01               1.0E+00     1 T T
    7   0        6.1974675345E-01               1.0E+00     1 T T
    8   1    2   1.5206900053E-01 1.0E+00     2 7.5E-01     2 F F
```

The `InItr` column shows the number of basis changes for each outer iteration, and `Step = 1` indicates that the full solution from Newton's method was used. For some of the iterations Step is less than 1, indicating that the direction found by the inner linear model could not be used in full due to nonlinearities. Some lines have 'S0' instead of 0 in the Phase column. The S indicates that the model was scaled before the iteration and that the sum of infeasibilities increased during this scaling procedure. The sum of infeasibilities is therefore not monotonically decreasing, even though each outer iteration does decrease it.

Phase 0 terminates when the Newton subset is made feasible and CONOPT switches to Phase 1 where the remaining constraints are handled. In this model there are just 2 infeasible constraints left.

# Transition between SLP and SQP

The transition from SLP (Phase 1 or 3) to SQP (Phase 2 or 4) and back again was in CONOPT3 based on monitoring failures. This logic has been changed in CONOPT: the transition is now based on continuous measurement of curvature, both in the general constraints and in the objective function, combined with estimates of the computational costs and progress of SLP and SQP.

The continuation of the log file for GAMS Library model otpop.gms shows some of this:

```
 Iter Phase Ninf     Objective     RGmax    NSB   Step InItr MX OK
    6   3        2.3861517936E+02 3.6E+02    12 2.1E-01     7 F T
    7   3        1.4308470370E+02 1.1E+02    16 2.0E-01     9 F T
    8   3        9.4375606705E+01 1.5E+02    16 1.8E-01     5 F F
    9   3        2.4652509772E+01 7.4E+01    16 6.7E-01     2 F T
   10   4        2.4445151316E-02 3.3E+01    16 1.0E+00     6 F T
   11   4        5.0735392100E-06 4.4E+00    16 1.0E+00     5 F T
   12   4        1.0276261682E-09 1.9E-02    16 1.0E+00     3 F T
   13   4        1.4326828955E-13 2.8E-06    16 1.0E+00     1 F T
   14   4        1.4326828955E-13 4.0E-10    16

 ** Optimal solution. Reduced gradient less than tolerance.
```

Iterations 6 to 9 are SLP iterations (Phase 3) and iterations 10 to 14 are SQP iterations (Phase 4). The SLP iterations have Step less than 1 due to nonlinear objective terms and CONOPT jumps directly from SLP to SQP. You may in other models see that SQP is replaced by SLP after some iterations where the nonlinearities are measured to be very small.

# Bad Iterations

Bad iterations, flagged with an F in the OK column, can be a problem. They appear when the SLP or SQP procedure returns a search direction along which CONOPT cannot move very far because it is difficult to make the nonlinear constraints feasible again. The effort spent solving the SLP or SQP sub-problem is therefore partially wasted. The problem is usually associated with a basis that is ill-conditioned or has Jacobian elements that change very fast.

CONOPT monitors the quality of the basis using information that has already been computed, such as the size of elements of the tangent for basic variables relative to similar elements for superbasic variables, and intermediate results from the computation of the reduced costs. This information is used to trigger a search for a better basis, which usually allows CONOPT to make larger steps in the following one-dimensional searches.

# Saddle Points and Directions of Negative Curvature

CONOPT is based on moving in a direction derived from the gradient, or the reduced gradient for models with constraints. If the reduced gradient projected on the bounds is zero, then the solution satisfies the first-order optimality conditions (the KKT or Karush-Kuhn-Tucker conditions), and it is standard procedure to stop. Unfortunately, this means that we can stop in a saddle-point.

It is not very common to move towards a saddle point and get stuck there. However, it is not uncommon that the initial point, provided by the user or by default, is a saddle point. A simple example is the constraint `x*y =E= 1` started with `x.l = y.l = 0;` this construct can easily end with a locally infeasible solution. Another example is `minimize z` with `z =E= x*y` and the same starting point; it could end locally optimal without moving, even though better points exist in the neighborhood.

CONOPT has added a procedure that tries to find a direction of negative curvature that can move the solution point away from a saddle-point. The procedure is only called in points that satisfy the first order optimality conditions and it is therefore usually only called once, and it is a cheap safeguard. The theory behind the method is developed for models without degeneracy and it works very well in practice for these models. Models with some kind of degeneracy (basic variables at bound or nonbasic variables with zero reduced cost) use the same procedure, but it is in this case only a heuristic that cannot be guaranteed to find a direction of negative curvature, even if one exists.
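The saddle point in the `z =E= x*y` example above can be made concrete. At (0,0) the gradient of `x*y` is zero, but the Hessian `H = [[0, 1], [1, 0]]` has negative curvature along some directions. The following sketch (illustrative only, not CONOPT's procedure) evaluates the curvature `d'Hd` for this Hessian:

```python
# At the saddle point (0,0) of f(x,y) = x*y the gradient is zero, yet moving
# along d = (1,-1) decreases f: the curvature d'Hd is negative there.

def curvature(d1, d2):
    # d' H d for H = [[0, 1], [1, 0]] reduces to 2*d1*d2
    return 2.0 * d1 * d2

assert curvature(1.0, 1.0) > 0.0    # positive curvature along (1, 1)
assert curvature(1.0, -1.0) < 0.0   # a direction of negative curvature
```

A procedure like CONOPT's can follow such a direction to escape the saddle point instead of stopping at a point that merely satisfies the first-order conditions.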

If you know that there are no directions of negative curvature, you can turn the procedure off by setting the logical option Flg_NegCurve to `false`. If the model is known to be convex you can instead set the logical option Flg_Convex to `true`, which also turns the procedure off. The saving is usually very small, except for models that solve in very few iterations and models with very many superbasic variables.

There is no output in the log file for negative curvature. If a direction of negative curvature is found CONOPT will follow this direction and the optimization continues. Otherwise, the solution is declared locally optimal.

# Alternative Sub-Models

During an optimization CONOPT can work with up to three different internal sub-models. These models are:

- Full Model: This model consists of the constraints in the user's model excluding all pre- and post-triangular constraints and with the definitional variables eliminated by their defining constraints. The objective function is the user’s objective function.
- No-Penalty Model: This model consists of the Full Model excluding all penalty and minimax constraints. This model does not have an objective function, and the solve is terminated when a feasible solution has been found.
- Linear Feasibility Model: This model consists of the subset of linear constraints of the Full Model. The Linear Feasibility model is either solved without an objective function or minimizing a quadratic distance measure; this is discussed below.

The pre-triangular variables are considered fixed, and they do not appear in any of the sub-models. Their influence comes through their contribution to coefficients and constant terms. The post-triangular variables are considered intermediate variables in the definition of the objective function. They do not appear in the last two models, which are only concerned with feasibility, and they only appear indirectly via the objective in the Full Model. The defined variables are considered intermediate variables in the definition of the remaining constraints, in the same way as post-triangular variables are intermediate in the objective. The variables in the Full Model are all variables excluding pre- and post-triangular variables and excluding defined variables; this set can include variables that do not appear in any constraints. The constraints of the Full Model are all constraints excluding pre- and post-triangular constraints and with the definitional constraints logically eliminated. The variables in the Linear Feasibility Model and in the No-Penalty Model are the variables that appear in the constraints of these models, excluding pre-triangular variables.

## No-Penalty Model

CONOPT always starts by searching for a feasible solution and the sub-models only play a role in this part of the optimization. Therefore, these sub-models are irrelevant if the initial point provided by the modeler is feasible. If there are many penalty and/or minimax constraints then the No-Penalty Model will be much smaller than the Full Model and it is more efficient to use the smaller model while searching for feasibility. The No-Penalty model is therefore only introduced for efficiency reasons. It is by default solved before the Full Model if all the following conditions are satisfied:

- The Flg_NoPen option is true (the default value),
- The model is not a CNS model,
- The user did not provide an initial basis,
- Some of the constraints in the No-Penalty Model are infeasible,
- The number of penalty and minimax constraints is more than the number of constraints in the Full Model multiplied by the value of option Rat_NoPen. The default value of `Rat_NoPen` is `0.1`, i.e., the No-Penalty Model is only defined and solved if it is at least 10% smaller than the Full Model.

An example of a log-file for a model with a No-Penalty model is from the prolog.gms model in the GAMS Library:

```
The user model has 23 constraints and 21 variables
with 129 Jacobian elements, 14 of which are nonlinear.
The Hessian of the Lagrangian has 0 elements on the diagonal,
4 elements below the diagonal, and 6 nonlinear variables.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
    0   0        1.2977431398E+03 (Input point)

The post-triangular part of the model has 1 constraints and variables.
There are 17 penalty and minimax constraints with 3 variables.

Reduced model without penalty components has 5 constraints and
17 variables with 46 Jacobian elements, 0 of which are nonlinear.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.2960000000E+03 (Model without penalty constraints)
                   5.0625000000E+00 (After scaling)
                   0.0000000000E+00 (After adjusting individual variables)

Previous model terminated and penalty components are added back in.

Full preprocessed model has 22 constraints and 20 variables
with 120 Jacobian elements, 8 of which are nonlinear.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   0.0000000000E+00 (After adjusting penalty variables)
```

There are no pre-triangular variables and constraints, and the pre-triangular line is therefore missing. There are also no definitional constraints and the line describing definitional constraints is also missing. On the other hand, there is a line saying that the model has 17 penalty and minimax constraints involving a total of 3 variables.

Since there is a significant number of penalty and minimax constraints, CONOPT starts by generating the No-Penalty model without the penalty and minimax constraints and without the pre- and post-triangular constraints. The 23 constraints in the user model are reduced by 1 post-triangular constraint and 17 minimax and penalty constraints, leaving 5 constraints. Similarly, the 21 variables in the user model are reduced by 1 post-triangular variable and 3 variables in the penalty and minimax constraints, leaving 17 variables.

This No-Penalty model is scaled and made feasible, in this case by the Adjust Initial Point procedure described above. After the No-Penalty model has become feasible CONOPT generates the full internal model, and the 3 variables from the penalty and minimax constraints are adjusted as described in the section on the Preprocessor and the Full Model is then feasible.

## Linear Feasibility Model

The Linear Feasibility Model is introduced to avoid locally infeasible solutions in which some linear constraints are infeasible while feasible nonlinear constraints block the linear constraints from becoming feasible. The Linear Feasibility Model is used to produce a starting point for one of the nonlinear sub-models (the No-Penalty Model or the Full Model) that satisfies all linear constraints. If the Linear Feasibility Model is proved to be infeasible, then the overall model is proved to be globally infeasible (independent of nonlinearities) and there is no reason to proceed with the nonlinear part of the model.

The Linear Feasibility Model is only useful if the model has some linear constraints and if the initial point provided by the modeler does not satisfy these constraints. If the Linear Feasibility Model finds a feasible solution to the linear constraints, it continues in one of four possible ways:

A. Use the first feasible solution directly without using extra effort to improve the solution.

B. Perform an approximate minimization of the weighted distance from the user's initial point. Only the variables that have non-default initial values are included in the distance measure, i.e., variables with an initial value (`xini`) that is different from zero projected on the bounds, i.e., `xini ne min(max(0,x.lo),x.up)`. The distance measure is `sqr( (x-xini) / max(1,abs(xini)) )`.

C. As in B, but with all variables included in the distance measure.

D. As in C, but with `xini` defined as a value between the lower and upper bounds.

Possibility A is fast, but it may give a starting point for the nonlinear model far from the initial point provided by the user. B is slower but gives a starting point for the nonlinear model that is close to the point provided by the user. The variables included in the distance measure in B are the variables that the user has given a value to and therefore are expected to be important. C and D are also slower, but because they have different objective functions, they may provide different starting points for the nonlinear model and may therefore avoid locally infeasible solutions.
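The selection rule and the distance measure used by objective B can be sketched as follows (an assumed Python rendering of the formulas above, not CONOPT source; the function names are made up):

```python
# Objective B: a variable enters the distance measure only if its initial
# value differs from zero projected on its bounds (a "non-default" value).

def nondefault(xini, lo, up):
    # xini ne min(max(0, x.lo), x.up)
    return xini != min(max(0.0, lo), up)

def distance(x, xini):
    # sqr( (x - xini) / max(1, abs(xini)) )
    return ((x - xini) / max(1.0, abs(xini))) ** 2

# xini = 5 with bounds [0, 10]: the projected default is 0, so it is included.
assert nondefault(5.0, 0.0, 10.0)
# xini = 0 with bounds [0, 10] is the default value, so it is excluded.
assert not nondefault(0.0, 0.0, 10.0)
d = distance(7.0, 5.0)   # ((7 - 5) / 5) ** 2, about 0.16
```

Objective C applies the same `distance` to all variables, and objective D replaces `xini` by a value strictly between the bounds.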

Once the Linear Feasibility Model has been solved, with or without a distance objective function, the No-Penalty Model and/or the Full Model are started. All variables that appear in the Linear Feasibility Model are started from the solution values from this model and the remaining variables, that only appear in nonlinear constraints, are started from the values received from the modeler.

The order in which the sub-models are solved depends on the Linear Feasibility Model strategy, defined with the option Lin_Method:

- If `Lin_Method` has the default value 1, the initial point and basis are assumed to be good and CONOPT will start with the No-Penalty Model (only if the conditions mentioned above are satisfied) followed by the Full Model. If the model terminates locally optimal, unbounded, or on some resource limit (time, iterations, function evaluations) CONOPT terminates. The Linear Feasibility Model is only built and solved if the No-Penalty or Full Model terminates locally infeasible. If the Linear Feasibility Model is infeasible, the overall model is infeasible and CONOPT terminates. Otherwise, objective B is minimized and the solution point is used as a second starting point for the nonlinear model. If this attempt also terminates locally infeasible, alternative initial points are generated with objective C and then with objective D. If all these attempts fail, the model is labeled locally infeasible.
- With `Lin_Method = 2` CONOPT starts directly with the Linear Feasibility Model with objective A, followed by the No-Penalty and Full Models. If they are locally infeasible from this starting point, the procedure from above is followed with objectives B, C, and then D.
- `Lin_Method = 3` is like `Lin_Method = 2` except that the first objective A is skipped.

Sometimes the objective functions are identical. If all variables have default initial values, then A and B are the same, and if all variables have non-default initial values, then B and C are the same. CONOPT will check for this and will not solve identical sub-models.

The number of times we restart after a locally infeasible solution is controlled by the option Num_Rounds. The default value is 4, i.e., by default CONOPT tries very hard to find a feasible point, as shown in the example below. The value 1 makes CONOPT terminate immediately when a locally infeasible point is found, without going back to the Linear Feasibility Model. A low value of Num_Rounds can be used if you are not interested in spending extra time on a model that is likely to be infeasible. This is particularly relevant when CONOPT is used as the sub-solver inside SBB, where infeasible sub-problems are common.

An example of a log-file where all Linear Feasibility objectives are used is taken from `ex5_3_2.gms` in the GlobalLib collection of test models. It shows the repeated starts of the Linear Feasibility Model followed by the Full Preprocessed Model:

```
Preprocessed model has 22 variables and 16 constraints
with 59 Jacobian elements, 24 of which are nonlinear.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   4.1200000000E+02 (Full preprocessed model)
                   8.4843750000E+00 (After scaling)
                   6.2500000000E-01 (After adjusting individual variables)
    1   1    1   6.2500000000E-01 0.0E+00     5 0.0E+00       T T
    4   1    1   6.2500000000E-01 0.0E+00     4

 ** Infeasible solution. Reduced gradient less than tolerance.

Initial linear feasibility model has 22 variables and 7 constraints
with 22 linear Jacobian elements.

Objective: Distance to initial point (nondefault variables)

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   3.0200000000E+02 (Linear feasibility model)
                   3.1718750000E+00 (After scaling)

 ** Linear constraints feasible. Distance = 0.00000000000

 Iter Phase Ninf     Distance      RGmax    NSB   Step InItr MX OK
    6   4        0.0000000000E+00 0.0E+00     0

Restarting preprocessed model from a new starting point.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.1000000000E+02 (Full preprocessed model)
                   5.3125000000E+00 (After scaling)
   11   2    1   6.2500000000E-01 0.0E+00     0

 ** Infeasible solution. There are no superbasic variables.

Restarting linear feasibility model.

Objective: Distance to initial point (all variables)

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   0.0000000000E+00 (Linear feasibility model)
                   0.0000000000E+00 (After scaling)

 ** Linear constraints feasible. Distance = 90002.0000000

 Iter Phase Ninf     Distance      RGmax    NSB   Step InItr MX OK
   16   4        2.2501000000E+04 1.8E-12     5

Restarting preprocessed model from a new starting point.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.8492500000E+02 (Full preprocessed model)
                   1.0775781250E+01 (After scaling)
   21   1    1   6.2500000000E-01 5.4E-03     3 0.0E+00       T T
   26   1    1   6.2499999982E-01 3.7E-07     3 2.4E+03       T T
   27   2    1   6.2500000000E-01 0.0E+00     2

 ** Infeasible solution. Reduced gradient less than tolerance.

Restarting linear feasibility model.

Objective: Distance to point away from bounds

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.4210854715E-14 (Linear feasibility model)
                   5.5511151231E-17 (After scaling)

 ** Linear constraints feasible. Distance = 15.1922028230

 Iter Phase Ninf     Distance      RGmax    NSB   Step InItr MX OK
   31   4        2.5570990132E+00 1.1E-04    13 1.0E+00     2 F T
   32   4        2.5570990132E+00 7.7E-11    13

Restarting preprocessed model from a new starting point.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   6.0836352222E+02 (Full preprocessed model)
                   9.7890253865E+00 (After scaling)
   35  S0    0   2.3859378417E+00               1.0E+00     3 T T

 ** Feasible solution. Value of objective = 1.86415945946

 Iter Phase Ninf     Objective     RGmax    NSB   Step InItr MX OK
   40   4        1.8641594595E+00 0.0E+00     0

 ** Optimal solution. There are no superbasic variables.
```

The Full Model is infeasible when started directly from the values provided by the user. The Linear Feasibility model is then solved to get a different starting point for the Full model where the linear part is feasible. The Full Model is also infeasible from this point, and it is necessary to solve the Linear Feasibility model three times with different objective functions before the full model becomes feasible.

If the model is defined to be convex with option `Flg_Convex = true`, then a locally infeasible solution is labeled globally infeasible with ModelStat = 4, and the Linear Feasibility Model will not be used. A locally optimal solution is also labeled (globally) optimal with ModelStat = 1.

# Scaling

The Tutorials and Examples section of the GAMS User’s Guide has some general recommendations about scaling the user’s model. These recommendations also apply to CONOPT.

In addition to the scaling done by the user, CONOPT will also scale the model internally by defining scale factors for both variables and constraints. The purpose of scaling is to get a scaled Jacobian matrix where the nonzero elements are neither very large nor very small, and scaled variables that are neither very large nor very small.

Variables are scaled with the formula xint = xuser / vscale and constraints are scaled with the formula gint = guser / gscale, where 'int' and 'user' refer to the internal (scaled) model and the user model, respectively, and vscale and gscale are the scale factors for variables and constraints. As a result, the Jacobian elements change as Jacint = Jacuser * vscale / gscale.
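The scaling relations can be sketched as follows (an illustrative rendering of the formulas above, not CONOPT code):

```python
# Scaling relations: internal = user value divided by its scale factor, and
# Jacobian elements transform with the ratio of the two scale factors.

def scale_var(x_user, vscale):
    return x_user / vscale              # x_int = x_user / vscale

def scale_con(g_user, gscale):
    return g_user / gscale              # g_int = g_user / gscale

def scale_jac(jac_user, vscale, gscale):
    return jac_user * vscale / gscale   # Jac_int = Jac_user * vscale / gscale

# A large variable value (1e6) and, in a well-scaled variable, a large
# Jacobian element (4e6) are both brought to around 1:
assert scale_var(1.0e6, 1.0e6) == 1.0
assert scale_jac(4.0e6, 1.0, 4.0e6) == 1.0
```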

For LP models the Jacobian is constant and scaling is usually done once. For nonlinear models the Jacobian changes with the solution point and scaling is therefore done at regular intervals.

In a nonlinear model a small variable or a small Jacobian element may be small because it is insignificant, but it may also be significant and just small in the current point. CONOPT therefore uses a scaling procedure that scales large variable values and large Jacobian terms down to around 1, but it does not scale small variable values and small Jacobian terms up. The modeler is therefore advised to make sure that the expected solution values are not very small and that the terms in a constraint are not all very small.

There are two options relevant for scaling: Tol_Scale_Max is the largest scaling factor that CONOPT will use. The default value is 1.e15 and it is applied to both variables and constraints. Tol_Scale_Min is the smallest scaling factor that CONOPT will use. The default value is 1 and it is also applied to both variables and constraints. The default values imply what was mentioned above: very large terms can be scaled down, but small terms are not scaled up. If you have small but significant terms you may set Tol_Scale_Min below 1, but notice that insignificant small terms may also be scaled up and may disturb the solution process.

If the largest value of a variable, option Lim_Variable or the internal value of ‘infinity’, is increased you may consider increasing Tol_Scale_Max as well.

# CNS Models

There is a special model class in GAMS called CNS - Constrained Nonlinear System. A constrained nonlinear system is a square system of equations, i.e., a model in which the number of non-fixed variables is equal to the number of constraints. A CNS model can be solved with a solve statement like

```
Solve <model> using CNS;
```

without an objective term. CONOPT will attempt to solve the constraints with respect to the non-fixed variables using Newton's method. When it works, it is often very fast, and it uses less memory than the corresponding NLP model, but the solution process does not include many of the safeguards used for ordinary NLP models. The lack of safeguards means that in some difficult situations the solution process will stop with an error message and return the current intermediate infeasible solution. Some examples are shown below.

Slacks in inequalities are counted as non-fixed variables which effectively means that inequalities should not be binding. Bounds on the variables are allowed, especially to prevent function evaluation errors for functions that only are defined for some arguments, but the bounds should not be binding in the final solution.

Since there is no objective function the solution returned to GAMS will in all cases have marginal values equal to 0 or EPS, both for the variables and the constraints.

The termination messages for CNS models are different from the termination messages for optimization models. The message you hope for is this:

```
** Feasible solution to a square system.
```

that usually will be combined with ModelStat = 16 - Solved. If CONOPT in special cases can guarantee that the solution is unique, for example if the model is linear, then ModelStat will be 15 - Solved Unique.

There are two potential error termination messages related to CNS models. A model with the following two constraints

```
e1 .. x1 + x2 =e= 1;
e2 .. 2*x1 + 2*x2 =e= 2;
```

will result in this message

```
** Error in Square System: Pivot too small.
   e2: Pivot too small.
   x1: Pivot too small.
```

"Pivot too small" means that the set of constraints is linearly dependent in the current point and there is no unique search direction for Newton's method, so CONOPT terminates. In the listing file (but not the log-file) the message points to one variable and one constraint. However, this pair of constraint and variable only indicates that the linearly dependent set of constraints and variables includes the constraint and variable mentioned. The offending constraints and variables will also be labeled 'DEPND', for linearly dependent, in the equation listing. The error will usually be combined with ModelStat = 5 - Locally Infeasible. In cases where CONOPT can guarantee that the infeasibility is not caused by nonlinearities, ModelStat will be 4 - Infeasible. If the constraints are linearly dependent but the current point satisfies the constraints, then the solution status will be 17 - Solved Singular, indicating that the point is feasible, but there is probably a whole ray of feasible solutions through the current point.

It should be mentioned that linear dependency and small pivots can be caused by the initial point and that the model may still have a solution. An example is:

```
e1.. x1*x2 =E= 1;
e2.. x1+x2 =E= 3;
x1.l = 1; x2.l = 1;
```
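For this pair of constraints the singularity is easy to verify by hand. The Jacobian is `[[x2, x1], [1, 1]]` with determinant `x2 - x1`, which is exactly zero at the initial point `x1.l = x2.l = 1` even though the model has solutions, e.g. near `(2.618, 0.382)`. A small illustrative check (not CONOPT code):

```python
# Jacobian of {x1*x2 = 1, x1 + x2 = 3} is [[x2, x1], [1, 1]];
# its determinant x2 - x1 vanishes at the initial point (1, 1),
# which is what triggers the "Pivot too small" message.

def jac_det(x1, x2):
    return x2 * 1.0 - x1 * 1.0

assert jac_det(1.0, 1.0) == 0.0        # singular at the initial point
assert jac_det(2.618, 0.382) != 0.0    # nonsingular near a solution
```

Starting from a point with `x1.l` different from `x2.l` would avoid the singular Jacobian.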

A model with these two constraints and the bound

```
e1 .. x1 + x2 =e= 2;
e2 .. x1 - x2 =e= 0;
x1.lo = 1.5;
```

will result in the message

```
** Error in Square System: A variable tries to exceed its bound.
   x1: The variable tries to exceed its bound.
```

because the solution, (x1,x2) = (1,1), violates the lower bound on x1. This error case will also be combined with ModelStat = 5 - Locally Infeasible. In cases where CONOPT can guarantee that the infeasibility is not caused by nonlinearities, ModelStat will be 4 - Infeasible.

CNS can in some cases be used to generate an initial feasible solution: Fix a subset of the variables so the remaining model is uniquely solvable, solve this model with the CNS solver, reset the bounds on the fixed variables, and solve the original model. This two-solve method gives better control over the first feasible solution and can be useful for very large models.

Additional information on CNS can be found in the GAMS User's Guide.

# Multiple Threads

CONOPT can use multiple threads for some internal computations and GAMS can use multiple threads for function and derivative evaluations. In addition to function and derivative evaluations, multiple threads are only used for certain very large and dense computations and there are not so many of these in the types of models usually built with GAMS. In addition, multiple threads have some overhead, and they are therefore mainly useful for very large models. Currently, the best improvements have been for very large models with more than 100,000 variables or constraints, and especially for very large CNS models.

Threads can be turned on with the GAMS command-line option Threads=n or with the CONOPT option threads.

# Loss of Feasibility

During the optimization you may sometimes see a phase 0 iteration and in rare cases you will see the message "Loss of Feasibility - Return to Phase 0". The background for this is as follows:

To work efficiently, CONOPT uses dynamic tolerances for feasibility and during the initial part of the optimization, where the objective changes rapidly, larger infeasibilities are accepted. As the change in objective between iterations becomes smaller it is necessary to solve the constraints more accurately so the uncertainty of the objective value caused by inaccurate constraints will remain smaller than the actual change. The accuracy of the objective is measured as the scalar product of the constraint residuals with the constraint marginals.

Sometimes it is necessary to revise the accuracy of the solution, for example because the algorithmic progress has slowed down or because the marginal of an inaccurate constraint has grown significantly after a basis change, e.g., when an inequality becomes binding. In these cases, CONOPT will reduce the feasibility tolerance and perform one or more Newton iterations on the basic variables. This will usually be very quick, and it happens silently. However, Newton's method may fail, for example in cases where the model is degenerate and Newton tries to move a basic variable outside a bound. In this case CONOPT uses special iterations similar to those discussed in section Phase 0 - Finding an Initial Feasible Solution, and they are labeled Phase 0.

These Phase 0 iterations may not converge, for example if the degeneracy is significant, if the model is very nonlinear locally, if the model has many product terms involving variables at zero, or if the model is poorly scaled and some constraints contain very large terms. If the iterations do not converge, CONOPT will issue the "Loss of feasibility ..." message, return to the real Phase 0 procedure, find a feasible solution with the smaller tolerance, and resume the optimization.

In rare cases you will see that CONOPT cannot find a feasible solution after the tolerances have been reduced, even though it has declared the model feasible at an earlier stage. In this case CONOPT will stop, restore the last point that was labeled feasible and return this point as an intermediate feasible solution.

Although the problems may appear to be with too tight tolerances it is not a good idea to relax the feasibility tolerances. Relaxed feasibility tolerances will give more inaccurate objective function values that result in difficulties for the progress of the optimization as discussed in the next section.

# Stalling

CONOPT will usually make steady progress towards the final solution. A degeneracy breaking strategy and the monotonicity of the objective function in other iterations should ensure that CONOPT cannot cycle. Unfortunately, there are a few places in the code where the objective function may move in the wrong direction and CONOPT may in fact cycle or move very slowly.

The objective value used to compare two points, in the following called the adjusted objective value, is computed as the true objective plus an accuracy adjustment term equal to the scalar product of the residuals with the marginals (see the previous section where this accuracy term also is used). The accuracy adjustment term is very useful in allowing CONOPT to work smoothly with inaccurate intermediate solutions. However, there is a disadvantage: the accuracy adjustment term can change even though the point itself does not change, namely when the marginals change in connection with a basis change. The adjusted objective is therefore not always monotone. When CONOPT loses feasibility and returns to Phase 0 there is an even larger chance of non-monotone behavior.
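As a sketch (with made-up numbers, not CONOPT internals), the adjusted objective described above is simply the true objective plus the residual-marginal scalar product:

```python
# Hedged sketch of the adjusted objective value; all numbers are hypothetical.
residuals = [1.0e-7, -2.0e-8, 5.0e-9]  # hypothetical constraint residuals
marginals = [3.0, -120.0, 0.5]         # hypothetical constraint marginals
adjustment = sum(r * y for r, y in zip(residuals, marginals))
true_objective = 42.0
adjusted_objective = true_objective + adjustment
# A basis change can alter the marginals, and hence the adjustment, even
# though the point (and its residuals) did not change - so the adjusted
# objective is not always monotone.
```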

To prevent infinite loops and to allow the modeler to stop in cases with very slow progress CONOPT has an anti-stalling option. An iteration is counted as a stalled iteration if it is not degenerate and (1) the adjusted objective is worse than the best adjusted objective seen so far, or (2) the step length was zero without being degenerate (see OK = F in section Linear and Nonlinear Mode: Phase 1 to 4). CONOPT will stop if the number of consecutive stalled iterations (again not counting degenerate iterations) exceeds `Lim_StallIter` and `Lim_StallIter` is positive. The default value of `Lim_StallIter` is 100. The message will be:

** Feasible solution. The tolerances are minimal and there is no change in objective although the reduced gradient is greater than the tolerance.

Large models with very flat optima can sometimes be stopped prematurely due to stalling. If it is important to find a local optimum accurately then you may have to increase the value of `Lim_StallIter`.
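The anti-stalling rule can be sketched as follows; this is an illustrative Python reading of the documented rule, not CONOPT's actual code:

```python
# Hedged sketch of the Lim_StallIter rule; the flags per iteration are
# (degenerate, adjusted objective worse than best, zero step).
def should_stop(iterations, lim_stall_iter=100):
    consecutive = 0
    for degenerate, worse, zero_step in iterations:
        if degenerate:
            continue  # degenerate iterations are not counted
        if worse or zero_step:
            consecutive += 1
            if lim_stall_iter > 0 and consecutive > lim_stall_iter:
                return True  # too many consecutive stalled iterations
        else:
            consecutive = 0  # a well-behaved iteration resets the count
    return False

# Three stalled iterations in a row exceed a limit of 2:
print(should_stop([(False, True, False)] * 3, lim_stall_iter=2))  # True
```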

# APPENDIX A - Options

The options that ordinary GAMS users can access are listed below.

## Algorithmic options

Option | Description | Default |
---|---|---|
Flg_AdjIniP | Flag for calling Adjust Initial Point | `1` |
Flg_Convex | Flag for defining a model to be convex | `0` |
Flg_Crash_Slack | Flag for pre-selecting slacks for the initial basis. | `0` |
Flg_Dbg_Intv | Flag for debugging interval evaluations. | `0` |
Flg_NegCurve | Flag for testing for negative curvature when apparently optimal | `1` |
Flg_NoDefc | Flag for turning definitional constraints off. The default is false. | `0` |
Flg_NoPen | Flag for allowing the Model without penalty constraints | `1` |
Flg_SLPMode | Flag for enabling SLP mode. | `1` |
Flg_SQPMode | Flag for enabling of SQP mode. | `1` |
Flg_Square | Flag for Square System. Alternative to defining modeltype=CNS in GAMS | `0` |
Flg_TraceCNS | Flag for tracing a CNS solution. | `0` |
Frq_Rescale | Rescaling frequency. | `5` |
Lim_Err_2DDir | Limit on errors in Directional Second Derivative evaluation. | `10` |
Lim_Err_Fnc_Drv | Limit on number of function evaluation errors. Overwrites GAMS Domlim option | `GAMS DomLim` |
Lim_Err_Hessian | Limit on errors in Hessian evaluation. | `10` |
Lim_Iteration | Maximum number of iterations. Overwrites GAMS Iterlim option. | `GAMS IterLim` |
Lim_NewSuper | Maximum number of new superbasic variables added in one iteration. | `auto` |
Lim_RedHess | Maximum number of superbasic variables in the approximation to the Reduced Hessian. | `auto` |
Lim_SlowPrg | Limit on number of iterations with slow progress (relative less than Tol_Obj_Change). | `20` |
Lim_StallIter | Limit on the number of stalled iterations. | `100` |
Lim_Start_Degen | Limit on number of degenerate iterations before starting degeneracy breaking strategy. | `100` |
Lim_Time | Time Limit. Overwrites the GAMS Reslim option. | `GAMS ResLim` |
Lim_Variable | Upper bound on solution values and equation activity levels | `1.e15` |
Lin_Method | Method used to determine if and/or which Linear Feasibility Models to use | `1` |
Mtd_Dbg_1Drv | Method used by the function and derivative debugger. | `0` |
Mtd_RedHess | Method for initializing the diagonal of the approximate Reduced Hessian | `0` |
Mtd_Scale | Method used for scaling. | `3` |
Mtd_Step_Phase0 | Method used to determine the step in Phase 0. | `auto` |
Mtd_Step_Tight | Method used to determine the maximum step while tightening tolerances. | `0` |
Num_Rounds | Number of rounds with Linear Feasibility Model | `4` |
Rat_NoPen | Limit on ratio of penalty constraints for the No_Penalty model to be solved | `0.1` |
Tol_Bound | Bound filter tolerance for solution values close to a bound. | `1.e-7` |
Tol_BoxSize | Initial box size for trust region models for overall model | `10` |
Tol_BoxSize_Lin | Initial box size for trust region models for linear feasibility model | `1000` |
Tol_Box_LinFac | Box size factor for linear variables applied to trust region box size | `10` |
Tol_Def_Mult | Largest growth factor allowed in the block of definitional constraints | `1.e4` |
Tol_Feas_Max | Maximum feasibility tolerance (after scaling). | `1.e-7` |
Tol_Feas_Min | Minimum feasibility tolerance (after scaling). | `4.e-10` |
Tol_Feas_Tria | Feasibility tolerance for triangular equations. | `1.0e-8` |
Tol_Fixed | Tolerance for defining variables as fixed based on initial or derived bounds. | `4.e-10` |
Tol_Jac_Min | Filter for small Jacobian elements to be ignored during scaling. | `1.e-5` |
Tol_Linesearch | Accuracy of One-dimensional search. | `0.2` |
Tol_Obj_Acc | Relative accuracy of the objective function. | `3.0e-13` |
Tol_Obj_Change | Limit for relative change in objective for well-behaved iterations. | `3.0e-12` |
Tol_Optimality | Optimality tolerance for reduced gradient when feasible. | `1.e-7` |
Tol_Opt_Infeas | Optimality tolerance for reduced gradient when infeasible. | `1.e-7` |
Tol_Opt_LinF | Optimality tolerance when infeasible in Linear Feasibility Model | `1.e-10` |
Tol_Piv_Abs | Absolute pivot tolerance. | `1.e-10` |
Tol_Piv_Abs_Ini | Absolute Pivot Tolerance for building initial basis. | `1.e-7` |
Tol_Piv_Abs_NLTr | Absolute pivot tolerance for nonlinear elements in pre-triangular equations. | `1.e-5` |
Tol_Piv_Ratio | Relative pivot tolerance during ratio-test | `1.e-8` |
Tol_Piv_Rel | Relative pivot tolerance during basis factorizations. | `0.05` |
Tol_Piv_Rel_Ini | Relative Pivot Tolerance for building initial basis | `1.e-3` |
Tol_Piv_Rel_Updt | Relative pivot tolerance during basis updates. | `0.05` |
Tol_Scale2D_Min | Lower bound for scale factors based on large 2nd derivatives. | `1.e-6` |
Tol_Scale_Max | Upper bound on scale factors. | `1.e25` |
Tol_Scale_Min | Lower bound for scale factors computed from values and 1st derivatives. | `1` |
Tol_Scale_Var | Lower bound on x in x*Jac used when scaling. | `1.e-5` |
Tol_Zero | Zero filter for Jacobian elements and inversion results. | `1.e-20` |
Trace_MinStep | Minimum step between Reinversions when using TraceCNS. | `0.001` |

## Debugging options

Option | Description | Default |
---|---|---|
Flg_Interv | Flag for using intervals in the Preprocessor | `1` |
Flg_Prep | Flag for using the Preprocessor | `1` |
Flg_Range | Flag for identifying sets of ranged constraints | `1` |
Lim_Dbg_1Drv | Flag for debugging of first derivatives | `0` |
Lim_Hess_Est | Upper bound on second order terms. | `1.e4` |
Lim_Msg_Dbg_1Drv | Limit on number of error messages from function and derivative debugger. | `10` |

## Output options

Option | Description | Default |
---|---|---|
Frq_Log_Simple | Frequency for log-lines for non-SLP/SQP iterations. | `auto` |
Frq_Log_SlpSqp | Frequency for log-lines for SLP or SQP iterations. | `auto` |
Lim_Msg_Large | Limit on number of error messages related to large function value and Jacobian elements. | `10` |
Lim_Pre_Msg | Limit on number of error messages related to infeasible pre-triangle. | `25` |

## Interface options

Option | Description | Default |
---|---|---|
cooptfile | | |
Flg_2DDir | Flag for computing and using directional 2nd derivatives. | `auto` |
Flg_Hessian | Flag for computing and using 2nd derivatives as Hessian of Lagrangian. | `auto` |
HEAPLIMIT | Maximum Heap size in MB allowed | `1e20` |
HessianMemFac | Memory factor for Hessian generation: Skip if Hessian elements > Nonlinear Jacobian elements*HessianMemFac, 0 means unlimited. | `0` |
THREADC | Number of compatibility threads used for comparing different values of THREADS | `1` |
THREADF | Number of threads used for function evaluation | `1` |
threads | Number of threads used by Conopt internally | `GAMS Threads` |

**cooptfile** *(string)*: ↵

**Flg_2DDir** *(boolean)*: Flag for computing and using directional 2nd derivatives. ↵

If turned on, make directional second derivatives (Hessian matrix times directional vector) available to CONOPT. The default is on, but it will be turned off if the model has external equations (defined with =X=) and the user has not provided directional second derivatives. If both the Hessian of the Lagrangian (see Flg_Hessian) and directional second derivatives are available then CONOPT will use both: directional second derivatives are used when the expected number of iterations in the SQP sub-solver is low and the Hessian is used when the expected number of iterations is large.

Default:

`auto`

**Flg_AdjIniP** *(boolean)*: Flag for calling Adjust Initial Point ↵

If Flg_AdjIniP is on (the default) then the Adjust Initial Point routine is called after the pre-processor. Can be turned off if the routine is very slow.

Default:

`1`

**Flg_Convex** *(boolean)*: Flag for defining a model to be convex ↵

When turned on (the default is off) CONOPT is told that a local solution is also a global solution, whether it is optimal or infeasible, and the solution will be labeled accordingly. Currently, turning Flg_Convex on will turn Flg_NegCurve off. Other parts of the code will gradually learn to take advantage of this flag.

Default:

`0`

**Flg_Crash_Slack** *(boolean)*: Flag for pre-selecting slacks for the initial basis. ↵

When turned on (1) CONOPT will select all infeasible slacks as the first part of the initial basis.

Default:

`0`

**Flg_Dbg_Intv** *(boolean)*: Flag for debugging interval evaluations. ↵

Flg_Dbg_Intv controls whether interval evaluations are debugged. Currently we check that the lower bound does not exceed the upper bound for all intervals returned, both for function values and for derivatives.

Default:

`0`

**Flg_Hessian** *(boolean)*: Flag for computing and using 2nd derivatives as Hessian of Lagrangian. ↵

If turned on, compute the structure of the Hessian of the Lagrangian and make it available to CONOPT. The default is usually on, but it will be turned off if the model has external equations (defined with =X=) or cone constraints (defined with =C=) or if the Hessian becomes too dense. See also Flg_2DDir and HessianMemFac.

Default:

`auto`

**Flg_Interv** *(boolean)*: Flag for using intervals in the Preprocessor ↵

If turned on (default), CONOPT will attempt to use interval evaluations in the preprocessor to determine if functions are monotone or if intervals for some of the variables can be excluded as infeasible.

Default:

`1`

**Flg_NegCurve** *(boolean)*: Flag for testing for negative curvature when apparently optimal ↵

When turned on (the default) CONOPT will try to identify directions with negative curvature when the model appears to be optimal. The objective is to move away from saddlepoints. Can be turned off when the model is known to be convex and cannot have negative curvature.

Default:

`1`

**Flg_NoDefc** *(boolean)*: Flag for turning definitional constraints off. The default is false. ↵

If Flg_NoDefc is on, the Preprocessor will not look for definitional constraints and variables.

Default:

`0`

**Flg_NoPen** *(boolean)*: Flag for allowing the Model without penalty constraints ↵

When turned on (the default) CONOPT will create and solve a smaller model without the penalty constraints and variables and the minimax constraints and variables if the remaining constraints are infeasible in the initial point. This is often a faster way to start the solution process.

Default:

`1`

**Flg_Prep** *(boolean)*: Flag for using the Preprocessor ↵

If turned on (default), CONOPT will use its preprocessor to try to determine pre- and post-triangular components of the model and find definitional constraints.

Default:

`1`

**Flg_Range** *(boolean)*: Flag for identifying sets of ranged constraints ↵

If turned on (default), CONOPT will, as part of its preprocessor, look for sets of parallel linear constraints and turn each set into a single ranged constraint. There is currently a potential problem with the duals on these constraints; if duals are important, ranges can be turned off with this flag.

Default:

`1`

**Flg_SLPMode** *(boolean)*: Flag for enabling SLP mode. ↵

If Flg_SLPMode is on (the default) then the SLP (sequential linear programming) sub-solver can be used, otherwise it is turned off.

Default:

`1`

**Flg_SQPMode** *(boolean)*: Flag for enabling of SQP mode. ↵

If Flg_SQPMode is on (the default) then the SQP (sequential quadratic programming) sub-solver can be used, otherwise it is turned off.

Default:

`1`

**Flg_Square** *(boolean)*: Flag for Square System. Alternative to defining modeltype=CNS in GAMS ↵

When turned on the modeler declares that this is a square system, i.e. the number of non-fixed variables must be equal to the number of constraints, no bounds must be active in the final solution, and the basis selected from the non-fixed variables must always be nonsingular.

Default:

`0`

**Flg_TraceCNS** *(boolean)*: Flag for tracing a CNS solution. ↵

When turned on the model must, for fixed value of the objective variable, be a CNS model and must satisfy the conditions of a CNS. The model is first solved as a CNS with the initial value of the objective fixed and the objective is then minimized or maximized subject to the CNS constraints.

Default:

`0`

**Frq_Log_Simple** *(integer)*: Frequency for log-lines for non-SLP/SQP iterations. ↵

Frq_Log_Simple and Frq_Log_SlpSqp can be used to control the amount of iteration output sent to the log file. The non-SLP/SQP iterations, i.e. iterations in phase 0, 1, and 3, are usually fast and writing a log line for each iteration may be too much, especially for smaller models. The default value for the log frequency for these iterations is therefore 10 for small models, 5 for models with more than 500 constraints or 1000 variables, and 1 for models with more than 2000 constraints or 3000 variables.

Default:

`auto`
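The documented defaults can be summarized in a small sketch (a hypothetical helper, not part of the CONOPT API):

```python
# Hedged sketch of the documented defaults for Frq_Log_Simple.
def default_frq_log_simple(constraints, variables):
    if constraints > 2000 or variables > 3000:
        return 1   # very large models: log every iteration
    if constraints > 500 or variables > 1000:
        return 5   # medium-sized models
    return 10      # small models

print(default_frq_log_simple(100, 200))   # 10
print(default_frq_log_simple(600, 200))   # 5
print(default_frq_log_simple(2500, 200))  # 1
```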

**Frq_Log_SlpSqp** *(integer)*: Frequency for log-lines for SLP or SQP iterations. ↵

Frq_Log_Simple and Frq_Log_SlpSqp can be used to control the amount of iteration output sent to the log file. Iterations using the SLP and/or SQP sub-solver, i.e. iterations in phase 2 and 4, may involve several inner iterations, so the work per iteration is larger than for the non-SLP/SQP iterations and it may be relevant to write log lines more frequently. The default value for the log frequency is therefore 5 for small models and 1 for models with more than 500 constraints or 1000 variables.

Default:

`auto`

**Frq_Rescale** *(integer)*: Rescaling frequency. ↵

The row and column scales are recalculated at least every Frq_Rescale new point (degenerate iterations do not count), or more frequently if conditions require it.

Default:

`5`

**HEAPLIMIT** *(real)*: Maximum Heap size in MB allowed ↵

Range: [`0`, ∞]

Default:

`1e20`

**HessianMemFac** *(real)*: Memory factor for Hessian generation: Skip if Hessian elements > Nonlinear Jacobian elements*HessianMemFac, 0 means unlimited. ↵

The Hessian of the Lagrangian is considered too dense, and therefore too expensive to evaluate and use, if the number of nonzero elements in the Hessian of the Lagrangian is greater than the number of nonlinear Jacobian elements multiplied by HessianMemFac; in that case it is not passed on to CONOPT. See also Flg_Hessian. If HessianMemFac = 0.0 (the default value) then there is no limit on the number of Hessian elements.

Default:

`0`
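The skip rule can be sketched as follows (the helper names are hypothetical, not CONOPT's API):

```python
# Hedged sketch of the density rule for the Hessian of the Lagrangian.
def hessian_is_used(hessian_nz, nl_jacobian_nz, hessian_mem_fac=0.0):
    if hessian_mem_fac == 0.0:
        return True  # 0 means no limit on the number of Hessian elements
    # skip the Hessian when it is too dense relative to the nonlinear Jacobian
    return hessian_nz <= nl_jacobian_nz * hessian_mem_fac

print(hessian_is_used(10_000, 1_000))       # True (no limit)
print(hessian_is_used(10_000, 1_000, 5.0))  # False (10000 > 1000*5)
```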

**Lim_Dbg_1Drv** *(integer)*: Flag for debugging of first derivatives ↵

Lim_Dbg_1Drv controls how often the derivatives are tested. Debugging of derivatives is only relevant for user-written functions in external equations defined with =X=. The amount of debugging is controlled by Mtd_Dbg_1Drv. See Lim_Hess_Est for a definition of when derivatives are considered wrong.

Default:

`0`

value | meaning |
---|---|
`-1` | The derivatives are tested in the initial point only. |
`0` | No debugging. |
`+n` | The derivatives are tested in all iterations that can be divided by Lim_Dbg_1Drv, provided the derivatives are computed in this iteration. (During phase 0, 1, and 3 derivatives are only computed when it appears to be necessary.) |

**Lim_Err_2DDir** *(integer)*: Limit on errors in Directional Second Derivative evaluation. ↵

If the evaluation of Directional Second Derivatives (Hessian information in a particular direction) has failed more than Lim_Err_2DDir times CONOPT will not attempt to evaluate them any more and will switch to methods that do not use Directional Second Derivatives. Note that second order information may not be defined even if function and derivative values are well-defined, e.g. in an expression like power(x,1.5) at x=0.

Default:

`10`

**Lim_Err_Fnc_Drv** *(integer)*: Limit on number of function evaluation errors. Overwrites GAMS Domlim option ↵

Function values and their derivatives are assumed to be defined in all points that satisfy the bounds of the model. If the function value or a derivative is not defined in a point, CONOPT will try to recover by going back to a previous safe point (if one exists), but it will not do so more than at most Lim_Err_Fnc_Drv times. If CONOPT is stopped by functions or derivatives not being defined, it will return with an intermediate infeasible or intermediate non-optimal model status.

Default:

`GAMS DomLim`

**Lim_Err_Hessian** *(integer)*: Limit on errors in Hessian evaluation. ↵

If the evaluation of Hessian information has failed more than Lim_Err_Hessian times CONOPT will not attempt to evaluate it any more and will switch to methods that do not use the Hessian. Note that second order information may not be defined even if function and derivative values are well-defined, e.g. in an expression like power(x,1.5) at x=0.

Default:

`10`

**Lim_Hess_Est** *(real)*: Upper bound on second order terms. ↵

The function and derivative debugger (see Lim_Dbg_1Drv) tests if derivatives computed using the modeler's routine are sufficiently close to the values computed using finite differences. The term for the acceptable difference includes a second order term and uses Lim_Hess_Est as an estimate of the upper bound on second order derivatives in the model. Larger Lim_Hess_Est values will allow larger deviations between the user-defined derivatives and the numerically computed derivatives.

Default:

`1.e4`

**Lim_Iteration** *(integer)*: Maximum number of iterations. Overwrites GAMS Iterlim option. ↵

The iteration limit can be used to prevent models from spending too many resources. You should note that the cost of the different types of CONOPT iterations (phase 0 to 4) can be very different so the time limit (GAMS Reslim or option Lim_Time) is often a better stopping criterion. However, the iteration limit is better for reproducing solution behavior across machines.

Default:

`GAMS IterLim`

**Lim_Msg_Dbg_1Drv** *(integer)*: Limit on number of error messages from function and derivative debugger. ↵

The function and derivative debugger (see Lim_Dbg_1Drv) may find a very large number of errors, all derived from the same source. To avoid very large amounts of output CONOPT will stop the debugger after Lim_Msg_Dbg_1Drv error(s) have been found.

Default:

`10`

**Lim_Msg_Large** *(integer)*: Limit on number of error messages related to large function value and Jacobian elements. ↵

Very large function values or derivatives (Jacobian elements) in a model will lead to numerical difficulties and most likely to inaccurate primal and/or dual solutions. CONOPT therefore imposes an upper bound of 1.e30 on the value of all function values and derivatives. If the bound is violated CONOPT will return with an intermediate infeasible or intermediate non-optimal solution and it will issue error messages for all the violating Jacobian elements, up to a limit of Lim_Msg_Large error messages.

Default:

`10`

**Lim_NewSuper** *(integer)*: Maximum number of new superbasic variables added in one iteration. ↵

When there has been a sufficient reduction in the reduced gradient in one subspace new non-basics can be selected to enter the superbasis. The ones with largest reduced gradient of proper sign are selected, up to a limit. If Lim_NewSuper is positive then the limit is min(500,Lim_NewSuper). If Lim_NewSuper is zero (the default) then the limit is selected dynamically by CONOPT depending on model characteristics.

Default:

`auto`
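The documented rule can be sketched as follows (a hypothetical helper, not CONOPT's API):

```python
# Hedged sketch of the limit on new superbasic variables per iteration.
def new_superbasic_limit(lim_newsuper, dynamic_limit):
    # a positive option value is capped at 500; zero lets CONOPT choose
    # dynamically based on model characteristics
    return min(500, lim_newsuper) if lim_newsuper > 0 else dynamic_limit

print(new_superbasic_limit(800, 50))  # 500
print(new_superbasic_limit(0, 50))    # 50
```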

**Lim_Pre_Msg** *(integer)*: Limit on number of error messages related to infeasible pre-triangle. ↵

If the pre-processor determines that the model is infeasible it tries to define a minimal set of variables and constraints that define the infeasibility. If this set is larger than Lim_Pre_Msg elements the report is considered difficult to use and it is skipped.

Default:

`25`

**Lim_RedHess** *(integer)*: Maximum number of superbasic variables in the approximation to the Reduced Hessian. ↵

CONOPT uses and stores a dense lower-triangular matrix as an approximation to the Reduced Hessian. The rows and columns correspond to the superbasic variables. This matrix can use a large amount of memory, and computations involving the matrix can be time consuming, so CONOPT imposes a limit on its size. The limit is Lim_RedHess if Lim_RedHess is defined by the modeler, and otherwise a value determined from the overall size of the model. If the number of superbasics exceeds the limit, CONOPT will switch to a method based on a combination of SQP and Conjugate Gradient iterations, assuming some kind of second order information is available. If no second order information is available CONOPT will use a quasi-Newton method on a subset of the superbasic variables and rotate the subset as the reduced gradient becomes small.

Default:

`auto`

**Lim_SlowPrg** *(integer)*: Limit on number of iterations with slow progress (relative less than Tol_Obj_Change). ↵

The optimization is stopped if the relative change in objective is less than Tol_Obj_Change for Lim_SlowPrg consecutive well-behaved iterations.

Default:

`20`

**Lim_StallIter** *(integer)*: Limit on the number of stalled iterations. ↵

An iteration is considered a stalled iteration if there is no change in objective because the linesearch is limited by nonlinearities or numerical difficulties. Stalled iterations will have Step = 0 and F in the OK column of the log file. After a stalled iteration CONOPT will try various heuristics to get a better basis and a better search direction. However, the heuristics may not work as intended, or they may even return to the original bad basis, especially if the model does not satisfy standard constraint qualifications and does not have a KKT point. To prevent cycling CONOPT will therefore stop after Lim_StallIter stalled iterations and return an Intermediate Infeasible or Intermediate Nonoptimal solution.

Default:

`100`

**Lim_Start_Degen** *(integer)*: Limit on number of degenerate iterations before starting degeneracy breaking strategy. ↵

The default CONOPT pivoting strategy focuses on numerical stability, but it can potentially cycle. When the number of consecutive degenerate iterations exceeds Lim_Start_Degen CONOPT will switch to a pivoting strategy that is guaranteed to break degeneracy but has slightly weaker numerical properties.

Default:

`100`

**Lim_Time** *(real)*: Time Limit. Overwrites the GAMS Reslim option. ↵

The upper bound on the total number of seconds that can be used in the execution phase. The time limit is only checked once per iteration. The default value is 10000. Lim_Time is overwritten by Reslim when called from GAMS.

Range: [`0`, ∞]

Default:

`GAMS ResLim`

**Lim_Variable** *(real)*: Upper bound on solution values and equation activity levels ↵

If the value of a variable, including the objective function value and the value of slack variables, exceeds Lim_Variable then the model is considered to be unbounded and the optimization process returns the solution with the large variable flagged as unbounded. A bound cannot exceed this value.

Range: [`1.e5`, `1.e30`]

Default:

`1.e15`

**Lin_Method** *(integer)*: Method used to determine if and/or which Linear Feasibility Models to use ↵

The Linear Feasibility Model can use different objectives: Objective 1 is no objective, i.e. the first point that satisfies the Linear Feasibility Model is used as a starting point for the Full Model. Objective 2 minimizes a scaled distance from the initial point for all variables defined by the modeler. Objective 3 minimizes a scaled distance from the initial point for all variables, including those not defined by the modeler. Objective 4 minimizes a scaled distance from a random point selected away from the bounds.

Default:

`1`

value | meaning |
---|---|
`1` | Ignore the Linear Feasibility Model in the first round and use objective 2, 3, and 4 (see above) in the following rounds as long as the model is locally infeasible. This is the default method. |
`2` | Use the Linear Feasibility Model with objective 1 in the first round and continue with objective 2, 3, and 4 in the following rounds as long as the model is locally infeasible. |
`3` | Use the Linear Feasibility Model with objective 2 in the first round and continue with objective 3 and 4 in the following rounds as long as the model is locally infeasible. |

**Mtd_Dbg_1Drv** *(integer)*: Method used by the function and derivative debugger. ↵

The function and derivative debugger (turned on with Lim_Dbg_1Drv) can perform a fairly cheap test or a more extensive test, controlled by Mtd_Dbg_1Drv. See Lim_Hess_Est for a definition of when derivatives are considered wrong. All tests are performed in the current point found by the optimization algorithm.

Default:

`0`

value | meaning |
---|---|
`0` | Perform tests for the sparsity pattern and test that the numerical values of the derivatives appear to be correct. This is the default. |
`1` | As 0, plus extensive tests to determine if the functions and their derivatives are continuous around the current point. These tests are much more expensive and should only be used if the cheap test does not find an error but one is expected to exist. |

**Mtd_RedHess** *(integer)*: Method for initializing the diagonal of the approximate Reduced Hessian ↵

Each time a nonbasic variable is made superbasic, a new row and column are added to the approximate Reduced Hessian. The off-diagonal elements are set to zero and the diagonal to a value controlled by Mtd_RedHess:

Default:

`0`

value | meaning |
---|---|
`0` | The new diagonal element is set to the geometric mean of the existing diagonal elements. This gives the new diagonal element an intermediate value and new superbasic variables are therefore not given any special treatment. The initial steps should be of good size, but build-up of second order information in the new sub-space may be slower. The larger diagonal element may also in bad cases cause premature convergence. |
`1` | The new diagonal element is set to the minimum of the existing diagonal elements. This makes the new diagonal element small and the importance of the new superbasic variable will therefore be high. The initial steps can be rather small, but better second order information in the new sub-space should be built up faster. |

**Mtd_Scale** *(integer)*: Method used for scaling. ↵

CONOPT will by default use scaling of the equations and variables of the model to improve the numerical behavior of the solution algorithm and the accuracy of the final solution (see also Frq_Rescale). The objective of the scaling process is to reduce the values of all large primal and dual variables as well as the values of all large first derivatives so they become closer to 1. Small values are usually not scaled up, see Tol_Scale_Max and Tol_Scale_Min. Scaling method 3 is recommended; the others are only kept for backward compatibility.

Default: `3`

- `0`: Scaling is based on repeatedly dividing the rows and columns by the geometric means of the largest and smallest elements in each row and column. Very small elements less than Tol_Jac_Min are considered equal to Tol_Jac_Min.
- `1`: Similar to 3 below, but the projection onto the interval [Tol_Scale_Min, Tol_Scale_Max] is applied at a different stage. With method 1, abs(X)*abs(Jac) with small X and very large Jac is scaled very aggressively with a factor abs(Jac). With method 3, the scale factor is abs(X)*abs(Jac). The difference is seen in models with terms like Sqrt(X) close to X = 0.
- `2`: As 1, but the terms are computed from a moving average of the squares of X and Jac. The purpose of the moving average is to keep the scale factors more stable. This is often an advantage, but for models with very large terms (large variables and in particular large derivatives) in the initial point, the averaging process may not have enough time to bring the scale factors into the right region.
- `3`: Rows are first scaled by dividing by the largest term in the row; columns are then scaled by dividing by the maximum of the largest term and the value of the variable. A term is here defined as abs(X)*abs(Jac), where X is the value of the variable and Jac is the value of the derivative (Jacobian element). The scale factors are then projected onto the interval between Tol_Scale_Min and Tol_Scale_Max.
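As a rough illustration of the method 3 rule, a single scaling pass might look like the sketch below. The function name and the exact placement of the projection are assumptions; CONOPT's internal implementation may differ:

```python
def scale_method3(x, jac, tol_scale_min=1.0, tol_scale_max=1.e25):
    """One pass of the row/column scaling rule sketched for Mtd_Scale = 3.

    x[j] is the value of variable j; jac[i][j] is the Jacobian element
    of row i (0.0 if the variable does not appear in the row).
    """
    m, n = len(jac), len(x)
    # Rows: divide by the largest term abs(X)*abs(Jac) in the row.
    row_scale = [max(abs(x[j]) * abs(jac[i][j]) for j in range(n)) or 1.0
                 for i in range(m)]
    # Columns: divide by the maximum of the largest (row-scaled) term and
    # the value of the variable, projected onto the allowed interval.
    col_scale = []
    for j in range(n):
        big = max(abs(x[j]) * abs(jac[i][j]) / row_scale[i] for i in range(m))
        s = max(big, abs(x[j]))
        col_scale.append(min(max(s, tol_scale_min), tol_scale_max))
    return row_scale, col_scale

# A 2x2 example: row scales are the largest terms; column scales then
# also account for the variable values.
rows, cols = scale_method3([2.0, 1000.0], [[3.0, 0.0], [1.0, 0.5]])
print(rows, cols)  # [6.0, 500.0] [2.0, 1000.0]
```

Note how the large variable value 1000 drives its column scale even though its row-scaled terms are already around 1.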

**Mtd_Step_Phase0** *(integer)*: Method used to determine the step in Phase 0. ↵

The steplength used by the Newton process in phase 0 is computed by one of two alternative methods controlled by Mtd_Step_Phase0:

Default: `auto`

- `0`: The standard ratio test method known from the Simplex method. CONOPT adds small perturbations to the bounds to avoid very small pivots and improve numerical stability. Linear constraints that initially are feasible will remain feasible with this method. It is the default method for optimization models.
- `1`: A method based on bending (projecting the target values of the basic variables on the bounds) until the sum of infeasibilities is close to its minimum. Linear constraints that initially are feasible may become infeasible due to bending.

**Mtd_Step_Tight** *(integer)*: Method used to determine the maximum step while tightening tolerances. ↵

The steplength used by the Newton process when tightening tolerances is computed by one of two alternative methods controlled by Mtd_Step_Tight:

Default: `0`

- `0`: The standard ratio test method known from the Simplex method. CONOPT adds small perturbations to the bounds to avoid very small pivots and improve numerical stability. Linear constraints that initially are feasible will remain feasible with this default method.
- `1`: A method based on bending (projecting the target values of the basic variables on the bounds) until the sum of infeasibilities is close to its minimum.

**Num_Rounds** *(integer)*: Number of rounds with Linear Feasibility Model ↵

Lin_Method defines which Linear Feasibility Models are going to be solved if the previous models end Locally Infeasible. The number of rounds is limited by Num_Rounds.

Range: {`1`, ..., `4`}, Default: `4`

**Rat_NoPen** *(real)*: Limit on ratio of penalty constraints for the No_Penalty model to be solved ↵

The No-Penalty model can only be generated and solved if the number of penalty and minimax constraints exceeds Rat_NoPen times the number of constraints in the Full Model.

Default: `0.1`

**THREADC** *(integer)*: Number of compatibility threads used for comparing different values of THREADS ↵

Range: {`0`, ..., ∞}, Default: `1`

**THREADF** *(integer)*: Number of threads used for function evaluation ↵

Range: {`0`, ..., ∞}, Default: `1`

**threads** *(integer)*: Number of threads used by Conopt internally ↵

Range: {`0`, ..., ∞}, Default: `GAMS Threads`

**Tol_Bound** *(real)*: Bound filter tolerance for solution values close to a bound. ↵

A variable is considered to be at a bound if its distance from the bound is less than Tol_Bound * Max(1, ABS(Bound)). The tolerance is used when building the initial basis and when flagging variables during output.

Range: [`3.e-13`, `1.e-5`], Default: `1.e-7`
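The relative bound test above can be written as a tiny helper (the function name is hypothetical; it only illustrates the formula):

```python
def at_bound(value, bound, tol_bound=1.e-7):
    """True if a variable value counts as being at the bound, using the
    relative test Tol_Bound * Max(1, ABS(Bound)) described above."""
    return abs(value - bound) < tol_bound * max(1.0, abs(bound))

print(at_bound(0.99999999, 1.0))      # True: within 1e-7 of the bound
print(at_bound(1.0e6 - 0.05, 1.0e6))  # True: the tolerance scales with the bound
print(at_bound(0.001, 0.0))           # False
```

The Max(1, ...) term makes the test absolute for small bounds and relative for large ones, so a bound of 1e6 tolerates an absolute distance of 0.1.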

**Tol_BoxSize** *(real)*: Initial box size for trust region models for overall model ↵

The new Phase 0 method solves an LP model based on a scaled and linearized version of the model, with an added trust region box constraint around the initial point. Tol_BoxSize defines the size of the initial trust region box. During the optimization the trust region box is adjusted based on how well the linear approximation fits the real model.

Range: [`0.01`, `1.e6`], Default: `10`

**Tol_BoxSize_Lin** *(real)*: Initial box size for trust region models for linear feasibility model ↵

Similar to Tol_BoxSize but applied to the linear feasibility model. Since this model has linear constraints the default initial box size is larger.

Range: [`0.01`, `1.e8`], Default: `1000`

**Tol_Box_LinFac** *(real)*: Box size factor for linear variables applied to trust region box size ↵

The trust region box used in the new Phase 0 method limits the change of variables so 2nd order terms will not become too large. Variables that appear linearly do not have 2nd order terms and the initial box size is therefore larger by a factor Tol_Box_LinFac.

Range: [`1`, `1.e4`], Default: `10`

**Tol_Def_Mult** *(real)*: Largest growth factor allowed in the block of definitional constraints ↵

The block of definitional constraints forms a triangular matrix. This triangular matrix can hide large accumulating growth factors that can lead to increases in the initial sum of infeasibilities and to numerical instability. Tol_Def_Mult is an upper bound on these growth factors. If it is exceeded, some critical chains of definitional constraints will be broken, leading to a larger internal model that should be numerically better behaved.

Range: [`1.01`, ∞], Default: `1.e4`
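As a worked illustration of such growth factors (the helper name is made up): a chain of definitional constraints y1 = 10*x, y2 = 10*y1, ... multiplies any infeasibility or round-off error by 10 per link.

```python
def chain_growth(multipliers):
    """Cumulative growth factor along a chain of definitional constraints
    y[k] = multipliers[k] * y[k-1] (an illustration of why Tol_Def_Mult
    exists, not CONOPT's internal test)."""
    growth = 1.0
    for m in multipliers:
        growth *= abs(m)
    return growth

# Five links each multiplying by 10 give a growth factor of 1e5,
# which exceeds the default Tol_Def_Mult of 1e4.
print(chain_growth([10.0] * 5))  # 100000.0
```

With the default Tol_Def_Mult of 1e4, a chain like this would be broken into shorter, better-conditioned pieces.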

**Tol_Feas_Max** *(real)*: Maximum feasibility tolerance (after scaling). ↵

The feasibility tolerance used by CONOPT is dynamic. As long as we are far from the optimal solution and make large steps it is not necessary to compute intermediate solutions very accurately. When we approach the optimum and make smaller steps we need more accuracy. Tol_Feas_Max is the upper bound on the dynamic feasibility tolerance and Tol_Feas_Min is the lower bound. It is NOT recommended to use loose feasibility tolerances since the objective, including the sum of infeasibility objective, will be less accurate and it may prevent convergence.

Range: [`1.e-10`, `1.e-3`], Default: `1.e-7`

**Tol_Feas_Min** *(real)*: Minimum feasibility tolerance (after scaling). ↵

See Tol_Feas_Max for a discussion of the dynamic feasibility tolerances used by CONOPT.

Range: [`3.e-13`, `1.e-5`], Default: `4.e-10`

**Tol_Feas_Tria** *(real)*: Feasibility tolerance for triangular equations. ↵

Triangular equations are usually solved to an accuracy of Tol_Feas_Min. However, if a variable reaches a bound or if a constraint only has pre-determined variables then the feasibility tolerance can be relaxed to Tol_Feas_Tria.

Range: [`3.e-13`, `1.e-4`], Default: `1.0e-8`

**Tol_Fixed** *(real)*: Tolerance for defining variables as fixed based on initial or derived bounds. ↵

A variable is considered fixed if the distance between the bounds is less than Tol_Fixed * Max(1,Abs(Bound)). The tolerance is used both on the users original bounds and on the derived bounds that the preprocessor implies from the constraints of the model.

Range: [`3.e-13`, `1.e-8`], Default: `4.e-10`

**Tol_Jac_Min** *(real)*: Filter for small Jacobian elements to be ignored during scaling. ↵

A Jacobian element is considered insignificant if it is less than Tol_Jac_Min. The value is used to select which small values are scaled up during scaling of the Jacobian. It is only used with scaling method Mtd_Scale = 0.

Range: [`1.e-7`, `1.e-3`], Default: `1.e-5`

**Tol_Linesearch** *(real)*: Accuracy of One-dimensional search. ↵

The one-dimensional search is stopped if the expected decrease in the objective, estimated from a quadratic approximation, is less than Tol_Linesearch times the decrease obtained so far in this one-dimensional search.

Range: [`0.05`, `0.8`], Default: `0.2`
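The stopping rule can be stated compactly; the function below is only an illustration of the criterion, with made-up names:

```python
def linesearch_done(expected_decrease, decrease_so_far, tol_linesearch=0.2):
    """Accept the current step when the quadratic model predicts only a
    small additional decrease relative to what the search already gained."""
    return expected_decrease < tol_linesearch * decrease_so_far

print(linesearch_done(0.01, 1.0))  # True: only 1% more improvement predicted
print(linesearch_done(0.5, 1.0))   # False: significant improvement still expected
```

A smaller Tol_Linesearch therefore means a more accurate (and more expensive) line search.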

**Tol_Obj_Acc** *(real)*: Relative accuracy of the objective function. ↵

It is assumed that the objective function can be computed to an accuracy of Tol_Obj_Acc * max( 1, abs(Objective) ). Smaller changes in objective are considered to be caused by round-off errors.

Range: [`3.0e-14`, `1.e-6`], Default: `3.0e-13`

**Tol_Obj_Change** *(real)*: Limit for relative change in objective for well-behaved iterations. ↵

The change in objective in a well-behaved iteration is considered small and the iteration counts as slow progress if the change is less than Tol_Obj_Change * Max(1,Abs(Objective)). See also Lim_SlowPrg.

Range: [`3.0e-13`, `1.0e-5`], Default: `3.0e-12`

**Tol_Optimality** *(real)*: Optimality tolerance for reduced gradient when feasible. ↵

The reduced gradient is considered zero and the solution optimal if the largest superbasic component of the reduced gradient is less than Tol_Optimality.

Range: [`3.e-13`, `1`], Default: `1.e-7`

**Tol_Opt_Infeas** *(real)*: Optimality tolerance for reduced gradient when infeasible. ↵

The reduced gradient is considered zero, and the solution declared locally infeasible, if the largest superbasic component of the reduced gradient is less than Tol_Opt_Infeas.

Range: [`3.e-13`, `1`], Default: `1.e-7`

**Tol_Opt_LinF** *(real)*: Optimality tolerance when infeasible in Linear Feasibility Model ↵

This is a special optimality tolerance used when the Linear Feasibility Model is infeasible. Since the model is linear the default value is smaller than for nonlinear submodels.

Range: [`3.e-13`, `1.e-4`], Default: `1.e-10`

**Tol_Piv_Abs** *(real)*: Absolute pivot tolerance. ↵

During LU-factorization of the basis matrix a pivot element is considered large enough if its absolute value is larger than Tol_Piv_Abs. There is also a relative test, see Tol_Piv_Rel.

Range: [`2.2e-16`, `1.e-7`], Default: `1.e-10`

**Tol_Piv_Abs_Ini** *(real)*: Absolute Pivot Tolerance for building initial basis. ↵

Absolute pivot tolerance used during the search for a first logically non-singular basis. The default is fairly large to encourage a better conditioned initial basis.

Range: [`3.e-13`, `1.e-3`], Default: `1.e-7`

**Tol_Piv_Abs_NLTr** *(real)*: Absolute pivot tolerance for nonlinear elements in pre-triangular equations. ↵

The smallest pivot that can be used for nonlinear or variable Jacobian elements during the pre-triangular solve. The pivot tolerance for linear or constant Jacobian elements is Tol_Piv_Abs. The value cannot be less than Tol_Piv_Abs.

Range: [`2.2e-16`, `1.e-3`], Default: `1.e-5`

**Tol_Piv_Ratio** *(real)*: Relative pivot tolerance during ratio-test ↵

During ratio-tests, the lower bound on the slope of a basic variable to potentially leave the basis is Tol_Piv_Ratio times the largest term in the computation of the tangent.

Range: [`1.e-10`, `1.e-6`], Default: `1.e-8`

**Tol_Piv_Rel** *(real)*: Relative pivot tolerance during basis factorizations. ↵

During LU-factorization of the basis matrix, a pivot element is considered large enough relative to the other elements in the column if its absolute value is at least Tol_Piv_Rel times the largest absolute value in the column. Small values of Tol_Piv_Rel will often give a sparser basis factorization at the expense of numerical accuracy. The value used internally is therefore adjusted dynamically between the user's value and 0.9, based on various statistics collected during the solution process. Certain models derived from finite element approximations of partial differential equations can give rise to poor numerical accuracy, and a larger user value of Tol_Piv_Rel may help.

Range: [`1.e-3`, `0.9`], Default: `0.05`
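The combined relative/absolute pivot test can be sketched as follows. This is an illustration of the two tolerances, not CONOPT's factorization code, and the function name is made up:

```python
def pivot_acceptable(pivot, column, tol_piv_rel=0.05, tol_piv_abs=1.e-10):
    """True if a candidate pivot passes both the absolute test (Tol_Piv_Abs)
    and the relative test against the largest element in its column
    (Tol_Piv_Rel), per the descriptions above."""
    largest = max(abs(v) for v in column)
    return abs(pivot) >= tol_piv_abs and abs(pivot) >= tol_piv_rel * largest

column = [0.002, -1.0, 0.3]
print(pivot_acceptable(0.002, column))  # False: 0.002 < 0.05 * 1.0
print(pivot_acceptable(0.3, column))    # True
```

Raising tol_piv_rel toward 0.9 forces pivots closer to the column maximum, which improves accuracy but usually produces denser LU factors.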

**Tol_Piv_Rel_Ini** *(real)*: Relative Pivot Tolerance for building initial basis ↵

Relative pivot tolerance used during the search for a first logically non-singular basis.

Range: [`1.e-4`, `0.9`], Default: `1.e-3`

**Tol_Piv_Rel_Updt** *(real)*: Relative pivot tolerance during basis updates. ↵

During basis changes CONOPT attempts to use cheap updates of the LU-factors of the basis. A pivot is considered large enough relative to the alternatives in the column if its absolute value is at least Tol_Piv_Rel_Updt times the alternative element. Smaller values of Tol_Piv_Rel_Updt will allow sparser basis updates but may cause accumulation of larger numerical errors.

Range: [`1.e-3`, `0.9`], Default: `0.05`

**Tol_Scale2D_Min** *(real)*: Lower bound for scale factors based on large 2nd derivatives. ↵

Scaling of the model is in most cases based on the values of the variables and the first derivatives. However, if the scaled variables and derivatives are reasonable but there are large values in the Hessian of the Lagrangian (the matrix of 2nd derivatives) then the lower bound on the scale factor can be made smaller than Tol_Scale_Min. CONOPT will try to scale variables with large 2nd derivatives by one over the square root of the diagonal elements of the Hessian. However, the revised scale factors cannot be less than Tol_Scale2D_Min.

Range: [`1.e-9`, `1`], Default: `1.e-6`

**Tol_Scale_Max** *(real)*: Upper bound on scale factors. ↵

Scale factors are projected onto the interval from Tol_Scale_Min to Tol_Scale_Max. This prevents very large or very small scale factors arising from pathological types of constraints. The upper limit is selected such that Square(X) can be handled for X close to Lim_Variable. More nonlinear functions may not be scalable for very large variables.

Range: [`1`, `1.e30`], Default: `1.e25`

**Tol_Scale_Min** *(real)*: Lower bound for scale factors computed from values and 1st derivatives. ↵

Scale factors used to scale variables and equations are projected onto the range Tol_Scale_Min to Tol_Scale_Max. The limits are used to prevent very large or very small scale factors due to pathological types of constraints. The default value for Tol_Scale_Min is 1, which means that small values are not scaled up. If you need to scale small values up towards 1, you must define a value of Tol_Scale_Min < 1.

Range: [`1.e-10`, `1`], Default: `1`

**Tol_Scale_Var** *(real)*: Lower bound on x in x*Jac used when scaling. ↵

Rows are scaled so the largest term x*Jac is around 1. To avoid difficulties with models where Jac is very large and x very small a lower bound of Tol_Scale_Var is applied to the x-term.

Range: [`1.e-8`, `1`], Default: `1.e-5`

**Tol_Zero** *(real)*: Zero filter for Jacobian elements and inversion results. ↵

Contains the smallest absolute value that an intermediate result can have. If it is smaller, it is set to zero. It must be smaller than Tol_Piv_Abs/10.

Default: `1.e-20`

**Trace_MinStep** *(real)*: Minimum step between Reinversions when using TraceCNS. ↵

The optimization is stopped with a slow convergence message if the change in trace variable or objective is less than this tolerance between reinversions for more than two consecutive reinversions. The step is scaled by the distance from the initial value to the critical bound.

Range: [`0`, `1`], Default: `0.001`