# Introduction

CONOPT4 is an NLP solver derived from CONOPT3 but with many changes. This initial note will describe some of the more important changes from CONOPT3 to CONOPT4 and some of the new options that control these changes. Selected parts of the log-file are also shown and explained. The note is fairly technical and the casual user does not have to understand the details.

More self-contained documentation for CONOPT4 without numerous references to CONOPT3 will appear later.

# When should you use CONOPT4?

CONOPT3 is a mature solver with many built-in heuristics developed over years of experimentation, and it works well for a large class of models. Initially, CONOPT4 is being developed and tuned for models where CONOPT3 has problems or where small adjustments have given better results. Our initial recommendations are:

- CONOPT4 should be tried on models that take more than a few minutes to solve. CONOPT3 is probably your best choice for small and medium sized models, i.e. models with fewer than 1,000 to 10,000 variables and constraints.
- CONOPT4 should be tried for large CNS models, i.e. models with more than 100,000 variables and constraints.
- CONOPT4 should be tried for models where CONOPT3 runs out of memory. However, note that models of this size are in any case very large and good behavior cannot be guaranteed.
- CONOPT4 should be tried on models where CONOPT3 ends in a locally infeasible solution. CONOPT4 has several new components that avoid obviously infeasible areas and let CONOPT4 move away from saddle points.
- CONOPT4 should be tried on models where CONOPT3 finds a large number of definitional constraints.

# Memory Management

CONOPT3 has a limit on the amount of memory that can be used of 2 or 3 GBytes for 32-bit systems and 8 GBytes for 64-bit systems. The way memory is allocated and used has been completely rewritten and there is no longer a logical limit in CONOPT4. 32-bit systems will still have a 2 or 3 GBytes limit derived from the operating system, but 64-bit systems are now only limited by the amount of physical memory on the computer.

# Revised Preprocessor

The preprocessor in CONOPT3 identifies pre- and post-triangular variables and constraints, and it handles these variables and constraints in a special way so some internal routines can run more efficiently. The triangular variables are for example always basic. CONOPT3 does not include these variables in tests for entering and leaving the basis. And triangular variables are processed before other variables in the inversion routine. Despite the special status of some variables and constraints, they are all part of the model to be solved.

CONOPT4 distinguishes between a 'user model' as defined by the user via the GAMS language, and an 'internal model'. Pre-triangular variables and constraints are simply removed from the user model. Post-triangular variables and constraints are collapsed into a single condensed objective function. And definitional constraints are eliminated. After the internal model has been solved CONOPT4 translates the internal solution back into the solution for the user model and reports this solution to the user. Because CONOPT4 uses these two different models it is no longer possible to turn the preprocessor off.

In addition to the simple pre- and post-triangular variables and constraints from CONOPT3, the preprocessor in CONOPT4 looks at more possibilities for simplifying the model. Some of the new features are:

- Fixed variables are removed completely.
- Constraints that represent simple inequalities are identified and changed into simple bounds on the variables and the constraints are removed.
- Simple monotone constraints such as `exp(x) =L= c1` or `log(y) =L= c2` are converted into simple bounds on the variables and then removed.
- Forcing constraints such as `x1 + x2 =L= 0` with `x1.lo = 0` and `x2.lo = 0` are identified, the variables are fixed, and the constraints are removed. If a forcing constraint is identified then other constraints may become pre-triangular so they can also be removed.
- Linear and monotone constraints are used to compute 'implied bounds' on many variables and these bounds can help CONOPT4 get a better starting point for finding an initial feasible solution.
- Some non-monotone constraints such as `sqr(x1) + sqr(x2) =L= 1` can also be used to derive implied bounds (here `-1 < x1 < +1` and `-1 < x2 < +1`) that both can improve the starting point and can be used to determine that other terms are monotone.
- Constraints with exactly two variables, e.g. simple linear identities such as `x1 =E= a*x2 + b` or simple monotone identities such as `x3 =E= exp(x4)`, are used to move bounds between the two variables and this may result in more variables being included in the post-triangle.
- Linear constraints that are identical or proportional to others are identified and removed.
- Pairs of constraints that define a lower and an upper bound on the same linear expression or on proportional linear expressions, e.g. `1 =L= x1 + x2` and `2*x1 + 2*x2 =L= 4`, are turned into a single ranged constraint implemented with a double-bounded slack variable.
- Nonlinear constraints that become linear when the pre-triangular variables are fixed are recognized as being linear, with the resulting simplifications.
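The bound-tightening logic behind the monotone and non-monotone examples above can be sketched as follows (a minimal Python illustration with hypothetical function names, not CONOPT4's actual code):

```python
import math

def upper_bound_from_exp(c1):
    # exp(x) =L= c1 is equivalent to x =L= log(c1) for c1 > 0, so the
    # constraint can be replaced by a simple upper bound on x and removed.
    return math.log(c1)

def implied_bounds_from_ball(rhs):
    # sqr(x1) + sqr(x2) =L= rhs implies -sqrt(rhs) <= xi <= sqrt(rhs) for
    # each variable, because the other squared term is nonnegative.
    r = math.sqrt(rhs)
    return (-r, r)
```

With `rhs = 1` the second function returns the implied bounds `-1` and `+1` on each variable, as quoted above.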

Some of the new preprocessing steps are useful when solving sub-models in a Branch and Bound environment. A constraint like `x =L= M*y` where `y` is a binary variable fixed at either `0` or `1` is turned into a simple bound on `x`. And a constraint like `sum(i, x(i)) =L= Cap*y` (with `x.lo(i) = 0`) combined with `y` fixed at zero will force all `x(i)` to zero.
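The two Branch and Bound deductions can be sketched in Python (hypothetical helper names, not CONOPT4 internals):

```python
def bound_from_bigM(M, y_fixed):
    # x =L= M*y with binary y fixed at 0 or 1 collapses to a simple
    # upper bound on x: 0 when y = 0, M when y = 1.
    return M * y_fixed

def force_to_zero(x_lo, y_fixed):
    # sum(i, x(i)) =L= Cap*y with x.lo(i) = 0 and y fixed at 0: the
    # right-hand side is 0 and every term is bounded below by 0, so
    # the constraint forces every x(i) to zero.
    if y_fixed == 0 and all(lo == 0.0 for lo in x_lo):
        return [0.0] * len(x_lo)
    return None  # no deduction in other cases
```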

The preprocessor also identifies constructs that are easy to make feasible. There are currently two types:

- Penalty terms: We define a penalty constraint as a constraint of the form `f(x1,x2,..) + p - n =E= 0`, where `p` and `n` are positive variables, and where `p` and `n` only appear in post-triangular constraints or in previously identified penalty constraints. For any feasible values of the x-variables it is easy to find values of `p` and `n` that make the penalty constraint feasible: `p = max(0,-f(x))` and `n = max(0,f(x))`. The definition is easily generalized to constraints where `p` and `n` have coefficients different from one and nonzero bounds; the essence is the presence of two linear unbounded terms of opposite sign.
- Minimax terms: We define a minimax group as a group of constraints of the form `eq(i).. fi(x1,x2,..) =L= z` where `z` is common to the group and otherwise only appears in post-triangular constraints, and `z` is unbounded from above. For any feasible value of the x-variables it is easy to find a value of `z` that makes the minimax group feasible: `z = smax(i: fi(x))`. The definition is easily generalized to groups of constraints where `z` has coefficients different from one and where the direction of the inequality is reversed.
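The closed-form feasibility values for both constructs can be checked with a small sketch (illustration only, hypothetical function names):

```python
def feasible_pn(f_val):
    # Values of the positive variables p and n that make
    # f(x) + p - n =E= 0 hold for an arbitrary value of f(x).
    p = max(0.0, -f_val)
    n = max(0.0, f_val)
    return p, n

def feasible_z(f_vals):
    # Smallest z satisfying fi(x) =L= z for all i; it always exists
    # because z is unbounded from above.
    return max(f_vals)
```

For example, with `f(x) = -3.5` the values `p = 3.5`, `n = 0` restore feasibility of the penalty constraint.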

The preprocessor will also recognize definitional equations: constraints of the form `x =E= f(y)`, where `x` is a free variable or the bounds on `x` cannot be binding, are called definitional equations and `x` is called a defined variable. If there are many potential defined variables the preprocessor will select a recursive set and logically eliminate them from the internal model: the values of the defined variables are easily derived from the values of all other variables by evaluating the definitional equations in their recursive order, and these values are substituted into the remaining constraints before their residuals are computed. The matrix of derivatives of the remaining constraints is computed from the overall matrix of derivatives via an elimination of the triangular definitional equations.

The procedure used to recognize definitional equations is similar to the one used in CONOPT3. However, the actual use of the definitional equations is quite different in CONOPT4. The definitional equations are eliminated from the internal model and they are not present in the internal operations used to solve this model. For some models this can make a big difference.
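The logical elimination amounts to evaluating the definitional equations in their recursive order before the remaining constraints are evaluated. A hedged sketch with two hypothetical definitions `d1 =E= exp(y1)` and `d2 =E= d1 + 2*y2`:

```python
import math

def eval_defined(y1, y2):
    # Evaluate the definitional equations in recursive order:
    # d1 only needs y1, and d2 needs d1 and y2.
    d1 = math.exp(y1)
    d2 = d1 + 2.0 * y2
    return d1, d2

def residual_of_remaining_constraint(y1, y2):
    # A remaining (hypothetical) constraint, say d2 + y1 =E= 4, is
    # evaluated with the substituted values of the defined variables,
    # so the internal model only sees the variables y1 and y2.
    d1, d2 = eval_defined(y1, y2)
    return d2 + y1 - 4.0
```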

We will show a few examples of log files where the output from the preprocessor is shown. The first is from the otpop.gms model in the GAMS Library:

```
C O N O P T   version 4.02
Copyright (C) ARKI Consulting and Development A/S
              Bagsvaerdvej 246 A
              DK-2880 Bagsvaerd, Denmark
Licensed to: GAMS/CONOPT OEM License

The user model has 104 variables and 77 constraints
with 285 Jacobian elements, 100 of which are nonlinear.
The Hessian of the Lagrangian has 17 elements on the diagonal,
33 elements below the diagonal, and 66 nonlinear variables.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
    0   0        4.0939901439E+03 (Input point)

The pre-triangular part of the model has 32 constraints and 43 variables.
The post-triangular part of the model has 9 constraints and variables.
There are 13 definitional constraints and defined variables.
Preprocessed model has 39 variables and 23 constraints
with 88 Jacobian elements, 22 of which are nonlinear.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.0510968588E+03 (Full preprocessed model)
                   4.5063072017E+01 (After scaling)
                   5.0023365462E+00 (After adjusting individual variables)
```

The first few lines define the size of the user model. Usually there will also be information on the Hessian of the Lagrangian; these two lines are missing for models where the Hessian is not generated, usually because it is very large and dense (or because the generation has been turned off with the option `Flg_Hessian = false`).

After the description of the initial sum of infeasibilities there are three lines with statistics from the preprocessor. Note that the pre-triangular part of the model has more variables than constraints, in this case because there are fixed variables. The post-triangular part will always have the same number of constraints as variables, and the same is true for definitional constraints and defined variables.

After the statistics from the preprocessor the size of the resulting internal model is shown. There is no Hessian information; it is costly to derive the Hessian for the internal model and it will in most cases be very dense so all use of 2nd order information in the internal model is computed by mapping data to the user model and using the real Hessian.

The next lines show the change in the sum of infeasibilities during the initial stages. The first line is after the preprocessor has changed some variables and removed many constraints. The second line is after the internal model has been scaled. And the last line is after some variables have been adjusted as described in the next section.

The second example is from the prolog.gms model in the GAMS Library:

```
 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
    0   0        1.2977431398E+03 (Input point)

The post-triangular part of the model has 1 constraints and variables.
There are 17 penalty and minimax constraints with 3 variables.
Reduced model without penalty components has 17 variables and 5 constraints
with 46 Jacobian elements, 0 of which are nonlinear.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.2960000000E+03 (Model without penalty constraints)
                   5.0625000000E+00 (After scaling)
                   0.0000000000E+00 (After adjusting individual variables)
```

There are no pre-triangular variables and constraints and the pre-triangular line is therefore missing, just like the line describing definitional constraints. On the other hand, there is a line stating that the model has 17 penalty and minimax constraints involving a total of 3 variables.

The internal model that is generated is here without the penalty and minimax constraints. This particular sub-model and other internal sub-models are described in more detail later in this note.

# Phase 0 - Finding an Initial Feasible Solution

Phase 0 is started with a new 'Adjust Initial Point' procedure that tries to minimize the sum of infeasibilities by changing individual variables one at a time. The procedure is very cheap since each change of a single variable only involves a small part of the overall model, and as a by-product it will produce a large part of a good initial basis and make many constraints feasible. As the log file examples above show, the procedure can in some cases reduce the sum of infeasibilities significantly.
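A minimal sketch of the one-variable-at-a-time idea (restricted here to linear equality constraints with hypothetical helper names; the real procedure works on the full nonlinear model):

```python
def sum_infeas(x, cons):
    # cons: list of (coeffs, rhs) pairs for equality rows sum(a*x) = b;
    # coeffs is a dict mapping variable name -> coefficient.
    return sum(abs(sum(c * x[k] for k, c in a.items()) - b) for a, b in cons)

def adjust_one_at_a_time(x, cons, passes=2):
    # For each variable in turn, take a 1-D Newton step on the constraint
    # with the largest residual in which it appears, and keep the step
    # only if it does not worsen the total sum of infeasibilities.
    x = dict(x)
    for _ in range(passes):
        for j in list(x):
            rows = [(a, b) for a, b in cons if j in a]
            if not rows:
                continue
            a, b = max(rows, key=lambda r: abs(
                sum(c * x[k] for k, c in r[0].items()) - r[1]))
            resid = sum(c * x[k] for k, c in a.items()) - b
            trial = dict(x)
            trial[j] = x[j] - resid / a[j]
            if sum_infeas(trial, cons) <= sum_infeas(x, cons):
                x = trial
    return x
```

Starting from the all-zero point on the toy system `x + y = 2`, `y = 1`, two passes of this sketch reach the feasible point `x = 1`, `y = 1`.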

Phase 0 in CONOPT3 is based on Newton's method with a heuristic for taking some constraints out if they do not behave well for Newton. The method can be very fast if a good initial basis is found, but for larger models it can also use a large number of basis changes without any real improvement and can therefore be very slow. CONOPT4 has replaced the heuristic with a rigorous LP framework that iteratively finds a better basis for Newton's method. CONOPT4 can therefore be slower for very easy models but the variability in solution time is smaller and it should never be very slow.

The new Phase 0 procedure replaces the Triangular Crash procedure from CONOPT3.

The following example shows the relevant part of the log file for GAMS Library model otpop.gms (continued from above):

```
 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.0510968588E+03 (Full preprocessed model)
                   4.5063072017E+01 (After scaling)
                   5.0023365462E+00 (After adjusting individual variables)
    1   0    0   2.2657221820E+00               1.0E+00     3 T T
    2   0    0   1.8510537501E+00               1.0E+00     2 T T
    3   0    0   1.8243868306E+00               1.0E+00     1 T T
    4   0    0   1.8239239188E+00               1.0E+00     1 T T
    5   0    0   1.8239236521E+00               1.0E+00     1 T T

** Feasible solution. Value of objective =    323.670194144
```

The `InItr` column shows the number of LP-like inner iterations for each outer iteration, and `Step = 1` indicates that the full solution from the LP was used.

The next example is from GAMS Library model mathopt3.gms:

```
 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.7198687345E+03 (Full preprocessed model)
                   7.9643177481E+00 (After scaling)
                   1.2674164214E-01 (After adjusting individual variables)
    1  S0    0   2.6763223208E+00               1.2E-01     2 F F
    2   0    0   1.3292743354E+00               1.0E+00     2 T T
    3  S0    0   4.9371333807E+00               2.5E-01     1 F F
    4  S0    0   2.3899063615E+00               1.0E+00     2 T T
    5  S0    0   3.2207192474E+00               5.0E-01     1 F F
    6   0    0   3.2206913182E+00               6.2E-02     1 F F
    7   1    3   6.8573593504E-01 1.7E+00     1 1.2E+00     1 T T
```

For some of the iterations Step is less than 1, indicating that the direction found by the inner linear model could not be used in full due to nonlinearities. Also note the lines with 'S0' in the Phase column. The S indicates that the model was scaled before the iteration and that the sum of infeasibilities increased during this scaling procedure. The sum of infeasibilities is therefore not monotonically decreasing, even though each outer iteration does decrease it.

# Transition between SLP and SQP

The transition from SLP to SQP and back again is in CONOPT3 based on monitoring failure. The logic has been changed in CONOPT4 so transition is based on continuous measurements of curvature, both in the general constraints and in the objective function, combined with estimates of computational costs and progress for SLP and SQP.

The continuation of the log file for GAMS Library model otpop.gms shows some of this:

```
 Iter Phase Ninf     Objective     RGmax    NSB   Step InItr MX OK
    6   3        2.3861517936E+02 3.6E+02    12 2.1E-01     7 F T
    7   3        1.4308470370E+02 1.1E+02    16 2.0E-01     9 F T
    8   3        9.4375606705E+01 1.5E+02    16 1.8E-01     5 F F
    9   3        2.4652509772E+01 7.4E+01    16 6.7E-01     2 F T
   10   4        2.4445151316E-02 3.3E+01    16 1.0E+00     6 F T
   11   4        5.0735392100E-06 4.4E+00    16 1.0E+00     5 F T
   12   4        1.0276261682E-09 1.9E-02    16 1.0E+00     3 F T
   13   4        1.4326828955E-13 2.8E-06    16 1.0E+00     1 F T
   14   4        1.4326828955E-13 4.0E-10    16

** Optimal solution. Reduced gradient less than tolerance.
```

Iterations 6 to 9 are SLP iterations (Phase 3) and iterations 10 to 14 are SQP iterations (Phase 4). The SLP iterations have Step less than 1 due to nonlinear objective terms and CONOPT4 jumps directly from SLP to SQP. CONOPT3 needed some steepest descent and some Quasi-Newton iterations as a transition, and although these iterations are fast, the transition could sometimes be slow.

# Bad Iterations

Bad iterations (for both solvers flagged with "F" in the "OK" column of the log file) are an important problem for both CONOPT3 and CONOPT4. They appear if the output from the SLP or SQP procedure is a search direction along which CONOPT cannot move very far because it is difficult to make the nonlinear constraints feasible again. The effort spent in the SLP or SQP procedure is therefore partially wasted. The problem is usually associated with a basis that is ill-conditioned or changes very fast.

CONOPT3 has some heuristics for changing the basis in these cases. For models with many basic and superbasic variables this can be a slow and not very reliable procedure.

CONOPT4 uses a more rigorous procedure based on monitoring the size of some intermediate terms. It can be the size of elements of the tangent for basic variables relative to similar elements for superbasic variables. Or it can be intermediate results from the computation of the reduced costs. The monitoring procedure is cheap and it is used to maintain a well-conditioned basis throughout the optimization instead of waiting until something does not work well.

# Saddle Points and Directions of Negative Curvature

CONOPT is based on moving in a direction derived from the gradient (or the reduced gradient for models with constraints). If the reduced gradient (projected on the bounds) is zero then the solution satisfies the first-order optimality conditions and it is standard procedure to stop. Unfortunately, this means that we can stop in a saddle-point.

It is not very common to move towards a saddle-point and get stuck in it. However, it is not uncommon that the initial point, provided by a user or by default, is a saddle point. A simple example is the constraint `x*y =E= 1` started with `x.l = y.l = 0`, which easily can end with a locally infeasible solution. Or `minimize z, z =E= x*y` with the same starting point, which could end locally optimal without moving even though better points exist in the neighborhood.

CONOPT4 has an added procedure that tries to find a direction of negative curvature that can move the solution point away from a saddle-point. The procedure is only called in points that satisfy the first order optimality conditions and it is therefore a cheap safeguard. The theory behind the method is developed for models without degeneracy and it works very well in practice for these models. Models with some kind of degeneracy (basic variables at bound or nonbasic variables with zero reduced cost) use the same procedure, but it is in this case only a heuristic that cannot be guaranteed to find a direction of negative curvature, even if one exists.
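The classic saddle point of `z =E= x*y` at the origin illustrates why such a direction helps (a sketch, not CONOPT4's procedure):

```python
def f(x, y):
    # Objective with a saddle point at (0, 0): the gradient is zero
    # there, but the Hessian [[0, 1], [1, 0]] has eigenvalues +1 and -1.
    return x * y

# The eigenvector (1, -1) for the eigenvalue -1 is a direction of
# negative curvature: moving along it from the origin decreases f,
# even though the first-order optimality conditions hold at (0, 0).
step = 0.1
x_new, y_new = 0.0 + step * 1.0, 0.0 + step * (-1.0)
```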

If you know that there are no directions of negative curvature you can turn the procedure off by setting the logical option `Flg_NegCurve` to false. If the model is known to be convex you can set the logical option `Flg_Convex` to `true` and it will also turn this procedure off. The saving is usually very small, except for models that solve in very few iterations and for models with a large number of superbasic variables.

There is no output in the log file for negative curvature. If a useful direction is found CONOPT4 will follow it and the optimization continues. Otherwise, the solution is declared locally optimal.

# Use of Alternative Sub-Models

During the course of an optimization CONOPT4 can work with up to three different internal sub-models. These models are:

- Full Model: This model consists of the constraints in the user's model excluding all pre- and post-triangular constraints and with the definitional variables eliminated by their defining constraints.
- No-Penalty Model: This model consists of the Full Model excluding all penalty and mini-max constraints. This model does not have an objective function.
- Linear Feasibility Model: This model consists of the linear constraints of the Full Model. The Linear Feasibility model is either solved without an objective function or minimizing a quadratic distance measure; this is discussed below.

The pre-triangular variables are considered fixed and they do not appear in any of the sub-models. Their influence comes through their contribution to coefficients and constant terms. The post-triangular variables are considered intermediate variables in the definition of the objective function. They do not appear in the last two models, which are only concerned with feasibility, and they only appear indirectly via the objective in the Full Model. The defined variables are considered intermediate variables in the definition of the remaining constraints in the same way as post-triangular variables are intermediate in the objective. The variables in the Full Model are all variables excluding pre- and post-triangular variables and excluding defined variables; this set can include variables that do not appear in any constraints. The constraints of the Full Model are all constraints excluding pre- and post-triangular constraints and with the definitional equations logically eliminated. The variables in the Linear Feasibility Model and in the No-Penalty Model are the variables that appear in the constraints of these models (excluding pre-triangular variables).

CONOPT always starts by searching for a feasible solution and the sub-models only play a role in this part of the optimization so if the initial point provided by the modeler is feasible then these sub-models are irrelevant. If there are many penalty and/or minimax constraints then the No-Penalty Model will be much smaller than the Full Model and it is more efficient to use the smaller model while searching for feasibility. So the No-Penalty model is only introduced for efficiency reasons. It is by default solved before the Full Model if all of the following conditions are satisfied:

- The `Flg_NoPen` option is true (the default value).
- The model is not a CNS model.
- The user did not provide an initial basis.
- Some of the constraints in the No-Penalty Model are infeasible.
- The number of penalty and minimax constraints is more than the number of constraints in the Full Model multiplied by the value of option `Rat_NoPen`. The default value of `Rat_NoPen` is `0.1`, i.e. the No-Penalty Model is only defined and solved if it is at least 10% smaller than the Full Model.

The GAMS Library model prolog.gms used earlier has many penalty and minimax constraints. The relevant part of the log file is:

```
 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
    0   0        1.2977431398E+03 (Input point)

The post-triangular part of the model has 1 constraints and variables.
There are 17 penalty and minimax constraints with 3 variables.
Reduced model without penalty components has 17 variables and 5 constraints
with 46 Jacobian elements, 0 of which are nonlinear.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.2960000000E+03 (Model without penalty constraints)
                   5.0625000000E+00 (After scaling)
                   0.0000000000E+00 (After adjusting individual variables)

Previous model terminated and penalty components are added back in.
Full preprocessed model has 20 variables and 22 constraints
with 120 Jacobian elements, 8 of which are nonlinear.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   0.0000000000E+00 (After adjusting penalty variables)

** Feasible solution. Value of objective =   -1147.54681404
```

The No-Penalty Model is small with only 5 constraints and 17 variables and it is solved already in the 'Adjust Initial Point' procedure. The Full Model is significantly larger with 22 constraints and 20 variables, and it is feasible when started from the solution to the No-Penalty Model.

The Linear Feasibility Model is introduced to help avoid locally infeasible solutions. It produces a starting point to the nonlinear models (No-Penalty Model or Full Model) that satisfies all linear constraints. If the Linear Feasibility Model is infeasible then the overall model is proved to be infeasible (independent of nonlinearities) and there is no reason to proceed with the nonlinear part of the model.

The Linear Feasibility Model is only useful if the model has some linear constraints and if the initial point provided by the modeler does not satisfy these constraints. If a feasible solution to the linear constraints is found there are several possible ways to continue before the No-Penalty Model and/or the Full Model are started:

A. Use the solution point as is.

B. Perform an approximate minimization of the weighted distance from the user's initial point. Include only the variables that have non-default initial values, i.e. variables with an initial value (`xini`) that is different from zero projected on the bounds, i.e. `xini ne min(max(0,x.lo),x.up)`. The distance measure is `sqr( (x-xini) / max(1,abs(xini)) )`.

C. As in B, but include all variables in the distance measure.

D. As in C, but define `xini` to 1 projected on the bounds for all variables with default initial value.

Possibility A is fast but may give a starting point for the nonlinear model far from the initial point provided by the user, B is slower but gives a starting point for the nonlinear model that is close to the point provided by the user, and C and D are also slower but may provide reasonably good and different starting points for the nonlinear model.
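The distance measures behind objectives B-D can be written out as a short sketch (hypothetical helper names):

```python
def default_xini(x_lo, x_up):
    # Zero projected on the bounds: min(max(0, x.lo), x.up), the value a
    # variable gets when the user supplies no initial value.
    return min(max(0.0, x_lo), x_up)

def weighted_distance(x, xini):
    # Sum over the included variables of sqr((x - xini) / max(1, abs(xini))).
    return sum(((xi - x0) / max(1.0, abs(x0))) ** 2
               for xi, x0 in zip(x, xini))
```

Objective B sums only over variables whose user-supplied value differs from `default_xini`; C sums over all variables; D additionally replaces the default `xini` by 1 projected on the bounds.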

The order in which the sub-models are solved depends on a Linear Feasibility Model strategy option, `Lin_Method`:

- If `Lin_Method` has the default value 1 then the initial point and basis are assumed to be fairly good and CONOPT4 will start with the No-Penalty Model (only if the conditions mentioned above are satisfied) followed by the Full Model. If the model terminates locally optimal, unbounded, or on some resource limit (time, iterations, function evaluations) then we are done and CONOPT terminates. But if the model is locally infeasible then we build and solve the Linear Feasibility Model. If this model is infeasible, the overall model is infeasible and we are again done. If it is feasible we minimize objective B and use the solution point as a second starting point for the nonlinear model. If this attempt also terminates locally infeasible we try to generate an alternative initial point with objective C and then with objective D. If all fails, the model is labeled locally infeasible.
- With `Lin_Method = 2` CONOPT will start with the Linear Feasibility Model with objective A before looking at the No-Penalty and Full Models. If they are locally infeasible from this starting point we follow the procedure from above with objectives B, C, and then D.
- `Lin_Method = 3` is similar to `Lin_Method = 2` except that the first objective A is skipped.

An example where all Linear Feasibility objectives are used is taken from `ex5_3_2.gms` in the GlobalLib collection of test models. It shows the repeated starts of the Linear Feasibility Model followed by the Full Preprocessed Model:

```
Preprocessed model has 22 variables and 16 constraints
with 59 Jacobian elements, 24 of which are nonlinear.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   4.1200000000E+02 (Full preprocessed model)
                   8.4843750000E+00 (After scaling)
                   6.2500000000E-01 (After adjusting individual variables)
    1   1    1   6.2500000000E-01 0.0E+00     5 0.0E+00       T T
    4   1    1   6.2500000000E-01 0.0E+00     4

** Infeasible solution. Reduced gradient less than tolerance.

Initial linear feasibility model has 22 variables and 7 constraints
with 22 linear Jacobian elements.
Objective: Distance to initial point (nondefault variables)

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   3.0200000000E+02 (Linear feasibility model)
                   3.1718750000E+00 (After scaling)

** Linear constraints feasible. Distance =    0.00000000000

 Iter Phase Ninf      Distance      RGmax    NSB   Step InItr MX OK
    6   4        0.0000000000E+00 0.0E+00     0

Restarting preprocessed model from a new starting point.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.1000000000E+02 (Full preprocessed model)
                   5.3125000000E+00 (After scaling)
   11   2    1   6.2500000000E-01 0.0E+00     0

** Infeasible solution. There are no superbasic variables.

Restarting linear feasibility model.
Objective: Distance to initial point (all variables)

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   0.0000000000E+00 (Linear feasibility model)
                   0.0000000000E+00 (After scaling)

** Linear constraints feasible. Distance =    90002.0000000

 Iter Phase Ninf      Distance      RGmax    NSB   Step InItr MX OK
   16   4        2.2501000000E+04 1.8E-12     5

Restarting preprocessed model from a new starting point.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.8492500000E+02 (Full preprocessed model)
                   1.0775781250E+01 (After scaling)
   21   1    1   6.2500000000E-01 5.4E-03     3 0.0E+00       T T
   26   1    1   6.2499999982E-01 3.7E-07     3 2.4E+03       T T
   27   2    1   6.2500000000E-01 0.0E+00     2

** Infeasible solution. Reduced gradient less than tolerance.

Restarting linear feasibility model.
Objective: Distance to point away from bounds

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   1.4210854715E-14 (Linear feasibility model)
                   5.5511151231E-17 (After scaling)

** Linear constraints feasible. Distance =    15.1922028230

 Iter Phase Ninf      Distance      RGmax    NSB   Step InItr MX OK
   31   4        2.5570990132E+00 1.1E-04    13 1.0E+00     2 F T
   32   4        2.5570990132E+00 7.7E-11    13

Restarting preprocessed model from a new starting point.

 Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
                   6.0836352222E+02 (Full preprocessed model)
                   9.7890253865E+00 (After scaling)
   35  S0    0   2.3859378417E+00               1.0E+00     3 T T

** Feasible solution. Value of objective =    1.86415945946

 Iter Phase Ninf     Objective     RGmax    NSB   Step InItr MX OK
   40   4        1.8641594595E+00 0.0E+00     0

** Optimal solution. There are no superbasic variables.
```

The Full Model is infeasible when started directly from the values provided by the user. The Linear Feasibility model is then solved to get a different starting point for the Full model where the linear part is feasible. The Full Model is also infeasible from this point and it is necessary to solve the Linear Feasibility model three times with different objective functions before the full model becomes feasible.

The number of sub-models that are solved is limited by the option `Num_Rounds`. The default value is 4, i.e. we will try very hard to find a feasible point, as shown in the example above. The value 1 will make CONOPT terminate immediately when a locally infeasible point is found. You can lower `Num_Rounds` if you are not interested in spending a lot of time on a model that is likely to be infeasible. This can be particularly relevant if CONOPT4 is used as the sub-solver inside SBB where infeasible sub-problems are fairly likely.

If the model is defined to be convex with the option `Flg_Convex = true` then a locally infeasible solution is labeled (globally) infeasible and the Linear Feasibility Model will not be used. (A locally optimal solution is also labeled (globally) optimal.)

# Multiple Threads

CONOPT4 can use multiple threads for some internal computations and, in collaboration with GAMS, for function and derivative evaluations. Multiple threads are currently only used for certain fairly large and dense computations and there are not so many of these in the types of models usually built with GAMS. In addition, multiple threads have quite high overhead and they are therefore only useful for fairly large models. Currently the best improvements have been for very large models with more than 100,000 variables or constraints, in particular for CNS models of this size.

It is the intention to implement multiple threads into more parts of CONOPT4 in the future.

Threads can be turned on with the GAMS command-line option `Threads=n` or with the CONOPT4 option `threads`.

# APPENDIX A - Options

The options that ordinary GAMS users can access are listed below.

## Algorithmic options

Option | Description | Default |
---|---|---|
DF_Method | Method used with defined variables | `0` |
Flg_Convex | Flag for defining a model to be convex | `0` |
Flg_Crash_Basis | Flag for crashing an initial basis without fixed slacks | `1` |
Flg_Crash_Slack | Flag for selecting initial basis as Crash-triangular variables plus slacks. | `0` |
Flg_Dbg_Intv | Flag for debugging interval evaluations. | `0` |
Flg_DC_Unique | Flag for requiring definitional constraints to be unique | `1` |
Flg_NegCurve | Flag for testing for negative curvature when apparently optimal | `1` |
Flg_NoPen | Flag for allowing the Model without penalty constraints | `1` |
Flg_RedHess | Flag for approximating Reduced Hessian information for incoming superbasics. | `0` |
Flg_SLPMode | Flag for enabling SLP mode. | `1` |
Flg_SQPMode | Flag for enabling SQP mode. | `1` |
Flg_Square | Flag for Square System. Alternative to defining modeltype=CNS in GAMS | `0` |
Frq_Rescale | Rescaling frequency. | `5` |
Lim_DFVars | Limit on the number of candidates for defined variable in one constraint | `2` |
Lim_Err_2DDir | Limit on errors in Directional Second Derivative evaluation. | `10` |
Lim_Err_Fnc_Drv | Limit on number of function evaluation errors. Overwrites GAMS Domlim option | `GAMS DomLim` |
Lim_Err_Hessian | Limit on errors in Hessian evaluation. | `10` |
Lim_Iteration | Maximum number of iterations. Overwrites GAMS Iterlim option. | `GAMS IterLim` |
Lim_NewSuper | Maximum number of new superbasic variables added in one iteration. | `auto` |
Lim_RedHess | Maximum number of superbasic variables in the approximation to the Reduced Hessian. | `auto` |
Lim_SlowPrg | Limit on number of iterations with slow progress (relative less than Tol_Obj_Change). | `20` |
Lim_StallIter | Limit on the number of stalled iterations. | `100` |
Lim_Start_Degen | Limit on number of degenerate iterations before starting degeneracy breaking strategy. | `10` |
Lim_Time | Time Limit. Overwrites the GAMS Reslim option. | `GAMS ResLim` |
Lim_Variable | Upper bound on solution values and equation activity levels | `1.e15` |
Lin_Method | Method used to determine if and/or which Linear Feasibility Models to use | `1` |
Mtd_Dbg_1Drv | Method used by the function and derivative debugger. | `0` |
Mtd_RedHess | Method for initializing the diagonal of the approximate Reduced Hessian | `0` |
Mtd_Scale | Method used for scaling. | `3` |
Mtd_Step_Phase0 | Method used to determine the step in Phase 0. | `Auto` |
Mtd_Step_Tight | Method used to determine the maximum step while tightening tolerances. | `0` |
Num_Rounds | Number of rounds with Linear Feasibility Model | `4` |
Rat_NoPen | Limit on ratio of penalty constraints for the No_Penalty model to be solved | `0.1` |
Tol_Bound | Bound filter tolerance for solution values close to a bound. | `1.e-7` |
Tol_BoxSize | Initial box size for trust region models for overall model | `10` |
Tol_BoxSize_Lin | Initial box size for trust region models for linear feasibility model | `1000` |
Tol_Box_LinFac | Box size factor for linear variables applied to trust region box size | `10` |
Tol_DFixed | Tolerance for defining variables as fixed based on derived bounds. | `1.e-12` |
Tol_Feas_Max | Maximum feasibility tolerance (after scaling). | `1.e-7` |
Tol_Feas_Min | Minimum feasibility tolerance (after scaling). | `4.e-10` |
Tol_Feas_Tria | Feasibility tolerance for triangular equations. | `1.0e-8` |
Tol_IFixed | Tolerance for defining variables as fixed based on initial bounds. | `1.e-9` |
Tol_Jac_Min | Filter for small Jacobian elements to be ignored during scaling. | `1.e-5` |
Tol_Linesearch | Accuracy of One-dimensional search. | `0.2` |
Tol_Obj_Acc | Relative accuracy of the objective function. | `3.0e-13` |
Tol_Obj_Change | Limit for relative change in objective for well-behaved iterations. | `3.0e-12` |
Tol_Optimality | Optimality tolerance for reduced gradient when feasible. | `1.e-7` |
Tol_Opt_Infeas | Optimality tolerance for reduced gradient when infeasible. | `1.e-7` |
Tol_Piv_Abs | Absolute pivot tolerance. | `1.e-10` |
Tol_Piv_Abs_Ini | Absolute Pivot Tolerance for building initial basis. | `1.e-7` |
Tol_Piv_Abs_NLTr | Absolute pivot tolerance for nonlinear elements in pre-triangular equations. | `1.e-5` |
Tol_Piv_Ratio | Relative pivot tolerance during ratio-test | `1.e-8` |
Tol_Piv_Rel | Relative pivot tolerance during basis factorizations. | `0.05` |
Tol_Piv_Rel_Ini | Relative Pivot Tolerance for building initial basis | `1.e-3` |
Tol_Piv_Rel_Updt | Relative pivot tolerance during basis updates. | `0.05` |
Tol_Scale2D_Min | Lower bound for scale factors based on large 2nd derivatives. | `1.e-6` |
Tol_Scale_Max | Upper bound on scale factors. | `1.e25` |
Tol_Scale_Min | Lower bound for scale factors computed from values and 1st derivatives. | `1` |
Tol_Scale_Var | Lower bound on x in x*Jac used when scaling. | `1.e-5` |
Tol_Zero | Zero filter for Jacobian elements and inversion results. | `1.e-20` |

## Debugging options

Option | Description | Default |
---|---|---|
Flg_Interv | Flag for using intervals in the Preprocessor | `1` |
Flg_Prep | Flag for using the Preprocessor | `1` |
Lim_Dbg_1Drv | Flag for debugging of first derivatives | `0` |
Lim_Hess_Est | Upper bound on second order terms. | `1.e4` |
Lim_Msg_Dbg_1Drv | Limit on number of error messages from function and derivative debugger. | `10` |

## Output options

Option | Description | Default |
---|---|---|
Frq_Log_Simple | Frequency for log-lines for non-SLP/SQP iterations. | `auto` |
Frq_Log_SlpSqp | Frequency for log-lines for SLP or SQP iterations. | `auto` |
Lim_Msg_Large | Limit on number of error messages related to large function value and Jacobian elements. | `10` |
Lim_Pre_Msg | Limit on number of error messages related to infeasible pre-triangle. | `25` |

## Interface options

Option | Description | Default |
---|---|---|
cooptfile | | |
Flg_2DDir | Flag for computing and using directional 2nd derivatives. | `auto` |
Flg_Hessian | Flag for computing and using 2nd derivatives as Hessian of Lagrangian. | `auto` |
HEAPLIMIT | Maximum Heap size in MB allowed | `1e20` |
HessianMemFac | Memory factor for Hessian generation: Skip if Hessian elements > Nonlinear Jacobian elements*HessianMemFac, 0 means unlimited. | `0` |
THREAD2D | Number of threads used for second derivatives | `1` |
THREADC | Number of compatibility threads used for comparing different values of THREADS | `1` |
THREADF | Number of threads used for function evaluation | `1` |
threads | Number of threads used by Conopt internally | `GAMS Threads` |

**cooptfile** *(string)*: ↵

**DF_Method** *(integer)*: Method used with defined variables ↵

When defined variables are identified (see LSUSDF) they can be used in two ways, controlled by DF_Method:

Default:

`0`

- `0`: Defined variables are only used in the initial point and for the initial basis (default).
- `1`: Defined variables are kept basic and the defining constraints are used to recursively assign values to the defined variables in all trial points.

**Flg_2DDir** *(boolean)*: Flag for computing and using directional 2nd derivatives. ↵

If turned on, make directional second derivatives (Hessian matrix times a direction vector) available to CONOPT. The default is on, but it will be turned off if the model has external equations (defined with =X=) and the user has not provided directional second derivatives. If both the Hessian of the Lagrangian (see Flg_Hessian) and directional second derivatives are available then CONOPT will use both: directional second derivatives are used when the expected number of iterations in the SQP sub-solver is low and the Hessian is used when the expected number of iterations is large.

Default:

`auto`

**Flg_Convex** *(boolean)*: Flag for defining a model to be convex ↵

When turned on (the default is off) CONOPT knows that a local solution is also a global solution, whether it is optimal or infeasible, and it will be labeled appropriately. At the moment, Flg_NegCurve will be turned off. Other parts of the code will gradually learn to take advantage of this flag.

Default:

`0`

**Flg_Crash_Basis** *(boolean)*: Flag for crashing an initial basis without fixed slacks ↵

When turned on (1) CONOPT will try to crash a basis without fixed slacks in the basis. Fixed slacks are only included in a last round to fill linearly dependent rows. When turned off (0), large infeasible slacks will be included in the initial basis with preference for variables and slacks far from bound.

Default:

`1`

**Flg_Crash_Slack** *(boolean)*: Flag for selecting initial basis as Crash-triangular variables plus slacks. ↵

When turned on (1) CONOPT will select all infeasible slacks as the first part of the initial basis.

Default:

`0`

**Flg_Dbg_Intv** *(boolean)*: Flag for debugging interval evaluations. ↵

Flg_Dbg_Intv controls whether interval evaluations are debugged. Currently we check that the lower bound does not exceed the upper bound for all intervals returned, both for function values and for derivatives.

Default:

`0`

**Flg_DC_Unique** *(boolean)*: Flag for requiring definitional constraints to be unique ↵

Flg_DC_Unique controls whether CONOPT will require definitional constraints to be unique. If turned on variables are excluded if they can be defined from more than one equation, and equations are excluded if they can be used to define more than one variable.

Default:

`1`

**Flg_Hessian** *(boolean)*: Flag for computing and using 2nd derivatives as Hessian of Lagrangian. ↵

If turned on, compute the structure of the Hessian of the Lagrangian and make it available to CONOPT. The default is usually on, but it will be turned off if the model has external equations (defined with =X=) or cone constraints (defined with =C=) or if the Hessian becomes too dense. See also Flg_2DDir and HessianMemFac.

Default:

`auto`

**Flg_Interv** *(boolean)*: Flag for using intervals in the Preprocessor ↵

If turned on (default), CONOPT will attempt to use interval evaluations in the preprocessor to determine if functions are monotone or if intervals for some of the variables can be excluded as infeasible.

Default:

`1`

**Flg_NegCurve** *(boolean)*: Flag for testing for negative curvature when apparently optimal ↵

When turned on (the default) CONOPT will try to identify directions with negative curvature when the model appears to be optimal. The objective is to move away from saddlepoints. Can be turned off when the model is known to be convex and cannot have negative curvature.

Default:

`1`

**Flg_NoPen** *(boolean)*: Flag for allowing the Model without penalty constraints ↵

When turned on (the default) CONOPT will create and solve a smaller model without the penalty constraints and variables and the minimax constraints and variables if the remaining constraints are infeasible in the initial point. This is often a faster way to start the solution process.

Default:

`1`

**Flg_Prep** *(boolean)*: Flag for using the Preprocessor ↵

If turned on (default), CONOPT will use its preprocessor to try to determine pre- and post-triangular components of the model and find definitional constraints.

Default:

`1`

**Flg_RedHess** *(boolean)*: Flag for approximating Reduced Hessian information for incoming superbasics. ↵

If Flg_RedHess is turned on (1) CONOPT will try to estimate second order information in the reduced Hessian for incoming superbasic variables based on directional second derivatives. This is more costly than the standard method described under Mtd_RedHess.

Default:

`0`

**Flg_SLPMode** *(boolean)*: Flag for enabling SLP mode. ↵

If Flg_SLPMode is on (the default) then the SLP (sequential linear programming) sub-solver can be used, otherwise it is turned off.

Default:

`1`

**Flg_SQPMode** *(boolean)*: Flag for enabling SQP mode. ↵

If Flg_SQPMode is on (the default) then the SQP (sequential quadratic programming) sub-solver can be used, otherwise it is turned off.

Default:

`1`

**Flg_Square** *(boolean)*: Flag for Square System. Alternative to defining modeltype=CNS in GAMS ↵

When turned on the modeler declares that this is a square system, i.e. the number of non-fixed variables must be equal to the number of constraints, no bounds must be active in the final solution, and the basis selected from the non-fixed variables must always be nonsingular.

Default:

`0`

**Frq_Log_Simple** *(integer)*: Frequency for log-lines for non-SLP/SQP iterations. ↵

Frq_Log_Simple and Frq_Log_SlpSqp can be used to control the amount of iteration output sent to the log file. The non-SLP/SQP iterations, i.e. iterations in phase 0, 1, and 3, are usually fast, and writing a log line for each iteration may be too much, especially for smaller models. The default log frequency for these iterations is therefore 10 for small models, 5 for models with more than 500 constraints or 1000 variables, and 1 for models with more than 2000 constraints or 3000 variables.

Default:

`auto`
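The size-dependent default described above can be sketched as follows (a paraphrase of the stated thresholds, not CONOPT's actual code):

```python
def default_log_frequency(n_constraints: int, n_variables: int) -> int:
    """Default Frq_Log_Simple as described in the text: 1 for very large
    models, 5 for medium-sized models, 10 otherwise."""
    if n_constraints > 2000 or n_variables > 3000:
        return 1
    if n_constraints > 500 or n_variables > 1000:
        return 5
    return 10
```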

**Frq_Log_SlpSqp** *(integer)*: Frequency for log-lines for SLP or SQP iterations. ↵

Frq_Log_Simple and Frq_Log_SlpSqp can be used to control the amount of iteration output sent to the log file. Iterations using the SLP and/or SQP sub-solver, i.e. iterations in phase 2 and 4, may involve several inner iterations, so the work per iteration is larger than for the non-SLP/SQP iterations and it may be relevant to write log lines more frequently. The default log frequency for these iterations is therefore 5 for small models and 1 for models with more than 500 constraints or 1000 variables.

Default:

`auto`

**Frq_Rescale** *(integer)*: Rescaling frequency. ↵

The row and column scales are recalculated at least every Frq_Rescale new point (degenerate iterations do not count), or more frequently if conditions require it.

Default:

`5`

**HEAPLIMIT** *(real)*: Maximum Heap size in MB allowed ↵

Range: [`0`, ∞]

Default:

`1e20`

**HessianMemFac** *(real)*: Memory factor for Hessian generation: Skip if Hessian elements > Nonlinear Jacobian elements*HessianMemFac, 0 means unlimited. ↵

The Hessian of the Lagrangian is considered too dense therefore too expensive to evaluate and use, and it is not passed on to CONOPT if the number of nonzero elements in the Hessian of the Lagrangian is greater than the number of nonlinear Jacobian elements multiplied by HessianMemFac. See also Flg_Hessian. If HessianMemFac = 0.0 (the default value) then there is no limit on the number of Hessian elements.


Default:

`0`
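The skip rule can be sketched as a simple predicate (a hypothetical helper illustrating the stated condition, not part of any API):

```python
def use_hessian(hessian_nz: int, nl_jacobian_nz: int,
                hessian_mem_fac: float) -> bool:
    """Return True if the Hessian of the Lagrangian is passed on to CONOPT.
    It is skipped when its nonzero count exceeds the number of nonlinear
    Jacobian elements times HessianMemFac; a factor of 0 means no limit."""
    if hessian_mem_fac == 0.0:
        return True
    return hessian_nz <= nl_jacobian_nz * hessian_mem_fac
```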

**Lim_Dbg_1Drv** *(integer)*: Flag for debugging of first derivatives ↵

Lim_Dbg_1Drv controls how often the derivatives are tested. Debugging of derivatives is only relevant for user-written functions in external equations defined with =X=. The amount of debugging is controlled by Mtd_Dbg_1Drv. See Lim_Hess_Est for a definition of when derivatives are considered wrong.

Default:

`0`

- `-1`: The derivatives are tested in the initial point only.
- `0`: No debugging.
- `+n`: The derivatives are tested in all iterations that can be divided by Lim_Dbg_1Drv, provided the derivatives are computed in this iteration. (During phase 0, 1, and 3 derivatives are only computed when it appears to be necessary.)

**Lim_DFVars** *(integer)*: Limit on the number of candidates for defined variable in one constraint ↵

When there is more than one candidate that can be selected as the defined variable in a particular constraint, CONOPT tries to select the most appropriate one in order to select as many defined variables as possible. However, to avoid too much arbitrariness this is only attempted if there are at most Lim_DFVars candidates.

Default:

`2`

**Lim_Err_2DDir** *(integer)*: Limit on errors in Directional Second Derivative evaluation. ↵

If the evaluation of Directional Second Derivatives (Hessian information in a particular direction) has failed more than Lim_Err_2DDir times CONOPT will not attempt to evaluate them any more and will switch to methods that do not use Directional Second Derivatives. Note that second order information may not be defined even if function and derivative values are well-defined, e.g. in an expression like power(x,1.5) at x=0.

Default:

`10`
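The power(x,1.5) example can be checked directly: the function and first derivative are defined at x = 0, but the second derivative 0.75*x**(-0.5) is not:

```python
def power15_derivatives(x: float):
    """f(x) = x**1.5 with its first and second derivatives.
    Raises ZeroDivisionError at x = 0, where f and f' are still defined."""
    f = x ** 1.5
    f1 = 1.5 * x ** 0.5
    f2 = 0.75 * x ** -0.5   # undefined at x = 0
    return f, f1, f2
```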

**Lim_Err_Fnc_Drv** *(integer)*: Limit on number of function evaluation errors. Overwrites GAMS Domlim option ↵

Function values and their derivatives are assumed to be defined in all points that satisfy the bounds of the model. If a function value or a derivative is not defined in a point, CONOPT will try to recover by going back to a previous safe point (if one exists), but it will do so at most Lim_Err_Fnc_Drv times. If CONOPT is stopped by functions or derivatives not being defined it will return with an intermediate infeasible or intermediate non-optimal model status.

Default:

`GAMS DomLim`

**Lim_Err_Hessian** *(integer)*: Limit on errors in Hessian evaluation. ↵

If the evaluation of Hessian information has failed more than Lim_Err_Hessian times CONOPT will not attempt to evaluate it any more and will switch to methods that do not use the Hessian. Note that second order information may not be defined even if function and derivative values are well-defined, e.g. in an expression like power(x,1.5) at x=0.

Default:

`10`

**Lim_Hess_Est** *(real)*: Upper bound on second order terms. ↵

The function and derivative debugger (see Lim_Dbg_1Drv) tests if derivatives computed using the modelers routine are sufficiently close to the values computed using finite differences. The term for the acceptable difference includes a second order term and uses Lim_Hess_Est as an estimate of the upper bound on second order derivatives in the model. Larger Lim_Hess_Est values will allow larger deviations between the user-defined derivatives and the numerically computed derivatives.

Default:

`1.e4`
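One plausible form of such an acceptance test is sketched below. The formula, the `rel_tol` parameter, and the way the second-order term enters are assumptions for illustration; CONOPT's actual test is more elaborate and is not reproduced here.

```python
def derivative_ok(user_deriv: float, fd_deriv: float, step: float,
                  lim_hess_est: float = 1.0e4,
                  rel_tol: float = 1.0e-7) -> bool:
    """Hypothetical check: the analytic derivative must match the
    finite-difference estimate within a first-order tolerance plus a
    second-order term bounded via lim_hess_est (the assumed upper bound
    on second derivatives) times the finite-difference step."""
    allowed = rel_tol * max(1.0, abs(user_deriv)) + lim_hess_est * step
    return abs(user_deriv - fd_deriv) <= allowed
```

Larger values of `lim_hess_est` enlarge `allowed` and therefore accept larger deviations, matching the behavior described above.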

**Lim_Iteration** *(integer)*: Maximum number of iterations. Overwrites GAMS Iterlim option. ↵

The iteration limit can be used to prevent models from spending too many resources. You should note that the cost of the different types of CONOPT iterations (phase 0 to 4) can be very different so the time limit (GAMS Reslim or option Lim_Time) is often a better stopping criterion. However, the iteration limit is better for reproducing solution behavior across machines.

Default:

`GAMS IterLim`

**Lim_Msg_Dbg_1Drv** *(integer)*: Limit on number of error messages from function and derivative debugger. ↵

The function and derivative debugger (see Lim_Dbg_1Drv) may find a very large number of errors, all derived from the same source. To avoid very large amounts of output CONOPT will stop the debugger after Lim_Msg_Dbg_1Drv error(s) have been found.

Default:

`10`

**Lim_Msg_Large** *(integer)*: Limit on number of error messages related to large function value and Jacobian elements. ↵

Very large function values or derivatives (Jacobian elements) in a model will lead to numerical difficulties and most likely to inaccurate primal and/or dual solutions. CONOPT therefore imposes an upper bound of 1.e30 on all function values and derivatives. If the bound is violated CONOPT will return with an intermediate infeasible or intermediate non-optimal solution and will issue error messages for the violating Jacobian elements, up to a limit of Lim_Msg_Large error messages.

Default:

`10`

**Lim_NewSuper** *(integer)*: Maximum number of new superbasic variables added in one iteration. ↵

When there has been a sufficient reduction in the reduced gradient in one subspace new non-basics can be selected to enter the superbasis. The ones with largest reduced gradient of proper sign are selected, up to a limit. If Lim_NewSuper is positive then the limit is min(500,Lim_NewSuper). If Lim_NewSuper is zero (the default) then the limit is selected dynamically by CONOPT depending on model characteristics.

Default:

`auto`
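The cap described above can be sketched as (the dynamic choice for the zero case is internal to CONOPT, so it is represented here by a hypothetical `dynamic_default` argument):

```python
def max_new_superbasics(lim_newsuper: int, dynamic_default: int) -> int:
    """Cap on new superbasic variables per iteration: a positive option
    value is itself capped at 500; zero means CONOPT chooses dynamically."""
    if lim_newsuper > 0:
        return min(500, lim_newsuper)
    return dynamic_default
```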

**Lim_Pre_Msg** *(integer)*: Limit on number of error messages related to infeasible pre-triangle. ↵

If the pre-processor determines that the model is infeasible it tries to define a minimal set of variables and constraints that define the infeasibility. If this set is larger than Lim_Pre_Msg elements the report is considered difficult to use and it is skipped.

Default:

`25`

**Lim_RedHess** *(integer)*: Maximum number of superbasic variables in the approximation to the Reduced Hessian. ↵

CONOPT uses and stores a dense lower-triangular matrix as an approximation to the Reduced Hessian. The rows and columns correspond to the superbasic variables. This matrix can use a large amount of memory, and computations involving it can be time consuming, so CONOPT imposes a limit on its size. The limit is Lim_RedHess if Lim_RedHess is defined by the modeler and otherwise a value determined from the overall size of the model. If the number of superbasics exceeds the limit, CONOPT will switch to a method based on a combination of SQP and Conjugate Gradient iterations, assuming some kind of second order information is available. If no second order information is available CONOPT will use a quasi-Newton method on a subset of the superbasic variables and rotate the subset as the reduced gradient becomes small.

Default:

`auto`

**Lim_SlowPrg** *(integer)*: Limit on number of iterations with slow progress (relative less than Tol_Obj_Change). ↵

The optimization is stopped if the relative change in objective is less than Tol_Obj_Change for Lim_SlowPrg consecutive well-behaved iterations.

Default:

`20`

**Lim_StallIter** *(integer)*: Limit on the number of stalled iterations. ↵

An iteration is considered stalled if there is no change in objective because the linesearch is limited by nonlinearities or numerical difficulties. Stalled iterations will have Step = 0 and F in the OK column of the log file. After a stalled iteration CONOPT will try various heuristics to get a better basis and a better search direction. However, the heuristics may not work as intended or may even return to the original bad basis, especially if the model does not satisfy standard constraint qualifications and does not have a KKT point. To prevent cycling CONOPT will therefore stop after Lim_StallIter stalled iterations and return an Intermediate Infeasible or Intermediate Nonoptimal solution.

Default:

`100`

**Lim_Start_Degen** *(integer)*: Limit on number of degenerate iterations before starting degeneracy breaking strategy. ↵

The default CONOPT pivoting strategy has focus on numerical stability, but it can potentially cycle. When the number of consecutive degenerate iterations exceeds Lim_Start_Degen CONOPT will switch to a pivoting strategy that is guaranteed to break degeneracy but with slightly weaker numerical properties.

Default:

`10`

**Lim_Time** *(real)*: Time Limit. Overwrites the GAMS Reslim option. ↵

The upper bound on the total number of seconds that can be used in the execution phase. The time limit is only tested once per iteration. The default value is 10000. Lim_Time is overwritten by Reslim when called from GAMS.

Range: [`0`, ∞]

Default:

`GAMS ResLim`

**Lim_Variable** *(real)*: Upper bound on solution values and equation activity levels ↵

If the value of a variable, including the objective function value and the value of slack variables, exceeds Lim_Variable then the model is considered to be unbounded and the optimization process returns the solution with the large variable flagged as unbounded.

Range: [`1.e5`, `1.e30`]

Default:

`1.e15`

**Lin_Method** *(integer)*: Method used to determine if and/or which Linear Feasibility Models to use ↵

- `1`: Ignore the Linear Feasibility Model in the first round and use objective 2, 3, and 4 in the following rounds as long as the model is locally infeasible. This is the default method.
- `2`: Use the Linear Feasibility Model with objective 1 in the first round and continue with objective 2, 3, and 4 in the following rounds as long as the model is locally infeasible.
- `3`: Use the Linear Feasibility Model with objective 2 in the first round and continue with objective 3 and 4 in the following rounds as long as the model is locally infeasible.

Range: [`0`, `3`]

Default:

`1`

**Mtd_Dbg_1Drv** *(integer)*: Method used by the function and derivative debugger. ↵

The function and derivative debugger (turned on with Lim_Dbg_1Drv) can perform a fairly cheap test or a more extensive test, controlled by Mtd_Dbg_1Drv. See Lim_Hess_Est for a definition of when derivatives are considered wrong. All tests are performed in the current point found by the optimization algorithm.

Default:

`0`

- `0`: Perform tests for sparsity pattern and tests that the numerical values of the derivatives appear to be correct. This is the default.
- `1`: As 0 plus extensive tests to determine whether the functions and their derivatives are continuous around the current point. These tests are much more expensive and should only be used if the cheap test does not find an error but one is expected to exist.

**Mtd_RedHess** *(integer)*: Method for initializing the diagonal of the approximate Reduced Hessian ↵

Each time a nonbasic variable is made superbasic a new row and column is added to the approximate Reduced Hessian. The off-diagonal elements are set to zero and the diagonal to a value controlled by Mtd_RedHess:

Default:

`0`

- `0`: The new diagonal element is set to the geometric mean of the existing diagonal elements. This gives the new diagonal element an intermediate value and new superbasic variables are therefore not given any special treatment. The initial steps should be of good size, but build-up of second order information in the new sub-space may be slower. The larger diagonal element may also in bad cases cause premature convergence.
- `1`: The new diagonal element is set to the minimum of the existing diagonal elements. This makes the new diagonal element small and the importance of the new superbasic variable will therefore be high. The initial steps can be rather small, but better second order information in the new sub-space should be built up faster.

**Mtd_Scale** *(integer)*: Method used for scaling. ↵

CONOPT will by default use scaling of the equations and variables of the model to improve the numerical behavior of the solution algorithm and the accuracy of the final solution (see also Frq_Rescale.) The objective of the scaling process is to reduce the values of all large primal and dual variables as well as the values of all large first derivatives so they become closer to 1. Small values are usually not scaled up, see Tol_Scale_Max and Tol_Scale_Min. Scaling method 3 is recommended. The others are only kept for backward compatibility.

Default:

`3`

- `0`: Scaling is based on repeatedly dividing the rows and columns by the geometric means of the largest and smallest elements in each row and column. Very small elements less than Tol_Jac_Min are considered equal to Tol_Jac_Min.
- `1`: Similar to 3 below, but the projection on the interval [Tol_Scale_Min, Tol_Scale_Max] is applied at a different stage. With method 1, abs(X)*abs(Jac) with small X and very large Jac is scaled very aggressively with a factor abs(Jac). With method 3, the scale factor is abs(X)*abs(Jac). The difference is seen in models with terms like Sqrt(X) close to X = 0.
- `2`: As 1 but the terms are computed based on a moving average of the squares of X and Jac. The purpose of the moving average is to keep the scale factors more stable. This is often an advantage, but for models with very large terms (large variables and in particular large derivatives) in the initial point the averaging process may not have enough time to bring the scale factors into the right region.
- `3`: Rows are first scaled by dividing by the largest term in the row; columns are then scaled by dividing by the maximum of the largest term and the value of the variable. A term is here defined as abs(X)*abs(Jac), where X is the value of the variable and Jac is the value of the derivative (Jacobian element). The scale factors are then projected on the interval between Tol_Scale_Min and Tol_Scale_Max.
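A rough sketch of method 3 for a small dense Jacobian is given below. The row-first/column-second ordering follows the description; dividing the column terms by the already-computed row scales is an assumption, and CONOPT's handling of sparsity and rescaling is more elaborate.

```python
def project(s: float, lo: float, hi: float) -> float:
    """Clamp a scale factor onto [lo, hi]."""
    return max(lo, min(hi, s))

def scale_method3(x, jac, tol_scale_min=1.0, tol_scale_max=1.0e25):
    """Sketch of scaling method 3: each row is scaled by its largest term
    abs(x_j)*abs(jac_ij); each column by the maximum of its largest
    (row-scaled, an assumption) term and abs(x_j). All scale factors are
    projected onto [tol_scale_min, tol_scale_max]. jac is a dense
    m-by-n list of lists."""
    m, n = len(jac), len(x)
    row_scale = []
    for i in range(m):
        biggest = max(abs(x[j]) * abs(jac[i][j]) for j in range(n))
        row_scale.append(project(biggest, tol_scale_min, tol_scale_max))
    col_scale = []
    for j in range(n):
        biggest = max(abs(x[j]) * abs(jac[i][j]) / row_scale[i]
                      for i in range(m))
        col_scale.append(project(max(biggest, abs(x[j])),
                                 tol_scale_min, tol_scale_max))
    return row_scale, col_scale
```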

**Mtd_Step_Phase0** *(integer)*: Method used to determine the step in Phase 0. ↵

The steplength used by the Newton process in phase 0 is computed by one of two alternative methods controlled by Mtd_Step_Phase0:

Default:

`Auto`

- `0`: The standard ratio test method known from the Simplex method. CONOPT adds small perturbations to the bounds to avoid very small pivots and improve numerical stability. Linear constraints that initially are feasible will remain feasible with this method. It is the default method for optimization models.
- `1`: A method based on bending (projecting the target values of the basic variables on the bounds) until the sum of infeasibilities is close to its minimum. Linear constraints that initially are feasible may become infeasible due to bending. It is the default method for CNS models.

**Mtd_Step_Tight** *(integer)*: Method used to determine the maximum step while tightening tolerances. ↵

The steplength used by the Newton process when tightening tolerances is computed by one of two alternative methods controlled by Mtd_Step_Tight:

Default:

`0`

- `0`: The standard ratio test method known from the Simplex method. CONOPT adds small perturbations to the bounds to avoid very small pivots and improve numerical stability. Linear constraints that initially are feasible will remain feasible with this default method.
- `1`: A method based on bending (projecting the target value of the basic variables on the bounds) until the sum of infeasibilities is close to its minimum.

**Num_Rounds** *(integer)*: Number of rounds with Linear Feasibility Model ↵

Lin_Method defines which Linear Feasibility Models are going to be solved if the previous models end locally infeasible. The number of rounds is limited by Num_Rounds.

Range: [`1`, `4`]

Default:

`4`

**Rat_NoPen** *(real)*: Limit on ratio of penalty constraints for the No_Penalty model to be solved ↵

The No-Penalty model can only be generated and solved if the number of penalty and minimax constraints exceeds Rat_NoPen times the number of constraints in the Full Model.

Default:

`0.1`

**THREAD2D** *(integer)*: Number of threads used for second derivatives ↵

Range: [`0`, ∞]

Default:

`1`

**THREADC** *(integer)*: Number of compatibility threads used for comparing different values of THREADS ↵

Range: [`0`, ∞]

Default:

`1`

**THREADF** *(integer)*: Number of threads used for function evaluation ↵

Range: [`0`, ∞]

Default:

`1`

**threads** *(integer)*: Number of threads used by Conopt internally ↵

Range: [`0`, ∞]

Default:

`GAMS Threads`

**Tol_Bound** *(real)*: Bound filter tolerance for solution values close to a bound. ↵

A variable is considered to be at a bound if its distance from the bound is less than Tol_Bound * Max(1,ABS(Bound)). The tolerance is used to build the initial bases and is used to flag variables during output.

Range: [`3.e-13`, `1.e-5`]

Default:

`1.e-7`
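The bound test can be written out directly (a sketch of the stated formula, not CONOPT's code):

```python
def at_bound(x: float, bound: float, tol_bound: float = 1.0e-7) -> bool:
    """A variable counts as at its bound if its distance from the bound
    is below Tol_Bound * max(1, abs(bound))."""
    return abs(x - bound) < tol_bound * max(1.0, abs(bound))
```

Note the relative part: for a bound of 1.e6 the absolute slack grows to 0.1, while for bounds below 1 in magnitude the test is purely absolute.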

**Tol_BoxSize** *(real)*: Initial box size for trust region models for overall model ↵

The new Phase 0 method solves an LP model based on a scaled and linearized version of the model with an added trust region box constraint around the initial point. Tol_BoxSize defines the size of the initial trust region box. During the optimization the trust region box is adjusted based on how well the linear approximation fits the real model.

Range: [`0.01`, `1.e6`]

Default:

`10`

**Tol_BoxSize_Lin** *(real)*: Initial box size for trust region models for linear feasibility model

Similar to Tol_BoxSize but applied to the linear feasibility model. Since this model has linear constraints, the default initial box size is larger.

Range: [`0.01`, `1.e8`], Default: `1000`

**Tol_Box_LinFac** *(real)*: Box size factor for linear variables applied to trust region box size

The trust-region box used in the new Phase 0 method limits the change of the variables so that 2nd order terms do not become too large. Variables that appear linearly have no 2nd order terms, and their initial box size is therefore larger by a factor Tol_Box_LinFac.


Range: [`1`, `1.e4`], Default: `10`
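A minimal sketch of how the two box-size options combine per variable; the function is illustrative and not part of CONOPT's API:

```python
def initial_box_halfwidth(var_is_linear, tol_boxsize=10.0, tol_box_linfac=10.0):
    """Half-width of the initial trust-region box around the initial point.
    Linear variables have no 2nd order terms, so their box is wider by a
    factor Tol_Box_LinFac (defaults mirror Tol_BoxSize and Tol_Box_LinFac)."""
    half = tol_boxsize
    if var_is_linear:
        half *= tol_box_linfac
    return half
```

With the defaults, a nonlinear variable may initially move up to 10 units from its starting point, a linear variable up to 100.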

**Tol_DFixed** *(real)*: Tolerance for defining variables as fixed based on derived bounds.

A variable is considered fixed if the distance between the bounds is less than Tol_DFixed * Max(1, Abs(Bound)). The tolerance is applied both to the user's original bounds and to the derived bounds that the preprocessor implies from the constraints of the model.


Range: [`3.e-13`, `1.e-8`], Default: `1.e-12`
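A sketch of the fixed-variable test; the function name is illustrative, and using the larger of the two bounds in the scaling term is an assumption of this sketch:

```python
def is_fixed(lo, up, tol_dfixed=1e-12):
    """Treat a variable as fixed when its bounds are closer than
    Tol_DFixed * Max(1, |Bound|)."""
    return (up - lo) < tol_dfixed * max(1.0, abs(lo), abs(up))
```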

**Tol_Feas_Max** *(real)*: Maximum feasibility tolerance (after scaling).

The feasibility tolerance used by CONOPT is dynamic. As long as we are far from the optimal solution and make large steps, it is not necessary to compute intermediate solutions very accurately. When we approach the optimum and make smaller steps, we need more accuracy. Tol_Feas_Max is the upper bound on the dynamic feasibility tolerance and Tol_Feas_Min is the lower bound.

Range: [`1.e-10`, `1.e-3`], Default: `1.e-7`

**Tol_Feas_Min** *(real)*: Minimum feasibility tolerance (after scaling).

See Tol_Feas_Max for a discussion of the dynamic feasibility tolerances used by CONOPT.

Range: [`3.e-13`, `1.e-5`], Default: `4.e-10`
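The dynamic tolerance is simply kept inside the band defined by the two options; a sketch (how CONOPT derives the step-dependent value internally is not documented, so it is taken as an input here):

```python
def dynamic_feas_tol(step_based_tol, tol_feas_min=4e-10, tol_feas_max=1e-7):
    """Clamp a step-size-driven feasibility tolerance to the band
    [Tol_Feas_Min, Tol_Feas_Max]."""
    return min(tol_feas_max, max(tol_feas_min, step_based_tol))
```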

**Tol_Feas_Tria** *(real)*: Feasibility tolerance for triangular equations.

Triangular equations are usually solved to an accuracy of Tol_Feas_Min. However, if a variable reaches a bound or if a constraint only has pre-determined variables, then the feasibility tolerance can be relaxed to Tol_Feas_Tria.


Range: [`3.e-13`, `1.e-4`], Default: `1.0e-8`

**Tol_IFixed** *(real)*: Tolerance for defining variables as fixed based on initial bounds.

A variable is considered fixed if the distance between the bounds defined by the user is less than Tol_IFixed * Max(1, Abs(Bound)).

Range: [`3.e-13`, `1.e-5`], Default: `1.e-9`

**Tol_Jac_Min** *(real)*: Filter for small Jacobian elements to be ignored during scaling.

A Jacobian element is considered insignificant if it is less than Tol_Jac_Min. The value is used to select which small values are scaled up during scaling of the Jacobian. It is only used with scaling method Mtd_Scale = 0.

Range: [`1.e-7`, `1.e-3`], Default: `1.e-5`

**Tol_Linesearch** *(real)*: Accuracy of one-dimensional search.

The one-dimensional search is stopped if the expected decrease in the objective, estimated from a quadratic approximation, is less than Tol_Linesearch times the decrease achieved so far in this one-dimensional search.

Range: [`0.05`, `0.8`], Default: `0.2`
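The stopping rule can be sketched as a one-line test; the function is illustrative, not CONOPT's actual linesearch code:

```python
def stop_linesearch(expected_further_decrease, decrease_so_far, tol_linesearch=0.2):
    """Stop the one-dimensional search when the decrease predicted by the
    quadratic approximation is small relative to the decrease already won."""
    return expected_further_decrease < tol_linesearch * decrease_so_far
```

Larger values of Tol_Linesearch stop the search earlier (cheaper but less accurate steps); smaller values polish each step further.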

**Tol_Obj_Acc** *(real)*: Relative accuracy of the objective function.

It is assumed that the objective function can be computed to an accuracy of Tol_Obj_Acc * max( 1, abs(Objective) ). Smaller changes in the objective are considered to be caused by round-off errors.

Range: [`3.0e-14`, `10.e-6`], Default: `3.0e-13`
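A sketch of the round-off test implied by this option (the function name is illustrative):

```python
def objective_change_is_real(f_old, f_new, tol_obj_acc=3.0e-13):
    """Changes below the round-off level Tol_Obj_Acc * max(1, |Objective|)
    are treated as numerical noise rather than genuine progress."""
    return abs(f_new - f_old) > tol_obj_acc * max(1.0, abs(f_old))
```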

**Tol_Obj_Change** *(real)*: Limit for relative change in objective for well-behaved iterations.

The change in objective in a well-behaved iteration is considered small, and the iteration counts as slow progress, if the change is less than Tol_Obj_Change * Max(1, Abs(Objective)). See also Lim_SlowPrg.

Range: [`3.0e-13`, `1.0e-5`], Default: `3.0e-12`

**Tol_Optimality** *(real)*: Optimality tolerance for reduced gradient when feasible.

The reduced gradient is considered zero and the solution optimal if the largest superbasic component of the reduced gradient is less than Tol_Optimality.

Range: [`3.e-13`, `1`], Default: `1.e-7`

**Tol_Opt_Infeas** *(real)*: Optimality tolerance for reduced gradient when infeasible.

The reduced gradient is considered zero and the solution declared locally infeasible if the largest superbasic component of the reduced gradient is less than Tol_Opt_Infeas.


Range: [`3.e-13`, `1`], Default: `1.e-7`

**Tol_Piv_Abs** *(real)*: Absolute pivot tolerance.

During LU-factorization of the basis matrix a pivot element is considered large enough if its absolute value is larger than Tol_Piv_Abs. There is also a relative test, see Tol_Piv_Rel.

Range: [`2.2e-16`, `1.e-7`], Default: `1.e-10`

**Tol_Piv_Abs_Ini** *(real)*: Absolute pivot tolerance for building initial basis.

Absolute pivot tolerance used during the search for a first logically non-singular basis. The default is fairly large to encourage a better conditioned initial basis.

Range: [`3.e-13`, `1.e-3`], Default: `1.e-7`

**Tol_Piv_Abs_NLTr** *(real)*: Absolute pivot tolerance for nonlinear elements in pre-triangular equations.

The smallest pivot that can be used for nonlinear or variable Jacobian elements during the pre-triangular solve. The pivot tolerance for linear or constant Jacobian elements is Tol_Piv_Abs. The value cannot be less than Tol_Piv_Abs.

Range: [`2.2e-16`, `1.e-3`], Default: `1.e-5`

**Tol_Piv_Ratio** *(real)*: Relative pivot tolerance during ratio-test

During ratio-tests, the lower bound on the slope of a basic variable to potentially leave the basis is Tol_Piv_Ratio * the largest term in the computation of the tangent.

Range: [`1.e-10`, `1.e-6`], Default: `1.e-8`

**Tol_Piv_Rel** *(real)*: Relative pivot tolerance during basis factorizations.

During LU-factorization of the basis matrix a pivot element is considered large enough relative to other elements in the column if its absolute value is at least Tol_Piv_Rel * the largest absolute value in the column. Small values of Tol_Piv_Rel will often give a sparser basis factorization at the expense of numerical accuracy. The value used internally is therefore adjusted dynamically between the user's value and 0.9, based on various statistics collected during the solution process. Certain models derived from finite element approximations of partial differential equations can give rise to poor numerical accuracy, and a larger user value of Tol_Piv_Rel may help.

Range: [`1.e-3`, `0.9`], Default: `0.05`
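The combined absolute and relative pivot test can be sketched as follows; the function is illustrative and uses the documented defaults of Tol_Piv_Abs and Tol_Piv_Rel:

```python
def pivot_is_acceptable(pivot, column, tol_piv_rel=0.05, tol_piv_abs=1e-10):
    """Candidate pivot test during LU-factorization of the basis:
    the pivot must pass both the absolute test (Tol_Piv_Abs) and the
    relative test against the largest element in its column (Tol_Piv_Rel)."""
    col_max = max(abs(v) for v in column)
    return abs(pivot) > tol_piv_abs and abs(pivot) >= tol_piv_rel * col_max
```

Raising tol_piv_rel toward 0.9 forces pivots closer to the column maximum (better accuracy, denser factors); lowering it favors sparsity.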

**Tol_Piv_Rel_Ini** *(real)*: Relative pivot tolerance for building initial basis

Relative pivot tolerance used during the search for a first logically non-singular basis.

Range: [`1.e-4`, `0.9`], Default: `1.e-3`

**Tol_Piv_Rel_Updt** *(real)*: Relative pivot tolerance during basis updates.

During basis changes CONOPT attempts to use cheap updates of the LU-factors of the basis. A pivot is considered large enough relative to the alternatives in the column if its absolute value is at least Tol_Piv_Rel_Updt * the other element. Smaller values of Tol_Piv_Rel_Updt will allow sparser basis updates but may cause accumulation of larger numerical errors.

Range: [`1.e-3`, `0.9`], Default: `0.05`

**Tol_Scale2D_Min** *(real)*: Lower bound for scale factors based on large 2nd derivatives.

Scaling of the model is in most cases based on the values of the variables and the first derivatives. However, if the scaled variables and derivatives are reasonable but there are large values in the Hessian of the Lagrangian (the matrix of 2nd derivatives), then the lower bound on the scale factor can be made smaller than Tol_Scale_Min. CONOPT will try to scale variables with large 2nd derivatives by one over the square root of the diagonal elements of the Hessian. However, the revised scale factors cannot be less than Tol_Scale2D_Min.

Range: [`1.e-9`, `1`], Default: `1.e-6`

**Tol_Scale_Max** *(real)*: Upper bound on scale factors.

Scale factors are projected on the interval from Tol_Scale_Min to Tol_Scale_Max. The limits are used to prevent very large or very small scale factors due to pathological types of constraints. The upper limit is selected such that Square(X) can be handled for X close to Lim_Variable. More nonlinear functions may not be scalable for very large variables.

Range: [`1`, `1.e30`], Default: `1.e25`

**Tol_Scale_Min** *(real)*: Lower bound for scale factors computed from values and 1st derivatives.

Scale factors used to scale variables and equations are projected on the range Tol_Scale_Min to Tol_Scale_Max. The limits are used to prevent very large or very small scale factors due to pathological types of constraints. The default value for Tol_Scale_Min is 1, which means that small values are not scaled up. If you need to scale small values up towards 1 then you must define a value of Tol_Scale_Min < 1.

Range: [`1.e-10`, `1`], Default: `1`
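The projection described above amounts to a simple clamp; a sketch with the documented defaults (the function name is illustrative):

```python
def project_scale(raw_scale, tol_scale_min=1.0, tol_scale_max=1e25):
    """Project a computed scale factor onto [Tol_Scale_Min, Tol_Scale_Max].
    With the default Tol_Scale_Min = 1, small raw factors are pushed up
    to 1, i.e. small values are not scaled up."""
    return min(tol_scale_max, max(tol_scale_min, raw_scale))
```

Setting tol_scale_min below 1 lets small raw scale factors survive the projection, which is what scales small variable values up towards 1.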

**Tol_Scale_Var** *(real)*: Lower bound on x in x*Jac used when scaling.

Rows are scaled so the largest term x*Jac is around 1. To avoid difficulties with models where Jac is very large and x very small, a lower bound of Tol_Scale_Var is applied to the x-term.


Range: [`1.e-8`, `1`], Default: `1.e-5`
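The floored x-term in the row-scaling heuristic can be sketched as follows (illustrative function, not CONOPT's actual scaling code):

```python
def row_scale_term(x, jac, tol_scale_var=1e-5):
    """Contribution |x| * |Jac| to a row's largest term, with the lower
    bound Tol_Scale_Var applied to |x| so that a tiny x cannot hide a
    huge Jacobian element from the row scaling."""
    return max(abs(x), tol_scale_var) * abs(jac)
```

Without the floor, a row with x = 1e-9 and Jac = 1e6 would look well-scaled (term 1e-3) even though the Jacobian element itself is enormous; with the floor the term is 10, and the row gets scaled down.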

**Tol_Zero** *(real)*: Zero filter for Jacobian elements and inversion results.

Contains the smallest absolute value that an intermediate result can have. If it is smaller, it is set to zero. It must be smaller than Tol_Piv_Abs/10.

Default: `1.e-20`