
**Bruce A. Murtagh; Graduate School of Management, Macquarie University, Sydney, Australia**

**Michael A. Saunders, Walter Murray; Department of EESOR, Stanford University, CA**

**Philip E. Gill; Department of Mathematics, University of California, San Diego, La Jolla, CA**

# Introduction

This document describes the GAMS interface to MINOS, a general-purpose nonlinear programming solver. For a quad-precision version of MINOS, see quadMINOS.

GAMS/MINOS is a specially adapted version of the solver that is used for solving linear and nonlinear programming problems in a GAMS environment.

GAMS/MINOS is designed to find solutions that are *locally optimal*. The nonlinear functions in a problem must be *smooth* (i.e., their first derivatives must exist). The functions need not be separable. Integer restrictions cannot be imposed directly.

A certain region is defined by the linear constraints in a problem and by the bounds on the variables. If the nonlinear objective and constraint functions are convex within this region, any optimal solution obtained will be a *global optimum*. Otherwise there may be several local optima, and some of these may not be global. In such cases the chances of finding a global optimum are usually increased by choosing a starting point that is "sufficiently close", but there is no general procedure for determining what "close" means, or for verifying that a given local optimum is indeed global.

Linearly constrained models are solved with a very efficient and reliable reduced-gradient technique that takes advantage of the sparsity of the model. Models with nonlinear constraints are solved with a method that iteratively solves subproblems with linearized constraints and an augmented Lagrangian objective function. This iterative scheme implies that only the final, optimal solution is sure to be feasible w.r.t. the nonlinear constraints. This is in contrast to the feasible-path method used by some other NLP solvers, e.g., CONOPT. MINOS and CONOPT are complementary, as they employ very different algorithms. See MINOS vs CONOPT for a comparison of the two solvers.

GAMS allows you to specify values for many parameters that control GAMS/MINOS, and with careful experimentation you may be able to influence the solution process in a helpful way. All MINOS options available through GAMS/MINOS are summarized at the end of this document.

# How to Run a Model with GAMS/MINOS

MINOS is capable of solving many types of models, including LP, NLP, DNLP and QCP. If MINOS is not specified as the default solver for the desired model type (e.g. NLP), then the following statement can be used in your GAMS model to select MINOS:

```
option nlp=minos;
```

The option statement should appear before the `solve` statement.

To be complete, we mention that the solver can be also specified on the command line, as in:

> gams camcge nlp=minos

This will override the global default, but if an algorithm option has been specified inside the model, then that specification takes precedence.

# Overview of GAMS/MINOS

GAMS/MINOS is a system designed to solve large-scale optimization problems expressed in the following form:

\[ \begin{array}{lcllr} \textrm{NLP}: & ~~~~~~~~~~~~~ & \textrm{minimize} & F(x) +c^Tx + d^Ty & ~~~~~~~~~~~~~~~~(1) \\[5pt] & ~~~~~~~~~~~~~ & \textrm{subject to} & f(x) + A_1 y \sim b_1 & ~~~~~~~~~~~~~~~~(2) \\ & ~~~~~~~~~~~~~ & & A_2 x + A_3 y \sim b_2 & ~~~~~~~~~~~~~~~~(3) \\ & ~~~~~~~~~~~~~ & & \displaystyle \ell \le \begin{pmatrix}x \\ y\end{pmatrix} \le u & ~~~~~~~~~~~~~~~~(4) \\ \end{array} \]

where the vectors \(c\), \(d\), \(b_1\), \(b_2\), \(\ell\), \(u\) and the matrices \(A_1\), \(A_2\), \(A_3\) are constant, \(F(x)\) is a smooth scalar function, and \(f(x)\) is a vector of smooth functions. The \(\sim\) signs mean that individual constraints may be defined using \(\le\), \(=\) or \(\ge\), corresponding to the GAMS constructs `=L=`, `=E=` and `=G=`.

The components of \(x\) are called the nonlinear variables, and the components of \(y\) are the linear variables. Similarly, the equations in \((2)\) are called the nonlinear constraints, and the equations in \((3)\) are the linear constraints. Equations \((2)\) and \((3)\) together are called the general constraints.

Let \(m_1\) and \(n_1\) denote the number of nonlinear constraints and variables, and let \(m\) and \(n\) denote the total number of (general) constraints and variables. Thus, \(A_3\) has \(m-m_1\) rows and \(n-n_1\) columns. The constraints \((4)\) specify upper and lower bounds on all variables. These are fundamental to many problem formulations and are treated specially by the solution algorithms in GAMS/MINOS. Some of the components of \(\ell\) and \(u\) may be \(-\infty\) or \(+\infty\) respectively, in accordance with the GAMS use of `-INF` and `+INF`.

The vectors \(b_1\) and \(b_2\) are called the right-hand side, and together are denoted by \(b\).

## Linear Programming

If the functions \(F(x)\) and \(f(x)\) are absent, the problem becomes a *linear program*. Since there is no need to distinguish between linear and nonlinear variables, we use \(x\) rather than \(y\). GAMS/MINOS converts all general constraints into equalities, and the only remaining inequalities are simple bounds on the variables. Thus, we write linear programs in the form

\[ \begin{array}{lcll} \textrm{LP}: & ~~~~~~~~~~~~~ & \textrm{minimize} & c^Tx \\[5pt] & ~~~~~~~~~~~~~ & \textrm{subject to} & Ax + Is =0 \\ & ~~~~~~~~~~~~~ & & \displaystyle \ell \le \begin{pmatrix}x \\ s\end{pmatrix} \le u \\ \end{array} \]

where the elements of \(x\) are your own GAMS variables, and \(s\) is a set of *slack variables*: one for each general constraint. For computational reasons, the right-hand side \(b\) is incorporated into the bounds on \(s\).

In the expression \(Ax + Is = 0\) we write the identity matrix explicitly if we are concerned with columns of the associated matrix \(\begin{pmatrix} A & I\end{pmatrix}\). Otherwise we will use the equivalent notation \(Ax + s = 0\).

GAMS/MINOS solves linear programs using a reliable implementation of the *primal simplex method* [40] , in which the constraints \(Ax + Is = 0\) are partitioned into the form

\begin{equation*} B x_B + N x_N = 0, \end{equation*}

where the *basis matrix* is square and nonsingular. The elements of \(x_B\) and \(x_N\) are called the basic or nonbasic variables respectively. Together they are a permutation of the vector

\begin{equation*} \begin{pmatrix} x\\ s\end{pmatrix}. \end{equation*}

Normally, each nonbasic variable is equal to one of its bounds, and the basic variables take on whatever values are needed to satisfy the general constraints. (The basic variables may be computed by solving the linear equations \(B x_B = -N x_N\).) It can be shown that if an optimal solution to a linear program exists, then it has this form.
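The relation between basic and nonbasic variables can be illustrated with a small dense numpy sketch (all matrices and the basis choice are invented for illustration; MINOS itself works with sparse factors of \(B\), never dense solves):

```python
# Toy sketch (not MINOS internals): recover the basic variables of a tiny
# system A x + I s = 0 from a chosen basis partition.
import numpy as np

# Two general constraints, three structural variables; the full matrix
# is (A I) acting on the combined vector (x, s).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
full = np.hstack([A, np.eye(2)])          # columns of (A I)

basic = [0, 2]                            # columns chosen as basic (B nonsingular)
nonbasic = [1, 3, 4]                      # remaining columns, held at bounds
B = full[:, basic]
N = full[:, nonbasic]

x_N = np.array([1.0, 0.0, 0.0])           # nonbasic values (at their bounds)
x_B = np.linalg.solve(B, -N @ x_N)        # solve B x_B = -N x_N

# The combined point satisfies the general constraints exactly:
point = np.zeros(5)
point[basic] = x_B
point[nonbasic] = x_N
assert np.allclose(full @ point, 0.0)
```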

The simplex method reaches such a solution by performing a sequence of *iterations*, in which one column of \(B\) is replaced by one column of \(N\) (and vice versa), until no such interchange can be found that will reduce the value of \(c^Tx\).

As indicated, nonbasic variables usually satisfy their upper and lower bounds. If any components of \(x_B\) lie significantly outside their bounds, we say that the current point is *infeasible*. In this case, the simplex method uses a Phase 1 procedure to reduce the sum of infeasibilities to zero. This is similar to the subsequent Phase 2 procedure that optimizes the true objective function \(c^Tx\).

If the solution procedures are interrupted, some of the nonbasic variables may lie strictly *between* their bounds \(\ell_j < x_j < u_j\). In addition, at a "feasible" or "optimal" solution, some of the basic variables may lie slightly outside their bounds: \(\ell_j - \delta < x_j < \ell_j\) or \(u_j < x_j < u_j + \delta\) where \(\delta\) is a *feasibility tolerance* (typically \(10^{-6}\)). In rare cases, even nonbasic variables might lie outside their bounds by as much as \(\delta\).

GAMS/MINOS maintains a sparse \(LU\) factorization of the basis matrix \(B\), using a Markowitz ordering scheme and Bartels-Golub updates, as implemented in the Fortran package LUSOL [81] (see [16, 15, 146, 147] ). The basis factorization is central to the efficient handling of sparse linear and nonlinear constraints.

## Problems with a Nonlinear Objective

When nonlinearities are confined to the term \(F(x)\) in the objective function, the problem is a linearly constrained nonlinear program. GAMS/MINOS solves such problems using a *reduced-gradient* algorithm [201] combined with a *quasi-Newton* algorithm that is described in [138] . In the reduced-gradient method, the constraints \(Ax + Is = 0\) are partitioned into the form

\begin{equation*} Bx_B + Sx_S + Nx_N = 0 \end{equation*}

where \(x_S\) is a set of *superbasic variables*. At a solution, the basic and superbasic variables will lie somewhere between their bounds (to within the feasibility tolerance \(\delta\)), while nonbasic variables will normally be equal to one of their bounds, as before. Let the number of superbasic variables be \(s\), the number of columns in \(S\). (The context will always distinguish \(s\) from the vector of slack variables.) At a solution, \(s\) will be no more than \(n_1\), the number of nonlinear variables. In many practical cases we have found that \(s\) remains reasonably small, say 200 or less, even if \(n_1\) is large.

In the reduced-gradient algorithm, \(x_S\) is regarded as a set of "independent variables" or "free variables" that are allowed to move in any desirable direction, namely one that will improve the value of the objective function (or reduce the sum of infeasibilities). The basic variables can then be adjusted in order to continue satisfying the linear constraints.

If it appears that no improvement can be made with the current definition of \(B\), \(S\) and \(N\), some of the nonbasic variables are selected to be added to \(S\), and the process is repeated with an increased value of \(s\). At all stages, if a basic or superbasic variable encounters one of its bounds, the variable is made nonbasic and the value of \(s\) is reduced by one.

A step of the reduced-gradient method is called a *minor iteration*. For linear problems, we may interpret the simplex method as being the same as the reduced-gradient method, with the number of superbasic variables oscillating between 0 and 1.

A certain matrix \(Z\) is needed now for descriptive purposes. It takes the form

\begin{equation*} \begin{pmatrix} -B^{-1}S \\ I \\ 0 \end{pmatrix} \end{equation*}

though it is never computed explicitly. Given an \(LU\) factorization of the basis matrix \(B\), it is possible to compute products of the form \(Zq\) and \(Z^Tg\) by solving linear equations involving \(B\) or \(B^T\). This in turn allows optimization to be performed on the superbasic variables, while the basic variables are adjusted to satisfy the general linear constraints.
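To see why \(Z\) never needs to be formed, here is a small dense sketch (invented matrices, numpy assumed) that computes \(Zq\) and \(Z^Tg\) with one solve each against \(B\) or \(B^T\):

```python
# Toy illustration (not MINOS code) of operating with
# Z = [-B^{-1}S; I; 0] without ever building it.
import numpy as np

rng = np.random.default_rng(0)
m, s_dim, nb = 4, 2, 3                    # basic, superbasic, nonbasic counts
B = rng.standard_normal((m, m)) + 4 * np.eye(m)   # nonsingular basis
S = rng.standard_normal((m, s_dim))

def Z_times(q):
    """Z q = (-B^{-1} S q, q, 0): one solve with B."""
    top = np.linalg.solve(B, -S @ q)
    return np.concatenate([top, q, np.zeros(nb)])

def Zt_times(g):
    """Z^T g = g_S - S^T B^{-T} g_B: one solve with B^T."""
    g_B, g_S = g[:m], g[m:m + s_dim]
    return g_S - S.T @ np.linalg.solve(B.T, g_B)

# Consistency check against an explicitly assembled Z:
Z = np.vstack([np.linalg.solve(B, -S), np.eye(s_dim), np.zeros((nb, s_dim))])
q = rng.standard_normal(s_dim)
g = rng.standard_normal(m + s_dim + nb)
assert np.allclose(Z_times(q), Z @ q)
assert np.allclose(Zt_times(g), Z.T @ g)
```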

An important feature of GAMS/MINOS is a stable implementation of a quasi-Newton algorithm for optimizing the superbasic variables. This can achieve superlinear convergence during any sequence of iterations for which the \(B\), \(S\), \(N\) partition remains constant. A *search direction* \(q\) for the superbasic variables is obtained by solving a system of the form

\begin{equation*} R^TRq = -Z^Tg \end{equation*}

where \(g\) is the gradient of \(F(x)\), \(Z^Tg\) is the *reduced gradient*, and \(R\) is a dense upper triangular matrix. GAMS computes the gradient vector \(g\) analytically, using automatic differentiation. The matrix \(R\) is updated in various ways in order to approximate the *reduced Hessian* according to \(R^TR \approx Z^THZ\), where \(H\) is the matrix of second derivatives of \(F(x)\) (the *Hessian*).
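Because \(R\) is upper triangular, this system costs only two triangular solves. A toy numpy/scipy sketch (with an invented \(R\) and reduced gradient; not MINOS code):

```python
# Sketch of the search-direction solve R^T R q = -Z^T g
# via two triangular solves.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
s_dim = 3
R = np.triu(rng.standard_normal((s_dim, s_dim))) + 3 * np.eye(s_dim)
reduced_g = rng.standard_normal(s_dim)    # stands in for Z^T g

# First solve R^T t = -Z^T g, then R q = t.
t = solve_triangular(R, -reduced_g, trans='T')
q = solve_triangular(R, t)

assert np.allclose(R.T @ R @ q, -reduced_g)
```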

Once \(q\) is available, the search direction for all variables is defined by \(p = Zq\). A *line search* is then performed to find an approximate solution to the one-dimensional (with respect to \(\alpha\)) problem

\begin{equation*} \begin{split} \textrm{minimize} \>\> & F(x+\alpha p) \\ \textrm{subject to} \>\> & 0 < \alpha < \beta \end{split} \end{equation*}

where \(\beta\) is determined by the bounds on the variables. Another important piece of GAMS/MINOS is the step-length procedure used in the line search to determine the step length \(\alpha\) (see [79]). The number of nonlinear function evaluations required may be influenced by setting the **Linesearch tolerance**, as discussed in Section Detailed Description of MINOS Options.
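A minimal backtracking (Armijo) sketch conveys the general idea of such a step-length procedure. This is a generic Python illustration with an invented objective, not the actual MINOS routine of [79]:

```python
# Generic backtracking line search: shrink alpha from the bound-limited
# step beta until a sufficient decrease in F along p is achieved.
import numpy as np

def F(x):                                  # invented example objective
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def grad_F(x):
    return np.array([2 * (x[0] - 1.0), 2 * (x[1] + 2.0)])

def backtrack(x, p, beta, c=1e-4, shrink=0.5):
    """Armijo backtracking: accept alpha once F(x + alpha p) drops enough."""
    f0, slope = F(x), grad_F(x) @ p        # slope < 0 for a descent direction
    alpha = beta
    while F(x + alpha * p) > f0 + c * alpha * slope:
        alpha *= shrink
    return alpha

x = np.array([0.0, 0.0])
p = -grad_F(x)                             # steepest-descent direction
alpha = backtrack(x, p, beta=1.0)
assert F(x + alpha * p) < F(x)
```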

As in a linear programming solver, an equation \(B^T\pi = g_B\) is solved to obtain the *dual variables* or *shadow prices* \(\pi\), where \(g_B\) is the gradient of the objective function associated with the basic variables. It follows that \(g_B - B^T \pi = 0\). The analogous quantity for superbasic variables is the reduced-gradient vector \(Z^Tg = g_S - S^T \pi\); this should also be zero at an optimal solution. (In practice its components will be of order \(r \|\pi\|\), where \(r\) is the optimality tolerance, typically \(10^{-6}\), and \(\|\pi\|\) is a measure of the size of the elements of \(\pi\).)

## Problems with Nonlinear Constraints

If any of the constraints are nonlinear, GAMS/MINOS employs a *projected Lagrangian* algorithm, based on a method due to [149]; see [139]. This involves a sequence of *major iterations*, each of which requires the solution of a *linearly constrained subproblem*. Each subproblem contains linearized versions of the nonlinear constraints, as well as the original linear constraints and bounds.

At the start of the \(k\)-th major iteration, let \(x_k\) be an estimate of the nonlinear variables, and let \(\lambda_k\) be an estimate of the Lagrange multipliers (or dual variables) associated with the nonlinear constraints. The constraints are linearized by changing \(f(x)\) in equation \((2)\) to its linear approximation:

\begin{equation*} f'(x, x_k) = f(x_k) + J(x_k) (x - x_k) \end{equation*}

or more briefly

\begin{equation*} f' = f_k+ J_k (x - x_k) \end{equation*}

where \(J(x_k)\) is the *Jacobian matrix* evaluated at \(x_k\). (The \(i\)-th row of the Jacobian is the gradient vector of the \(i\)-th nonlinear constraint function. As with the objective gradient, GAMS calculates the Jacobian using automatic differentiation).
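The quality of this linearization for smooth functions can be checked numerically. The following toy example (invented \(f\), numpy assumed) compares \(f'(x, x_k)\) with \(f(x)\) at a nearby point:

```python
# Numeric check of f'(x, x_k) = f(x_k) + J(x_k)(x - x_k)
# for a made-up pair of nonlinear constraint functions.
import numpy as np

def f(x):                                   # two nonlinear constraint functions
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) * x[1]])

def J(x):                                   # Jacobian: row i = gradient of f_i
    return np.array([[2 * x[0], 1.0],
                     [np.cos(x[0]) * x[1], np.sin(x[0])]])

x_k = np.array([1.0, 2.0])
x = x_k + np.array([1e-4, -1e-4])           # a nearby point

f_lin = f(x_k) + J(x_k) @ (x - x_k)         # the linearization f'
# For smooth f the error is O(||x - x_k||^2), hence tiny for this step:
assert np.linalg.norm(f(x) - f_lin) < 1e-6
```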

The subproblem to be solved during the \(k\)-th major iteration is then

\[ \begin{array}{lllr} & \textrm{minimize} & F(x) +c^Tx + d^Ty - \lambda_k^T(f-f') + 0.5\rho (f-f')^T(f-f') & ~~~~~~~~~~~~~~~~(5) \\[5pt] & \textrm{subject to} & f' + A_1 y \sim b_1 & ~~~~~~~~~~~~~~~~(6) \\ & & A_2 x + A_3 y \sim b_2 & ~~~~~~~~~~~~~~~~(7) \\ & & \displaystyle \ell \le \begin{pmatrix}x \\ y\end{pmatrix} \le u & ~~~~~~~~~~~~~~~~(8) \\ \end{array} \]

The objective function \((5)\) is called an *augmented Lagrangian*. The scalar \(\rho\) is a *penalty parameter*, and the term involving \(\rho\) is a modified *quadratic penalty function*.

GAMS/MINOS uses the reduced-gradient algorithm to minimize \((5)\) subject to \((6)\) – \((8)\). As before, slack variables are introduced and \(b_1\) and \(b_2\) are incorporated into the bounds on the slacks. The linearized constraints take the form

\begin{equation*} \begin{pmatrix}J_k & A_1 \\ A_2 & A_3\end{pmatrix} \begin{pmatrix}x \\ y \end{pmatrix} + \begin{pmatrix}I & 0 \\ 0 & I\end{pmatrix} \begin{pmatrix}s_1\\ s_2 \end{pmatrix}= \begin{pmatrix}J_k x_k - f_k\\ 0 \end{pmatrix} \end{equation*}

This system will be referred to as \(Ax + Is = 0\) as in the linear case. The Jacobian \(J_k\) is treated as a sparse matrix, the same as the matrices \(A_1\), \(A_2\), and \(A_3\).
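The assembly of this system can be sketched densely (all numbers below are invented; MINOS stores these blocks in sparse form):

```python
# Dense toy assembly of the linearized constraint system of a major
# iteration: [[J_k, A1], [A2, A3]] (x, y) + I s = (J_k x_k - f_k, 0).
import numpy as np

J_k = np.array([[2.0, 1.0],
                [0.5, 3.0]])               # Jacobian of f at x_k
A1 = np.array([[1.0], [0.0]])
A2 = np.array([[1.0, 1.0]])
A3 = np.array([[2.0]])
f_k = np.array([1.0, 4.0])                 # f(x_k)
x_k = np.array([1.0, 1.0])

A = np.block([[J_k, A1], [A2, A3]])        # coefficient matrix
rhs = np.concatenate([J_k @ x_k - f_k, np.zeros(1)])

# Check the identity behind the construction: for any (x, y), the top
# block of A (x, y) - rhs equals f' + A1 y with f' = f_k + J_k (x - x_k).
x, y = np.array([1.2, 0.9]), np.array([0.5])
xy = np.concatenate([x, y])
f_lin = f_k + J_k @ (x - x_k)
top = (A @ xy - rhs)[:2]
assert np.allclose(top, f_lin + A1 @ y)
```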

In the output from GAMS/MINOS, the term *Feasible subproblem* indicates that the *linearized constraints* have been satisfied. In general, the nonlinear constraints are satisfied only in the limit, so that *feasibility* and *optimality* occur at essentially the same time. The nonlinear constraint violation is printed every major iteration. Even if it is zero early on (say at the initial point), it may increase and perhaps fluctuate before tending to zero. On well-behaved problems, the constraint violation will decrease quadratically (i.e., very quickly) during the final few major iterations.

# Modeling Issues

Formulating nonlinear models requires that the modeler pays attention to some details that play no role when dealing with linear models.

## Starting Points

The first issue is specifying a *starting point*. It is advisable to specify a good starting point for as many nonlinear variables as possible; the GAMS default of zero is often a very poor choice, which makes this all the more important.

As an (artificial) example consider the problem where we want to find the smallest circle that contains a number of points \((x_i,y_i)\):

\[ \begin{array}{lcllr} \textrm{Example}: & ~~~~~~~~~~~~~ & \textrm{minimize} & r \\[5pt] & ~~~~~~~~~~~~~ & \textrm{subject to} & (x_i-a)^2 + (y_i-b)^2 \le r^2, \> \> r \ge 0. \\ \end{array} \]

This problem can be modeled in GAMS as follows.

```
set i 'points' /p1*p10/;

parameters x(i) 'x coordinates',
           y(i) 'y coordinates';

* fill with random data
x(i) = uniform(1,10);
y(i) = uniform(1,10);

variables a 'x coordinate of center of circle'
          b 'y coordinate of center of circle'
          r 'radius';

equations e(i) 'points must be inside circle';

e(i).. sqr(x(i)-a) + sqr(y(i)-b) =l= sqr(r);

r.lo = 0;

model m /all/;
option nlp=minos;
solve m using nlp minimizing r;
```

Without help, MINOS will not be able to find an optimal solution. The problem will be declared infeasible. In this case, providing a good starting point is very easy. If we define

\begin{eqnarray*} x_{\min} &=& \min_i x_i,\\ y_{\min} &=& \min_i y_i,\\ x_{\max} &=& \max_i x_i,\\ y_{\max} &=& \max_i y_i, \end{eqnarray*}

then good estimates are

\begin{eqnarray*} a &=& (x_{\min}+x_{\max})/2, \\ b &=& (y_{\min}+y_{\max})/2, \\ r &=& \sqrt{ (a-x_{\min})^2 + (b-y_{\min})^2}. \end{eqnarray*}

Thus we include in our model:

```
parameters xmin,ymin,xmax,ymax;
xmin = smin(i, x(i));
ymin = smin(i, y(i));
xmax = smax(i, x(i));
ymax = smax(i, y(i));

* set starting point
a.l = (xmin+xmax)/2;
b.l = (ymin+ymax)/2;
r.l = sqrt( sqr(a.l-xmin) + sqr(b.l-ymin) );
```

and now the model solves very easily.

Level values can also be set away from zero implicitly as a result of assigning bounds. When a variable is bounded away from zero, for instance by the statement `Y.LO = 1;`, the implicit projection of variable levels onto their bounds that occurs when a model is solved will initialize `Y` away from zero.

## Bounds

Setting appropriate bounds can be very important to steer the algorithm away from uninteresting areas, and to prevent *function evaluation errors* from happening.

If your model contains a real power of the form `x**y`, it is important to add a bound \(x > 0.001\), as real exponentiation is evaluated in GAMS as \(\exp(y \log(x))\). In some cases one cannot write a bound directly, e.g. if the equation is \(z = x^{f(y)}\). In that case it is advised to introduce an extra variable and equation:

\begin{equation*} \begin{split} z &= x^{\vartheta } \\ \vartheta &= f(y) \\ \vartheta &\ge \varepsilon \end{split} \end{equation*}

(Note that the functions `SQR(x)` and `POWER(x,k)` are integer powers and do not require \(x\) to be positive.)
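The evaluation rule \(x^y = \exp(y \log x)\) can be mimicked directly in Python, which also shows why a negative \(x\) fails for real powers while an integer power would be fine (a hypothetical helper for illustration only):

```python
# Mimic how GAMS evaluates a real power x**y, per the text above.
import math

def real_power(x, y):
    """Hypothetical helper: real power as exp(y * log(x))."""
    return math.exp(y * math.log(x))

assert abs(real_power(2.0, 3.0) - 8.0) < 1e-12    # fine for x > 0

try:
    real_power(-2.0, 3.0)          # POWER(-2,3) would be -8 as an integer
except ValueError:                 # power, but log(-2) is a domain error
    print("domain error: log of a negative number")
```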

If the model produces *function evaluation errors*, adding bounds is preferred to raising the `DOMLIM` limit.

Bounds in GAMS are specified using `X.LO(i) = 0.001` and `X.UP(i) = 1000`.

## Scaling

Although MINOS has some facilities to scale the problem before starting to optimize it, it remains an important task for the modeler to provide a well-scaled model. This is especially the case for nonlinear models. GAMS has special syntax features to specify row and column scales that allow the modeler to keep the equations in their most natural form. For more information consult the GAMS User's Guide.

## The Objective Function

The first step GAMS/MINOS performs is to try to reconstruct the objective function. In GAMS, optimization models minimize or maximize an objective variable. MINOS however works with an objective function. One way of dealing with this is to add a dummy linear function with just the objective variable. Consider the following GAMS fragment:

```
obj.. z =e= sum(i, sqr(resid(i)));

model m /all/;
solve m using nlp minimizing z;
```

This can be cast in form NLP (equations \((1)-(4)\)) by minimizing \(z\) subject to \(z = \sum_i resid^2_i\) and the other constraints in the model. Although simple, this approach is not always preferable. Especially when all constraints are linear, it is important to minimize \(\sum_i resid^2_i\) directly. This can be achieved by a simple reformulation: \(z\) can be substituted out. The substitution mechanism carries out this reformulation if all of the following conditions hold:

- the objective variable \(z\) is a free continuous variable (no bounds are defined on \(z\)),
- \(z\) appears linearly in the objective function,
- the objective function is formulated as an equality constraint,
- \(z\) is only present in the objective function and not in other constraints.
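The effect of substituting \(z\) out can be mimicked outside GAMS. The sketch below (scipy assumed; the residual definition is invented for illustration) minimizes the sum of squares directly, which is what MINOS does once the objective variable has been removed:

```python
# Minimizing z subject to z = sum_i resid_i^2 is equivalent to minimizing
# the sum of squares directly; here an unconstrained method does the work.
import numpy as np
from scipy.optimize import minimize

data = np.array([1.0, 2.0, 3.0])

def objective(x):
    # Invented residuals resid_i = x - data_i; sum of squares is
    # minimized at the mean of the data.
    return np.sum((x[0] - data) ** 2)

res = minimize(objective, x0=[0.0])    # unconstrained after substitution
assert abs(res.x[0] - data.mean()) < 1e-3
```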

For many models it is very important that the nonlinear objective function be used by MINOS. For instance the model [CHEM] from the model library solves in 21 iterations. When we add the bound

```
energy.lo = 0;
```

to the objective variable `energy` and thus prevent it from being substituted out, MINOS will not be able to find a feasible point for the given starting point.

This reformulation mechanism has been extended for substitutions along the diagonal. For example, the GAMS model

```
Variables x,y,z;
Equations e1,e2;

e1.. z =e= y;
e2.. y =e= sqr(1+x);

model m /all/;
option nlp=minos;
solve m using nlp minimizing z;
```

will be reformulated as an *unconstrained* optimization problem

\[ \begin{array}{ll} \textrm{minimize} & f(x) = (1+x)^2. \\ \end{array} \]

These additional reformulations can be turned off by using the statement `option reform = 0;` (see Section GAMS Options).

# GAMS Options

The standard GAMS options (e.g. iterlim, domlim) can be used to control GAMS/MINOS. For more details, see section Controlling a Solver via GAMS Options. We highlight some of the details of this usage below for cases of special interest.

**iterlim**

Sets the minor iteration limit. MINOS will stop as soon as the number of minor iterations exceeds the iteration limit and report the current solution.

**domlim**

Sets the domain error limit. Domain errors are evaluation errors in the nonlinear functions. An example of a domain error is trying to evaluate \(\sqrt{x}\) for \(x<0\). Other examples include taking logs of negative numbers, and evaluating the real power \(x^y\) for \(x < \varepsilon\) ( \(x^y\) is evaluated as \(\exp(y \log x)\)). When such an error occurs the count of domain errors is incremented: MINOS will stop if this count exceeds the limit. If the limit has not been reached, reasonable estimates for the function (and derivatives, if requested) are returned and MINOS continues. For example, in the case of \(\sqrt{x},\ x<0\), a zero is passed back for the function value and a large value for the derivative. In many cases MINOS will be able to recover from these domain errors, especially when they happen at some intermediate point. Nevertheless it is best to add appropriate bounds or linear constraints to ensure that these domain errors don't occur. For example, when an expression \(\log(x)\) is present in the model, add a statement like `x.lo = 0.001;`.

**bratio**

Ratio used in the basis acceptance test. When a previous solution or solution estimate exists, GAMS automatically passes this solution to MINOS so that it can reconstruct an advanced basis. When too many new (i.e. uninitialized, with no level and/or marginal values) variables or constraints enter the model, it may be better not to use existing basis information, but to instead crash a new basis. The `bratio` option determines how quickly an existing basis is discarded. A value of 1.0 will discard any basis, while a value of 0.0 will retain any basis.

**workfactor**

By default, GAMS/MINOS computes an estimate of the amount of workspace needed by MINOS, and passes this workspace on to MINOS for use in solving the model. This estimate is based on the model statistics: number of (nonlinear) equations, number of (nonlinear) variables, number of (nonlinear) nonzeroes, etc. In most cases this is sufficient to solve the model. In some rare cases MINOS may need more memory, and the user can provide this by specifying a value of `workfactor` greater than 1. The computed memory estimate is multiplied by the workfactor to determine the amount of workspace made available to MINOS for the solve.

**workspace**

The `workspace` option is deprecated: use the `workfactor` option instead. The `workspace` option specifies the amount of memory, in MB, that MINOS will use.

**reform**

This option controls the objective reformulation mechanism described in Section The Objective Function. The default value of 100 will cause GAMS/MINOS to try further substitutions along the diagonal after the objective variable has been removed. Any other value will disable this diagonal procedure.

# Summary of MINOS Options

The performance of GAMS/MINOS is controlled by a number of parameters or "options." Each option has a default value that should be appropriate for most problems. For special situations it is possible to specify non-standard values for some or all of the options through the MINOS option file. While the content of an option file is solver-specific, the details of how to create an option file and instruct the solver to use it are not. This topic is covered in section The Solver Options File.

Note that the option file is not case sensitive. Examples for using the option file can be found at the end of this section. The tables below contain summary information about the MINOS options, default values, and links to more detailed explanations.

## Output related options

Option | Description | Default |
---|---|---|
debug level | Controls amount of debug information written | `0` |
log frequency | Controls iteration logging to listing file | `100` |
print level | Controls amount of information printed during optimization | `0` |
scale print | Print scaling factors | |
solution | Prints MINOS solution | `NO` |
summary frequency | Controls iteration logging to summary (log file) | `100` |

## Tolerances

Option | Description | Default |
---|---|---|
crash tolerance | Allow crash procedure to ignore small elements in eligible columns | `0.1` |
feasibility tolerance | Feasibility tolerance for linear constraints | `1.0e-6` |
linesearch tolerance | Controls accuracy of steplength selected | `0.1` |
LU density tolerance | When to use dense factorization | `0.5` |
LU factor tolerance | Trade-off between stability and sparsity in basis factorization | `100.0` |
LU singularity tolerance | Protection against ill-conditioned basis matrices | `1.0e-11` |
LU update tolerance | Trade-off between stability and sparsity in basis updating | `10.0` |
optimality tolerance | Reduced gradient optimality check | `1.0e-6` |
row tolerance | Accuracy requirement for nonlinear rows | `1.0e-6` |
scale print tolerance | Scale print flag and set tolerance | `0.9` |
scale tolerance | Scale tolerance | `0.9` |
subspace tolerance | Determines when nonbasic variables become superbasic | `0.5` |

## Limits

Option | Description | Default |
---|---|---|
hessian dimension | Size of Hessian matrix | `1` |
iterations limit | Minor iteration limit | `GAMS iterlim` |
major iterations | Max number of major iterations | `50` |
minor iterations | Max number of minor iterations between linearizations of nonlinear constraints | `40` |
superbasics limit | Maximum number of superbasics | `1` |
unbounded objective value | Determines when a problem is called unbounded | `1.0e20` |
unbounded step size | Determines when a problem is called unbounded | `1.0e10` |

## Other algorithmic options

Option | Description | Default |
---|---|---|
check frequency | Controls frequency of linear constraint satisfaction test | `60` |
completion | Completion level for subproblems (full/partial) | `FULL` |
crash option | Controls the basis crash algorithm | `3` |
expand frequency | Setting for anti-cycling mechanism | `10000` |
factorization frequency | Number of iterations between basis factorizations | `100` |
lagrangian | Determines form of objective function in the linearized subproblems | `YES` |
LU complete pivoting | LUSOL pivoting strategy | |
LU partial pivoting | LUSOL pivoting strategy | |
LU rook pivoting | LUSOL pivoting strategy | |
major damping parameter | Prevents large relative changes between subproblem solutions | `2.0` |
minor damping parameter | Limit change in x during linesearch | `2.0` |
multiple price | Multiple pricing | `1` |
partial price | Number of segments in partial pricing strategy | `10` |
penalty parameter | Used in modified augmented Lagrangian | `automatic` |
radius of convergence | Controls final reduction of penalty parameter | `0.01` |
scale all variables | Synonym for scale option 2 | |
scale linear variables | Synonym for scale option 1 | |
scale no | Synonym for scale option 0 | |
scale nonlinear variables | Synonym for scale option 2 | |
scale option | Scaling | `1` |
scale yes | Synonym for scale option 1 | |
start assigned nonlinears | Starting strategy when there is no basis | `SUPERBASIC` |
verify constraint gradients | Synonym for verify level 2 | |
verify gradients | Synonym for verify level 3 | |
verify level | Controls verification of gradients | `0` |
verify no | Synonym for verify level 0 | |
verify objective gradients | Synonym for verify level 1 | |
verify yes | Synonym for verify level 3 | |
weight on linear objective | Composite objective weight | `0.0` |

## Examples of GAMS/MINOS Option File

The following example illustrates the use of certain options that might be helpful for "difficult" models involving nonlinear constraints. Experimentation may be necessary with the values specified, particularly if the sequence of major iterations does not converge using default values.

```
* These options might be relevant for very nonlinear models.

Major damping parameter    0.2     * may prevent divergence.
Minor damping parameter    0.2     * if there are singularities
                                   * in the nonlinear functions.
Penalty parameter         10.0     * or 100.0 perhaps-a value
                                   * higher than the default.
Scale linear variables             * (This is the default.)
```

Conversely, nonlinearly constrained models that are very nearly linear may optimize more efficiently if some of the cautious defaults are relaxed:

```
* Suggestions for models with MILDLY nonlinear constraints

Completion                 Full
Penalty parameter          0.0     * or 0.1 perhaps-a value
                                   * smaller than the default.

* Scale one of the following
Scale all variables                * if starting point is VERY GOOD.
Scale linear variables             * if they need it.
Scale No                           * otherwise.
```

Most of the options should be left at their default values for any given model. If experimentation is necessary, we recommend changing just one option at a time.

# Special Notes

## Modeling Hints

Unfortunately, there is no guarantee that the algorithm just described will converge from an arbitrary starting point. The concerned modeler can influence the likelihood of convergence as follows:

- Specify initial activity levels for the nonlinear variables as carefully as possible (using the GAMS suffix `.L`).
- Include sensible upper and lower bounds on all variables.
- Specify a *Major damping parameter* that is lower than the default value, if the problem is suspected of being highly nonlinear.
- Specify a *Penalty parameter* \(\rho\) that is higher than the default value, again if the problem is highly nonlinear.

In rare cases it may be safe to request the values \(\lambda_k = 0\) and \(\rho = 0\) for all subproblems, by specifying *Lagrangian=No*. However, convergence is much more likely with the default setting, *Lagrangian=Yes*. The initial estimate of the Lagrange multipliers is then \(\lambda_0 = 0\), but for later subproblems \(\lambda_k\) is taken to be the Lagrange multipliers associated with the (linearized) nonlinear constraints at the end of the previous major iteration.

For the first subproblem, the default value for the penalty parameter is \(\rho= 100.0/m_1\) where \(m_1\) is the number of nonlinear constraints. For later subproblems, \(\rho\) is reduced in stages when it appears that the sequence \(\{x_k, \lambda_k\}\) is converging. In many cases it is safe to specify \(\lambda = 0\), particularly if the problem is only mildly nonlinear. This may improve the overall efficiency.
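As a concrete instance of the default formula (the constraint count is illustrative): a model with \(m_1 = 20\) nonlinear constraints starts with

\[
\rho = \frac{100.0}{m_1} = \frac{100.0}{20} = 5.0,
\]

which is then reduced in stages as the major iterations converge.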

## Storage

GAMS/MINOS uses one large array of memory for most of its workspace. The implementation places no fixed limit on the size of a problem or on its shape (many constraints and relatively few variables, or *vice versa*). In general, the limiting factor will be the amount of physical memory available on a particular machine, and the amount of computation time one is willing to spend.

Some detailed knowledge of a particular model will usually indicate whether the solution procedure is likely to be efficient. An important quantity is \(m\), the total number of general constraints in \((2)\) and \((3)\). The amount of workspace required by GAMS/MINOS is roughly \(100m\) doubles, or \(800m\) bytes. A further 300K bytes, approximately, are needed for the program itself, along with buffer space for several files. Very roughly, then, a model with \(m\) general constraints requires about \((m+300)\) K bytes of memory.

Another important quantity is \(n\), the total number of variables in \(x\) and \(y\). The above comments assume that \(n\) is not much larger than \(m\), the number of constraints. A typical ratio for \(n/m\) is 2 or 3.

If there are many nonlinear variables (i.e., if \(n_1\) is large), much depends on whether the objective function or the constraints are highly nonlinear or not. The degree of nonlinearity affects \(s\), the number of superbasic variables. Recall that \(s\) is zero for purely linear problems. We know that \(s\) need never be larger than \(n_1+1\). In practice, \(s\) is often very much less than this upper limit.

In the quasi-Newton algorithm, the dense triangular matrix \(R\) has dimension \(s\) and requires about \(s^2/2\) words of storage. If it seems likely that \(s\) will be very large, some aggregation or reformulation of the problem should be considered.
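A rough worked example of these estimates (figures are illustrative only): a model with \(m = 2000\) general constraints needs about

\[
100m = 200{,}000 \ \text{doubles} \approx 1.6\ \text{MB of workspace},
\]

and if the number of superbasics grows to \(s = 1000\), the triangular matrix \(R\) alone requires about \(s^2/2 = 500{,}000\) additional words.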

# The GAMS/MINOS Log File

MINOS writes different logs for LPs, NLPs with linear constraints, and NLPs with nonlinear constraints. In this section, a sample log file is shown for each case, and the messages that appear are explained.

## Linear Programs

MINOS uses a standard two-phase simplex method for LPs. In the first phase, the sum of the infeasibilities is minimized at each iteration. Once feasibility is attained, MINOS switches to phase 2, where it minimizes (or maximizes) the original objective function. The different objective functions are called the phase 1 and phase 2 objectives. Note that the marginals in phase 1 are with respect to the phase 1 objective. This means that if MINOS is interrupted in phase 1, the marginals are "wrong" in the sense that they do not reflect the original objective.

The log for the problem **TURKPOW** is as follows:

```
GAMS Rev 235  Copyright (C) 1987-2010 GAMS Development. All rights reserved
--- Starting compilation
--- turkpow.gms(230) 3 Mb
--- Starting execution: elapsed 0:00:00.009
--- turkpow.gms(202) 4 Mb
--- Generating LP model turkey
--- turkpow.gms(205) 4 Mb
---   350 rows  949 columns  5,872 non-zeroes
--- Executing MINOS: elapsed 0:00:00.025

GAMS/MINOS      Aug 18, 2010 23.5.2 WIN 19143.19383 VS8 x86/MS Windows

    M I N O S  5.51     (Jun 2004)

    GAMS/MINOS 5.51, Large Scale Nonlinear Solver
    B. A. Murtagh, University of New South Wales
    P. E. Gill, University of California at San Diego,
    W. Murray, M. A. Saunders, and M. H. Wright,
    Systems Optimization Laboratory, Stanford University

Work space allocated           --  1.60 Mb

 Reading Rows...
 Reading Columns...

   Itn ninf    sinf       objective
   100    3 2.283E-01 -2.51821463E+04
   200    0 0.000E+00  2.02819284E+04
   300    0 0.000E+00  1.54107277E+04
   400    0 0.000E+00  1.40211808E+04
   500    0 0.000E+00  1.33804183E+04
   600    0 0.000E+00  1.27082709E+04

EXIT - Optimal Solution found, objective:        12657.77

--- Restarting execution
--- turkpow.gms(205) 0 Mb
--- Reading solution for model turkey
--- turkpow.gms(230) 3 Mb
*** Status: Normal completion
```

The first line that is written by MINOS is the version string: `GAMS/MINOS Aug 18, 2010 23.5.2 WIN 19143.19383 VS8 x86/MS Windows`

This line identifies the version of the MINOS libraries and the GAMS link in use; it is mainly intended to be deciphered by GAMS support personnel.

After some advertisement text we see the amount of work space (i.e. memory) that is allocated: *1.60 Mb*. When MINOS is loaded, the amount of memory needed is first estimated. This estimate is based on statistics like the number of rows, columns and non-zeros. This amount of memory is then allocated and the problem loaded into MINOS.

The columns have the following meaning:

**Itn**

Iteration number.

**ninf**

Number of infeasibilities. If nonzero the current iterate is still infeasible.

**sinf**

The sum of the infeasibilities. This number is minimized during phase 1. Once the model is feasible this number is zero.

**objective**

The value of the objective function: \(z = \sum c_i x_i\). In phase 2 this number is maximized or minimized. In phase 1 it may move in the wrong direction.

The final line indicates the exit status of MINOS.

## Linearly Constrained NLPs

The log is basically the same as for linear models. The only difference is that, in addition to the matrix rows and columns, the instructions for evaluating the nonlinear functions and gradients must be loaded.

The log for the problem **WEAPONS** is as follows:

```
GAMS Rev 235  Copyright (C) 1987-2010 GAMS Development. All rights reserved
--- Starting compilation
--- weapons.gms(77) 3 Mb
--- Starting execution: elapsed 0:00:00.005
--- weapons.gms(66) 4 Mb
--- Generating NLP model war
--- weapons.gms(68) 6 Mb
---   13 rows  66 columns  156 non-zeroes
---   706 nl-code  65 nl-non-zeroes
--- weapons.gms(68) 4 Mb
--- Executing MINOS: elapsed 0:00:00.013

GAMS/MINOS      Aug 18, 2010 23.5.2 WIN 19143.19383 VS8 x86/MS Windows

    M I N O S  5.51     (Jun 2004)

    GAMS/MINOS 5.51, Large Scale Nonlinear Solver
    B. A. Murtagh, University of New South Wales
    P. E. Gill, University of California at San Diego,
    W. Murray, M. A. Saunders, and M. H. Wright,
    Systems Optimization Laboratory, Stanford University

Work space allocated           --  0.82 Mb

 Reading Rows...
 Reading Columns...
 Reading Instructions...

   Itn ninf    sinf       objective
   100    0 0.000E+00  1.71416714E+03
   200    0 0.000E+00  1.73483184E+03

EXIT - Optimal Solution found, objective:        1735.570

--- Restarting execution
--- weapons.gms(68) 0 Mb
--- Reading solution for model war
--- weapons.gms(77) 3 Mb
*** Status: Normal completion
```

## NLPs with Nonlinear Constraints

For models with nonlinear constraints the log is more complicated. The model **CAMCGE** from the GAMS model library is such an example; the log output resulting from running it is shown below.

```
GAMS Rev 235  Copyright (C) 1987-2010 GAMS Development. All rights reserved
--- Starting compilation
--- camcge.gms(450) 3 Mb
--- Starting execution: elapsed 0:00:00.010
--- camcge.gms(441) 4 Mb
--- Generating NLP model camcge
--- camcge.gms(450) 6 Mb
---   243 rows  280 columns  1,356 non-zeroes
---   5,524 nl-code  850 nl-non-zeroes
--- camcge.gms(450) 4 Mb
--- Executing MINOS: elapsed 0:00:00.023

GAMS/MINOS      Aug 18, 2010 23.5.2 WIN 19143.19383 VS8 x86/MS Windows

    M I N O S  5.51     (Jun 2004)

    GAMS/MINOS 5.51, Large Scale Nonlinear Solver
    B. A. Murtagh, University of New South Wales
    P. E. Gill, University of California at San Diego,
    W. Murray, M. A. Saunders, and M. H. Wright,
    Systems Optimization Laboratory, Stanford University

Work space allocated           --  1.48 Mb

 Reading Rows...
 Reading Columns...
 Reading Instructions...

 Major minor  step    objective   Feasible Optimal  nsb  ncon penalty BSswp
     1    2T 0.0E+00 1.91724E+02  1.8E+02  2.0E-01    0     1 1.0E+00     0
     2   90  1.0E+00 1.91735E+02  1.5E-03  7.6E+00    0     3 1.0E+00     0
     3    0  1.0E+00 1.91735E+02  1.3E-09  5.5E-06    0     4 1.0E+00     0
     4    0  1.0E+00 1.91735E+02  1.1E-12  2.8E-13    0     5 1.0E-01     0

EXIT - Optimal Solution found, objective:        191.7346

--- Restarting execution
--- camcge.gms(450) 0 Mb
--- Reading solution for model camcge
*** Status: Normal completion
```

Two sets of iterations, *major* and *minor*, are now reported. A description of the various columns present in this log file follows:

**Major**

A major iteration involves linearizing the nonlinear constraints and performing a number of minor iterations on the resulting subproblem. The objective for the subproblem is an augmented Lagrangian, not the true objective function.

**minor**

The number of minor iterations performed on the linearized subproblem. If it is a simple number like 90, then the subproblem was solved to optimality. Here, \(2T\) means that the subproblem was terminated. In general the \(T\) is not something to worry about. Other possible flags are \(I\) and \(U\), which mean that the subproblem was infeasible or unbounded. MINOS may have difficulty if these keep occurring.

**step**

The step size taken towards the solution suggested by the last major iteration. Ideally this should be 1.0, especially near an optimum. If the subproblem solutions are widely different, MINOS may reduce the step size under control of the *Major damping parameter*.

**objective**

The objective function for the original nonlinear program.

**Feasible**

Primal infeasibility, indicating the maximum nonlinear constraint violation.

**Optimal**

The maximum dual infeasibility, measured as the maximum departure from complementarity. If we call \(d_j\) the reduced cost of variable \(x_j\), then the dual infeasibility of \(x_j\) is \(d_j \times \min\{x_j - \ell_j, 1\}\) or \(-d_j \times \min\{u_j - x_j, 1\}\) depending on the sign of \(d_j\).
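For example (values invented for illustration): if \(x_j\) has reduced cost \(d_j = 0.3\) and sits near its lower bound with \(x_j - \ell_j = 0.002\), its dual infeasibility is

\[
d_j \times \min\{x_j - \ell_j,\, 1\} = 0.3 \times 0.002 = 6 \times 10^{-4},
\]

a small departure from complementarity.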

**nsb**

Number of superbasics. If the model is feasible, this number cannot exceed the superbasics limit, which may need to be raised if the numbers in this column approach it.
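If the superbasics limit is reached, it can be raised in the MINOS option file; a sketch with an illustrative value (see also the *Hessian dimension* option, which defaults to the same number):

```
superbasics limit    500
```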

**ncon**

The number of times MINOS has evaluated the nonlinear constraints and their derivatives.

**penalty**

The current value of the penalty parameter in the augmented Lagrangian (the objective for the subproblems). If the major iterations appear to be converging, MINOS will decrease the penalty parameter. If there appears to be difficulty, such as unbounded subproblems, the penalty parameter will be increased.

**BSswp**

Number of basis swaps: the number of \(\begin{pmatrix}B & S\end{pmatrix}\) (i.e. basic vs. superbasic) changes.

Note: The **CAMCGE** model (like many CGE models and other almost-square systems) is often better solved with the MINOS option *Start Assigned Nonlinears Basic*.
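In practice this means adding the following line to a `minos.opt` file and setting the model's `optfile` attribute to 1:

```
start assigned nonlinears basic
```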

# Detailed Description of MINOS Options

The following is an alphabetical list of the keywords that may appear in the GAMS/MINOS options file, and a description of their effect. Options not specified will take the default values shown.

**check frequency** *(integer)*: Controls frequency of linear constraint satisfaction test ↵

Every \(i^{th}\) iteration after the most recent basis factorization, a numerical test is made to see if the current solution \(x\) satisfies the general linear constraints (including linearized nonlinear constraints, if any). The constraints are of the form \(Ax + s = 0\), where \(s\) is the set of slack variables. To perform the numerical test, the residual vector \(r = Ax + s\) is computed. If the largest component of \(r\) is judged to be too large, the current basis is refactorized and the basic variables are recomputed to satisfy the general constraints more accurately.

Range: {`1`, ..., ∞}, Default: `60`

**completion** *(string)*: Completion level for subproblems (full/partial) ↵

When there are nonlinear constraints, this determines whether subproblems should be solved to moderate accuracy (partial completion) or to full accuracy (full completion). GAMS/MINOS implements the option by using two sets of convergence tolerances for the subproblems.

Use of partial completion may reduce the work during early major iterations, unless the Minor iterations limit is active. The optimal set of basic and superbasic variables will probably be determined for any given subproblem, but the reduced gradient may be larger than it would have been with full completion. An automatic switch to full completion occurs when it appears that the sequence of major iterations is converging. The switch is made when the nonlinear constraint error is reduced below 100 * (Row tolerance), the relative change in \(\lambda_k\) is 0.1 or less, and the previous subproblem was solved to optimality. Full completion tends to give better Lagrange-multiplier estimates. It may lead to fewer major iterations, but may result in more minor iterations.

Default: `FULL`

| value | meaning |
| --- | --- |
| `FULL` | Solve subproblems to full accuracy |
| `PARTIAL` | Solve subproblems to moderate accuracy |

**crash option** *(integer)*: Controls the basis crash algorithm ↵

If a restart is not being performed, an initial basis will be selected from certain columns of the constraint matrix \((A\ I)\). The value of the parameter \(i\) determines which columns of \(A\) are eligible. Columns of \(I\) are used to fill gaps where necessary. If \(i > 0\), three passes are made through the relevant columns of \(A\), searching for a basis matrix that is essentially triangular. A column is assigned to pivot on a particular row if the column contains a suitably large element in a row that has not yet been assigned. (The pivot elements ultimately form the diagonals of the triangular basis.) Pass 1 selects pivots from free columns (corresponding to variables with no upper and lower bounds). Pass 2 requires pivots to be in rows associated with equality (`=E=`) constraints. Pass 3 allows the pivots to be in inequality rows. For remaining (unassigned) rows, the associated slack variables are inserted to complete the basis.

Default: `3`

| value | meaning |
| --- | --- |
| `0` | Initial basis will be a slack basis |
| `1` | All columns are eligible |
| `2` | Only linear columns are eligible |
| `3` | Columns appearing nonlinearly in the objective are not eligible |
| `4` | Columns appearing nonlinearly in the constraints are not eligible |

**crash tolerance** *(real)*: Allow crash procedure to ignore small elements in eligible columns ↵

The crash tolerance \(r\) allows the starting procedure CRASH to ignore certain small nonzeros in each column of \(A\). If \(a_{max}\) is the largest element in column \(j\), other nonzeros \(a_{ij}\) in the column are ignored if \(|a_{ij}| < a_{max} \cdot r\). To be meaningful, the parameter \(r\) should be in the range \(0 \leq r < 1\). When \(r > 0.0\) the basis obtained by CRASH may not be strictly triangular, but it is likely to be nonsingular and almost triangular. The intention is to obtain a starting basis containing more columns of \(A\) and fewer (arbitrary) slacks. A feasible solution may be reached sooner on some problems. For example, suppose the first \(m\) columns of \(A\) form the matrix shown under LU factor tolerance, i.e., a tridiagonal matrix with entries -1, 4, -1. To help CRASH choose all \(m\) columns for the initial basis, we could specify a crash tolerance \(r\) for some value of \(r > 0.25\).

Range: [`0`, `1.0`], Default: `0.1`
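Checking the tridiagonal example numerically: each column has \(a_{max} = 4\), so with \(r = 0.3\) the off-diagonal entries are dropped because

\[
|-1| = 1 < a_{max} \cdot r = 4 \times 0.3 = 1.2,
\]

making every column look triangular to CRASH, whereas the default \(r = 0.1\) gives a threshold of only 0.4 and the off-diagonals are kept.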

**debug level** *(integer)*: Controls amount of debug information written ↵

This causes various amounts of information to be output. Most debug levels will not be helpful to GAMS users, but they are listed here for completeness. Note that you will need to use the GAMS statement

`OPTION SYSOUT=on;`

to echo the MINOS listing to the GAMS listing file.

- `0`: No debug output.
- `2` (or more): Output from M5SETX showing the maximum residual after a row check.
- `40`: Output from LU8RPC (which updates the LU factors of the basis matrix), showing the position of the last nonzero in the transformed incoming column.
- `50`: Output from LU1MAR (which updates the LU factors at each refactorization), showing each pivot row and column and the dimensions of the dense matrix involved in the associated elimination.
- `100`: Output from M2BFAC and M5LOG listing the basic and superbasic variables and their values at every iteration.

Default: `0`

**expand frequency** *(integer)*: Setting for anti-cycling mechanism ↵

This option is part of an anti-cycling procedure designed to guarantee progress even on highly degenerate problems. For linear models, the strategy is to force a positive step at every iteration, at the expense of violating the bounds on the variables by a small amount. Suppose the specified feasibility tolerance is \(\delta\) and the expand frequency is \(k\). Over a period of \(k\) iterations, the tolerance actually used by GAMS/MINOS increases from \(0.5\delta\) to \(\delta\) (in steps of \(0.5\delta/k\)). For nonlinear models, the same procedure is used for iterations in which there is only one superbasic variable. (Cycling can occur only when the current solution is at a vertex of the feasible region.) Thus, zero steps are allowed if there is more than one superbasic variable, but otherwise positive steps are enforced. At least every \(k\) iterations, a resetting procedure eliminates any infeasible nonbasic variables. Increasing \(k\) helps to reduce the number of these slightly infeasible nonbasic variables. However, it also diminishes the freedom to choose a large pivot element (see Pivot tolerance).

Range: {`1`, ..., ∞}, Default: `10000`
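With the default values \(\delta = 10^{-6}\) (the feasibility tolerance) and \(k = 10000\), the working tolerance thus grows from \(5 \times 10^{-7}\) to \(10^{-6}\) in increments of

\[
\frac{0.5\,\delta}{k} = \frac{5 \times 10^{-7}}{10^{4}} = 5 \times 10^{-11}
\]

per iteration before being reset.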

**factorization frequency** *(integer)*: Number of iterations between basis factorizations ↵

At most \(i\) basis updates will occur between factorizations of the basis matrix. With linear programs, basis updates usually occur at every iteration. The default \(i\) is reasonable for typical problems. Higher values up to \(i = 200\) (say) may be more efficient on problems that are extremely sparse and well scaled. When the objective function is nonlinear, fewer basis updates will occur as an optimum is approached. The number of iterations between basis factorizations will therefore increase. During these iterations a test is made regularly (according to the Check frequency) to ensure that the general constraints are satisfied. If necessary the basis will be refactorized before the limit of \(i\) updates is reached. When the constraints are nonlinear, the Minor iterations limit will probably preempt \(i\).

Range: {`1`, ..., ∞}, Default: `100`

**feasibility tolerance** *(real)*: Feasibility tolerance for linear constraints ↵

When the constraints are linear, a feasible solution is one in which all variables, including slacks, satisfy their upper and lower bounds to within the absolute tolerance \(r\). (Since slacks are included, this means that the general linear constraints are also satisfied within \(r\).) GAMS/MINOS attempts to find a feasible solution before optimizing the objective function. If the sum of infeasibilities cannot be reduced to zero, the problem is declared infeasible. Let SINF be the corresponding sum of infeasibilities. If SINF is quite small, it may be appropriate to raise \(r\) by a factor of 10 or 100. Otherwise, some error in the data should be suspected. If SINF is not small, there may be other points that have a significantly smaller sum of infeasibilities. GAMS/MINOS does not attempt to find a solution that minimizes the sum. If Scale option = 1 or 2, feasibility is defined in terms of the scaled problem (since it is then more likely to be meaningful).

A nonlinear objective function \(F(x)\) will be evaluated only at feasible points. If there are regions where \(F(x)\) is undefined, every attempt should be made to eliminate these regions from the problem. For example, for a function \(F(x) = \sqrt{x_1} + \log x_2\), it is essential to place lower bounds on both variables. If Feasibility tolerance = \(10^{-6}\), the bounds \(x_1 > 10^{-5}\) and \(x_2 > 10^{-4}\) might be appropriate. (The log singularity is more serious; in general, keep variables as far away from singularities as possible.)

If the constraints are nonlinear, the above comments apply to each major iteration. A feasible solution satisfies the current linearization of the constraints to within the tolerance \(r\). The associated subproblem is said to be feasible. As for the objective function, bounds should be used to keep \(x\) more than \(r\) away from singularities in the constraint functions \(f(x)\). At the start of major iteration \(k\), the constraint functions \(f(x_k)\) are evaluated at a certain point \(x_k\). This point always satisfies the relevant bounds (\(l \leq x_k \leq u\)), but may not satisfy the general linear constraints. During the associated minor iterations, \(F(x)\) and \(f(x)\) will be evaluated only at points \(x\) that satisfy the bounds and the general linear constraints (as well as the linearized nonlinear constraints). If a subproblem is infeasible, the bounds on the linearized constraints are relaxed temporarily, in several stages. Feasibility with respect to the nonlinear constraints themselves is measured against the Row tolerance (not against \(r\)). The relevant test is made at the start of a major iteration.

Default: `1.0e-6`
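The bound advice for the \(\sqrt{x_1} + \log x_2\) example translates directly into GAMS (variable names `x1` and `x2` are taken from the example above):

```
* keep sqrt(x1) and log(x2) defined and clear of the feasibility tolerance
x1.lo = 1.0e-5;
x2.lo = 1.0e-4;
```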

**hessian dimension** *(integer)*: Size of Hessian matrix ↵

This specifies that an \(r \times r\) triangular matrix \(R\) is to be available for use by the quasi-Newton algorithm. The matrix \(R\) approximates the reduced Hessian in that \(R^T R \approx Z^T H Z\). Suppose there are \(s\) superbasic variables at a particular iteration. Whenever possible, \(r\) should be greater than \(s\). If \(r > s\), the first \(s\) columns of \(R\) will be used to approximate the reduced Hessian in the normal manner. If there are no further changes to the set of superbasic variables, the rate of convergence will ultimately be superlinear. If \(r < s\), a matrix of the form \(R = \mathrm{diag}(R_r, D)\) will be used to approximate the reduced Hessian, where \(R_r\) is an \(r \times r\) upper triangular matrix and \(D\) is a diagonal matrix of order \(s - r\). The rate of convergence will no longer be superlinear (and may be arbitrarily slow). The storage required is of the order \(r^2/2\), i.e. quadratic in \(r\). In general, \(r\) should be a slight over-estimate of the final number of superbasic variables, whenever storage permits. It need never be larger than \(n_1 + 1\), where \(n_1\) is the number of nonlinear variables. For many problems it can be much smaller than \(n_1\). If Superbasics limit \(s\) is specified, the default value of \(r\) is the same number, \(s\) (and conversely). This is a safeguard to ensure superlinear convergence wherever possible. If neither \(r\) nor \(s\) is specified, GAMS chooses values for both, using certain characteristics of the problem.

Range: {`1`, ..., ∞}, Default: `1`

**iterations limit** *(integer)*: Minor iteration limit ↵

The maximum number of minor iterations allowed (i.e., iterations of the simplex method or the reduced-gradient method). This option, if set, overrides the GAMS ITERLIM specification. If \(i = 0\), no minor iterations are performed, but the starting point is tested for both feasibility and optimality.

Default: `GAMS iterlim`

**lagrangian** *(string)*: Determines form of objection function in the linearized subproblems ↵

This determines the form of the objective function used for the linearized subproblems. The default value `YES` is highly recommended. The Penalty parameter value is then also relevant. If `NO` is specified, the nonlinear constraint functions will be evaluated only twice per major iteration. Hence this option may be useful if the nonlinear constraints are very expensive to evaluate. However, in general there is a great risk that convergence may not occur.

Default: `YES`

| value | meaning |
| --- | --- |
| `NO` | Nondefault value (not recommended) |
| `YES` | Default value (recommended) |

**linesearch tolerance** *(real)*: Controls accuracy of steplength selected ↵

For nonlinear problems, this controls the accuracy with which a steplength \(\alpha\) is located in the one-dimensional problem

\[
\begin{array}{ll}
\textrm{minimize} & F(x + \alpha p) \\
\textrm{subject to} & 0 < \alpha \leq \beta
\end{array}
\]

A linesearch occurs on most minor iterations for which \(x\) is feasible. (If the constraints are nonlinear, the function being minimized is the augmented Lagrangian.) The value \(r\) must be in the range \(0.0 < r < 1.0\). The default value \(r = 0.1\) requests a moderately accurate search. It should be satisfactory in most cases. If the nonlinear functions are cheap to evaluate, a more accurate search may be appropriate: try \(r = 0.01\) or \(r = 0.001\). The number of iterations should decrease, and this will reduce total run time if there are many linear or nonlinear constraints. If the nonlinear functions are expensive to evaluate, a less accurate search may be appropriate; try \(r = 0.5\) or perhaps \(r = 0.9\). (The number of iterations will probably increase, but the total number of function evaluations may decrease enough to compensate.)

Range: [`0`, `1.0`], Default: `0.1`

**log frequency** *(integer)*: Controls iteration logging to listing file ↵

In general, one line of the iteration log is printed every \(i^{th}\) minor iteration. A heading labels the printed items, which include the current iteration number, the number and sum of infeasibilities (if any), the subproblem objective value (if feasible), and the number of evaluations of the nonlinear functions. A value such as \(i = 10\), 100 or larger is suggested for those interested only in the final solution. Log frequency 0 may be used as shorthand for Log frequency 99999. If Print level > 0, the default value of \(i\) is 1. If Print level = 0, the default value of \(i\) is 100. If Print level = 0 and the constraints are nonlinear, the minor iteration log is not printed (and the Log frequency is ignored). Instead, one line is printed at the beginning of each major iteration.

Range: {`1`, ..., ∞}, Default: `100`

**LU complete pivoting** *(no value)*: LUSOL pivoting strategy ↵

The LUSOL factorization implements a Markowitz-style search for pivots that locally minimize fill-in subject to a threshold pivoting stability criterion. The *rook* and *complete pivoting* options are more expensive than *partial pivoting*, but are more stable and better at revealing rank, as long as the LU factor tolerance is not too large (say \(< 2.0\)).

**LU density tolerance** *(real)*: When to use dense factorization ↵

The density tolerance is used during LUSOL's basis factorization \(B = LU\). Columns of \(L\) and rows of \(U\) are formed one at a time, and the remaining rows and columns of the basis are altered appropriately. At any stage, if the density of the remaining matrix exceeds this tolerance, the Markowitz strategy for choosing pivots is terminated and the remaining matrix is factored by a dense LU procedure. Raising the tolerance towards 1.0 may give slightly sparser factors, with a slight increase in factorization time.

Range: [`0`, `1.0`], Default: `0.5`

**LU factor tolerance** *(real)*: Trade-off between stability and sparsity in basis factorization ↵

This tolerance affects the stability and sparsity of the basis factorization \(B = LU\) during factorization. The value \(r\) specified must satisfy \(r \geq 1.0\).

- The default value \(r = 100.0\) usually strikes a good compromise between stability and sparsity.
- For large and relatively dense problems, a larger value may give a useful improvement in sparsity without impairing stability to a serious degree.
- For certain very regular structures (e.g., band matrices) it may be necessary to set \(r\) to a value smaller than the default in order to achieve stability.

Range: [`1.0`, ∞], Default: `100.0`

**LU partial pivoting** *(no value)*: LUSOL pivoting strategy ↵

The LUSOL factorization implements a Markowitz-style search for pivots that locally minimize fill-in subject to a threshold pivoting stability criterion. The *rook* and *complete pivoting* options are more expensive than *partial pivoting*, but are more stable and better at revealing rank, as long as the LU factor tolerance is not too large (say \(< 2.0\)).

**LU rook pivoting** *(no value)*: LUSOL pivoting strategy ↵

The LUSOL factorization implements a Markowitz-style search for pivots that locally minimize fill-in subject to a threshold pivoting stability criterion. The *rook* and *complete pivoting* options are more expensive than *partial pivoting*, but are more stable and better at revealing rank, as long as the LU factor tolerance is not too large (say \(< 2.0\)).

**LU singularity tolerance** *(real)*: Protection against ill-conditioned basis matrices ↵

When the basis is refactorized, the diagonal elements of \(U\) are tested as follows: if \(|U_{jj}| \leq r\) or \(|U_{jj}| < r \cdot \max_i |U_{ii}|\), the \(j^{th}\) column of the basis is replaced by the corresponding slack variable. (This is most likely to occur after a restart, or at the start of a major iteration.) In some cases, the Jacobian matrix may converge to values that make the basis very ill-conditioned, causing the optimization to progress very slowly (if at all). Setting \(r = 10^{-5}\), say, may help cause a judicious change of basis.

Default: `1.0e-11`

**LU update tolerance** *(real)*: Trade-off between stability and sparsity in basis updating ↵

This tolerance affects the stability and sparsity of the basis factorization \(B = LU\) during updates. The value \(r\) specified must satisfy \(r \geq 1.0\).

- The default value \(r = 10.0\) usually strikes a good compromise between stability and sparsity.
- For large and relatively dense problems, \(r = 25.0\) (say) may give a useful improvement in sparsity without impairing stability to a serious degree.
- For certain very regular structures (e.g., band matrices) it may be necessary to set \(r\) to a value smaller than the default in order to achieve stability.

Range: [`1.0`, ∞], Default: `10.0`

**major damping parameter** *(real)*: Prevents large relative changes between subproblem solutions ↵

The parameter may assist convergence on problems that have highly nonlinear constraints. It is intended to prevent large relative changes between subproblem solutions \((x_k, \lambda_k)\) and \((x_{k+1}, \lambda_{k+1})\). For example, the default value 2.0 prevents the relative change in either \(x_k\) or \(\lambda_k\) from exceeding 200 percent. It will not be active on well-behaved problems. The parameter is used to interpolate between the solutions at the beginning and end of each major iteration. Thus \(x_{k+1}\) and \(\lambda_{k+1}\) are changed to \(x_k + \sigma(x_{k+1} - x_k)\) and \(\lambda_k + \sigma(\lambda_{k+1} - \lambda_k)\) for some step-length \(\sigma < 1\). In the case of nonlinear equations (where the number of constraints is the same as the number of variables) this gives a damped Newton method. This is a very crude control. If the sequence of major iterations does not appear to be converging, one should first re-run the problem with a higher Penalty parameter (say 10 or 100 times the default \(\rho\)). (Skip this re-run in the case of nonlinear equations: there are no degrees of freedom and the value of \(\rho\) is irrelevant.) If the subproblem solutions continue to change violently, try reducing \(r\) to 0.2 or 0.1 (say). For implementation reasons, the shortened step to \(\sigma\) applies to the nonlinear variables \(x\), but not to the linear variables \(y\) or the slack variables \(s\). This may reduce the efficiency of the control.

Default: `2.0`
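A one-dimensional illustration of the damping (numbers invented): if \(x_k = 1.0\) and the subproblem returns \(x_{k+1} = 5.0\), the undamped relative change is 400 percent; a step of \(\sigma = 0.5\) gives

\[
x_k + \sigma(x_{k+1} - x_k) = 1.0 + 0.5 \times 4.0 = 3.0,
\]

a 200 percent change, which respects the default limit.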

**major iterations** *(integer)*: Max number of major iterations ↵

The maximum number of major iterations allowed. It is intended to guard against an excessive number of linearizations of the nonlinear constraints, since in some cases the sequence of major iterations may not converge. The progress of the major iterations can be best monitored using Print level 0 (the default).

Default: `50`

**minor damping parameter** *(real)*: Limit change in x during linesearch ↵

This parameter limits the change in \(x\) during a linesearch. It applies to all nonlinear problems, once a feasible solution or feasible subproblem has been found. A linesearch of the form

\[
\textrm{minimize}_{\alpha} \; F(x + \alpha p)
\]

is performed over the range \(0 < \alpha \leq \beta\), where \(\beta\) is the step to the nearest upper or lower bound on \(x\). Normally, the first step length tried is \(\alpha_1 = \min(1, \beta)\). In some cases, such as \(F(x) = a e^{bx}\) or \(F(x) = a x^b\), even a moderate change in the components of \(x\) can lead to floating-point overflow. The parameter \(r\) is therefore used to define a limit

\[
\beta_2 = r\,(1 + \|x\|)/\|p\|
\]

and the first evaluation of \(F(x)\) is at the potentially smaller steplength \(\alpha_1 = \min(1, \beta, \beta_2)\). Wherever possible, upper and lower bounds on \(x\) should be used to prevent evaluation of nonlinear functions at meaningless points. The Minor damping parameter provides an additional safeguard. The default value \(r = 2.0\) should not affect progress on well-behaved problems, but setting \(r = 0.1\) or 0.01 may be helpful when rapidly varying functions are present. A good starting point may be required. An important application is to the class of nonlinear least-squares problems. In cases where several local optima exist, specifying a small value for \(r\) may help locate an optimum near the starting point.

Default: `2.0`

**minor iterations** *(integer)*: Max number of minor iterations between linearizations of nonlinear constraints ↵

The maximum number of feasible minor iterations allowed between successive linearizations of the nonlinear constraints. A moderate value (e.g., 20 <= i <= 50) prevents excessive effort being expended on early major iterations, but allows later subproblems to be solved to completion. The limit applies to both infeasible and feasible iterations. In some cases, a large number of iterations (say K) might be required to obtain a feasible subproblem. If good starting values are supplied for variables appearing nonlinearly in the constraints, it may be sensible to specify a limit i > K, to allow the first major iteration to terminate at a feasible (and perhaps optimal) subproblem solution. If a good initial subproblem is arbitrarily interrupted by a small limit, the subsequent linearization may be less favorable than the first. In general it is unsafe to specify a value as small as i = 1 or 2. Even when an optimal solution has been reached, a few minor iterations may be needed for the corresponding subproblem to be recognized as optimal. The Iteration limit provides an independent limit on the total minor iterations (across all subproblems). If the constraints are linear, only the Iteration limit applies: the minor iterations value is ignored. Default:

`40`
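As an illustration (the values below are hypothetical, not recommendations), the major and minor iteration limits can be set together in a `minos.opt` option file:

```
* minos.opt -- illustrative values only
major iterations 200
minor iterations 80
```

The model must then be directed to read the option file, e.g. by setting `m.optfile = 1;` before the solve statement.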

**multiple price** *(integer)*: Multiple pricing ↵

Pricing refers to a scan of the current non-basic variables to determine which, if any, are eligible to become (super)basic. The multiple pricing parameter

k controls the number of entering variables to choose: the k best non-basic variables are selected for admission to the set of (super)basic variables. The default k = 1 is best for linear programs, since an optimal solution will have zero superbasic variables. Warning: If k > 1, GAMS/MINOS will use the reduced-gradient method rather than the simplex method, even on purely linear problems. The subsequent iterations do not correspond to the efficient minor iterations carried out by commercial linear programming systems using multiple pricing. (In the latter systems, the classical simplex method is applied to a tableau involving k dense columns of dimension m, and k is therefore limited for storage reasons, typically to the range 2 <= k <= 7.) GAMS/MINOS varies all superbasic variables simultaneously. For linear problems its storage requirements are essentially independent of k. Larger values of k are therefore practical, but in general the iterations and time required when k > 1 are greater than when the simplex method is used (k = 1). On large nonlinear problems it may be important to set k > 1 if the starting point does not contain many superbasic variables. For example, if a problem has 3000 variables and 500 of them are nonlinear, the optimal solution may well have 200 variables superbasic. If the problem is solved in several runs, it may be beneficial to use k = 10 (say) for early runs, until it seems that the number of superbasics has leveled off. If Multiple price k is specified, it is also necessary to specify Superbasic limit s for some s > k. Range: {

`1`

, ..., ∞}Default:

`1`

**optimality tolerance** *(real)*: Reduced gradient optimality check ↵

This is used to judge the size of the reduced gradients d_{j} = g_{j} - pi^{T}a_{j}, where g_{j} is the gradient of the objective function corresponding to the j^{th} variable, a_{j} is the associated column of the constraint matrix (or Jacobian), and pi is the set of dual variables. By construction, the reduced gradients for basic variables are always zero. Optimality will be declared if the reduced gradients for nonbasic variables at their lower or upper bounds satisfy d_{j}/||pi|| >= -r or d_{j}/||pi|| <= r respectively, and if |d_{j}|/||pi|| <= r for superbasic variables. ||pi|| is a measure of the size of the dual variables. It is included to make the tests independent of a scale factor on the objective function. The quantity actually used is defined by

sigma = sum(i, abs(pi(i))), ||pi|| = max{sigma/sqrt(m), 1}

so that only large scale factors are compensated for. As the objective scale decreases, the optimality test tends to become an absolute (instead of a relative) test.

Default:

`1.0e-6`
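For example, a looser optimality test can be requested in `minos.opt` (the value is illustrative):

```
* minos.opt -- illustrative value only
optimality tolerance 1.0e-5
```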

**partial price** *(integer)*: Number of segments in partial pricing strategy ↵

This parameter is recommended for large problems that have significantly more variables than constraints. It reduces the work required for each pricing operation (when a nonbasic variable is selected to become basic or superbasic). When

i = 1, all columns of the constraint matrix (A I) are searched. Otherwise, A and I are partitioned to give i roughly equal segments A_{j}, I_{j} (j = 1 to i). If the previous search was successful on A_{j-1}, I_{j-1}, the next search begins on the segments A_{j}, I_{j}. (All subscripts here are modulo i.) If a reduced gradient is found that is larger than some dynamic tolerance, the variable with the largest such reduced gradient (of appropriate sign) is selected to become superbasic. (Several may be selected if multiple pricing has been specified.) If nothing is found, the search continues on the next segments A_{j+1}, I_{j+1}, and so on. Partial price t (or t/2 or t/3) may be appropriate for time-stage models having t time periods. Range: {

`1`

, ..., ∞}Default:

`10`

**penalty parameter** *(real)*: Used in modified augmented Lagrangian ↵

This specifies the value of

rho in the modified augmented Lagrangian. It is used only when Lagrangian = yes (the default setting). For early runs on a problem with unknown characteristics, the default value should be acceptable. If the problem is known to be highly nonlinear, specify a large value, such as 10 times the default. In general, a positive value of rho may be necessary to ensure convergence, even for convex programs. On the other hand, if rho is too large, the rate of convergence may be unnecessarily slow. If the functions are not highly nonlinear or a good starting point is known, it will often be safe to specify penalty parameter 0.0. When solving a sequence of related problems, initially use a moderate value for rho (such as the default) and a reasonably low Iterations and/or major iterations limit. If successive major iterations appear to be terminating with radically different solutions, the penalty parameter should be increased. (See also the Major damping parameter.) If there appears to be little progress between major iterations, it may help to reduce the penalty parameter. Default:

`automatic`
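For a highly nonlinear model, a sketch of an option file setting an explicit penalty (the value shown is purely illustrative; the appropriate magnitude is problem dependent):

```
* minos.opt -- illustrative value only
penalty parameter 10.0
```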

**print level** *(integer)*: Controls amount of information printed during optimization ↵

This varies the amount of information that will be output during optimization.

Print level 0 sets the default log and summary frequencies to 100. It is then easy to monitor the progress of the run. Print level 1 (or more) sets the default log and summary frequencies to 1, giving a line of output for every minor iteration. Print level 1 also produces basis statistics, i.e., information relating to LU factors of the basis matrix whenever the basis is re-factorized. For problems with nonlinear constraints, certain quantities are printed at the start of each major iteration. The option value is best thought of as a binary number of the form

Print level `JFLXB`

where each letter stands for a digit that is either 0 or 1. The quantities referred to are:

- B: Basis statistics, as mentioned above.
- X: x_{k}, the nonlinear variables involved in the objective function or the constraints.
- L: lambda_{k}, the Lagrange-multiplier estimates for the nonlinear constraints. (Suppressed if Lagrangian = No, since then lambda_{k} = 0.)
- F: f(x_{k}), the values of the nonlinear constraint functions.
- J: J(x_{k}), the Jacobian matrix.

To obtain output of any item, set the corresponding digit to 1, otherwise to 0. For example, Print level 10 sets `X = 1` and the other digits equal to zero; the nonlinear variables will be printed each major iteration. If `J = 1`, the Jacobian matrix will be output column-wise at the start of each major iteration. Column j will be preceded by the value of the corresponding variable x_{j} and a key to indicate whether the variable is basic, superbasic or nonbasic. (Hence if `J = 1`, there is no reason to specify `X = 1` unless the objective contains more nonlinear variables than the Jacobian.) A typical line of output is

`3 1.250000D+01 BS 1 1.00000D+00 4 2.00000D+00`

which would mean that x_{3} is basic at value 12.5, and the third column of the Jacobian has elements of 1.0 and 2.0 in rows 1 and 4. (Note: the GAMS/MINOS row numbers are usually different from the GAMS row numbers; see the Solution option.) Default:

`0`

**radius of convergence** *(real)*: controls final reduction of penalty parameter ↵

This determines when the penalty parameter

rho will be reduced, assuming rho was initially positive. Both the nonlinear constraint violation (see ROWERR below) and the relative change in consecutive Lagrange multiplier estimates must be less than r at the start of a major iteration before rho is reduced or set to zero. A few major iterations later, full completion will be requested if not already set, and the remaining sequence of major iterations should converge quadratically to an optimum. Default:

`0.01`

**row tolerance** *(real)*: Accuracy requirement for nonlinear rows ↵

This specifies how accurately the nonlinear constraints should be satisfied at a solution. The default value is usually small enough, since model data is often specified to about this accuracy. Let ROWERR be the maximum component of the residual vector f(x) + A_{1}y - b_{1}, normalized by the size of the solution. Thus

ROWERR = ||f(x) + A_{1}y - b_{1}||_{inf} / (1 + XNORM)

where XNORM is a measure of the size of the current solution (x, y). The solution is considered to be feasible if ROWERR <= r. If the problem functions involve data that is known to be of low accuracy, a larger Row tolerance may be appropriate. Default:

`1.0e-6`

**scale all variables** *(no value)*: Synonym to scale option 2 ↵

**scale linear variables** *(no value)*: Synonym to scale option 1 ↵

**scale no** *(no value)*: Synonym to scale option 0 ↵

**scale nonlinear variables** *(no value)*: Synonym to scale option 2 ↵

**scale option** *(integer)*: Scaling ↵

Scale Yes sets the default. (Caution: If all variables are nonlinear, Scale Yes unexpectedly does nothing, because there are no linear variables to scale.) Scale No suppresses scaling (equivalent to Scale Option 0). If nonlinear constraints are present, Scale option 1 or 0 should generally be tried first. Scale option 2 gives scales that depend on the initial Jacobian, and should therefore be used only if (a) a good starting point is provided, and (b) the problem is not highly nonlinear. Default:

`1`

value meaning `0`

No scaling

If storage is at a premium, this option should be used. `1`

Scale linear variables

Linear constraints and variables are scaled by an iterative procedure that attempts to make the matrix coefficients as close as possible to 1.0 (see [5]). This will sometimes improve the performance of the solution procedures. Scale linear variables is an equivalent option. `2`

Scale linear + nonlinear variables

All constraints and variables are scaled by the iterative procedure. Also, a certain additional scaling is performed that may be helpful if the right-hand side b or the solution x is large. This takes into account columns of (A I) that are fixed or have positive lower bounds or negative upper bounds. Scale nonlinear variables or Scale all variables are equivalent options.
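For instance, full scaling can be requested in `minos.opt` with either of these equivalent forms:

```
* minos.opt
scale option 2
* or, equivalently:
* scale all variables
```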

**scale print** *(no value)*: Print scaling factors ↵

This causes the row-scales

r(i) and column-scales c(j) to be printed. The scaled matrix coefficients are â_{ij} = a_{ij} c(j)/r(i). The scaled bounds on the variables and slacks are l̂_{j} = l_{j}/c(j) and û_{j} = u_{j}/c(j), where c(j) = r(j - n) if j > n. If a Scale option has not already been specified, Scale print sets the default scaling.

**scale print tolerance** *(real)*: Scale print flag and set tolerance ↵

See

Scale Tolerance. This option also turns on printing of the scale factors. Range: [

`0`

,`1.0`

]Default:

`0.9`

**scale tolerance** *(real)*: Scale tolerance ↵

All forms except

Scale option may specify a tolerance r where 0 < r < 1 (for example: Scale Print Tolerance = 0.99). This affects how many passes might be needed through the constraint matrix. On each pass, the scaling procedure computes the ratio of the largest and smallest nonzero coefficients in each column:

rho_{j} = max_{i} |a_{ij}| / min_{i} |a_{ij}|   (a_{ij} ≠ 0)

If max_{j} rho_{j} is less than r times its previous value, another scaling pass is performed to adjust the row and column scales. Raising r from 0.9 to 0.99 (say) usually increases the number of scaling passes through A. At most 10 passes are made. If a Scale option has not already been specified, Scale tolerance sets the default scaling. Range: [

`0`

,`1.0`

]Default:

`0.9`
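A sketch of an option file combining a tighter scale tolerance with printing of the resulting scale factors (the tolerance value is illustrative):

```
* minos.opt -- illustrative value only
scale tolerance 0.99
scale print
```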

**scale yes** *(no value)*: Synonym to scale option 1 ↵

**solution** *(string)*: Prints MINOS solution ↵

This controls whether or not GAMS/MINOS prints the final solution obtained. There is one line of output for each constraint and variable. The lines are in the same order as in the GAMS solution, but the constraints and variables are labeled with internal GAMS/MINOS numbers rather than GAMS names. (The numbers at the left of each line are GAMS/MINOS column numbers, and those at the right of each line in the rows section are GAMS/MINOS slacks.) The GAMS/MINOS solution may occasionally be useful to interpret certain messages that occur during the optimization, and to determine the final status of certain variables (basic, superbasic or nonbasic).

Default:

`NO`

value meaning `NO`

Turn off printing of solution `YES`

Turn on printing of solution

**start assigned nonlinears** *(string)*: Starting strategy when there is no basis ↵

This option affects the starting strategy when there is no basis (i.e., for the first solve or when the GAMS statement

option bratio = 1 is used to reject an existing basis.) This option applies to all nonlinear variables that have been assigned nondefault initial values and are strictly between their bounds. Free variables at their default value of zero are excluded. Let K denote the number of such assigned nonlinear variables. Default:

`SUPERBASIC`

value meaning `SUPERBASIC`

Default

Specify superbasic for highly nonlinear models, as long as K is not too large (say K < 100) and the initial values are good. `BASIC`

Good for square systems

Specify basic for models that are essentially square (i.e., if there are about as many general constraints as variables). `NONBASIC`

Specify nonbasic if K is large. `ELIGIBLE FOR CRASH`

Specify Eligible for Crash for linear or nearly linear models. The nonlinear variables will be treated in the manner described under the Crash option.

**subspace tolerance** *(real)*: Determines when nonbasics becomes superbasic ↵

This controls the extent to which optimization is confined to the current set of basic and superbasic variables (Phase 4 iterations), before one or more nonbasic variables are added to the superbasic set (Phase 3). The parameter

r must be a real number in the range 0 < r <= 1. When a nonbasic variable x_{j} is made superbasic, the resulting norm of the reduced-gradient vector (for all superbasics) is recorded. Let this be ||Z^{T}g_{0}||. (In fact, the norm will be |d_{j}|, the size of the reduced gradient for the new superbasic variable x_{j}.) Subsequent Phase 4 iterations will continue at least until the norm of the reduced-gradient vector satisfies ||Z^{T}g|| <= r||Z^{T}g_{0}||. (||Z^{T}g|| is the size of the largest reduced-gradient component among the superbasic variables.) A smaller value of r is likely to increase the total number of iterations, but may reduce the number of basis changes. A larger value such as r = 0.9 may sometimes lead to improved overall efficiency, if the number of superbasic variables has to increase substantially between the starting point and an optimal solution. Other convergence tests on the change in the function being minimized and the change in the variables may prolong Phase 4 iterations. This helps to make the overall performance insensitive to larger values of r. Range: [

`0`

,`1.0`

]Default:

`0.5`

**summary frequency** *(integer)*: Controls iteration logging to summary (log file) ↵

A brief form of the iteration log is output to the MINOS summary file (i.e. the GAMS log file). In general, one line is output every

i^{th} minor iteration. In an interactive environment, the output normally appears at the terminal and allows a run to be monitored. If something looks wrong, the run can be manually terminated. The summary frequency controls summary output in the same way as the log frequency controls output to the print file. A value such as Summary Frequency = 10 or 100 is often adequate to determine if the solve is making progress. If Print level = 0, the default value of Summary Frequency is 100. If Print level > 0, the default value of Summary Frequency is 1. If Print level = 0 and the constraints are nonlinear, the Summary Frequency is ignored. Instead, one line is printed at the beginning of each major iteration. Range: {

`1`

, ..., ∞}Default:

`100`

**superbasics limit** *(integer)*: Maximum number of superbasics ↵

This places a limit on the storage allocated for superbasic variables. Ideally, the parameter

i should be set slightly larger than the number of degrees of freedom expected at an optimal solution. For linear problems, an optimum is normally a basic solution with no degrees of freedom. (The number of variables lying strictly between their bounds is no more than m, the number of general constraints.) The default value of i is therefore 1. For nonlinear problems, the number of degrees of freedom is often called the number of independent variables. Normally, i need not be greater than n_{1} + 1, where n_{1} is the number of nonlinear variables. For many problems, i may be considerably smaller than n_{1}. This will save storage if n_{1} is very large. This parameter also sets the Hessian dimension, unless the latter is specified explicitly (and conversely). If neither parameter is specified, GAMS chooses values for both, using certain characteristics of the problem. Range: {

`1`

, ..., ∞}Default:

`1`
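For a model expected to have roughly 200 degrees of freedom at the optimum, the limit might be set slightly higher (the value is illustrative):

```
* minos.opt -- illustrative value only
superbasics limit 250
```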

**unbounded objective value** *(real)*: Determines when a problem is called unbounded ↵

This parameter is intended to detect unboundedness in nonlinear problems. During a linesearch of the form

minimize_{alpha} F(x + alpha*p)

if |F| exceeds the parameter r or if alpha exceeds the unbounded step size, iterations are terminated with the exit message `PROBLEM IS UNBOUNDED (OR BADLY SCALED)`. If singularities are present, unboundedness in F(x) may be manifested by a floating-point overflow (during the evaluation of F(x + alpha*p)), before the test against r can be made. Unboundedness is best avoided by placing finite upper and lower bounds on the variables. See also the Minor damping parameter. Default:

`1.0e20`
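Since unboundedness is best avoided with finite bounds, the usual remedy is applied at the GAMS level rather than through this parameter. A sketch, where the variable `x(i)` and the bound values are hypothetical:

```
* keep x away from singularities and cap its magnitude
x.lo(i) = 0.001;
x.up(i) = 1e4;
```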

**unbounded step size** *(real)*: Determines when a problem is called unbounded ↵

This parameter is intended to detect unboundedness in nonlinear problems. During a linesearch of the form

minimize_{alpha} F(x + alpha*p)

if alpha exceeds the parameter r or if |F| exceeds the unbounded objective value, iterations are terminated with the exit message `PROBLEM IS UNBOUNDED (OR BADLY SCALED)`. If singularities are present, unboundedness in F(x) may be manifested by a floating-point overflow (during the evaluation of F(x + alpha*p)), before the test against r can be made. Unboundedness is best avoided by placing finite upper and lower bounds on the variables. See also the Minor damping parameter. Default:

`1.0e10`

**verify constraint gradients** *(no value)*: Synonym to verify level 2 ↵

**verify gradients** *(no value)*: Synonym to verify level 3 ↵

**verify level** *(integer)*: Controls verification of gradients ↵

This option controls the finite-difference check performed by MINOS on the gradients (first derivatives) computed by GAMS for each nonlinear function. GAMS computes gradients analytically, and the values obtained should normally be taken as correct.

Default:

`0`

value meaning `0`

Cheap test

Only a cheap test is performed, requiring three evaluations of the nonlinear objective and two evaluations of the nonlinear constraints. Verify No is an equivalent option. `1`

Check objective

A more reliable check is made on each component of the objective gradient. Verify objective gradients is an equivalent option. `2`

Check Jacobian

A check is made on each column of the Jacobian matrix associated with the nonlinear constraints. Verify constraint gradients is an equivalent option. `3`

Check objective and Jacobian

A detailed check is made on both the objective and the Jacobian. Verify, Verify gradients, and Verify Yes are equivalent options. `-1`

No check

**verify no** *(no value)*: Synonym to verify level 0 ↵

**verify objective gradients** *(no value)*: Synonym to verify level 1 ↵

**verify yes** *(no value)*: Synonym to verify level 3 ↵

**weight on linear objective** *(real)*: Composite objective weight ↵

This option controls the so-called composite objective technique. If the first solution obtained is infeasible, and if the objective function contains linear terms, and the objective weight

w is positive, this technique is used. While trying to reduce the sum of infeasibilities, the method also attempts to optimize the linear portion of the objective. At each infeasible iteration, the objective function is defined to be

minimize_{x} sigma*w(c^{T}x) + (sum of infeasibilities)

where sigma = 1 for minimization, sigma = -1 for maximization, and c is the linear portion of the objective. If an optimal solution is reached while still infeasible, w is reduced by a factor of 10. This helps to allow for the possibility that the initial w is too large. It also provides a dynamic allowance for the fact that the sum of infeasibilities is tending towards zero. The effect of w is disabled after five such reductions, or if a feasible solution is obtained. This option is intended mainly for linear programs. It is unlikely to be helpful if the objective function is nonlinear. Default:

`0.0`

# Exit Conditions

This section discusses the exit codes printed by MINOS at the end of the optimization run.

**EXIT – Optimal solution found**

This is the message we all hope to see! It is certainly preferable to every other message. Of course it is quite possible that there are model formulation errors, which will (hopefully) lead to unexpected objective values and solutions. The reported optimum may be a local one, and other, much better optima may exist.

**EXIT – The problem is infeasible**

When the constraints are linear, this message can probably be trusted. Feasibility is measured with respect to the upper and lower bounds on the variables (the bounds on the slack variables correspond to the GAMS constraints). The message tells us that among all the points satisfying the general constraints \(Ax+s=0\), there is apparently no point that satisfies the bounds on \(x\) and \(s\). Violations as small as the

`FEASIBILITY TOLERANCE`

are ignored, but at least one component of \(x\) or \(s\) violates a bound by more than the tolerance. Note: Although the objective function is the sum of the infeasibilities, this sum will usually not have been

minimized when MINOS recognizes the situation and exits. There may exist other points that have a significantly lower sum of infeasibilities. When nonlinear constraints are present, infeasibility is

much harder to recognize correctly. Even if a feasible solution exists, the current linearization of the constraints may not contain a feasible point. In an attempt to deal with this situation, MINOS may relax the bounds on the slacks associated with nonlinear rows. This perturbation is allowed a fixed number of times. Normally a feasible point will be obtained relative to the perturbed constraints, and optimization can continue on the subproblem. However, if several consecutive subproblems require such perturbation, the problem is terminated and declared `INFEASIBLE`

. Clearly this is an ad-hoc procedure. Wherever possible, nonlinear constraints should be defined in such a way that feasible points are known to exist when the constraints are linearized.

**EXIT – The problem is unbounded (or badly scaled)**

For linear problems, unboundedness is detected by the simplex method when a nonbasic variable can apparently be increased by an arbitrary amount without causing a basic variable to violate a bound. A simple way to diagnose such a model is to add an appropriate bound on the objective variable.

Very rarely, the scaling of the problem could be so poor that numerical error will give an erroneous indication of unboundedness. Consider using the

`SCALE`

option.

For nonlinear problems, MINOS monitors both the size of the current objective function and the size of the change in the variables at each step. If either of these is very large (as judged by the

`UNBOUNDED`

parameter), the problem is terminated and declared`UNBOUNDED`

. To avoid large function values, it may be necessary to impose bounds on some of the variables in order to keep them away from singularities in the nonlinear functions.

**EXIT – User Interrupt**

This exit code is a result of interrupting the optimization process by hitting

`Ctrl-C`

. Inside the IDE this is accomplished by hitting the`Interrupt`

button. The solver will finish its current iteration and return the current solution. This solution may still be intermediate infeasible or intermediate non-optimal.

**EXIT – Too many iterations**

The iteration limit was hit. Either the

`ITERLIM`

, or in some cases the`ITERATIONS LIMIT`

or`MAJOR ITERATION LIMIT`

was too small to solve the problem. In most cases increasing the GAMS`ITERLIM`

option will resolve the problem. In other cases you will need to create a MINOS option file and set a`MAJOR ITERATION LIMIT`

. The listing file will give more information regarding which limit was hit. The GAMS iteration limit is displayed in the listing file under the section

`SOLVE SUMMARY`

. If the`ITERLIM`

was hit, the message will look like:

ITERATION COUNT, LIMIT 10001 10000
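Raising the GAMS iteration limit is done before the solve statement; the model name `m` and objective `z` below are placeholders:

```
option iterlim = 50000;
solve m minimizing z using nlp;
```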

**EXIT – Resource Interrupt**

The solver hit the

`RESLIM`

resource limit, which is a time limit. It returned the solution at that time, which may still be intermediate infeasible or intermediate non-optimal. The GAMS resource limit is displayed in the listing file under the section

`SOLVE SUMMARY`

. If the GAMS`RESLIM`

was hit, the message will look like:

RESOURCE USAGE, LIMIT 1001.570 1000.000

**EXIT – The objective has not changed for many iterations**

This is an emergency measure for the rare occasions when the solution procedure appears to be

cycling. Suppose that a zero step is taken for several consecutive iterations, with a basis change occurring each time. It is theoretically possible for the set of basic variables to become the same as they were one or more iterations earlier. The same sequence of iterations would then occurad infinitum.

**EXIT – The Superbasics Limit is too small**

The problem appears to be more nonlinear than anticipated. The current set of basic and superbasic variables has been optimized as much as possible, and an increase in the number of superbasics is needed. You can use the option

`SUPERBASICS LIMIT`

to increase the limit. See also option`HESSIAN DIMENSION`

.

**EXIT – Constraint and objective function could not be calculated**

The function or gradient could not be evaluated. For example, this can occur when MINOS attempts to take the log or square root of a negative number, when evaluating the expression \(x^y\) with \(x\le 0\), or when evaluating

exp(x) for large x, where the result is too large to store. The listing file will contain details about where and why evaluation errors occur. To fix this problem, add bounds so that all functions can be properly evaluated. E.g., if you have an expression \(x^y\), add a lower bound `X.LO=0.001`

to your model. In many cases the algorithm can recover from function evaluation errors, for instance if they happen in the line search while evaluating trial points. The message above appears in cases where the algorithm cannot recover and requires a reliable function or gradient evaluation.

**EXIT – Function evaluation error limit**

The limit of allowed function evaluation errors

`DOMLIM`

has been exceeded. Function evaluation errors occur when MINOS attempts to evaluate the objective and/or constraints at points where these functions or their derivatives are not defined or where overflows occur. Some examples are given above. The listing file contains details about these errors.

The quick and dirty way to solve this is to increase the GAMS

`DOMLIM`

setting, but in general it is better to add bounds. E.g., if you have an expression \(x^y\), then add a bound `X.LO=0.001`

to your model.
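Both remedies can be combined; `m`, `z`, and the bound value below are placeholders:

```
x.lo = 0.001;          * keep x**y well defined
option domlim = 100;   * tolerate more evaluation errors before aborting
solve m minimizing z using nlp;
```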

**EXIT – The current point can not be improved**

The line search failed. This can happen if the model is very nonlinear or if the functions are nonsmooth (using a

`DNLP`

model type).

If the model is non-smooth, consider a smooth approximation. It may be useful to check the scaling of the model and think more carefully about choosing a good starting point. Sometimes it can help to restart the model with full scaling turned on:

```
option nlp=minos;
solve m minimizing z using nlp;  // this one gives "current point cannot be improved"
file fopt /minos.opt/;           // write option file
putclose fopt "scale all variables"/;
m.optfile=1;
solve m minimizing z using nlp;  // solve with "scale all variables"
```

**EXIT – Numerical error in trying to satisfy the linear constraints (or the linearized constraints)**

**The basis is very ill-conditioned**.

This is often a scaling problem. Try the full scaling option

scale all variablesor, better yet, rescale the model in GAMS via the`.scale`

suffix or by choosing more appropriate units for variables and RHS values.

**EXIT – Not enough storage to solve the model**

The amount of workspace allocated for MINOS to solve the model is insufficient. Consider increasing the GAMS option

`workfactor`

to increase the workspace allocated for MINOS. The listing file and log file (screen) contain information about the current workspace allocation. Increasing the workfactor by 50% is a reasonable first strategy.
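The workfactor is a model attribute set before the solve statement; `m` and `z` below are placeholders:

```
m.workfactor = 1.5;    * 50% more workspace than the default estimate
solve m minimizing z using nlp;
```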

**EXIT – Systems error**

This is a catch-all return for other serious problems. Check the listing file for more messages. If needed, rerun the model with

`OPTION SYSOUT=ON;`

.