### Table of Contents

- Usage
- List of Options
- Couenne options
- Bonmin Branch-and-bound options
- Bonmin NLP interface
- Bonmin NLP solution robustness
- Bonmin Nonconvex problems
- Bonmin Outer Approximation cuts generation
- Bonmin Output and Loglevel
- Bonmin Primal Heuristics
- Bonmin Strong branching setup
- Ipopt Barrier Parameter Update
- Ipopt Convergence
- Ipopt Hessian Approximation
- Ipopt Initialization
- Ipopt Line Search
- Ipopt Linear Solver
- Ipopt MA27 Linear Solver
- Ipopt MA28 Linear Solver
- Ipopt MA57 Linear Solver
- Ipopt MA77 Linear Solver
- Ipopt MA86 Linear Solver
- Ipopt MA97 Linear Solver
- Ipopt Mumps Linear Solver
- Ipopt NLP
- Ipopt NLP Scaling
- Ipopt Output
- Ipopt Pardiso Linear Solver
- Ipopt Restoration Phase
- Ipopt Step Calculation
- Ipopt Warm Start

- Detailed Options Description

COIN-OR Couenne (**C**onvex **O**ver and **U**nder **En**velopes for **N**onlinear **E**stimation) is an open-source solver for nonconvex mixed-integer nonlinear programs (MINLPs). The code was originally developed in a cooperation between Carnegie Mellon University and IBM Research. The COIN-OR project leader for Couenne is Pietro Belotti, now with FICO, Ltd.

Couenne solves convex and nonconvex MINLPs by an LP-based spatial branch-and-bound algorithm. The implementation extends BONMIN with routines to compute valid linear outer approximations for nonconvex problems and with methods for bound tightening and branching on nonlinear variables. Couenne uses IPOPT to solve NLP subproblems.

For more information on the algorithm we refer to [1, 3] and the Couenne web site. Most of the Couenne documentation in this section is taken from the Couenne manual [2].

Couenne can handle mixed-integer nonlinear programming models whose functions may be nonconvex but should be twice continuously differentiable. Further, an algebraic description of the model needs to be available, which makes the use of some GAMS functions and user-specified external/extrinsic functions impossible. The Couenne link in GAMS supports continuous, binary, and integer variables, but not special ordered sets, semicontinuous variables, or semiinteger variables.

# Usage

The following statement can be used inside your GAMS program to specify using Couenne:

```
Option MINLP = COUENNE; { or LP, RMIP, MIP, DNLP, NLP, RMINLP, QCP, RMIQCP, MIQCP, CNS }
```

The above statement should appear before the Solve statement. If Couenne was specified as the default solver during GAMS installation, the above statement is not necessary.
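For illustration, a minimal complete model that solves a small nonconvex MINLP with Couenne might look like the following sketch (the model, variable, and equation names are hypothetical):

```gams
Variable          z;
Positive Variable x;
Integer Variable  y;
x.up = 10;
y.up = 10;

Equation obj, con;
obj.. z =e= sqr(x - 2) + y;
* the bilinear term x*y makes the model nonconvex
con.. x*y =g= 1;

Model m / all /;
Option MINLP = COUENNE;
Solve m using MINLP minimizing z;
```

The `Option` statement appears before the `Solve` statement, as required.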

## Specification of Options

A Couenne option file contains IPOPT, BONMIN, and Couenne options. For clarity, all BONMIN options should be preceded with the prefix `bonmin.` and all Couenne options with the prefix `couenne.`. All IPOPT and many BONMIN options are available in Couenne.

The scheme to name option files is the same as for all other GAMS solvers. The format of the option file is the same as for IPOPT.

GAMS/Couenne currently understands the following GAMS parameters: reslim (time limit), nodlim (node limit), cutoff, optca (absolute gap tolerance), and optcr (relative gap tolerance). Further, the option threads can be used to control the number of threads used in the linear algebra routines of IPOPT.
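As an example, a hypothetical option file `couenne.opt` mixing options from all three layers might look like this (the values are illustrative only, not recommendations):

```
tol 1e-7
bonmin.time_limit 100
couenne.feasibility_bt yes
couenne.convexification_cuts 2
```

The option file is read when the model's `optfile` attribute is set, e.g. `m.optfile = 1;` before the `Solve` statement.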

# List of Options

In the following we give a list of all options available for Couenne, including those for the underlying solvers Ipopt and Bonmin.

## Couenne options

Option | Description | Default |
---|---|---|

2mir_cuts | Frequency k (in terms of nodes) for generating 2-MIR cuts in branch-and-cut. | `0` |

aggressive_fbbt | Aggressive feasibility-based bound tightening (to use with NLP points) | `yes` |

art_cutoff | Artificial cutoff | `maxdouble` |

art_lower | Artificial lower bound | `mindouble` |

boundtightening_print_level | Output level for bound tightening code in Couenne | `0` |

branching_object | type of branching object for variable selection | `var_obj` |

branching_print_level | Output level for branching code in Couenne | `0` |

branch_conv_cuts | Apply convexification cuts before branching (for now only within strong branching) | `yes` |

branch_fbbt | Apply bound tightening before branching | `yes` |

branch_lp_clamp | Defines safe interval percentage for using LP point as a branching point. | `0.2` |

branch_lp_clamp_cube | Defines safe interval percentage [0,0.5] for using LP point as a branching point. | `0.2` |

branch_lp_clamp_div | Defines safe interval percentage [0,0.5] for using LP point as a branching point. | `0.2` |

branch_lp_clamp_exp | Defines safe interval percentage [0,0.5] for using LP point as a branching point. | `0.2` |

branch_lp_clamp_log | Defines safe interval percentage [0,0.5] for using LP point as a branching point. | `0.2` |

branch_lp_clamp_negpow | Defines safe interval percentage [0,0.5] for using LP point as a branching point. | `0.2` |

branch_lp_clamp_pow | Defines safe interval percentage [0,0.5] for using LP point as a branching point. | `0.2` |

branch_lp_clamp_prod | Defines safe interval percentage [0,0.5] for using LP point as a branching point. | `0.2` |

branch_lp_clamp_sqr | Defines safe interval percentage [0,0.5] for using LP point as a branching point. | `0.2` |

branch_lp_clamp_trig | Defines safe interval percentage [0,0.5] for using LP point as a branching point. | `0.2` |

branch_midpoint_alpha | Defines convex combination of mid point and current LP point: b = alpha x_lp + (1-alpha) (lb+ub)/2. | `0.25` |

branch_pt_select | Chooses branching point selection strategy | `mid-point` |

branch_pt_select_cube | Chooses branching point selection strategy for operator cube. | `common` |

branch_pt_select_div | Chooses branching point selection strategy for operator div. | `common` |

branch_pt_select_exp | Chooses branching point selection strategy for operator exp. | `common` |

branch_pt_select_log | Chooses branching point selection strategy for operator log. | `common` |

branch_pt_select_negpow | Chooses branching point selection strategy for operator negpow. | `common` |

branch_pt_select_pow | Chooses branching point selection strategy for operator pow. | `common` |

branch_pt_select_prod | Chooses branching point selection strategy for operator prod. | `common` |

branch_pt_select_sqr | Chooses branching point selection strategy for operator sqr. | `common` |

branch_pt_select_trig | Chooses branching point selection strategy for operator trig. | `common` |

check_lp | Check all LPs through an independent call to OsiClpSolverInterface::initialSolve() | `no` |

clique_cuts | Frequency k (in terms of nodes) for generating clique cuts in branch-and-cut. | `0` |

cont_var_priority | Priority of continuous variable branching | `99` |

convexification_cuts | Specify the frequency (in terms of nodes) at which couenne ecp cuts are generated. | `1` |

convexification_points | Specify the number of points at which to convexify when convexification type is uniform-grid or around-current-point. | `4` |

convexification_type | Determines at which points the linear over/under-estimators are generated | `current-point-only` |

convexifying_print_level | Output level for convexifying code in Couenne | `0` |

cover_cuts | Frequency k (in terms of nodes) for generating cover cuts in branch-and-cut. | `0` |

crossconv_cuts | The frequency (in terms of nodes) at which Couenne cross-aux convexification cuts are generated. | `0` |

delete_redundant | Eliminate redundant variables, which appear in the problem as x_k = x_h | `yes` |

disjcuts_print_level | Output level for disjunctive cuts in Couenne | `0` |

disj_active_cols | Only include violated variable bounds in the Cut Generating LP (CGLP). | `no` |

disj_active_rows | Only include violated linear inequalities in the CGLP. | `no` |

disj_cumulative | Add previous disjunctive cut to current CGLP. | `no` |

disj_depth_level | Depth of the B&B tree at which to start decreasing the number of objects that generate disjunctions. | `5` |

disj_depth_stop | Depth of the B&B tree where separation of disjunctive cuts is stopped. | `20` |

disj_init_number | Maximum number of disjunctions to consider at each iteration. | `10` |

disj_init_perc | The maximum fraction of all disjunctions currently violated by the problem to consider for generating disjunctions. | `0.5` |

display_stats | display statistics at the end of the run | `no` |

enable_lp_implied_bounds | Enable OsiSolverInterface::tightenBounds () – warning: it has caused some trouble to Couenne | `no` |

enable_sos | Use Special Ordered Sets (SOS) as indicated in the MINLP model | `no` |

estimate_select | How the min/max estimates of the subproblems' bounds are used in strong branching | `normal` |

feasibility_bt | Feasibility-based (cheap) bound tightening (FBBT) | `yes` |

feas_pump_convcuts | Separate MILP-feasible, MINLP-infeasible solution during or after MILP solver. | `none` |

feas_pump_fademult | decrease/increase rate of multipliers | `1` |

feas_pump_heuristic | Apply the nonconvex Feasibility Pump | `no` |

feas_pump_iter | Number of iterations in the main Feasibility Pump loop | `10` |

feas_pump_level | Specify the logarithm of the number of feasibility pumps to perform on average for each level of given depth of the tree. | `3` |

feas_pump_milpmethod | How should the integral solution be constructed? | `0` |

feas_pump_mult_dist_milp | Weight of the distance in the distance function of the milp problem | `0` |

feas_pump_mult_dist_nlp | Weight of the distance in the distance function of the nlp problem | `0` |

feas_pump_mult_hess_milp | Weight of the Hessian in the distance function of the milp problem | `0` |

feas_pump_mult_hess_nlp | Weight of the Hessian in the distance function of the nlp problem | `0` |

feas_pump_mult_objf_milp | Weight of the original objective function in the distance function of the milp problem | `0` |

feas_pump_mult_objf_nlp | Weight of the original objective function in the distance function of the nlp problem | `0` |

feas_pump_nseprounds | Number of rounds of convexification cuts. | `4` |

feas_pump_poolcomp | Priority field to compare solutions in FP pool | `4` |

feas_pump_tabumgt | Retrieval of MILP solutions when the one returned is unsatisfactory | `pool` |

feas_pump_usescip | Should SCIP be used to solve the MILPs? | `yes` |

feas_pump_vardist | Distance computed on integer-only or on both types of variables, in different flavors. | `integer` |

feas_tolerance | Tolerance for constraints/auxiliary variables | `1e-05` |

fixpoint_bt | The frequency (in terms of nodes) at which Fix Point Bound Tightening is performed. | `0` |

fixpoint_bt_model | Choose whether to add an extended fixpoint LP model or a more compact one. | `compact` |

flow_covers_cuts | Frequency k (in terms of nodes) for generating flow cover cuts in branch-and-cut. | `0` |

Gomory_cuts | Frequency k (in terms of nodes) for generating Gomory cuts in branch-and-cut. | `0` |

int_var_priority | Priority of integer variable branching | `98` |

iterative_rounding_aggressiveness | Aggressiveness of the Iterative Rounding heuristic | `1` |

iterative_rounding_base_lbrhs | Base rhs of the local branching constraint for Iterative Rounding | `15` |

iterative_rounding_heuristic | Do we use the Iterative Rounding heuristic | `no` |

iterative_rounding_num_fir_points | Max number of points rounded at the beginning of Iterative Rounding | `5` |

iterative_rounding_omega | Omega parameter of the Iterative Rounding heuristic | `0.2` |

iterative_rounding_time | Specify the maximum time allowed for the Iterative Rounding heuristic | `-1` |

iterative_rounding_time_firstcall | Specify the maximum time allowed for the Iterative Rounding heuristic when no feasible solution is known | `-1` |

lift_and_project_cuts | Frequency k (in terms of nodes) for generating lift-and-project cuts in branch-and-cut. | `0` |

local_branching_heuristic | Apply local branching heuristic | `no` |

local_optimization_heuristic | Search for local solutions of MINLPs | `yes` |

log_num_abt_per_level | Specify the frequency (in terms of nodes) for aggressive bound tightening. | `2` |

log_num_local_optimization_per_level | Specify the logarithm of the number of local optimizations to perform on average for each level of given depth of the tree. | `2` |

log_num_obbt_per_level | Specify the frequency (in terms of nodes) for optimality-based bound tightening. | `1` |

lp_solver | Linear Programming solver for the linearization | `clp` |

max_fbbt_iter | Number of FBBT iterations before stopping even with tightened bounds. | `3` |

minlp_disj_cuts | The frequency (in terms of nodes) at which Couenne disjunctive cuts are generated. | `0` |

mir_cuts | Frequency k (in terms of nodes) for generating MIR cuts in branch-and-cut. | `0` |

multilinear_separation | Separation for multilinear terms | `tight` |

nlpheur_print_level | Output level for NLP heuristic in Couenne | `0` |

optimality_bt | Optimality-based (expensive) bound tightening (OBBT) | `yes` |

orbital_branching | detect symmetries and apply orbital branching | `no` |

orbital_branching_depth | Maximum depth at which the symmetry group is computed | `10` |

output_level | Output level | `4` |

probing_cuts | Frequency k (in terms of nodes) for generating probing cuts in branch-and-cut. | `0` |

problem_print_level | Output level for problem manipulation code in Couenne | `2` |

pseudocost_mult | Multipliers of pseudocosts for estimating and updating the bound estimate | `interval_br_rev` |

pseudocost_mult_lp | Use distance between LP points to update multipliers of pseudocosts after simulating branching | `no` |

quadrilinear_decomp | type of decomposition for quadrilinear terms (see work by Cafieri, Lee, Liberti) | `rAI` |

redcost_bt | Reduced cost bound tightening | `yes` |

reduce_split_cuts | Frequency k (in terms of nodes) for generating reduce-and-split cuts in branch-and-cut. | `0` |

red_cost_branching | Apply Reduced Cost Branching (instead of the Violation Transfer) – MUST have vt_obj enabled | `no` |

reformulate_print_level | Output level for reformulating problems in Couenne | `0` |

sdp_cuts | The frequency (in terms of nodes) at which Couenne SDP cuts are generated. | `0` |

sdp_cuts_fillmissing | Create fictitious auxiliary variables to fill non-fully dense minors. Can make a difference when Q has at least one zero term. | `no` |

sdp_cuts_neg_ev | Only use negative eigenvalues to create sdp cuts. | `yes` |

sdp_cuts_num_ev | The number of eigenvectors of matrix X to be used to create sdp cuts. | `-1` |

sdp_cuts_sparsify | Make cuts sparse by greedily reducing X one column at a time before extracting eigenvectors. | `no` |

solvetrace | Name of file for writing solving progress information. | |

solvetracenodefreq | Frequency in number of nodes for writing solving progress information. | `100` |

solvetracetimefreq | Frequency in seconds for writing solving progress information. | `5` |

trust_strong | Fathom strong branching LPs when their bound is above the cutoff | `yes` |

twoimpl_depth_level | Depth of the B&B tree at which to start decreasing the chance of running this algorithm. | `5` |

twoimpl_depth_stop | Depth of the B&B tree where separation is stopped. | `20` |

two_implied_bt | The frequency (in terms of nodes) at which Couenne two-implied bounds are tightened. | `0` |

two_implied_max_trials | The number of iterations at each call to the cut generator. | `2` |

use_auxcons | Use constraints-defined auxiliaries, i.e. auxiliaries w = f(x) defined by original constraints f(x) - w = 0 | `yes` |

use_quadratic | Use quadratic expressions and related exprQuad class | `no` |

use_semiaux | Use semiauxiliaries, i.e. auxiliaries defined as w ≥ f(x) rather than w := f(x) | `yes` |

violated_cuts_only | Yes if only violated convexification cuts should be added | `yes` |

## Bonmin Branch-and-bound options

Option | Description | Default |
---|---|---|

allowable_fraction_gap | Specify the value of relative gap under which the algorithm stops. | `0.1` |

allowable_gap | Specify the value of absolute gap under which the algorithm stops. | `0` |

cutoff | Specify cutoff value. | `1e+100` |

cutoff_decr | Specify cutoff decrement. | `1e-05` |

enable_dynamic_nlp | Enable dynamic linear and quadratic rows addition in nlp | `no` |

integer_tolerance | Set integer tolerance. | `1e-06` |

iteration_limit | Set the cumulative maximum number of iterations in the algorithm used to process the nodes' continuous relaxations in the branch-and-bound. | `maxint` |

nlp_failure_behavior | Set the behavior when an NLP or a series of NLPs are unsolved by Ipopt (we call unsolved an NLP for which Ipopt is not able to guarantee optimality within the specified tolerances). | `stop` |

node_comparison | Choose the node selection strategy. | `best-bound` |

node_limit | Set the maximum number of nodes explored in the branch-and-bound search. | `maxint` |

number_before_trust | Set the number of branches on a variable before its pseudo costs are to be believed in dynamic strong branching. | `8` |

number_strong_branch | Choose the maximum number of variables considered for strong branching. | `20` |

num_cut_passes | Set the maximum number of cut passes at regular nodes of the branch-and-cut. | `1` |

num_cut_passes_at_root | Set the maximum number of cut passes at the root node of the branch-and-cut. | `20` |

random_generator_seed | Set seed for random number generator (a value of -1 sets seeds to time since Epoch). | `0` |

read_solution_file | Read a file with the optimal solution to test if the algorithm cuts it. | `no` |

solution_limit | Abort after that many integer feasible solutions have been found by the algorithm | `maxint` |

time_limit | Set the global maximum computation time (in secs) for the algorithm. | `1000` |

tree_search_strategy | Pick a strategy for traversing the tree | `probed-dive` |

variable_selection | Chooses variable selection strategy | `strong-branching` |

## Bonmin NLP interface

Option | Description | Default |
---|---|---|

warm_start | Select the warm start method | `none` |

## Bonmin NLP solution robustness

Option | Description | Default |
---|---|---|

max_consecutive_failures | (temporarily removed) Number \(n\) of consecutive unsolved problems before aborting a branch of the tree. | `10` |

max_random_point_radius | Set max value r for coordinate of a random point. | `100000` |

num_iterations_suspect | Number of iterations over which a node is considered 'suspect' (for debugging purposes only, see detailed documentation). | `-1` |

num_retry_unsolved_random_point | Number \(k\) of times that the algorithm will try to resolve an unsolved NLP with a random starting point (we call unsolved an NLP for which Ipopt is not able to guarantee optimality within the specified tolerances). | `0` |

random_point_perturbation_interval | Amount by which the starting point is perturbed when choosing to pick a random point by perturbing the starting point | `1` |

random_point_type | Method to choose a random starting point | `Jon` |

resolve_on_small_infeasibility | If a locally infeasible problem is infeasible by less than this, resolve it with initial starting point. | `0` |

## Bonmin Nonconvex problems

Option | Description | Default |
---|---|---|

coeff_var_threshold | Coefficient of variation threshold (for dynamic definition of cutoff_decr). | `0.1` |

dynamic_def_cutoff_decr | Do you want to define the parameter cutoff_decr dynamically? | `no` |

first_perc_for_cutoff_decr | The percentage used to compute cutoff_decr dynamically when the coefficient of variation is smaller than the threshold. | `-0.02` |

max_consecutive_infeasible | Number of consecutive infeasible subproblems before aborting a branch. | `0` |

num_resolve_at_infeasibles | Number \(k\) of tries to resolve an infeasible node (other than the root) of the tree with different starting points. | `0` |

num_resolve_at_node | Number \(k\) of tries to resolve a node (other than the root) of the tree with different starting points. | `0` |

num_resolve_at_root | Number \(k\) of tries to resolve the root node with different starting points. | `0` |

second_perc_for_cutoff_decr | The percentage used to compute cutoff_decr dynamically when the coefficient of variation is greater than the threshold. | `-0.05` |

## Bonmin Outer Approximation cuts generation

Option | Description | Default |
---|---|---|

add_only_violated_oa | Do we add all OA cuts or only the ones violated by current point? | `no` |

oa_cuts_scope | Specify if OA cuts added are to be set globally or locally valid | `global` |

oa_rhs_relax | Value by which to relax OA cut | `1e-08` |

tiny_element | Value for tiny element in OA cut | `1e-08` |

very_tiny_element | Value for very tiny element in OA cut | `1e-17` |

## Bonmin Output and Loglevel

Option | Description | Default |
---|---|---|

bb_log_interval | Interval at which node level output is printed. | `100` |

bb_log_level | specify main branch-and-bound log level. | `1` |

lp_log_level | specify LP log level. | `0` |

nlp_log_at_root | specify a different log level for root relaxation. | `0` |

nlp_log_level | specify NLP solver interface log level (independent from ipopt print_level). | `1` |

oa_cuts_log_level | level of log when generating OA cuts. | `0` |

## Bonmin Primal Heuristics

Option | Description | Default |
---|---|---|

algorithm | Choice of the algorithm. | `B-BB` |

feasibility_pump_objective_norm | Norm of feasibility pump objective function | `1` |

heuristic_dive_fractional | if yes runs the Dive Fractional heuristic | `no` |

heuristic_dive_MIP_fractional | if yes runs the Dive MIP Fractional heuristic | `no` |

heuristic_dive_MIP_vectorLength | if yes runs the Dive MIP VectorLength heuristic | `no` |

heuristic_dive_vectorLength | if yes runs the Dive VectorLength heuristic | `no` |

heuristic_feasibility_pump | whether the heuristic feasibility pump should be used | `no` |

heuristic_RINS | if yes runs the RINS heuristic | `no` |

milp_solver | Choose the subsolver to solve MILP sub-problems in OA decompositions. | `Cbc_D` |

milp_strategy | Choose a strategy for MILPs. | `find_good_sol` |

pump_for_minlp | whether to run the feasibility pump heuristic for MINLP | `no` |

## Bonmin Strong branching setup

Option | Description | Default |
---|---|---|

candidate_sort_criterion | Choice of the criterion to choose candidates in strong-branching | `best-ps-cost` |

maxmin_crit_have_sol | Weight towards the minimum of lower and upper branching estimates when a solution has been found. | `0.1` |

maxmin_crit_no_sol | Weight towards the minimum of lower and upper branching estimates when no solution has been found yet. | `0.7` |

min_number_strong_branch | Sets minimum number of variables for strong branching (overriding trust) | `0` |

number_before_trust_list | Set the number of branches on a variable before its pseudo costs are to be believed during setup of strong branching candidate list. | `0` |

number_look_ahead | Sets limit of look-ahead strong-branching trials | `0` |

number_strong_branch_root | Maximum number of variables considered for strong branching in root node. | `maxint` |

setup_pseudo_frac | Proportion of strong branching list that has to be taken from most-integer-infeasible list. | `0.5` |

trust_strong_branching_for_pseudo_cost | Whether or not to trust strong branching results for updating pseudo costs. | `yes` |

## Ipopt Barrier Parameter Update

Option | Description | Default |
---|---|---|

adaptive_mu_globalization | Globalization strategy for the adaptive mu selection mode. | `obj-constr-filter` |

adaptive_mu_kkterror_red_fact | Sufficient decrease factor for 'kkt-error' globalization strategy. | `0.9999` |

adaptive_mu_kkterror_red_iters | Maximum number of iterations requiring sufficient progress. | `4` |

adaptive_mu_kkt_norm_type | Norm used for the KKT error in the adaptive mu globalization strategies. | `2-norm-squared` |

adaptive_mu_monotone_init_factor | Determines the initial value of the barrier parameter when switching to the monotone mode. | `0.8` |

adaptive_mu_restore_previous_iterate | Indicates if the previous iterate should be restored if the monotone mode is entered. | `no` |

barrier_tol_factor | Factor for mu in barrier stop test. | `10` |

filter_margin_fact | Factor determining width of margin for obj-constr-filter adaptive globalization strategy. | `1e-05` |

filter_max_margin | Maximum width of margin in obj-constr-filter adaptive globalization strategy. | `1` |

fixed_mu_oracle | Oracle for the barrier parameter when switching to fixed mode. | `average_compl` |

mu_allow_fast_monotone_decrease | Allow skipping of barrier problem if barrier test is already met. | `yes` |

mu_init | Initial value for the barrier parameter. | `0.1` |

mu_linear_decrease_factor | Determines linear decrease rate of barrier parameter. | `0.2` |

mu_max | Maximum value for barrier parameter. | `100000` |

mu_max_fact | Factor for initialization of maximum value for barrier parameter. | `1000` |

mu_min | Minimum value for barrier parameter. | `1e-11` |

mu_oracle | Oracle for a new barrier parameter in the adaptive strategy. | `quality-function` |

mu_strategy | Update strategy for barrier parameter. | `monotone` |

mu_superlinear_decrease_power | Determines superlinear decrease rate of barrier parameter. | `1.5` |

quality_function_balancing_term | The balancing term included in the quality function for centrality. | `none` |

quality_function_centrality | The penalty term for centrality that is included in quality function. | `none` |

quality_function_max_section_steps | Maximum number of search steps during direct search procedure determining the optimal centering parameter. | `8` |

quality_function_norm_type | Norm used for components of the quality function. | `2-norm-squared` |

quality_function_section_qf_tol | Tolerance for the golden section search procedure determining the optimal centering parameter (in the function value space). | `0` |

quality_function_section_sigma_tol | Tolerance for the section search procedure determining the optimal centering parameter (in sigma space). | `0.01` |

sigma_max | Maximum value of the centering parameter. | `100` |

sigma_min | Minimum value of the centering parameter. | `1e-06` |

tau_min | Lower bound on fraction-to-the-boundary parameter tau. | `0.99` |

## Ipopt Convergence

Option | Description | Default |
---|---|---|

acceptable_compl_inf_tol | 'Acceptance' threshold for the complementarity conditions. | `0.01` |

acceptable_constr_viol_tol | 'Acceptance' threshold for the constraint violation. | `0.01` |

acceptable_dual_inf_tol | 'Acceptance' threshold for the dual infeasibility. | `1e+10` |

acceptable_iter | Number of 'acceptable' iterates before triggering termination. | `15` |

acceptable_obj_change_tol | 'Acceptance' stopping criterion based on objective function change. | `1e+20` |

acceptable_tol | 'Acceptable' convergence tolerance (relative). | `1e-06` |

compl_inf_tol | Desired threshold for the complementarity conditions. | `0.0001` |

constr_viol_tol | Desired threshold for the constraint violation. | `0.0001` |

diverging_iterates_tol | Threshold for maximal value of primal iterates. | `1e+20` |

dual_inf_tol | Desired threshold for the dual infeasibility. | `1` |

max_cpu_time | Maximum number of CPU seconds. | `1e+06` |

max_iter | Maximum number of iterations. | `3000` |

mu_target | Desired value of complementarity. | `0` |

s_max | Scaling threshold for the NLP error. | `100` |

tol | Desired convergence tolerance (relative). | `1e-08` |

## Ipopt Hessian Approximation

Option | Description | Default |
---|---|---|

hessian_approximation | Indicates what Hessian information is to be used. | `exact` |

hessian_approximation_space | Indicates in which subspace the Hessian information is to be approximated. | `nonlinear-variables` |

limited_memory_aug_solver | Strategy for solving the augmented system for low-rank Hessian. | `sherman-morrison` |

limited_memory_initialization | Initialization strategy for the limited memory quasi-Newton approximation. | `scalar1` |

limited_memory_init_val | Value for B0 in low-rank update. | `1` |

limited_memory_init_val_max | Upper bound on value for B0 in low-rank update. | `1e+08` |

limited_memory_init_val_min | Lower bound on value for B0 in low-rank update. | `1e-08` |

limited_memory_max_history | Maximum size of the history for the limited quasi-Newton Hessian approximation. | `6` |

limited_memory_max_skipping | Threshold for successive iterations where update is skipped. | `2` |

limited_memory_special_for_resto | Determines if the quasi-Newton updates should be special during the restoration phase. | `no` |

limited_memory_update_type | Quasi-Newton update formula for the limited memory approximation. | `bfgs` |

## Ipopt Initialization

Option | Description | Default |
---|---|---|

bound_frac | Desired minimum relative distance from the initial point to bound. | `0.01` |

bound_mult_init_method | Initialization method for bound multipliers | `constant` |

bound_mult_init_val | Initial value for the bound multipliers. | `1` |

bound_push | Desired minimum absolute distance from the initial point to bound. | `0.01` |

constr_mult_init_max | Maximum allowed least-square guess of constraint multipliers. | `1000` |

least_square_init_duals | Least square initialization of all dual variables | `no` |

least_square_init_primal | Least square initialization of the primal variables | `no` |

slack_bound_frac | Desired minimum relative distance from the initial slack to bound. | `0.01` |

slack_bound_push | Desired minimum absolute distance from the initial slack to bound. | `0.01` |

## Ipopt Line Search

Option | Description | Default |
---|---|---|

accept_after_max_steps | Accept a trial point after maximal this number of steps. | `-1` |

accept_every_trial_step | Always accept the first trial step. | `no` |

alpha_for_y | Method to determine the step size for constraint multipliers. | `primal` |

alpha_for_y_tol | Tolerance for switching to full equality multiplier steps. | `10` |

alpha_min_frac | Safety factor for the minimal step size (before switching to restoration phase). | `0.05` |

alpha_red_factor | Fractional reduction of the trial step size in the backtracking line search. | `0.5` |

constraint_violation_norm_type | Norm to be used for the constraint violation in the line search. | `1-norm` |

corrector_compl_avrg_red_fact | Complementarity tolerance factor for accepting corrector step (unsupported!). | `1` |

corrector_type | The type of corrector steps that should be taken (unsupported!). | `none` |

delta | Multiplier for constraint violation in the switching rule. | `1` |

eta_phi | Relaxation factor in the Armijo condition. | `1e-08` |

filter_reset_trigger | Number of iterations that trigger the filter reset. | `5` |

gamma_phi | Relaxation factor in the filter margin for the barrier function. | `1e-08` |

gamma_theta | Relaxation factor in the filter margin for the constraint violation. | `1e-05` |

kappa_sigma | Factor limiting the deviation of dual variables from primal estimates. | `1e+10` |

kappa_soc | Factor in the sufficient reduction rule for second order correction. | `0.99` |

line_search_method | Globalization method used in backtracking line search | `filter` |

max_filter_resets | Maximal allowed number of filter resets | `5` |

max_soc | Maximum number of second order correction trial steps at each iteration. | `4` |

nu_inc | Increment of the penalty parameter. | `0.0001` |

nu_init | Initial value of the penalty parameter. | `1e-06` |

obj_max_inc | Determines the upper bound on the acceptable increase of barrier objective function. | `5` |

recalc_y | Tells the algorithm to recalculate the equality and inequality multipliers as least square estimates. | `no` |

recalc_y_feas_tol | Feasibility threshold for recomputation of multipliers. | `1e-06` |

rho | Value in penalty parameter update formula. | `0.1` |

skip_corr_if_neg_curv | Skip the corrector step in negative curvature iteration (unsupported!). | `yes` |

skip_corr_in_monotone_mode | Skip the corrector step during monotone barrier parameter mode (unsupported!). | `yes` |

slack_move | Correction size for very small slacks. | `1.81899e-12` |

soc_method | Ways to apply second order correction | `0` |

s_phi | Exponent for linear barrier function model in the switching rule. | `2.3` |

s_theta | Exponent for current constraint violation in the switching rule. | `1.1` |

theta_max_fact | Determines upper bound for constraint violation in the filter. | `10000` |

theta_min_fact | Determines constraint violation threshold in the switching rule. | `0.0001` |

tiny_step_tol | Tolerance for detecting numerically insignificant steps. | `2.22045e-15` |

tiny_step_y_tol | Tolerance for quitting because of numerically insignificant steps. | `0.01` |

watchdog_shortened_iter_trigger | Number of shortened iterations that trigger the watchdog. | `10` |

watchdog_trial_iter_max | Maximum number of watchdog iterations. | `3` |

## Ipopt Linear Solver

Option | Description | Default |
---|---|---|

linear_scaling_on_demand | Flag indicating that linear scaling is only done if it seems required. | `yes` |

linear_solver | Linear solver used for step computations. | `ma27` |

linear_system_scaling | Method for scaling the linear system. | `mc19` |

## Ipopt MA27 Linear Solver

Option | Description | Default |
---|---|---|

ma27_ignore_singularity | Enables MA27's ability to solve a linear system even if the matrix is singular. | `no` |

ma27_la_init_factor | Real workspace memory for MA27. | `5` |

ma27_liw_init_factor | Integer workspace memory for MA27. | `5` |

ma27_meminc_factor | Increment factor for workspace size for MA27. | `2` |

ma27_pivtol | Pivot tolerance for the linear solver MA27. | `1e-08` |

ma27_pivtolmax | Maximum pivot tolerance for the linear solver MA27. | `0.0001` |

ma27_skip_inertia_check | Always pretend inertia is correct. | `no` |

## Ipopt MA28 Linear Solver

Option | Description | Default |
---|---|---|

ma28_pivtol | Pivot tolerance for linear solver MA28. | `0.01` |

## Ipopt MA57 Linear Solver

Option | Description | Default |
---|---|---|

ma57_automatic_scaling | Controls MA57 automatic scaling | `no` |

ma57_block_size | Controls block size used by Level 3 BLAS in MA57BD | `16` |

ma57_node_amalgamation | Node amalgamation parameter | `16` |

ma57_pivot_order | Controls pivot order in MA57 | `5` |

ma57_pivtol | Pivot tolerance for the linear solver MA57. | `1e-08` |

ma57_pivtolmax | Maximum pivot tolerance for the linear solver MA57. | `0.0001` |

ma57_pre_alloc | Safety factor for work space memory allocation for the linear solver MA57. | `1.05` |

ma57_small_pivot_flag | If set to 1, then when small entries defined by CNTL(2) are detected they are removed and the corresponding pivots placed at the end of the factorization. This can be particularly efficient if the matrix is highly rank deficient. | `0` |

## Ipopt MA77 Linear Solver

Option | Description | Default |
---|---|---|

ma77_buffer_lpage | Number of scalars per MA77 buffer page | `4096` |

ma77_buffer_npage | Number of pages that make up MA77 buffer | `1600` |

ma77_file_size | Target size of each temporary file for MA77, scalars per type | `2097152` |

ma77_maxstore | Maximum storage size for MA77 in-core mode | `0` |

ma77_nemin | Node Amalgamation parameter | `8` |

ma77_order | Controls type of ordering used by HSL_MA77 | `metis` |

ma77_print_level | Debug printing level for the linear solver MA77 | `-1` |

ma77_small | Zero Pivot Threshold | `1e-20` |

ma77_static | Static Pivoting Threshold | `0` |

ma77_u | Pivoting Threshold | `1e-08` |

ma77_umax | Maximum Pivoting Threshold | `0.0001` |

## Ipopt MA86 Linear Solver

Option | Description | Default |
---|---|---|

ma86_nemin | Node Amalgamation parameter | `32` |

ma86_order | Controls type of ordering used by HSL_MA86 | `auto` |

ma86_print_level | Debug printing level for the linear solver MA86 | `-1` |

ma86_scaling | Controls scaling of matrix | `mc64` |

ma86_small | Zero Pivot Threshold | `1e-20` |

ma86_static | Static Pivoting Threshold | `0` |

ma86_u | Pivoting Threshold | `1e-08` |

ma86_umax | Maximum Pivoting Threshold | `0.0001` |

## Ipopt MA97 Linear Solver

Option | Description | Default |
---|---|---|

ma97_nemin | Node Amalgamation parameter | `8` |

ma97_order | Controls type of ordering used by HSL_MA97 | `auto` |

ma97_print_level | Debug printing level for the linear solver MA97 | `0` |

ma97_scaling | Specifies strategy for scaling in HSL_MA97 linear solver | `dynamic` |

ma97_scaling1 | First scaling. | `mc64` |

ma97_scaling2 | Second scaling. | `mc64` |

ma97_scaling3 | Third scaling. | `mc64` |

ma97_small | Zero Pivot Threshold | `1e-20` |

ma97_solve_blas3 | Controls whether BLAS level 2 or level 3 routines are used for the solve | `no` |

ma97_switch1 | First switch, determines when ma97_scaling1 is enabled. | `od_hd_reuse` |

ma97_switch2 | Second switch, determines when ma97_scaling2 is enabled. | `never` |

ma97_switch3 | Third switch, determines when ma97_scaling3 is enabled. | `never` |

ma97_u | Pivoting Threshold | `1e-08` |

ma97_umax | Maximum Pivoting Threshold | `0.0001` |

## Ipopt Mumps Linear Solver

Option | Description | Default |
---|---|---|

mumps_dep_tol | Pivot threshold for detection of linearly dependent constraints in MUMPS. | `0` |

mumps_mem_percent | Percentage increase in the estimated working space for MUMPS. | `1000` |

mumps_permuting_scaling | Controls permuting and scaling in MUMPS | `7` |

mumps_pivot_order | Controls pivot order in MUMPS | `7` |

mumps_pivtol | Pivot tolerance for the linear solver MUMPS. | `1e-06` |

mumps_pivtolmax | Maximum pivot tolerance for the linear solver MUMPS. | `0.1` |

mumps_scaling | Controls scaling in MUMPS | `77` |

## Ipopt NLP

Option | Description | Default |
---|---|---|

bound_relax_factor | Factor for initial relaxation of the bounds. | `1e-10` |

check_derivatives_for_naninf | Indicates whether it is desired to check for Nan/Inf in derivative matrices | `no` |

dependency_detection_with_rhs | Indicates if the right hand sides of the constraints should be considered during dependency detection | `no` |

dependency_detector | Indicates which linear solver should be used to detect linearly dependent equality constraints. | `none` |

fixed_variable_treatment | Determines how fixed variables should be handled. | `make_parameter` |

honor_original_bounds | Indicates whether final points should be projected into original bounds. | `yes` |

jac_c_constant | Indicates whether all equality constraints are linear | `no` |

jac_d_constant | Indicates whether all inequality constraints are linear | `no` |

kappa_d | Weight for linear damping term (to handle one-sided bounds). | `1e-05` |

## Ipopt NLP Scaling

Option | Description | Default |
---|---|---|

nlp_scaling_constr_target_gradient | Target value for constraint function gradient size. | `0` |

nlp_scaling_max_gradient | Maximum gradient after NLP scaling. | `100` |

nlp_scaling_method | Select the technique used for scaling the NLP. | `gradient-based` |

nlp_scaling_min_value | Minimum value of gradient-based scaling values. | `1e-08` |

nlp_scaling_obj_target_gradient | Target value for objective function gradient size. | `0` |

## Ipopt Output

Option | Description | Default |
---|---|---|

inf_pr_output | Determines what value is printed in the 'inf_pr' output column. | `original` |

print_eval_error | Switch to enable printing information about function evaluation errors into the GAMS listing file. | `yes` |

print_frequency_iter | Determines at which iteration frequency the summarizing iteration output line should be printed. | `1` |

print_frequency_time | Determines at which time frequency the summarizing iteration output line should be printed. | `0` |

print_info_string | Enables printing of additional info string at end of iteration output. | `no` |

print_level | Output verbosity level. | `5` |

print_timing_statistics | Switch to print timing statistics. | `no` |

replace_bounds | Indicates if all variable bounds should be replaced by inequality constraints | `no` |

## Ipopt Pardiso Linear Solver

Option | Description | Default |
---|---|---|

pardiso_matching_strategy | Matching strategy to be used by Pardiso | `complete+2x2` |

pardiso_max_iterative_refinement_steps | Limit on number of iterative refinement steps. | `1` |

pardiso_msglvl | Pardiso message level | `0` |

pardiso_order | Controls the fill-in reduction ordering algorithm for the input matrix. | `metis` |

pardiso_redo_symbolic_fact_only_if_inertia_wrong | Toggle for handling case when elements were perturbed by Pardiso. | `no` |

pardiso_repeated_perturbation_means_singular | Interpretation of perturbed elements. | `no` |

pardiso_skip_inertia_check | Always pretend inertia is correct. | `no` |

## Ipopt Restoration Phase

Option | Description | Default |
---|---|---|

bound_mult_reset_threshold | Threshold for resetting bound multipliers after the restoration phase. | `1000` |

constr_mult_reset_threshold | Threshold for resetting equality and inequality multipliers after restoration phase. | `0` |

evaluate_orig_obj_at_resto_trial | Determines if the original objective function should be evaluated at restoration phase trial points. | `yes` |

expect_infeasible_problem | Enable heuristics to quickly detect an infeasible problem. | `no` |

expect_infeasible_problem_ctol | Threshold for disabling 'expect_infeasible_problem' option. | `0.001` |

expect_infeasible_problem_ytol | Multiplier threshold for activating 'expect_infeasible_problem' option. | `1e+08` |

max_resto_iter | Maximum number of successive iterations in restoration phase. | `3000000` |

max_soft_resto_iters | Maximum number of iterations performed successively in soft restoration phase. | `10` |

required_infeasibility_reduction | Required reduction of infeasibility before leaving restoration phase. | `0.9` |

resto_failure_feasibility_threshold | Threshold for primal infeasibility to declare failure of restoration phase. | `0` |

resto_penalty_parameter | Penalty parameter in the restoration phase objective function. | `1000` |

resto_proximity_weight | Weighting factor for the proximity term in restoration phase objective. | `1` |

soft_resto_pderror_reduction_factor | Required reduction in primal-dual error in the soft restoration phase. | `0.9999` |

start_with_resto | Tells algorithm to switch to restoration phase in first iteration. | `no` |

## Ipopt Step Calculation

Option | Description | Default |
---|---|---|

fast_step_computation | Indicates if the linear system should be solved quickly. | `no` |

first_hessian_perturbation | Size of first x-s perturbation tried. | `0.0001` |

jacobian_regularization_exponent | Exponent for mu in the regularization for rank-deficient constraint Jacobians. | `0.25` |

jacobian_regularization_value | Size of the regularization for rank-deficient constraint Jacobians. | `1e-08` |

max_hessian_perturbation | Maximum value of regularization parameter for handling negative curvature. | `1e+20` |

max_refinement_steps | Maximum number of iterative refinement steps per linear system solve. | `10` |

mehrotra_algorithm | Indicates if we want to do Mehrotra's algorithm. | `no` |

min_hessian_perturbation | Smallest perturbation of the Hessian block. | `1e-20` |

min_refinement_steps | Minimum number of iterative refinement steps per linear system solve. | `1` |

neg_curv_test_reg | Whether to do the curvature test with the primal regularization (see Zavala and Chiang, 2014). | `yes` |

neg_curv_test_tol | Tolerance for heuristic to ignore wrong inertia. | `0` |

perturb_always_cd | Activate permanent perturbation of constraint linearization. | `no` |

perturb_dec_fact | Decrease factor for x-s perturbation. | `0.333333` |

perturb_inc_fact | Increase factor for x-s perturbation. | `8` |

perturb_inc_fact_first | Increase factor for x-s perturbation for very first perturbation. | `100` |

residual_improvement_factor | Minimal required reduction of residual test ratio in iterative refinement. | `1` |

residual_ratio_max | Iterative refinement tolerance | `1e-10` |

residual_ratio_singular | Threshold for declaring linear system singular after failed iterative refinement. | `1e-05` |

## Ipopt Warm Start

Option | Description | Default |
---|---|---|

warm_start_bound_frac | Same as bound_frac for the regular initializer. | `0.001` |

warm_start_bound_push | Same as bound_push for the regular initializer. | `0.001` |

warm_start_init_point | Warm-start for initial point | `no` |

warm_start_mult_bound_push | Same as mult_bound_push for the regular initializer. | `0.001` |

warm_start_mult_init_max | Maximum initial value for the equality multipliers. | `1e+06` |

warm_start_slack_bound_frac | Same as slack_bound_frac for the regular initializer. | `0.001` |

warm_start_slack_bound_push | Same as slack_bound_push for the regular initializer. | `0.001` |

# Detailed Options Description

In the following we give a detailed list of options available for Couenne, including those for the underlying Ipopt and Bonmin solvers.
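As is usual for GAMS solvers, these options are set through a solver option file. A minimal sketch follows; the model name `m` and objective variable `z` are placeholders, and the option values are arbitrary examples:

```gams
* Write couenne.opt next to the model, then select it via m.optfile.
$onecho > couenne.opt
allowable_fraction_gap 0.01
bb_log_level 3
branch_pt_select balanced
$offecho

model m / all /;
m.optfile = 1;
option minlp = couenne;
solve m minimizing z using minlp;
```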

**2mir_cuts** *(integer)*: Frequency k (in terms of nodes) for generating 2mir_cuts cuts in branch-and-cut. ↵

If k > 0, cuts are generated every k nodes. If -99 < k < 0, cuts are generated every -k nodes, but Cbc may decide to stop generating cuts if not enough are generated at the root node. If k = -99, cuts are generated only at the root node. If k = 0 or k = 100, no cuts are generated.

Range: [`-100`, ∞], Default: `0`
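One plausible reading of this frequency rule can be sketched as follows; the helper is hypothetical (not part of the Couenne or Cbc API), and node 0 is taken to be the root node:

```python
def should_generate_cuts(k, node):
    """Decide whether to generate cuts at a given branch-and-bound node,
    following the documented frequency semantics of options like 2mir_cuts."""
    if k == 0 or k == 100:        # never generate cuts
        return False
    if k == -99:                  # root node only
        return node == 0
    if k > 0:                     # every k nodes
        return node % k == 0
    if -99 < k < 0:               # every -k nodes (Cbc may still stop early)
        return node % (-k) == 0
    return False                  # k = -100 is in range but unspecified here
```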

**acceptable_compl_inf_tol** *(real)*: 'Acceptance' threshold for the complementarity conditions. ↵

Absolute tolerance on the complementarity. "Acceptable" termination requires that the max-norm of the (unscaled) complementarity is less than this threshold; see also acceptable_tol.

Default: `0.01`

**acceptable_constr_viol_tol** *(real)*: 'Acceptance' threshold for the constraint violation. ↵

Absolute tolerance on the constraint violation. "Acceptable" termination requires that the max-norm of the (unscaled) constraint violation is less than this threshold; see also acceptable_tol.

Default: `0.01`

**acceptable_dual_inf_tol** *(real)*: 'Acceptance' threshold for the dual infeasibility. ↵

Absolute tolerance on the dual infeasibility. "Acceptable" termination requires that the (max-norm of the unscaled) dual infeasibility is less than this threshold; see also acceptable_tol.

Default: `1e+10`

**acceptable_iter** *(integer)*: Number of 'acceptable' iterates before triggering termination. ↵

If the algorithm encounters this many successive "acceptable" iterates (see "acceptable_tol"), it terminates, assuming that the problem has been solved to best possible accuracy given round-off. If it is set to zero, this heuristic is disabled.

Default: `15`

**acceptable_obj_change_tol** *(real)*: 'Acceptance' stopping criterion based on objective function change. ↵

If the relative change of the objective function (scaled by max(1, |f(x)|)) is less than this value, this part of the acceptable-tolerance termination is satisfied; see also acceptable_tol. This is useful for the quasi-Newton option, which has trouble bringing down the dual infeasibility.

Default: `1e+20`

**acceptable_tol** *(real)*: 'Acceptable' convergence tolerance (relative). ↵

Determines which (scaled) overall optimality error is considered to be "acceptable." There are two levels of termination criteria. If the usual "desired" tolerances (see tol, dual_inf_tol, etc.) are satisfied at an iteration, the algorithm immediately terminates with a success message. On the other hand, if the algorithm encounters "acceptable_iter" many iterations in a row that are considered "acceptable", it will terminate before the desired convergence tolerance is met. This is useful in cases where the algorithm might not be able to achieve the "desired" level of accuracy.

Default: `1e-06`
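The two-level stopping test described above can be sketched as follows; this is a simplified, hypothetical illustration (the real Ipopt test combines several error measures), where `errors` stands for the scaled overall optimality error at each iteration:

```python
def termination_status(errors, tol, acceptable_tol, acceptable_iter):
    """Two-level Ipopt-style stopping test (simplified sketch)."""
    acceptable_count = 0
    for err in errors:
        if err <= tol:
            return "optimal"        # desired tolerance reached
        # count consecutive "acceptable" iterates; reset on a bad one
        acceptable_count = acceptable_count + 1 if err <= acceptable_tol else 0
        if acceptable_iter > 0 and acceptable_count >= acceptable_iter:
            return "acceptable"     # stuck at acceptable accuracy
    return "running"
```

Setting `acceptable_iter` to 0 disables the second level, as the option description states.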

**accept_after_max_steps** *(integer)*: Accept a trial point after maximal this number of steps. ↵

Even if it does not satisfy line search conditions.

Range: [`-1`, ∞], Default: `-1`

**accept_every_trial_step** *(string)*: Always accept the first trial step. ↵

Setting this option to "yes" essentially disables the line search and makes the algorithm take aggressive steps, without global convergence guarantees.

Default: `no`

value | meaning |
---|---|
`no` | don't arbitrarily accept the full step |
`yes` | always accept the full step |

**adaptive_mu_globalization** *(string)*: Globalization strategy for the adaptive mu selection mode. ↵

To achieve global convergence of the adaptive version, the algorithm has to switch to the monotone mode (Fiacco-McCormick approach) when convergence does not seem to appear. This option sets the criterion used to decide when to do this switch. (Only used if option "mu_strategy" is chosen as "adaptive".)

Default: `obj-constr-filter`

value | meaning |
---|---|
`kkt-error` | nonmonotone decrease of kkt-error |
`never-monotone-mode` | disables globalization |
`obj-constr-filter` | 2-dim filter for objective and constraint violation |

**adaptive_mu_kkterror_red_fact** *(real)*: Sufficient decrease factor for 'kkt-error' globalization strategy. ↵

For the "kkt-error" based globalization strategy, the error must decrease by this factor to be deemed sufficient decrease.

Range: [`0`, `1`], Default: `0.9999`

**adaptive_mu_kkterror_red_iters** *(integer)*: Maximum number of iterations requiring sufficient progress. ↵

For the "kkt-error" based globalization strategy, sufficient progress must be made for "adaptive_mu_kkterror_red_iters" iterations. If this number of iterations is exceeded, the globalization strategy switches to the monotone mode.

Default: `4`

**adaptive_mu_kkt_norm_type** *(string)*: Norm used for the KKT error in the adaptive mu globalization strategies. ↵

When computing the KKT error for the globalization strategies, the norm to be used is specified with this option. Note that this option is also used in the QualityFunctionMuOracle.

Default: `2-norm-squared`

value | meaning |
---|---|
`1-norm` | use the 1-norm (abs sum) |
`2-norm` | use the 2-norm |
`2-norm-squared` | use the 2-norm squared (sum of squares) |
`max-norm` | use the infinity norm (max) |

**adaptive_mu_monotone_init_factor** *(real)*: Determines the initial value of the barrier parameter when switching to the monotone mode. ↵

When the globalization strategy for the adaptive barrier algorithm switches to the monotone mode and fixed_mu_oracle is chosen as "average_compl", the barrier parameter is set to the current average complementarity times the value of "adaptive_mu_monotone_init_factor".

Default: `0.8`

**adaptive_mu_restore_previous_iterate** *(string)*: Indicates if the previous iterate should be restored if the monotone mode is entered. ↵

When the globalization strategy for the adaptive barrier algorithm switches to the monotone mode, it can either start from the most recent iterate (no), or from the last iterate that was accepted (yes).

Default: `no`

value | meaning |
---|---|
`no` | don't restore accepted iterate |
`yes` | restore accepted iterate |

**add_only_violated_oa** *(string)*: Do we add all OA cuts or only the ones violated by current point? ↵

Default: `no`

value | meaning |
---|---|
`no` | Add all cuts |
`yes` | Add only violated cuts |

**aggressive_fbbt** *(string)*: Aggressive feasibility-based bound tightening (to use with NLP points) ↵

Aggressive FBBT is a version of probing that also allows reducing the solution set, although it is not as quick as FBBT. It can be applied up to a certain depth of the B&B tree; see `log_num_abt_per_level`. In general, this option is useful but can be switched off if a problem is too large and does not seem to benefit from it.

Default: `yes`. Values: `no`, `yes`.

**algorithm** *(string)*: Choice of the algorithm. ↵

This will preset some of the options of bonmin depending on the algorithm choice.

Default: `B-BB`

value | meaning |
---|---|
`b-bb` | simple branch-and-bound algorithm |
`b-ecp` | ECP-cuts-based branch-and-cut a la FilMINT |
`b-hyb` | hybrid outer-approximation-based branch-and-cut |
`b-ifp` | Iterated Feasibility Pump for MINLP |
`b-oa` | OA decomposition algorithm |
`b-qg` | Quesada and Grossmann branch-and-cut algorithm |

**allowable_fraction_gap** *(real)*: Specify the value of relative gap under which the algorithm stops. ↵

Stop the tree search when the gap between the objective value of the best known solution and the best bound on the objective of any solution is less than this fraction of the absolute value of the best known solution value.

Range: [-∞, ∞]

Default: `0.1`

**allowable_gap** *(real)*: Specify the value of absolute gap under which the algorithm stops. ↵

Stop the tree search when the gap between the objective value of the best known solution and the best bound on the objective of any solution is less than this.

Range: [-∞, ∞]

Default: `0`

**alpha_for_y** *(string)*: Method to determine the step size for constraint multipliers. ↵

This option determines how the step size (alpha_y) will be calculated when updating the constraint multipliers.

Default: `primal`

value | meaning |
---|---|
`acceptor` | Call LSAcceptor to get step size for y |
`bound-mult` | use step size for the bound multipliers (good for LPs) |
`dual-and-full` | use the dual step size, and full step if delta_x ≤ alpha_for_y_tol |
`full` | take a full step of size one |
`max` | use the max of primal and bound multipliers |
`min` | use the min of primal and bound multipliers |
`min-dual-infeas` | choose step size minimizing new dual infeasibility |
`primal` | use primal step size |
`primal-and-full` | use the primal step size, and full step if delta_x ≤ alpha_for_y_tol |
`safer-min-dual-infeas` | like `min-dual-infeas`, but safeguarded by `min` and `max` |

**alpha_for_y_tol** *(real)*: Tolerance for switching to full equality multiplier steps. ↵

This is only relevant if "alpha_for_y" is chosen "primal-and-full" or "dual-and-full". The step size for the equality constraint multipliers is taken to be one if the max-norm of the primal step is less than this tolerance.

Default: `10`

**alpha_min_frac** *(real)*: Safety factor for the minimal step size (before switching to restoration phase). ↵

(This is gamma_alpha in Eqn. (20) in the implementation paper.)

Range: [`0`, `1`], Default: `0.05`

**alpha_red_factor** *(real)*: Fractional reduction of the trial step size in the backtracking line search. ↵

At every step of the backtracking line search, the trial step size is reduced by this factor.

Range: [`0`, `1`], Default: `0.5`
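The interplay of alpha_red_factor with the Armijo condition (see eta_phi above) can be sketched as a standalone backtracking line search; this is a generic illustration, not Couenne or Ipopt code, with `red` playing the role of alpha_red_factor and `eta` the role of eta_phi:

```python
def backtracking_line_search(phi, phi0, dphi0, eta=1e-8, red=0.5, max_steps=30):
    """Armijo backtracking: shrink the trial step by `red` until
    phi(alpha) <= phi0 + eta * alpha * dphi0 holds."""
    alpha = 1.0
    for _ in range(max_steps):
        if phi(alpha) <= phi0 + eta * alpha * dphi0:
            return alpha            # sufficient decrease achieved
        alpha *= red                # fractional reduction of the trial step
    return None                     # line search failed within max_steps
```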

**art_cutoff** *(real)*: Artificial cutoff ↵

Default value is infinity.

Range: [-∞, ∞]

Default: `maxdouble`

**art_lower** *(real)*: Artificial lower bound ↵

Default value is -COIN_DBL_MAX.

Range: [-∞, ∞]

Default: `mindouble`

**barrier_tol_factor** *(real)*: Factor for mu in barrier stop test. ↵

The convergence tolerance for each barrier problem in the monotone mode is the value of the barrier parameter times "barrier_tol_factor". This option is also used in the adaptive mu strategy during the monotone mode. (This is kappa_epsilon in implementation paper).

Default: `10`

**bb_log_interval** *(integer)*: Interval at which node level output is printed. ↵

Set the interval (in terms of number of nodes) at which a log on node resolutions (consisting of lower and upper bounds) is given.

Default: `100`

**bb_log_level** *(integer)*: specify main branch-and-bound log level. ↵

Set the level of output of the branch-and-bound : 0 - none, 1 - minimal, 2 - normal low, 3 - normal high

Range: [`0`, `5`], Default: `1`

**boundtightening_print_level** *(integer)*: Output level for bound tightening code in Couenne ↵

Range: [`-2`, `12`], Default: `0`

**bound_frac** *(real)*: Desired minimum relative distance from the initial point to bound. ↵

Determines how much the initial point might have to be modified in order to be sufficiently inside the bounds (together with "bound_push"). (This is kappa_2 in Section 3.6 of implementation paper.)

Range: [`0`, `0.5`], Default: `0.01`

**bound_mult_init_method** *(string)*: Initialization method for bound multipliers ↵

This option defines how the iterates for the bound multipliers are initialized. If "constant" is chosen, then all bound multipliers are initialized to the value of "bound_mult_init_val". If "mu-based" is chosen, each value is initialized to the value of "mu_init" divided by the corresponding slack variable. The latter option might be useful if the starting point is close to the optimal solution.

Default: `constant`

value | meaning |
---|---|
`constant` | set all bound multipliers to the value of bound_mult_init_val |
`mu-based` | initialize to mu_init/x_slack |

**bound_mult_init_val** *(real)*: Initial value for the bound multipliers. ↵

All dual variables corresponding to bound constraints are initialized to this value.

Default: `1`

**bound_mult_reset_threshold** *(real)*: Threshold for resetting bound multipliers after the restoration phase. ↵

After returning from the restoration phase, the bound multipliers are updated with a Newton step for complementarity. Here, the change in the primal variables during the entire restoration phase is taken to be the corresponding primal Newton step. However, if after the update the largest bound multiplier exceeds the threshold specified by this option, the multipliers are all reset to 1.

Default: `1000`

**bound_push** *(real)*: Desired minimum absolute distance from the initial point to bound. ↵

Determines how much the initial point might have to be modified in order to be sufficiently inside the bounds (together with "bound_frac"). (This is kappa_1 in Section 3.6 of implementation paper.)

Default: `0.01`
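The way bound_push (kappa_1) and bound_frac (kappa_2) move an initial value inside its bounds can be sketched as follows; this follows the description in Section 3.6 of the Ipopt implementation paper under the simplifying assumption that both bounds are finite (the actual code treats one-sided bounds separately), and the function name is hypothetical:

```python
def push_inside_bounds(x, lb, ub, bound_push=0.01, bound_frac=0.01):
    """Project an initial value x strictly inside [lb, ub].

    bound_push is kappa_1 (absolute push), bound_frac is kappa_2
    (relative push); the smaller of the two perturbations is used.
    """
    p_lower = min(bound_push * max(1.0, abs(lb)), bound_frac * (ub - lb))
    p_upper = min(bound_push * max(1.0, abs(ub)), bound_frac * (ub - lb))
    return min(max(x, lb + p_lower), ub - p_upper)
```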

**bound_relax_factor** *(real)*: Factor for initial relaxation of the bounds. ↵

Before the start of the optimization, the bounds given by the user are relaxed. This option sets the factor for this relaxation. If it is set to zero, the bound relaxation is disabled. (See Eqn. (35) in the implementation paper.)

Default: `1e-10`

**branching_object** *(string)*: type of branching object for variable selection ↵

Default: `var_obj`

value | meaning |
---|---|
`expr_obj` | use one object for each nonlinear expression |
`var_obj` | use one object for each variable |
`vt_obj` | use Violation Transfer from Tawarmalani and Sahinidis |

**branching_print_level** *(integer)*: Output level for branching code in Couenne ↵

Range: [`-2`, `12`], Default: `0`

**branch_conv_cuts** *(string)*: Apply convexification cuts before branching (for now only within strong branching) ↵

After applying a branching rule and before resolving the subproblem, generate a round of linearization cuts with the new bounds enforced by the rule.

Default: `yes`. Values: `no`, `yes`.

**branch_fbbt** *(string)*: Apply bound tightening before branching ↵

After applying a branching rule and before re-solving the subproblem, apply Bound Tightening.

Default: `yes`. Values: `no`, `yes`.

**branch_lp_clamp** *(real)*: Defines safe interval percentage for using LP point as a branching point. ↵

Range: [`0`, `1`], Default: `0.2`

**branch_lp_clamp_cube** *(real)*: Defines safe interval percentage [0,0.5] for using LP point as a branching point. ↵

Range: [`0`, `0.5`], Default: `0.2`

**branch_lp_clamp_div** *(real)*: Defines safe interval percentage [0,0.5] for using LP point as a branching point. ↵

Range: [`0`, `0.5`], Default: `0.2`

**branch_lp_clamp_exp** *(real)*: Defines safe interval percentage [0,0.5] for using LP point as a branching point. ↵

Range: [`0`, `0.5`], Default: `0.2`

**branch_lp_clamp_log** *(real)*: Defines safe interval percentage [0,0.5] for using LP point as a branching point. ↵

Range: [`0`, `0.5`], Default: `0.2`

**branch_lp_clamp_negpow** *(real)*: Defines safe interval percentage [0,0.5] for using LP point as a branching point. ↵

Range: [`0`, `0.5`], Default: `0.2`

**branch_lp_clamp_pow** *(real)*: Defines safe interval percentage [0,0.5] for using LP point as a branching point. ↵

Range: [`0`, `0.5`], Default: `0.2`

**branch_lp_clamp_prod** *(real)*: Defines safe interval percentage [0,0.5] for using LP point as a branching point. ↵

Range: [`0`, `0.5`], Default: `0.2`

**branch_lp_clamp_sqr** *(real)*: Defines safe interval percentage [0,0.5] for using LP point as a branching point. ↵

Range: [`0`, `0.5`], Default: `0.2`

**branch_lp_clamp_trig** *(real)*: Defines safe interval percentage [0,0.5] for using LP point as a branching point. ↵

Range: [`0`, `0.5`], Default: `0.2`

**branch_midpoint_alpha** *(real)*: Defines convex combination of mid point and current LP point: b = alpha x_lp + (1-alpha) (lb+ub)/2. ↵

Range: [`0`, `1`], Default: `0.25`
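The formula in the option description evaluates directly; a one-line sketch (function name hypothetical):

```python
def branching_point(x_lp, lb, ub, alpha=0.25):
    """Convex combination of the LP point and the midpoint of [lb, ub]:
    b = alpha * x_lp + (1 - alpha) * (lb + ub) / 2."""
    return alpha * x_lp + (1 - alpha) * 0.5 * (lb + ub)
```

With alpha = 1 the LP point is used as-is; with alpha = 0 branching happens at the interval midpoint.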

**branch_pt_select** *(string)*: Chooses branching point selection strategy ↵

Default: `mid-point`

value | meaning |
---|---|
`balanced` | minimizes max distance from curve to convexification |
`lp-central` | LP point if within [k, 1-k] of the bound intervals, middle point otherwise (k defined by branch_lp_clamp) |
`lp-clamped` | LP point clamped in [k, 1-k] of the bound intervals (k defined by lp_clamp) |
`mid-point` | convex combination of current point and mid point |
`min-area` | minimizes total area of the two convexifications |
`no-branch` | do not branch, return null infeasibility; for testing purposes only |

**branch_pt_select_cube** *(string)*: Chooses branching point selection strategy for operator cube. ↵

Default is to use the value of `branch_pt_select` (value `common`).

Default: `common`. Values: `balanced`, `common`, `lp-central`, `lp-clamped`, `mid-point`, `min-area`, `no-branch`.

**branch_pt_select_div** *(string)*: Chooses branching point selection strategy for operator div. ↵

Default is to use the value of `branch_pt_select` (value `common`).

Default: `common`. Values: `balanced`, `common`, `lp-central`, `lp-clamped`, `mid-point`, `min-area`, `no-branch`.

**branch_pt_select_exp** *(string)*: Chooses branching point selection strategy for operator exp. ↵

Default is to use the value of `branch_pt_select` (value `common`).

Default: `common`. Values: `balanced`, `common`, `lp-central`, `lp-clamped`, `mid-point`, `min-area`, `no-branch`.

**branch_pt_select_log** *(string)*: Chooses branching point selection strategy for operator log. ↵

Default is to use the value of `branch_pt_select` (value `common`).

Default: `common`. Values: `balanced`, `common`, `lp-central`, `lp-clamped`, `mid-point`, `min-area`, `no-branch`.

**branch_pt_select_negpow** *(string)*: Chooses branching point selection strategy for operator negpow. ↵

Default is to use the value of `branch_pt_select` (value `common`).

Default: `common`. Values: `balanced`, `common`, `lp-central`, `lp-clamped`, `mid-point`, `min-area`, `no-branch`.

**branch_pt_select_pow** *(string)*: Chooses branching point selection strategy for operator pow. ↵

Default is to use the value of `branch_pt_select` (value `common`).

Default: `common`. Values: `balanced`, `common`, `lp-central`, `lp-clamped`, `mid-point`, `min-area`, `no-branch`.

**branch_pt_select_prod** *(string)*: Chooses branching point selection strategy for operator prod. ↵

Default is to use the value of `branch_pt_select` (value `common`).

Default: `common`. Values: `balanced`, `common`, `lp-central`, `lp-clamped`, `mid-point`, `min-area`, `no-branch`.

**branch_pt_select_sqr** *(string)*: Chooses branching point selection strategy for operator sqr. ↵

Default is to use the value of `branch_pt_select` (value `common`).

Default: `common`. Values: `balanced`, `common`, `lp-central`, `lp-clamped`, `mid-point`, `min-area`, `no-branch`.

**branch_pt_select_trig** *(string)*: Chooses branching point selection strategy for operator trig. ↵

Default is to use the value of

`branch_pt_select`

(value`common`

).Default:

`common`

value meaning `balanced`

`common`

`lp-central`

`lp-clamped`

`mid-point`

`min-area`

`no-branch`
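As an illustration only, the per-operator options above can override the global rule in a Couenne options file (for GAMS, typically `couenne.opt` together with the model attribute `optfile`; the `name value` layout with `#` comments follows the usual COIN-OR option-file convention and the specific choices here are made up):

```
# hypothetical couenne.opt fragment
# midpoint rule by default, but a balanced point for products
branch_pt_select       mid-point
branch_pt_select_prod  balanced
```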

**candidate_sort_criterion** *(string)*: Choice of the criterion to choose candidates in strong-branching.

Default: `best-ps-cost`

- `best-ps-cost`: Sort by decreasing pseudo-cost
- `least-fractional`: Sort by increasing integer infeasibility
- `most-fractional`: Sort by decreasing integer infeasibility
- `worst-ps-cost`: Sort by increasing pseudo-cost

**check_derivatives_for_naninf** *(string)*: Indicates whether it is desired to check for NaN/Inf in derivative matrices.

Activating this option will cause an error if an invalid number is detected in the constraint Jacobians or the Lagrangian Hessian. If it is not activated, the test is skipped, and the algorithm might proceed with invalid numbers and fail. If the test is activated and an invalid number is detected, the matrix is written to output with a print_level corresponding to J_MORE_DETAILED; so beware of large output!

Default: `no`

- `no`: Don't check (faster).
- `yes`: Check Jacobians and Hessian for NaN and Inf.

**check_lp** *(string)*: Check all LPs through an independent call to OsiClpSolverInterface::initialSolve().

Default: `no`

Possible values: `no`, `yes`

**clique_cuts** *(integer)*: Frequency k (in terms of nodes) for generating clique cuts in branch-and-cut.

See option `2mir_cuts` for the meaning of k.

Range: [`-100`, ∞], Default: `0`

**coeff_var_threshold** *(real)*: Coefficient of variation threshold (for dynamic definition of cutoff_decr).

Default: `0.1`

**compl_inf_tol** *(real)*: Desired threshold for the complementarity conditions.

Absolute tolerance on the complementarity. Successful termination requires that the max-norm of the (unscaled) complementarity is less than this threshold.

Default: `0.0001`

**constraint_violation_norm_type** *(string)*: Norm to be used for the constraint violation in the line search.

Determines which norm should be used when the algorithm computes the constraint violation in the line search.

Default: `1-norm`

- `1-norm`: use the 1-norm
- `2-norm`: use the 2-norm
- `max-norm`: use the infinity norm

**constr_mult_init_max** *(real)*: Maximum allowed least-square guess of constraint multipliers.

Determines how large the initial least-square guesses of the constraint multipliers are allowed to be (in max-norm). If the guess is larger than this value, it is discarded and all constraint multipliers are set to zero. This option is also used when initializing the restoration phase. By default, "resto.constr_mult_init_max" (the one used in RestoIterateInitializer) is set to zero.

Default: `1000`

**constr_mult_reset_threshold** *(real)*: Threshold for resetting equality and inequality multipliers after restoration phase.

After returning from the restoration phase, the constraint multipliers are recomputed by a least square estimate. This option determines when those least-square estimates should be ignored.

Default: `0`

**constr_viol_tol** *(real)*: Desired threshold for the constraint violation.

Absolute tolerance on the constraint violation. Successful termination requires that the max-norm of the (unscaled) constraint violation is less than this threshold.

Default: `0.0001`

**cont_var_priority** *(integer)*: Priority of continuous variable branching.

When branching, this is compared to the priority of integer variables, whose priority is given by int_var_priority, and SOS, whose priority is 10. Higher values mean smaller priority.

Range: [`1`, ∞], Default: `99`

**convexification_cuts** *(integer)*: Specify the frequency (in terms of nodes) at which Couenne ECP cuts are generated.

A frequency of 0 amounts to never solving the NLP relaxation.

Range: [`-99`, ∞], Default: `1`

**convexification_points** *(integer)*: Specify the number of points at which to convexify when convexification type is uniform-grid or around-current-point.

Default: `4`

**convexification_type** *(string)*: Determines at which points the linear over/under-estimators are generated.

For the lower envelopes of convex functions, this is the number of points where a supporting hyperplane is generated. This only holds for the initial linearization, as all other linearizations only add at most one cut per expression.

Default: `current-point-only`

- `around-current-point`: At points around current optimum of relaxation
- `current-point-only`: Only at current optimum of relaxation
- `uniform-grid`: Points chosen in a uniform grid between the bounds of the problem

**convexifying_print_level** *(integer)*: Output level for convexifying code in Couenne.

Range: [`-2`, `12`], Default: `0`

**corrector_compl_avrg_red_fact** *(real)*: Complementarity tolerance factor for accepting corrector step (unsupported!).

This option determines the factor by which complementarity is allowed to increase for a corrector step to be accepted.

Default: `1`

**corrector_type** *(string)*: The type of corrector steps that should be taken (unsupported!).

If "mu_strategy" is "adaptive", this option determines what kind of corrector steps should be tried.

Default: `none`

- `affine`: corrector step towards mu=0
- `none`: no corrector
- `primal-dual`: corrector step towards current mu

**cover_cuts** *(integer)*: Frequency k (in terms of nodes) for generating cover cuts in branch-and-cut.

See option `2mir_cuts` for the meaning of k.

Range: [`-100`, ∞], Default: `0`

**crossconv_cuts** *(integer)*: The frequency (in terms of nodes) at which Couenne cross-aux convexification cuts are generated.

A frequency of 0 (default) means these cuts are never generated. Any positive number n instructs Couenne to generate them at every n nodes of the B&B tree. A negative number -n means that generation should be attempted at the root node, and if successful it can be repeated at every n nodes, otherwise it is stopped altogether.

Range: [`-99`, ∞], Default: `0`
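The sign convention for crossconv_cuts can be read off a small options-file sketch (values illustrative, file layout per the usual COIN-OR `name value` convention):

```
# hypothetical couenne.opt fragment
# try cross-aux cuts at the root; if they succeed there,
# repeat every 5 nodes, otherwise stop generating them
crossconv_cuts  -5
```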

**cutoff** *(real)*: Specify cutoff value.

cutoff should be the value of a feasible solution known by the user (if any). The algorithm will only look for solutions better than cutoff.

Range: [`-1e+100`, `1e+100`], Default: `1e+100`

**cutoff_decr** *(real)*: Specify cutoff decrement.

Specify the amount by which cutoff is decremented below a new best upper bound (usually a small positive value, but in nonconvex problems it may be a negative value).

Range: [`-1e+10`, `1e+10`], Default: `1e-05`
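If a feasible objective value is already known, the two options combine naturally; the numbers below are purely illustrative:

```
# hypothetical couenne.opt fragment
# a feasible solution of value 42.5 is known from a previous run
cutoff       42.5
# only accept solutions that improve the incumbent by at least 1e-4
cutoff_decr  1e-4
```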

**delete_redundant** *(string)*: Eliminate redundant variables, which appear in the problem as x_k = x_h.

Default: `yes`

- `no`: Keep redundant variables, making the problem a bit larger
- `yes`: Eliminate redundant variables (the problem will be equivalent, only smaller)

**delta** *(real)*: Multiplier for constraint violation in the switching rule.

(See Eqn. (19) in the implementation paper.)

Default: `1`

**dependency_detection_with_rhs** *(string)*: Indicates if the right hand sides of the constraints should be considered during dependency detection.

Default: `no`

- `no`: only look at gradients
- `yes`: also consider right hand side

**dependency_detector** *(string)*: Indicates which linear solver should be used to detect linearly dependent equality constraints.

The default and available choices depend on how Ipopt has been compiled. This is experimental and does not work well.

Default: `none`

- `ma28`: use MA28
- `mumps`: use MUMPS
- `none`: don't check; no extra work at beginning

**disjcuts_print_level** *(integer)*: Output level for disjunctive cuts in Couenne.

Range: [`-2`, `12`], Default: `0`

**disj_active_cols** *(string)*: Only include violated variable bounds in the Cut Generating LP (CGLP).

This reduces the size of the CGLP, but may produce less efficient cuts.

Default: `no`

Possible values: `no`, `yes`

**disj_active_rows** *(string)*: Only include violated linear inequalities in the CGLP.

This reduces the size of the CGLP, but may produce less efficient cuts.

Default: `no`

Possible values: `no`, `yes`

**disj_cumulative** *(string)*: Add previous disjunctive cut to current CGLP.

When generating disjunctive cuts on a set of disjunctions 1, 2, ..., k, introduce the cut relative to the previous disjunction i-1 in the CGLP used for disjunction i. Notice that, although this makes the generated cut more efficient, it increases the rank of the disjunctive cut.

Default: `no`

Possible values: `no`, `yes`

**disj_depth_level** *(integer)*: Depth of the B&B tree at which to start decreasing the number of objects that generate disjunctions.

This has a similar behavior as log_num_obbt_per_level. A value of -1 means that generation can be done at all nodes.

Range: [`-1`, ∞], Default: `5`

**disj_depth_stop** *(integer)*: Depth of the B&B tree where separation of disjunctive cuts is stopped.

A value of -1 means that generation can be done at all nodes.

Range: [`-1`, ∞], Default: `20`

**disj_init_number** *(integer)*: Maximum number of disjunctions to consider at each iteration.

-1 means no limit.

Range: [`-1`, ∞], Default: `10`

**disj_init_perc** *(real)*: The maximum fraction of all disjunctions currently violated by the problem to consider for generating disjunctions.

Range: [`0`, `1`], Default: `0.5`

**display_stats** *(string)*: Display statistics at the end of the run.

Default: `no`

Possible values: `no`, `yes`

**diverging_iterates_tol** *(real)*: Threshold for maximal value of primal iterates.

If any component of the primal iterates exceeds this value (in absolute terms), the optimization is aborted with the exit message that the iterates seem to be diverging.

Default: `1e+20`

**dual_inf_tol** *(real)*: Desired threshold for the dual infeasibility.

Absolute tolerance on the dual infeasibility. Successful termination requires that the max-norm of the (unscaled) dual infeasibility is less than this threshold.

Default: `1`

**dynamic_def_cutoff_decr** *(string)*: Whether to define the parameter cutoff_decr dynamically.

Default: `no`

Possible values: `no`, `yes`

**enable_dynamic_nlp** *(string)*: Enable dynamic addition of linear and quadratic rows in the NLP.

Default: `no`

Possible values: `no`, `yes`

**enable_lp_implied_bounds** *(string)*: Enable OsiSolverInterface::tightenBounds() – warning: it has caused some trouble to Couenne.

Default: `no`

Possible values: `no`, `yes`

**enable_sos** *(string)*: Use Special Ordered Sets (SOS) as indicated in the MINLP model.

Default: `no`

Possible values: `no`, `yes`

**estimate_select** *(string)*: How the min/max estimates of the subproblems' bounds are used in strong branching.

Default: `normal`

- `normal`: as usual in literature
- `product`: use their product

**eta_phi** *(real)*: Relaxation factor in the Armijo condition.

(See Eqn. (20) in the implementation paper.)

Range: [`0`, `0.5`], Default: `1e-08`

**evaluate_orig_obj_at_resto_trial** *(string)*: Determines if the original objective function should be evaluated at restoration phase trial points.

Setting this option to "yes" makes the restoration phase algorithm evaluate the objective function of the original problem at every trial point encountered during the restoration phase, even if this value is not required. In this way, it is guaranteed that the original objective function can be evaluated without error at all accepted iterates; otherwise the algorithm might fail at a point where the restoration phase accepts an iterate that is good for the restoration phase problem, but not the original problem. On the other hand, if the evaluation of the original objective is expensive, this might be costly.

Default: `yes`

- `no`: skip evaluation
- `yes`: evaluate at every trial point

**expect_infeasible_problem** *(string)*: Enable heuristics to quickly detect an infeasible problem.

This option is meant to activate heuristics that may speed up the infeasibility determination if you expect that there is a good chance for the problem to be infeasible. In the filter line search procedure, the restoration phase is called more quickly than usual, and more reduction in the constraint violation is enforced before the restoration phase is left. If the problem is square, this option is enabled automatically.

Default: `no`

- `no`: the problem is probably feasible
- `yes`: the problem has a good chance to be infeasible

**expect_infeasible_problem_ctol** *(real)*: Threshold for disabling 'expect_infeasible_problem' option.

If the constraint violation becomes smaller than this threshold, the "expect_infeasible_problem" heuristics in the filter line search are disabled. If the problem is square, this option is set to 0.

Default: `0.001`

**expect_infeasible_problem_ytol** *(real)*: Multiplier threshold for activating 'expect_infeasible_problem' option.

If the max-norm of the constraint multipliers becomes larger than this value and "expect_infeasible_problem" is chosen, then the restoration phase is entered.

Default: `1e+08`

**fast_step_computation** *(string)*: Indicates if the linear system should be solved quickly.

If set to yes, the algorithm assumes that the linear system that is solved to obtain the search direction is solved sufficiently well. In that case, no residuals are computed, and the computation of the search direction is a little faster.

Default: `no`

- `no`: Verify solution of linear system by computing residuals.
- `yes`: Trust that linear systems are solved well.

**feasibility_bt** *(string)*: Feasibility-based (cheap) bound tightening (FBBT).

A pre-processing technique to reduce the bounding box, before the generation of linearization cuts. This is a quick and effective way to reduce the solution set, and it is highly recommended to keep it active.

Default: `yes`

Possible values: `no`, `yes`

**feasibility_pump_objective_norm** *(integer)*: Norm of the feasibility pump objective function.

Range: [`1`, `2`], Default: `1`

**feas_pump_convcuts** *(string)*: Separate MILP-feasible, MINLP-infeasible solutions during or after the MILP solver.

Default: `none`

- `external`: Done after the MILP solver, in a Benders-like fashion
- `integrated`: Done within the MILP solver in a branch-and-cut fashion
- `none`: Just proceed to the NLP
- `postcut`: Do one round of cuts and proceed with NLP

**feas_pump_fademult** *(real)*: Decrease/increase rate of multipliers.

A value of 1 keeps the initial multipliers from one call to the next; any value < 1 multiplies ALL of them.

Range: [`0`, `1`], Default: `1`

**feas_pump_heuristic** *(string)*: Apply the nonconvex Feasibility Pump.

An implementation of the Feasibility Pump for nonconvex MINLPs.

Default: `no`

- `no`: never called
- `once`: call it at most once
- `only`: call it exactly once and then exit
- `yes`: called any time Cbc calls heuristics
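A sketch of enabling the pump together with an iteration cap (values illustrative; `feas_pump_iter` is a separate option documented in this section):

```
# hypothetical couenne.opt fragment
feas_pump_heuristic  yes
feas_pump_iter       20
```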

**feas_pump_iter** *(integer)*: Number of iterations in the main Feasibility Pump loop.

-1 means no limit.

Range: [`-1`, ∞], Default: `10`

**feas_pump_level** *(integer)*: Specify the logarithm of the number of feasibility pumps to perform on average for each level of given depth of the tree.

Solve this many NLPs at the nodes for each level of the tree. Nodes are selected randomly. If a given level contains fewer nodes than this number, an NLP is solved for every node. For example, if the parameter is 8, NLPs are solved for all nodes until level 8, then for half the nodes at level 9, a quarter at level 10, and so on. Set to -1 to perform at all nodes.

Range: [`-1`, ∞], Default: `3`

**feas_pump_milpmethod** *(integer)*: How should the integral solution be constructed?

- 0: automatic
- 1: aggressive heuristics, large node limit
- 2: default, node limit
- 3: RENS
- 4: Objective Feasibility Pump
- 5: MINLP rounding heuristic
- 6: rounding
- -1: solve MILP completely

Range: [`-1`, `6`], Default: `0`

**feas_pump_mult_dist_milp** *(real)*: Weight of the distance in the distance function of the MILP problem.

0: neglected; 1: full weight; a in ]0,1[: weight is \(a^k\) where k is the FP iteration; a in ]-1,0[: weight is \(1-|a|^k\).

Range: [`-1`, `1`], Default: `0`
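The fading weight schedule used by this family of options can be sketched in plain Python (an illustration of the formula above, not Couenne code; the function name is made up):

```python
def fp_weight(a: float, k: int) -> float:
    """Weight of a term at Feasibility Pump iteration k for parameter a.

    a = 0 -> term neglected, a = 1 -> full weight,
    0 < a < 1 -> a**k (fading out), -1 < a < 0 -> 1 - |a|**k (fading in).
    """
    if a == 0:
        return 0.0
    if a > 0:
        return a ** k
    return 1.0 - abs(a) ** k

# with a = 0.5 the weight halves at every FP iteration
print([fp_weight(0.5, k) for k in range(4)])   # [1.0, 0.5, 0.25, 0.125]
```

So a positive parameter gradually removes the term from the distance function, while a negative one gradually phases it in.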

**feas_pump_mult_dist_nlp** *(real)*: Weight of the distance in the distance function of the NLP problem.

0: neglected; 1: full weight; a in ]0,1[: weight is \(a^k\) where k is the FP iteration; a in ]-1,0[: weight is \(1-|a|^k\).

Range: [`-1`, `1`], Default: `0`

**feas_pump_mult_hess_milp** *(real)*: Weight of the Hessian in the distance function of the MILP problem.

0: neglected; 1: full weight; a in ]0,1[: weight is \(a^k\) where k is the FP iteration; a in ]-1,0[: weight is \(1-|a|^k\).

Range: [`-1`, `1`], Default: `0`

**feas_pump_mult_hess_nlp** *(real)*: Weight of the Hessian in the distance function of the NLP problem.

Range: [`-1`, `1`], Default: `0`

**feas_pump_mult_objf_milp** *(real)*: Weight of the original objective function in the distance function of the MILP problem.

Range: [`-1`, `1`], Default: `0`

**feas_pump_mult_objf_nlp** *(real)*: Weight of the original objective function in the distance function of the NLP problem.

Range: [`-1`, `1`], Default: `0`

**feas_pump_nseprounds** *(integer)*: Number of rounds of convexification cuts.

Range: [`1`, `100000`], Default: `4`

**feas_pump_poolcomp** *(integer)*: Priority field to compare solutions in the FP pool.

- 0: total number of infeasible objects (integer and nonlinear)
- 1: maximum infeasibility (integer or nonlinear)
- 2: objective value
- 3: compare value of all variables
- 4: compare value of all integers (RECOMMENDED)

Range: [`0`, `4`], Default: `4`

**feas_pump_tabumgt** *(string)*: Retrieval of MILP solutions when the one returned is unsatisfactory.

Default: `pool`

- `cut`: Separate convexification cuts
- `none`: Bail out of feasibility pump
- `perturb`: Randomly perturb unsatisfactory solution
- `pool`: Use a solution pool and replace unsatisfactory solution with the Euclidean-closest in the pool

**feas_pump_usescip** *(string)*: Should SCIP be used to solve the MILPs?

Note that SCIP is only available for GAMS users with a SCIP or an academic GAMS license.

Default: `yes`

- `no`: Use Cbc's branch-and-cut to solve the MILP
- `yes`: Use SCIP's branch-and-cut or heuristics (see feas_pump_milpmethod option) to solve the MILP

**feas_pump_vardist** *(string)*: Distance computed on integer-only or on both types of variables, in different flavors.

Default: `integer`

- `all`: Compute the distance using continuous and integer variables
- `int-postprocess`: Use a post-processing fixed-IP LP to determine a closest-point solution
- `integer`: Only compute the distance based on integer coordinates (use post-processing if numerical errors occur)

**feas_tolerance** *(real)*: Tolerance for constraints/auxiliary variables.

Range: [-∞, ∞], Default: `1e-05`

**filter_margin_fact** *(real)*: Factor determining width of margin for obj-constr-filter adaptive globalization strategy.

When using the adaptive globalization strategy, "obj-constr-filter", sufficient progress for a filter entry is defined as follows: (new obj) < (filter obj) - filter_margin_fact*(new constr-viol) OR (new constr-viol) < (filter constr-viol) - filter_margin_fact*(new constr-viol). For the description of the "kkt-error-filter" option see "filter_max_margin".

Range: [`0`, `1`], Default: `1e-05`

**filter_max_margin** *(real)*: Maximum width of margin in obj-constr-filter adaptive globalization strategy.

Default: `1`

**filter_reset_trigger** *(integer)*: Number of iterations that trigger the filter reset.

If the filter reset heuristic is active and the number of successive iterations in which the last rejected trial step size was rejected because of the filter exceeds this number, the filter is reset.

Range: [`1`, ∞], Default: `5`

**first_hessian_perturbation** *(real)*: Size of first x-s perturbation tried.

The first value tried for the x-s perturbation in the inertia correction scheme. (This is delta_0 in the implementation paper.)

Default: `0.0001`

**first_perc_for_cutoff_decr** *(real)*: The percentage used, when the coefficient of variation is smaller than the threshold, to compute cutoff_decr dynamically.

Range: [-∞, ∞], Default: `-0.02`

**fixed_mu_oracle** *(string)*: Oracle for the barrier parameter when switching to fixed mode.

Determines how the first value of the barrier parameter should be computed when switching to the "monotone mode" in the adaptive strategy. (Only considered if "adaptive" is selected for option "mu_strategy".)

Default: `average_compl`

- `average_compl`: base on current average complementarity
- `loqo`: LOQO's centrality rule
- `probing`: Mehrotra's probing heuristic
- `quality-function`: minimize a quality function

**fixed_variable_treatment** *(string)*: Determines how fixed variables should be handled.

The main difference between those options is that the starting point in the "make_constraint" case still has the fixed variables at their given values, whereas in the case "make_parameter" the functions are always evaluated with the fixed values for those variables. Also, for "relax_bounds", the fixing bound constraints are relaxed (according to "bound_relax_factor"). For both "make_constraint" and "relax_bounds", bound multipliers are computed for the fixed variables.

Default: `make_parameter`

- `make_constraint`: Add equality constraints fixing variables
- `make_parameter`: Remove fixed variable from optimization variables
- `relax_bounds`: Relax fixing bound constraints

**fixpoint_bt** *(integer)*: The frequency (in terms of nodes) at which Fix Point Bound Tightening is performed.

A frequency of 0 (default) means it is never performed. Any positive number n instructs Couenne to perform it at every n nodes of the B&B tree. A negative number -n means that it should be attempted at the root node, and if successful it can be repeated at every n nodes, otherwise it is stopped altogether.

Range: [`-99`, ∞], Default: `0`

**fixpoint_bt_model** *(string)*: Choose whether to add an extended fixpoint LP model or a more compact one.

The "extended" option is for debugging purposes; the compact formulation is equivalent and should be used.

Default: `compact`

- `compact`: Compact equivalent model obtained by projecting lower/upper bounds of the rhs
- `extended`: Extended model with variables for lower/upper bounds of right-hand sides (see paper by Belotti, Cafieri, Lee, Liberti)

**flow_covers_cuts** *(integer)*: Frequency k (in terms of nodes) for generating flow cover cuts in branch-and-cut.

See option `2mir_cuts` for the meaning of k.

Range: [`-100`, ∞], Default: `0`

**gamma_phi** *(real)*: Relaxation factor in the filter margin for the barrier function.

(See Eqn. (18a) in the implementation paper.)

Range: [`0`, `1`], Default: `1e-08`

**gamma_theta** *(real)*: Relaxation factor in the filter margin for the constraint violation.

(See Eqn. (18b) in the implementation paper.)

Range: [`0`, `1`], Default: `1e-05`

**Gomory_cuts** *(integer)*: Frequency k (in terms of nodes) for generating Gomory cuts in branch-and-cut.

See option `2mir_cuts` for the meaning of k.

Range: [`-100`, ∞], Default: `0`

**hessian_approximation** *(string)*: Indicates what Hessian information is to be used.

This determines which kind of information for the Hessian of the Lagrangian function is used by the algorithm.

Default: `exact`

- `exact`: Use second derivatives provided by the NLP.
- `limited-memory`: Perform a limited-memory quasi-Newton approximation
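For models where exact second derivatives are expensive, the quasi-Newton variant can be requested in an options file (illustrative fragment, usual `name value` convention):

```
# hypothetical couenne.opt fragment
hessian_approximation  limited-memory
```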

**hessian_approximation_space** *(string)*: Indicates in which subspace the Hessian information is to be approximated.

Default: `nonlinear-variables`

- `all-variables`: in space of all variables (without slacks)
- `nonlinear-variables`: only in space of nonlinear variables

**heuristic_dive_fractional** *(string)*: If yes, runs the Dive Fractional heuristic.

Default: `no`

Possible values: `no`, `yes`

**heuristic_dive_MIP_fractional** *(string)*: If yes, runs the Dive MIP Fractional heuristic.

Default: `no`

Possible values: `no`, `yes`

**heuristic_dive_MIP_vectorLength** *(string)*: If yes, runs the Dive MIP VectorLength heuristic.

Default: `no`

Possible values: `no`, `yes`

**heuristic_dive_vectorLength** *(string)*: If yes, runs the Dive VectorLength heuristic.

Default: `no`

Possible values: `no`, `yes`

**heuristic_feasibility_pump** *(string)*: Whether the heuristic feasibility pump should be used.

Default: `no`

Possible values: `no`, `yes`

**heuristic_RINS** *(string)*: If yes, runs the RINS heuristic.

Default: `no`

Possible values: `no`, `yes`

**honor_original_bounds** *(string)*: Indicates whether final points should be projected into original bounds.

Ipopt might relax the bounds during the optimization (see, e.g., option "bound_relax_factor"). This option determines whether the final point should be projected back into the user-provided original bounds after the optimization.

Default: `yes`

- `no`: Leave final point unchanged
- `yes`: Project final point back into original bounds

**inf_pr_output** *(string)*: Determines what value is printed in the 'inf_pr' output column.

Ipopt works with a reformulation of the original problem, where slacks are introduced and the problem might have been scaled. The choice "internal" prints out the constraint violation of this formulation. With "original" the true constraint violation in the original NLP is printed.

Default: `original`

- `internal`: max-norm of violation of internal equality constraints
- `original`: maximal constraint violation in original NLP

**integer_tolerance** *(real)*: Set integer tolerance.

Any number within that value of an integer is considered integer.

Default: `1e-06`

**int_var_priority** *(integer)*: Priority of integer variable branching.

When branching, this is compared to the priority of continuous variables, whose priority is given by cont_var_priority, and SOS, whose priority is 10. Higher values mean smaller priority.

Range: [`1`, ∞], Default: `98`

**iteration_limit** *(integer)*: Set the cumulative maximum number of iterations in the algorithm used to process the continuous relaxations at nodes of the branch-and-bound.

A value of 0 deactivates the option.

Default: `maxint`

**iterative_rounding_aggressiveness** *(integer)*: Aggressiveness of the Iterative Rounding heuristic.

Set the aggressiveness of the heuristic, i.e., how many iterations should be run, and with which parameters. The maximum time can be overridden by setting the _time and _time_firstcall options. 0 = non aggressive, 1 = standard (default), 2 = aggressive.

Range: [`0`, `2`], Default: `1`

**iterative_rounding_base_lbrhs** *(integer)*: Base rhs of the local branching constraint for Iterative Rounding.

Base rhs for the local branching constraint that defines a neighbourhood of the local incumbent. The base rhs is modified by the algorithm according to variable bounds. This corresponds to k' in the paper.

Default: `15`

**iterative_rounding_heuristic** *(string)*: Whether to use the Iterative Rounding heuristic.

If enabled, a heuristic based on Iterative Rounding is used to find feasible solutions for the problem. The heuristic may take some time, but usually finds good solutions. Recommended if you want good upper bounds and have Cplex; not recommended if you do not have Cplex.

Default: `no`

Possible values: `no`, `yes`

**iterative_rounding_num_fir_points** *(integer)*: Maximum number of points rounded at the beginning of Iterative Rounding.

Number of different points (obtained by solving a log-barrier problem) that the heuristic will try to round at most, during its execution at the root node (i.e., the F-IR heuristic).

Range: [`1`, ∞], Default: `5`

**iterative_rounding_omega** *(real)*: Omega parameter of the Iterative Rounding heuristic.

Set the omega parameter of the heuristic, which represents a multiplicative factor for the minimum log-barrier parameter of the NLP which is solved to obtain feasible points. This corresponds to \(\omega'\) in the paper.

Range: [`0`, `1`], Default: `0.2`

**iterative_rounding_time** *(real)*: Specify the maximum time allowed for the Iterative Rounding heuristic.

Maximum CPU time employed by the Iterative Rounding heuristic; if no solution is found in this time, failure is reported. This overrides the CPU time set by the aggressiveness option if positive.

Range: [-∞, ∞], Default: `-1`

**iterative_rounding_time_firstcall** *(real)*: Specify the maximum time allowed for the Iterative Rounding heuristic when no feasible solution is known.

Maximum CPU time employed by the Iterative Rounding heuristic when no solution is known; if no solution is found in this time, failure is reported. This overrides the CPU time set by the aggressiveness option if positive.

Range: [-∞, ∞], Default: `-1`

**jacobian_regularization_exponent** *(real)*: Exponent for mu in the regularization for rank-deficient constraint Jacobians. ↵

(This is kappa_c in the implementation paper.)

Default:

`0.25`

**jacobian_regularization_value** *(real)*: Size of the regularization for rank-deficient constraint Jacobians. ↵

(This is bar delta_c in the implementation paper.)

Default:

`1e-08`

**jac_c_constant** *(string)*: Indicates whether all equality constraints are linear ↵

Activating this option will cause Ipopt to ask for the Jacobian of the equality constraints only once from the NLP and reuse this information later.

Default:

`no`

value meaning `no`

Don't assume that all equality constraints are linear `yes`

Assume that equality constraints Jacobian are constant

**jac_d_constant** *(string)*: Indicates whether all inequality constraints are linear ↵

Activating this option will cause Ipopt to ask for the Jacobian of the inequality constraints only once from the NLP and reuse this information later.

Default:

`no`

value meaning `no`

Don't assume that all inequality constraints are linear `yes`

Assume that equality constraints Jacobian are constant

**kappa_d** *(real)*: Weight for linear damping term (to handle one-sided bounds). ↵

(see Section 3.7 in implementation paper.)

Default:

`1e-05`

**kappa_sigma** *(real)*: Factor limiting the deviation of dual variables from primal estimates. ↵

If the dual variables deviate from their primal estimates, a correction is performed. (See Eqn. (16) in the implementation paper.) Setting the value to less than 1 disables the correction.

Default:

`1e+10`

**kappa_soc** *(real)*: Factor in the sufficient reduction rule for second order correction. ↵

This option determines how much a second order correction step must reduce the constraint violation so that further correction steps are attempted. (See Step A-5.9 of Algorithm A in the implementation paper.)

Default:

`0.99`

**least_square_init_duals** *(string)*: Least square initialization of all dual variables ↵

If set to yes, Ipopt tries to compute least-square multipliers (considering ALL dual variables). If successful, the bound multipliers are possibly corrected to be at least bound_mult_init_val. This might be useful if the user doesn't know anything about the starting point, or for solving an LP or QP. This overwrites option "bound_mult_init_method".

Default:

`no`

value meaning `no`

use bound_mult_init_val and least-square equality constraint multipliers `yes`

overwrite user-provided point with least-square estimates

**least_square_init_primal** *(string)*: Least square initialization of the primal variables ↵

If set to yes, Ipopt ignores the user provided point and solves a least square problem for the primal variables (x and s), to fit the linearized equality and inequality constraints. This might be useful if the user doesn't know anything about the starting point, or for solving an LP or QP.

Default: `no`

| value | meaning |
|-------|---------|
| `no`  | take user-provided point |
| `yes` | overwrite user-provided point with least-square estimates |

**lift_and_project_cuts** *(integer)*: Frequency k (in terms of nodes) for generating lift_and_project_cuts cuts in branch-and-cut. ↵

See option `2mir_cuts` for the meaning of k.

Range: [`-100`, ∞], Default: `0`

**limited_memory_aug_solver** *(string)*: Strategy for solving the augmented system for low-rank Hessian. ↵

Default: `sherman-morrison`

| value | meaning |
|-------|---------|
| `extended` | use an extended augmented system |
| `sherman-morrison` | use Sherman-Morrison formula |

**limited_memory_initialization** *(string)*: Initialization strategy for the limited memory quasi-Newton approximation. ↵

Determines how the diagonal Matrix B_0 as the first term in the limited memory approximation should be computed.

Default: `scalar1`

| value | meaning |
|-------|---------|
| `constant` | sigma = limited_memory_init_val |
| `scalar1` | sigma = s^Ty/s^Ts |
| `scalar2` | sigma = y^Ty/s^Ty |
| `scalar3` | arithmetic average of scalar1 and scalar2 |
| `scalar4` | geometric average of scalar1 and scalar2 |
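The scalar1–scalar4 choices for the initial scaling sigma (so that B0 = sigma*I) can be sketched as follows. This is an illustrative reading of the option text using the most recent quasi-Newton pair s (step) and y (gradient difference), not Ipopt's actual implementation:

```python
# Sketch of the B0 = sigma * I scaling choices selected by
# limited_memory_initialization, given the latest pair (s, y).
def b0_sigma(s, y, strategy="scalar1", init_val=1.0):
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    if strategy == "constant":
        return init_val                      # limited_memory_init_val
    s1 = dot(s, y) / dot(s, s)               # scalar1: s^Ty / s^Ts
    s2 = dot(y, y) / dot(s, y)               # scalar2: y^Ty / s^Ty
    if strategy == "scalar1":
        return s1
    if strategy == "scalar2":
        return s2
    if strategy == "scalar3":
        return 0.5 * (s1 + s2)               # arithmetic average
    if strategy == "scalar4":
        return (s1 * s2) ** 0.5              # geometric average
    raise ValueError(strategy)
```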

**limited_memory_init_val** *(real)*: Value for B0 in low-rank update. ↵

The starting matrix in the low rank update, B0, is chosen to be this multiple of the identity in the first iteration (when no updates have been performed yet), and is constantly chosen as this value, if "limited_memory_initialization" is "constant".

Default: `1`

**limited_memory_init_val_max** *(real)*: Upper bound on value for B0 in low-rank update. ↵

The starting matrix in the low rank update, B0, is chosen to be this multiple of the identity in the first iteration (when no updates have been performed yet), and is constantly chosen as this value, if "limited_memory_initialization" is "constant".

Default: `1e+08`

**limited_memory_init_val_min** *(real)*: Lower bound on value for B0 in low-rank update. ↵

The starting matrix in the low rank update, B0, is chosen to be this multiple of the identity in the first iteration (when no updates have been performed yet), and is constantly chosen as this value, if "limited_memory_initialization" is "constant".

Default: `1e-08`

**limited_memory_max_history** *(integer)*: Maximum size of the history for the limited quasi-Newton Hessian approximation. ↵

This option determines the number of most recent iterations that are taken into account for the limited-memory quasi-Newton approximation.

Default: `6`

**limited_memory_max_skipping** *(integer)*: Threshold for successive iterations where update is skipped. ↵

If the update is skipped more than this number of successive iterations, the quasi-Newton approximation is reset.

Range: [`1`, ∞], Default: `2`

**limited_memory_special_for_resto** *(string)*: Determines if the quasi-Newton updates should be special during the restoration phase. ↵

Until Nov 2010, Ipopt used a special update during the restoration phase, but it turned out that this does not work well. The new default uses the regular update procedure and it improves results. If for some reason you want to get back to the original update, set this option to "yes".

Default: `no`

| value | meaning |
|-------|---------|
| `no`  | use the same update as in regular iterations |
| `yes` | use a special update during restoration phase |

**limited_memory_update_type** *(string)*: Quasi-Newton update formula for the limited memory approximation. ↵

Determines which update formula is to be used for the limited-memory quasi-Newton approximation.

Default: `bfgs`

| value | meaning |
|-------|---------|
| `bfgs` | BFGS update (with skipping) |
| `sr1` | SR1 (not working well) |

**linear_scaling_on_demand** *(string)*: Flag indicating that linear scaling is only done if it seems required. ↵

This option is only important if a linear scaling method (e.g., mc19) is used. If you choose "no", then the scaling factors are computed for every linear system from the start. This can be quite expensive. Choosing "yes" means that the algorithm will start the scaling method only when the solutions to the linear system seem not good, and then use it until the end.

Default: `yes`

| value | meaning |
|-------|---------|
| `no`  | Always scale the linear system. |
| `yes` | Start using linear system scaling if solutions seem not good. |

**linear_solver** *(string)*: Linear solver used for step computations. ↵

Determines which linear algebra package is to be used for the solution of the augmented linear system (for obtaining the search directions). Note, the code must have been compiled with the linear solver you want to choose. Depending on your Ipopt installation, not all options are available.

Default: `ma27`

| value | meaning |
|-------|---------|
| `ma27` | use the Harwell routine MA27 |
| `ma57` | use the Harwell routine MA57 |
| `ma77` | use the Harwell routine HSL_MA77 |
| `ma86` | use the Harwell routine HSL_MA86 |
| `ma97` | use the Harwell routine HSL_MA97 |
| `mumps` | use the MUMPS package |
| `pardiso` | use the Pardiso package |
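As a usage sketch: under GAMS, options such as these are normally placed in a solver option file (conventionally named `couenne.opt` and activated by setting `optfile` in the model; the exact file name and mechanism here follow standard GAMS conventions and should be checked against your installation). For example, to switch the linear solver and its pivot tolerance:

```
* hypothetical couenne.opt fragment
linear_solver ma57
ma57_pivtol 1e-6
```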

**linear_system_scaling** *(string)*: Method for scaling the linear system. ↵

Determines the method used to compute symmetric scaling factors for the augmented system (see also the "linear_scaling_on_demand" option). This scaling is independent of the NLP problem scaling. By default, MC19 is only used if MA27 or MA57 are selected as linear solvers. This value is only available if Ipopt has been compiled with MC19.

Default: `mc19`

| value | meaning |
|-------|---------|
| `mc19` | use the Harwell routine MC19 |
| `none` | no scaling will be performed |
| `slack-based` | use the slack values |

**line_search_method** *(string)*: Globalization method used in backtracking line search ↵

Only the "filter" choice is officially supported. But sometimes, good results might be obtained with the other choices.

Default: `filter`

| value | meaning |
|-------|---------|
| `cg-penalty` | Chen-Goldfarb penalty function |
| `filter` | Filter method |
| `penalty` | Standard penalty function |

**local_branching_heuristic** *(string)*: Apply local branching heuristic ↵

A local-branching based heuristic is used to find feasible solutions.

Default: `no`. Values: `no`, `yes`.

**local_optimization_heuristic** *(string)*: Search for local solutions of MINLPs ↵

If enabled, a heuristic based on Ipopt is used to find feasible solutions for the problem. It is highly recommended that this option is left enabled, as it would be difficult to find feasible solutions otherwise.

Default: `yes`. Values: `no`, `yes`.

**log_num_abt_per_level** *(integer)*: Specify the frequency (in terms of nodes) for aggressive bound tightening. ↵

If -1, apply at every node (expensive!). If 0, apply at root node only. If k≥0, apply with probability 2^(k - level), level being the current depth of the B&B tree.

Range: [`-1`, ∞], Default: `2`
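The 2^(k − level) rule used by this option (and by log_num_obbt_per_level below) can be sketched as follows; this is an illustrative reading of the option text, not Couenne's actual code. Since the probability is capped at 1, every node down to level k gets the treatment:

```python
# Illustrative sketch of the "apply with probability 2^(k - level)" rule
# for log_num_abt_per_level / log_num_obbt_per_level.
def apply_probability(k: int, level: int) -> float:
    if k == -1:          # apply at every node (expensive!)
        return 1.0
    if k == 0:           # apply at root node only
        return 1.0 if level == 0 else 0.0
    return min(1.0, 2.0 ** (k - level))
```

With k=2, for instance, bound tightening runs at every node down to level 2, at half the nodes on level 3, a quarter on level 4, and so on.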

**log_num_local_optimization_per_level** *(integer)*: Specify the logarithm of the number of local optimizations to perform on average for each level of given depth of the tree. ↵

Solve NLPs at randomly selected nodes on each level of the tree. If a given level has fewer nodes than this number, NLPs are solved at every node of that level. For example, if the parameter is 8, NLPs are solved at all nodes up to level 8, then at half the nodes at level 9, a quarter at level 10, and so on. A value of -1 means NLPs are solved at all nodes.

Range: [`-1`, ∞], Default: `2`

**log_num_obbt_per_level** *(integer)*: Specify the frequency (in terms of nodes) for optimality-based bound tightening. ↵

If -1, apply at every node (expensive!). If 0, apply at root node only. If k≥0, apply with probability 2^(k - level), level being the current depth of the B&B tree.

Range: [`-1`, ∞], Default: `1`

**lp_log_level** *(integer)*: specify LP log level. ↵

Set the level of output of the linear programming sub-solver in B-Hyb or B-QG: 0 - none, 1 - minimal, 2 - normal low, 3 - normal high, 4 - verbose.

Range: [`0`, `4`], Default: `0`

**lp_solver** *(string)*: Linear Programming solver for the linearization ↵

Default: `clp`

| value | meaning |
|-------|---------|
| `clp` | Use the COIN-OR Open Source solver CLP |
| `cplex` | Use the commercial solver Cplex (license is needed) |
| `gurobi` | Use the commercial solver Gurobi (license is needed) |
| `soplex` | Use the freely available Soplex |
| `xpress-mp` | Use the commercial solver Xpress MP (license is needed) |

**ma27_ignore_singularity** *(string)*: Enables MA27's ability to solve a linear system even if the matrix is singular. ↵

Setting this option to "yes" means that Ipopt will call MA27 to compute solutions for right hand sides, even if MA27 has detected that the matrix is singular (but is still able to solve the linear system). In some cases this might be better than using Ipopt's heuristic of small perturbation of the lower diagonal of the KKT matrix.

Default: `no`

| value | meaning |
|-------|---------|
| `no`  | Don't have MA27 solve singular systems |
| `yes` | Have MA27 solve singular systems |

**ma27_la_init_factor** *(real)*: Real workspace memory for MA27. ↵

The initial real workspace memory = la_init_factor * memory required by unfactored system. Ipopt will increase the workspace size by meminc_factor if required. This option is only available if Ipopt has been compiled with MA27.

Range: [`1`, ∞], Default: `5`

**ma27_liw_init_factor** *(real)*: Integer workspace memory for MA27. ↵

The initial integer workspace memory = liw_init_factor * memory required by unfactored system. Ipopt will increase the workspace size by meminc_factor if required. This option is only available if Ipopt has been compiled with MA27.

Range: [`1`, ∞], Default: `5`

**ma27_meminc_factor** *(real)*: Increment factor for workspace size for MA27. ↵

If the integer or real workspace is not large enough, Ipopt will increase its size by this factor. This option is only available if Ipopt has been compiled with MA27.

Range: [`1`, ∞], Default: `2`

**ma27_pivtol** *(real)*: Pivot tolerance for the linear solver MA27. ↵

A smaller number pivots for sparsity, a larger number pivots for stability. This option is only available if Ipopt has been compiled with MA27.

Range: [`0`, `1`], Default: `1e-08`

**ma27_pivtolmax** *(real)*: Maximum pivot tolerance for the linear solver MA27. ↵

Ipopt may increase pivtol as high as pivtolmax to get a more accurate solution to the linear system. This option is only available if Ipopt has been compiled with MA27.

Range: [`0`, `1`], Default: `0.0001`

**ma27_skip_inertia_check** *(string)*: Always pretend inertia is correct. ↵

Setting this option to "yes" essentially disables the inertia check. This makes the algorithm non-robust and prone to failure, but it might give some insight into the necessity of inertia control.

Default: `no`

| value | meaning |
|-------|---------|
| `no`  | check inertia |
| `yes` | skip inertia check |

**ma28_pivtol** *(real)*: Pivot tolerance for linear solver MA28. ↵

This is used when MA28 tries to find the dependent constraints.

Range: [`0`, `1`], Default: `0.01`

**ma57_automatic_scaling** *(string)*: Controls MA57 automatic scaling ↵

This option controls the internal scaling option of MA57. For higher reliability of the MA57 solver, you may want to set this option to yes. This is ICNTL(15) in MA57.

Default: `no`

| value | meaning |
|-------|---------|
| `no`  | Do not scale the linear system matrix |
| `yes` | Scale the linear system matrix |

**ma57_block_size** *(integer)*: Controls block size used by Level 3 BLAS in MA57BD ↵

This is ICNTL(11) in MA57.

Range: [`1`, ∞], Default: `16`

**ma57_node_amalgamation** *(integer)*: Node amalgamation parameter ↵

This is ICNTL(12) in MA57.

Range: [`1`, ∞], Default: `16`

**ma57_pivot_order** *(integer)*: Controls pivot order in MA57 ↵

This is ICNTL(6) in MA57.

Range: [`0`, `5`], Default: `5`

**ma57_pivtol** *(real)*: Pivot tolerance for the linear solver MA57. ↵

A smaller number pivots for sparsity, a larger number pivots for stability. This option is only available if Ipopt has been compiled with MA57.

Range: [`0`, `1`], Default: `1e-08`

**ma57_pivtolmax** *(real)*: Maximum pivot tolerance for the linear solver MA57. ↵

Ipopt may increase pivtol as high as ma57_pivtolmax to get a more accurate solution to the linear system. This option is only available if Ipopt has been compiled with MA57.

Range: [`0`, `1`], Default: `0.0001`

**ma57_pre_alloc** *(real)*: Safety factor for work space memory allocation for the linear solver MA57. ↵

If 1 is chosen, the suggested amount of work space is used. However, choosing a larger number might avoid reallocation if the suggested values do not suffice. This option is only available if Ipopt has been compiled with MA57.

Range: [`1`, ∞], Default: `1.05`

**ma57_small_pivot_flag** *(integer)*: If set to 1, then when small entries defined by CNTL(2) are detected they are removed and the corresponding pivots placed at the end of the factorization. This can be particularly efficient if the matrix is highly rank deficient. ↵

This is ICNTL(16) in MA57.

Range: [`0`, `1`], Default: `0`

**ma77_buffer_lpage** *(integer)*: Number of scalars per MA77 buffer page ↵

Number of scalars per an in-core buffer in the out-of-core solver MA77. Must be at most ma77_file_size.

Range: [`1`, ∞], Default: `4096`

**ma77_buffer_npage** *(integer)*: Number of pages that make up MA77 buffer ↵

Number of pages of size buffer_lpage that exist in-core for the out-of-core solver MA77.

Range: [`1`, ∞], Default: `1600`

**ma77_file_size** *(integer)*: Target size of each temporary file for MA77, scalars per type ↵

MA77 uses many temporary files; this option controls the size of each one. It is measured in the number of entries (int or double), NOT bytes.

Range: [`1`, ∞], Default: `2097152`

**ma77_maxstore** *(integer)*: Maximum storage size for MA77 in-core mode ↵

If greater than zero, the maximum size of factors stored in core before out-of-core mode is invoked.

Default: `0`

**ma77_nemin** *(integer)*: Node Amalgamation parameter ↵

Two nodes in elimination tree are merged if result has fewer than ma77_nemin variables.

Range: [`1`, ∞], Default: `8`

**ma77_order** *(string)*: Controls type of ordering used by HSL_MA77 ↵

This option controls ordering for the solver HSL_MA77.

Default: `metis`

| value | meaning |
|-------|---------|
| `amd` | Use the HSL_MC68 approximate minimum degree algorithm |
| `metis` | Use the MeTiS nested dissection algorithm (if available) |

**ma77_print_level** *(integer)*: Debug printing level for the linear solver MA77 ↵

Range: [-∞, ∞], Default: `-1`

**ma77_small** *(real)*: Zero Pivot Threshold ↵

Any pivot less than ma77_small is treated as zero.

Default: `1e-20`

**ma77_static** *(real)*: Static Pivoting Threshold ↵

See MA77 documentation. Either ma77_static=0.0 or ma77_static>ma77_small. ma77_static=0.0 disables static pivoting.

Default: `0`

**ma77_u** *(real)*: Pivoting Threshold ↵

See MA77 documentation.

Range: [`0`, `0.5`], Default: `1e-08`

**ma77_umax** *(real)*: Maximum Pivoting Threshold ↵

Maximum value to which u will be increased to improve quality.

Range: [`0`, `0.5`], Default: `0.0001`

**ma86_nemin** *(integer)*: Node Amalgamation parameter ↵

Two nodes in elimination tree are merged if result has fewer than ma86_nemin variables.

Range: [`1`, ∞], Default: `32`

**ma86_order** *(string)*: Controls type of ordering used by HSL_MA86 ↵

This option controls ordering for the solver HSL_MA86.

Default: `auto`

| value | meaning |
|-------|---------|
| `amd` | Use the HSL_MC68 approximate minimum degree algorithm |
| `auto` | Try both AMD and MeTiS, pick best |
| `metis` | Use the MeTiS nested dissection algorithm (if available) |

**ma86_print_level** *(integer)*: Debug printing level for the linear solver MA86 ↵

Range: [-∞, ∞], Default: `-1`

**ma86_scaling** *(string)*: Controls scaling of matrix ↵

This option controls scaling for the solver HSL_MA86.

Default: `mc64`

| value | meaning |
|-------|---------|
| `mc64` | Scale linear system matrix using MC64 |
| `mc77` | Scale linear system matrix using MC77 [1,3,0] |
| `none` | Do not scale the linear system matrix |

**ma86_small** *(real)*: Zero Pivot Threshold ↵

Any pivot less than ma86_small is treated as zero.

Default: `1e-20`

**ma86_static** *(real)*: Static Pivoting Threshold ↵

See MA86 documentation. Either ma86_static=0.0 or ma86_static>ma86_small. ma86_static=0.0 disables static pivoting.

Default: `0`

**ma86_u** *(real)*: Pivoting Threshold ↵

See MA86 documentation.

Range: [`0`, `0.5`], Default: `1e-08`

**ma86_umax** *(real)*: Maximum Pivoting Threshold ↵

Maximum value to which u will be increased to improve quality.

Range: [`0`, `0.5`], Default: `0.0001`

**ma97_nemin** *(integer)*: Node Amalgamation parameter ↵

Two nodes in elimination tree are merged if result has fewer than ma97_nemin variables.

Range: [`1`, ∞], Default: `8`

**ma97_order** *(string)*: Controls type of ordering used by HSL_MA97 ↵

Default: `auto`

| value | meaning |
|-------|---------|
| `amd` | Use the HSL_MC68 approximate minimum degree algorithm |
| `auto` | Use HSL_MA97 heuristic to guess best of AMD and METIS |
| `best` | Try both AMD and MeTiS, pick best |
| `matched-amd` | Use the HSL_MC80 matching based ordering with AMD |
| `matched-auto` | Use the HSL_MC80 matching with heuristic choice of AMD or METIS |
| `matched-metis` | Use the HSL_MC80 matching based ordering with METIS |
| `metis` | Use the MeTiS nested dissection algorithm |

**ma97_print_level** *(integer)*: Debug printing level for the linear solver MA97 ↵

Range: [-∞, ∞], Default: `0`

**ma97_scaling** *(string)*: Specifies strategy for scaling in HSL_MA97 linear solver ↵

Default: `dynamic`

| value | meaning |
|-------|---------|
| `dynamic` | Dynamically select scaling according to rules specified by ma97_scalingX and ma97_switchX options. |
| `mc30` | Scale all linear system matrices using MC30 |
| `mc64` | Scale all linear system matrices using MC64 |
| `mc77` | Scale all linear system matrices using MC77 [1,3,0] |
| `none` | Do not scale the linear system matrix |

**ma97_scaling1** *(string)*: First scaling. ↵

If ma97_scaling=dynamic, this scaling is used according to the trigger ma97_switch1. If ma97_switch2 is triggered it is disabled.

Default: `mc64`

| value | meaning |
|-------|---------|
| `mc30` | Scale linear system matrix using MC30 |
| `mc64` | Scale linear system matrix using MC64 |
| `mc77` | Scale linear system matrix using MC77 [1,3,0] |
| `none` | No scaling |

**ma97_scaling2** *(string)*: Second scaling. ↵

If ma97_scaling=dynamic, this scaling is used according to the trigger ma97_switch2. If ma97_switch3 is triggered it is disabled.

Default: `mc64`

| value | meaning |
|-------|---------|
| `mc30` | Scale linear system matrix using MC30 |
| `mc64` | Scale linear system matrix using MC64 |
| `mc77` | Scale linear system matrix using MC77 [1,3,0] |
| `none` | No scaling |

**ma97_scaling3** *(string)*: Third scaling. ↵

If ma97_scaling=dynamic, this scaling is used according to the trigger ma97_switch3.

Default: `mc64`

| value | meaning |
|-------|---------|
| `mc30` | Scale linear system matrix using MC30 |
| `mc64` | Scale linear system matrix using MC64 |
| `mc77` | Scale linear system matrix using MC77 [1,3,0] |
| `none` | No scaling |

**ma97_small** *(real)*: Zero Pivot Threshold ↵

Any pivot less than ma97_small is treated as zero.

Default: `1e-20`

**ma97_solve_blas3** *(string)*: Controls if blas2 or blas3 routines are used for solve ↵

Default: `no`

| value | meaning |
|-------|---------|
| `no`  | Use BLAS2 (faster, some implementations bit incompatible) |
| `yes` | Use BLAS3 (slower) |

**ma97_switch1** *(string)*: First switch, determine when ma97_scaling1 is enabled. ↵

If ma97_scaling=dynamic, ma97_scaling1 is enabled according to this condition. If ma97_switch2 occurs this option is henceforth ignored.

Default: `od_hd_reuse`

| value | meaning |
|-------|---------|
| `at_start` | Scaling to be used from the very start. |
| `at_start_reuse` | Scaling to be used on first iteration, then reused thereafter. |
| `high_delay` | Scaling to be used after more than 0.05*n delays are present |
| `high_delay_reuse` | Scaling to be used only when the previous iteration created more than 0.05*n additional delays, otherwise reuse scaling from previous iteration |
| `never` | Scaling is never enabled. |
| `od_hd` | Combination of on_demand and high_delay |
| `od_hd_reuse` | Combination of on_demand_reuse and high_delay_reuse |
| `on_demand` | Scaling to be used after Ipopt requests an improved solution (i.e. iterative refinement has failed). |
| `on_demand_reuse` | As on_demand, but reuse scaling from previous iteration |

**ma97_switch2** *(string)*: Second switch, determine when ma97_scaling2 is enabled. ↵

If ma97_scaling=dynamic, ma97_scaling2 is enabled according to this condition. If ma97_switch3 occurs this option is henceforth ignored.

Default: `never`

| value | meaning |
|-------|---------|
| `at_start` | Scaling to be used from the very start. |
| `at_start_reuse` | Scaling to be used on first iteration, then reused thereafter. |
| `high_delay` | Scaling to be used after more than 0.05*n delays are present |
| `high_delay_reuse` | Scaling to be used only when the previous iteration created more than 0.05*n additional delays, otherwise reuse scaling from previous iteration |
| `never` | Scaling is never enabled. |
| `od_hd` | Combination of on_demand and high_delay |
| `od_hd_reuse` | Combination of on_demand_reuse and high_delay_reuse |
| `on_demand` | Scaling to be used after Ipopt requests an improved solution (i.e. iterative refinement has failed). |
| `on_demand_reuse` | As on_demand, but reuse scaling from previous iteration |

**ma97_switch3** *(string)*: Third switch, determine when ma97_scaling3 is enabled. ↵

If ma97_scaling=dynamic, ma97_scaling3 is enabled according to this condition.

Default: `never`

| value | meaning |
|-------|---------|
| `at_start` | Scaling to be used from the very start. |
| `at_start_reuse` | Scaling to be used on first iteration, then reused thereafter. |
| `high_delay` | Scaling to be used after more than 0.05*n delays are present |
| `high_delay_reuse` | Scaling to be used only when the previous iteration created more than 0.05*n additional delays, otherwise reuse scaling from previous iteration |
| `never` | Scaling is never enabled. |
| `od_hd` | Combination of on_demand and high_delay |
| `od_hd_reuse` | Combination of on_demand_reuse and high_delay_reuse |
| `on_demand` | Scaling to be used after Ipopt requests an improved solution (i.e. iterative refinement has failed). |
| `on_demand_reuse` | As on_demand, but reuse scaling from previous iteration |

**ma97_u** *(real)*: Pivoting Threshold ↵

See MA97 documentation.

Range: [`0`, `0.5`], Default: `1e-08`

**ma97_umax** *(real)*: Maximum Pivoting Threshold ↵

See MA97 documentation.

Range: [`0`, `0.5`], Default: `0.0001`

**maxmin_crit_have_sol** *(real)*: Weight towards the minimum of the lower and upper branching estimates when a solution has been found. ↵

Range: [`0`, `1`], Default: `0.1`

**maxmin_crit_no_sol** *(real)*: Weight towards the minimum of the lower and upper branching estimates when no solution has been found yet. ↵

Range: [`0`, `1`], Default: `0.7`

**max_consecutive_failures** *(integer)*: (temporarily removed) Number \(n\) of consecutive unsolved problems before aborting a branch of the tree. ↵

When \(n > 0\), continue exploring a branch of the tree until \(n\) consecutive problems in the branch are unsolved (we call unsolved a problem for which Ipopt can not guarantee optimality within the specified tolerances).

Default: `10`

**max_consecutive_infeasible** *(integer)*: Number of consecutive infeasible subproblems before aborting a branch. ↵

Will continue exploring a branch of the tree until "max_consecutive_infeasible" consecutive problems are declared locally infeasible by the NLP sub-solver.

Default: `0`

**max_cpu_time** *(real)*: Maximum number of CPU seconds. ↵

A limit on CPU seconds that Ipopt can use to solve one problem. If during the convergence check this limit is exceeded, Ipopt will terminate with a corresponding error message.

Default: `1e+06`

**max_fbbt_iter** *(integer)*: Number of FBBT iterations before stopping even with tightened bounds. ↵

Set to -1 to impose no upper limit

Range: [`-1`, ∞], Default: `3`

**max_filter_resets** *(integer)*: Maximal allowed number of filter resets ↵

A positive number enables a heuristic that resets the filter whenever, in more than "filter_reset_trigger" successive iterations, the last trial step size was rejected because of the filter. This option determines the maximal number of resets that are allowed to take place.

Default: `5`

**max_hessian_perturbation** *(real)*: Maximum value of regularization parameter for handling negative curvature. ↵

In order to guarantee that the search directions are indeed proper descent directions, Ipopt requires that the inertia of the (augmented) linear system for the step computation has the correct number of negative and positive eigenvalues. The idea is that this guides the algorithm away from maximizers and makes Ipopt more likely to converge to first order optimal points that are minimizers. If the inertia is not correct, a multiple of the identity matrix is added to the Hessian of the Lagrangian in the augmented system. This parameter gives the maximum value of the regularization parameter. If a regularization of that size is not enough, the algorithm skips this iteration and goes to the restoration phase. (This is delta_w^max in the implementation paper.)

Default:

`1e+20`

**max_iter** *(integer)*: Maximum number of iterations. ↵

The algorithm terminates with an error message if the number of iterations exceeds this number.

Default: `3000`

**max_random_point_radius** *(real)*: Set max value r for coordinate of a random point. ↵

When picking a random point, coordinate i will be in the interval [min(max(l,-r),u-r), max(min(u,r),l+r)] (where l is the lower bound for the variable and u is its upper bound)

Default: `100000`
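Read literally, the interval formula above can be sketched like this (an illustrative transcription of the option text, not Bonmin's actual code):

```python
# Interval from which coordinate i of a random point is drawn, per the
# max_random_point_radius text: l and u are the variable's bounds,
# r is the radius option value.
def random_point_interval(l: float, u: float, r: float) -> tuple:
    lo = min(max(l, -r), u - r)
    hi = max(min(u, r), l + r)
    return lo, hi
```

For a free variable (l = -inf, u = +inf) this yields simply [-r, r], so r bounds how far from the origin random coordinates can stray.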

**max_refinement_steps** *(integer)*: Maximum number of iterative refinement steps per linear system solve. ↵

Iterative refinement (on the full unsymmetric system) is performed for each right hand side. This option determines the maximum number of iterative refinement steps.

Default: `10`

**max_resto_iter** *(integer)*: Maximum number of successive iterations in restoration phase. ↵

The algorithm terminates with an error message if the number of iterations successively taken in the restoration phase exceeds this number.

Default: `3000000`

**max_soc** *(integer)*: Maximum number of second order correction trial steps at each iteration. ↵

Choosing 0 disables the second order corrections. (This is p^{max} of Step A-5.9 of Algorithm A in the implementation paper.)

Default: `4`

**max_soft_resto_iters** *(integer)*: Maximum number of iterations performed successively in soft restoration phase. ↵

If the soft restoration phase is performed for more than this many iterations in a row, the regular restoration phase is called.

Default: `10`

**mehrotra_algorithm** *(string)*: Indicates if we want to do Mehrotra's algorithm. ↵

If set to yes, Ipopt runs as Mehrotra's predictor-corrector algorithm. This usually works very well for LPs and convex QPs. This automatically disables the line search, chooses the (unglobalized) adaptive mu strategy with the "probing" oracle, and uses "corrector_type=affine" without any safeguards; you should not set any of those options explicitly in addition. Also, unless otherwise specified, the values of "bound_push", "bound_frac", and "bound_mult_init_val" are set more aggressively, and "alpha_for_y" is set to "bound_mult".

Default: `no`

| value | meaning |
|-------|---------|
| `no`  | Do the usual Ipopt algorithm. |
| `yes` | Do Mehrotra's predictor-corrector algorithm. |

**milp_solver** *(string)*: Choose the subsolver to solve MILP sub-problems in OA decompositions. ↵

To use Cplex, a valid license is required.

Default: `Cbc_D`

| value | meaning |
|-------|---------|
| `cbc_d` | Coin Branch and Cut with its default |
| `cbc_par` | Coin Branch and Cut with passed parameters |
| `cplex` | Cplex |

**milp_strategy** *(string)*: Choose a strategy for MILPs. ↵

Default: `find_good_sol`

| value | meaning |
|-------|---------|
| `find_good_sol` | Stop sub-MILPs when a solution improving the incumbent is found |
| `solve_to_optimality` | Solve MILPs to optimality |

**minlp_disj_cuts** *(integer)*: The frequency (in terms of nodes) at which Couenne disjunctive cuts are generated. ↵

A frequency of 0 (default) means these cuts are never generated. Any positive number n instructs Couenne to generate them at every n nodes of the B&B tree. A negative number -n means that generation should be attempted at the root node, and if successful it can be repeated at every n nodes, otherwise it is stopped altogether.

Range: [`-99`, ∞], Default: `0`

**min_hessian_perturbation** *(real)*: Smallest perturbation of the Hessian block. ↵

The size of the perturbation of the Hessian block is never selected smaller than this value, unless no perturbation is necessary. (This is delta_w^min in implementation paper.)

Default: `1e-20`

**min_number_strong_branch** *(integer)*: Sets minimum number of variables for strong branching (overriding trust) ↵

Default: `0`

**min_refinement_steps** *(integer)*: Minimum number of iterative refinement steps per linear system solve. ↵

Iterative refinement (on the full unsymmetric system) is performed for each right hand side. This option determines the minimum number of iterative refinements (i.e. at least "min_refinement_steps" iterative refinement steps are enforced per right hand side.)

Default: `1`

**mir_cuts** *(integer)*: Frequency k (in terms of nodes) for generating mir_cuts cuts in branch-and-cut. ↵

See option `2mir_cuts` for the meaning of k.

Range: [`-100`, ∞], Default: `0`

**multilinear_separation** *(string)*: Separation for multilinear terms ↵

Type of separation for multilinear terms where the dependent variable is also bounded

Default: `tight`

| value | meaning |
|-------|---------|
| `none` | No separation – just use the four McCormick inequalities |
| `simple` | Use one considering lower curve only |
| `tight` | Use one considering both curves pi(x) = l_{k+1} and pi(x) = u_{k+1} |

**mumps_dep_tol** *(real)*: Pivot threshold for detection of linearly dependent constraints in MUMPS. ↵

When MUMPS is used to determine linearly dependent constraints, this determines the threshold for a pivot to be considered zero. This is CNTL(3) in MUMPS.

Range: [-∞, ∞], Default: `0`

**mumps_mem_percent** *(integer)*: Percentage increase in the estimated working space for MUMPS. ↵

In MUMPS, when significant extra fill-in is caused by numerical pivoting, larger values of mumps_mem_percent may help use the workspace more efficiently. On the other hand, if memory requirements are too large at the very beginning of the optimization, choosing a much smaller value for this option, such as 5, might reduce memory requirements.

Default: `1000`

**mumps_permuting_scaling** *(integer)*: Controls permuting and scaling in MUMPS ↵

This is ICNTL(6) in MUMPS.

Range: [`0`, `7`], Default: `7`

**mumps_pivot_order** *(integer)*: Controls pivot order in MUMPS ↵

This is ICNTL(7) in MUMPS.

Range: [`0`, `7`], Default: `7`

**mumps_pivtol** *(real)*: Pivot tolerance for the linear solver MUMPS. ↵

A smaller number pivots for sparsity, a larger number pivots for stability. This option is only available if Ipopt has been compiled with MUMPS.

Range: [`0`, `1`], Default: `1e-06`

**mumps_pivtolmax** *(real)*: Maximum pivot tolerance for the linear solver MUMPS. ↵

Ipopt may increase pivtol as high as pivtolmax to get a more accurate solution to the linear system. This option is only available if Ipopt has been compiled with MUMPS.

Range: [`0`, `1`], Default: `0.1`

**mumps_scaling** *(integer)*: Controls scaling in MUMPS ↵

This is ICNTL(8) in MUMPS.

Range: [`-2`, `77`], Default: `77`

**mu_allow_fast_monotone_decrease** *(string)*: Allow skipping of barrier problem if barrier test is already met. ↵

If set to "no", the algorithm enforces at least one iteration per barrier problem, even if the barrier test is already met for the updated barrier parameter.

Default: `yes`

| value | meaning |
|-------|---------|
| `no`  | Take at least one iteration per barrier problem |
| `yes` | Allow fast decrease of mu if barrier test is met |

**mu_init** *(real)*: Initial value for the barrier parameter. ↵

This option determines the initial value for the barrier parameter (mu). It is only relevant in the monotone, Fiacco-McCormick version of the algorithm. (i.e., if "mu_strategy" is chosen as "monotone")

Default: `0.1`

**mu_linear_decrease_factor** *(real)*: Determines linear decrease rate of barrier parameter. ↵

For the Fiacco-McCormick update procedure the new barrier parameter mu is obtained by taking the minimum of mu*"mu_linear_decrease_factor" and mu^"superlinear_decrease_power". (This is kappa_mu in implementation paper.) This option is also used in the adaptive mu strategy during the monotone mode.

Range: [

`0`

,`1`

]Default:

`0.2`

**mu_max** *(real)*: Maximum value for barrier parameter. ↵

This option specifies an upper bound on the barrier parameter in the adaptive mu selection mode. If this option is set, it overwrites the effect of mu_max_fact. (Only used if option "mu_strategy" is chosen as "adaptive".)

Default:

`100000`

**mu_max_fact** *(real)*: Factor for initialization of maximum value for barrier parameter. ↵

This option determines the upper bound on the barrier parameter. This upper bound is computed as the average complementarity at the initial point times the value of this option. (Only used if option "mu_strategy" is chosen as "adaptive".)

Default:

`1000`

**mu_min** *(real)*: Minimum value for barrier parameter. ↵

This option specifies the lower bound on the barrier parameter in the adaptive mu selection mode. By default, it is set to the minimum of 1e-11 and min("tol","compl_inf_tol")/("barrier_tol_factor"+1), which should be a reasonable value. (Only used if option "mu_strategy" is chosen as "adaptive".)

Default:

`1e-11`

**mu_oracle** *(string)*: Oracle for a new barrier parameter in the adaptive strategy.

Determines how a new barrier parameter is computed in each "free-mode" iteration of the adaptive barrier parameter strategy. (Only considered if "adaptive" is selected for option "mu_strategy".)

Default: `quality-function`

Values:
- `loqo`: LOQO's centrality rule
- `probing`: Mehrotra's probing heuristic
- `quality-function`: minimize a quality function

**mu_strategy** *(string)*: Update strategy for barrier parameter.

Determines which barrier parameter update strategy is to be used.

Default: `monotone`

Values:
- `adaptive`: use the adaptive update strategy
- `monotone`: use the monotone (Fiacco-McCormick) strategy

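To switch the Ipopt subsolver from the default monotone update to the adaptive strategy with a specific oracle and an upper bound on mu, an option file could contain entries like the following (an illustrative sketch, assuming the usual GAMS option-file format of one `name value` pair per line):

```
mu_strategy adaptive
mu_oracle   quality-function
mu_max      10000
```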
**mu_superlinear_decrease_power** *(real)*: Determines superlinear decrease rate of barrier parameter.

For the Fiacco-McCormick update procedure the new barrier parameter mu is obtained by taking the minimum of mu*"mu_linear_decrease_factor" and mu^"superlinear_decrease_power". (This is theta_mu in the implementation paper.) This option is also used in the adaptive mu strategy during the monotone mode.

Range: [`1`, `2`], Default: `1.5`

**mu_target** *(real)*: Desired value of complementarity.

Usually, the barrier parameter is driven to zero and the termination test for complementarity is measured with respect to zero complementarity. However, in some cases it might be desired to have Ipopt solve the barrier problem for a strictly positive value of the barrier parameter. In this case, the value of "mu_target" specifies the final value of the barrier parameter, and the termination tests are then defined with respect to the barrier problem for this value of the barrier parameter.

Default: `0`

**neg_curv_test_reg** *(string)*: Whether to do the curvature test with the primal regularization (see Zavala and Chiang, 2014).

Default: `yes`

Values:
- `no`: use original IPOPT approach, in which the primal regularization is ignored
- `yes`: use primal regularization with the inertia-free curvature test

**neg_curv_test_tol** *(real)*: Tolerance for heuristic to ignore wrong inertia.

If nonzero, incorrect inertia in the augmented system is ignored, and Ipopt tests if the direction is a direction of positive curvature. This tolerance is alpha_n in the paper by Zavala and Chiang (2014) and it determines when the direction is considered to be sufficiently positive. A value in the range of [1e-12, 1e-11] is recommended.

Default: `0`

**nlpheur_print_level** *(integer)*: Output level for NLP heuristic in Couenne

Range: [`-2`, `12`], Default: `0`

**nlp_failure_behavior** *(string)*: Set the behavior when an NLP or a series of NLPs is unsolved by Ipopt (we call an NLP unsolved if Ipopt is not able to guarantee optimality within the specified tolerances).

If set to "fathom", the algorithm will fathom the node when Ipopt fails to find a solution to the NLP at that node within the specified tolerances. The algorithm then becomes a heuristic, and the user will be warned that the solution might not be optimal.

Default: `stop`

Values:
- `fathom`: Continue when failure happens.
- `stop`: Stop when failure happens.

**nlp_log_at_root** *(integer)*: specify a different log level for root relaxation.

Range: [`0`, `12`], Default: `0`

**nlp_log_level** *(integer)*: specify NLP solver interface log level (independent from ipopt print_level).

Set the level of output of the OsiTMINLPInterface: 0 - none, 1 - normal, 2 - verbose

Range: [`0`, `2`], Default: `1`

**nlp_scaling_constr_target_gradient** *(real)*: Target value for constraint function gradient size.

If a positive number is chosen, the scaling factor for the constraint functions is computed so that the gradient has the max norm of the given size at the starting point. This overrides nlp_scaling_max_gradient for the constraint functions.

Default: `0`

**nlp_scaling_max_gradient** *(real)*: Maximum gradient after NLP scaling.

This is the gradient scaling cut-off. If the maximum gradient is above this value, then gradient based scaling will be performed. Scaling parameters are calculated to scale the maximum gradient back to this value. (This is g_max in Section 3.8 of the implementation paper.) Note: This option is only used if "nlp_scaling_method" is chosen as "gradient-based".

Default: `100`

**nlp_scaling_method** *(string)*: Select the technique used for scaling the NLP.

Selects the technique used for scaling the problem internally before it is solved. For user-scaling, the parameters come from the NLP. If you are using AMPL, they can be specified through suffixes ("scaling_factor").

Default: `gradient-based`

Values:
- `equilibration-based`: scale the problem so that first derivatives are of order 1 at random points (only available with MC19)
- `gradient-based`: scale the problem so the maximum gradient at the starting point is scaling_max_gradient
- `none`: no problem scaling will be performed

**nlp_scaling_min_value** *(real)*: Minimum value of gradient-based scaling values.

This is the lower bound for the scaling factors computed by the gradient-based scaling method. If some derivatives of some functions are huge, the scaling factors will otherwise become very small, and the (unscaled) final constraint violation, for example, might then be significant. Note: This option is only used if "nlp_scaling_method" is chosen as "gradient-based".

Default: `1e-08`

**nlp_scaling_obj_target_gradient** *(real)*: Target value for objective function gradient size.

If a positive number is chosen, the scaling factor for the objective function is computed so that the gradient has the max norm of the given size at the starting point. This overrides nlp_scaling_max_gradient for the objective function.

Default: `0`

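For a badly scaled model, the gradient-based scaling can be tightened via an option file. This fragment is a sketch only (one `name value` pair per line, values illustrative), keeping the default method while lowering the gradient cut-off and the scaling-factor floor:

```
nlp_scaling_method       gradient-based
nlp_scaling_max_gradient 50
nlp_scaling_min_value    1e-10
```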
**node_comparison** *(string)*: Choose the node selection strategy.

Choose the strategy for selecting the next node to be processed.

Default: `best-bound`

Values:
- `best-bound`: choose node with the smallest bound
- `best-guess`: choose node with smallest guessed integer solution
- `breadth-first`: perform breadth first search
- `depth-first`: perform depth first search
- `dynamic`: Cbc dynamic strategy (starts with a depth first search and turns to best bound after 3 integer feasible solutions have been found)

**node_limit** *(integer)*: Set the maximum number of nodes explored in the branch-and-bound search.

Default: `maxint`

**number_before_trust** *(integer)*: Set the number of branches on a variable before its pseudo costs are to be believed in dynamic strong branching.

A value of 0 disables pseudo costs.

Default: `8`

**number_before_trust_list** *(integer)*: Set the number of branches on a variable before its pseudo costs are to be believed during setup of strong branching candidate list.

The default value is that of "number_before_trust".

Range: [`-1`, ∞], Default: `0`

**number_look_ahead** *(integer)*: Sets limit of look-ahead strong-branching trials

Default: `0`

**number_strong_branch** *(integer)*: Choose the maximum number of variables considered for strong branching.

Set the number of variables on which to do strong branching.

Default: `20`

**number_strong_branch_root** *(integer)*: Maximum number of variables considered for strong branching in root node.

Default: `maxint`

**num_cut_passes** *(integer)*: Set the maximum number of cut passes at regular nodes of the branch-and-cut.

Default: `1`

**num_cut_passes_at_root** *(integer)*: Set the maximum number of cut passes at the root node of the branch-and-cut.

Default: `20`

**num_iterations_suspect** *(integer)*: Number of iterations over which a node is considered 'suspect' (for debugging purposes only, see detailed documentation).

When the number of iterations to solve a node is above this number, the subproblem at this node is considered to be suspect and it will be written into a file (set to -1 to deactivate this).

Range: [`-1`, ∞], Default: `-1`

**num_resolve_at_infeasibles** *(integer)*: Number \(k\) of tries to resolve an infeasible node (other than the root) of the tree with different starting point.

The algorithm will solve all the infeasible nodes with \(k\) different random starting points and will keep the best local optimum found.

Default: `0`

**num_resolve_at_node** *(integer)*: Number \(k\) of tries to resolve a node (other than the root) of the tree with different starting point.

The algorithm will solve all the nodes with \(k\) different random starting points and will keep the best local optimum found.

Default: `0`

**num_resolve_at_root** *(integer)*: Number \(k\) of tries to resolve the root node with different starting points.

The algorithm will solve the root node with \(k\) random starting points and will keep the best local optimum found.

Default: `0`

**num_retry_unsolved_random_point** *(integer)*: Number \(k\) of times that the algorithm will try to resolve an unsolved NLP with a random starting point (we call an NLP unsolved if Ipopt is not able to guarantee optimality within the specified tolerances).

When Ipopt fails to solve a continuous NLP sub-problem, if \(k > 0\), the algorithm will try again to solve the failed NLP with \(k\) new randomly chosen starting points or until the problem is solved with success.

Default: `0`

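For nonconvex models where single NLP solves often get stuck, the resolve options above can be combined in an option file. A sketch (one `name value` pair per line, values illustrative) that retries the root and infeasible nodes from random starting points:

```
num_resolve_at_root             5
num_resolve_at_infeasibles      2
num_retry_unsolved_random_point 3
```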
**nu_inc** *(real)*: Increment of the penalty parameter.

Default: `0.0001`

**nu_init** *(real)*: Initial value of the penalty parameter.

Default: `1e-06`

**oa_cuts_log_level** *(integer)*: level of log when generating OA cuts.

0: output nothing; 1: when a cut is generated, print its violation and the index of the row from which it originates; 2: always output the violation of the cut; 3: output generated cuts' incidence vectors.

Default: `0`

**oa_cuts_scope** *(string)*: Specify if OA cuts added are to be set globally or locally valid

Default: `global`

Values:
- `global`: Cuts are treated as globally valid
- `local`: Cuts are treated as locally valid

**oa_rhs_relax** *(real)*: Value by which to relax OA cut

RHS of OA constraints will be relaxed by this amount times the absolute value of the initial rhs if it is ≥ 1 (otherwise by this amount).

Default: `1e-08`

**obj_max_inc** *(real)*: Determines the upper bound on the acceptable increase of barrier objective function.

Trial points are rejected if they lead to an increase in the barrier objective function by more than obj_max_inc orders of magnitude.

Range: [`1`, ∞], Default: `5`

**optimality_bt** *(string)*: Optimality-based (expensive) bound tightening (OBBT)

This is another bound reduction technique aiming at reducing the solution set by looking at the initial LP relaxation. This technique is computationally expensive, and should be used only when necessary.

Default: `yes`

Values: `no`, `yes`

**orbital_branching** *(string)*: detect symmetries and apply orbital branching

Default: `no`

Values: `no`, `yes`

**orbital_branching_depth** *(integer)*: Maximum depth at which the symmetry group is computed

Select -1 if you want to compute the symmetry group at all nodes.

Range: [`-1`, ∞], Default: `10`

**output_level** *(integer)*: Output level

Range: [`-2`, `12`], Default: `4`

**pardiso_matching_strategy** *(string)*: Matching strategy to be used by Pardiso

This is IPAR(13) in the Pardiso manual.

Default: `complete+2x2`

Values:
- `complete`: Match complete (IPAR(13)=1)
- `complete+2x2`: Match complete+2x2 (IPAR(13)=2)
- `constraints`: Match constraints (IPAR(13)=3)

**pardiso_max_iterative_refinement_steps** *(integer)*: Limit on number of iterative refinement steps.

The solver performs at most the absolute value of this option as steps of iterative refinement, and stops the process if a satisfactory level of accuracy of the solution in terms of backward error is achieved. If negative, the accumulation of the residue uses extended precision real and complex data types. Perturbed pivots result in iterative refinement. The solver automatically performs two steps of iterative refinement when perturbed pivots are obtained during the numerical factorization and this option is set to 0.

Range: [-∞, ∞], Default: `1`

**pardiso_msglvl** *(integer)*: Pardiso message level

This determines the amount of analysis output from the Pardiso solver. This is MSGLVL in the Pardiso manual.

Default: `0`

**pardiso_order** *(string)*: Controls the fill-in reduction ordering algorithm for the input matrix.

Default: `metis`

Values:
- `amd`: minimum degree algorithm
- `metis`: MeTiS nested dissection algorithm
- `one`: undocumented
- `pmetis`: parallel (OpenMP) version of MeTiS nested dissection algorithm

**pardiso_redo_symbolic_fact_only_if_inertia_wrong** *(string)*: Toggle for handling case when elements were perturbed by Pardiso.

Default: `no`

Values:
- `no`: Always redo symbolic factorization when elements were perturbed
- `yes`: Only redo symbolic factorization when elements were perturbed if also the inertia was wrong

**pardiso_repeated_perturbation_means_singular** *(string)*: Interpretation of perturbed elements.

Default: `no`

Values:
- `no`: Don't assume that matrix is singular if elements were perturbed after recent symbolic factorization
- `yes`: Assume that matrix is singular if elements were perturbed after recent symbolic factorization

**pardiso_skip_inertia_check** *(string)*: Always pretend inertia is correct.

Setting this option to "yes" essentially disables the inertia check. This makes the algorithm non-robust and it may easily fail, but it might give some insight into the necessity of inertia control.

Default: `no`

Values:
- `no`: check inertia
- `yes`: skip inertia check

**perturb_always_cd** *(string)*: Activate permanent perturbation of constraint linearization.

This option makes the delta_c and delta_d perturbations be used for the computation of every search direction. Usually, they are only used when the iteration matrix is singular.

Default: `no`

Values:
- `no`: perturbation only used when required
- `yes`: always use perturbation

**perturb_dec_fact** *(real)*: Decrease factor for x-s perturbation.

The factor by which the perturbation is decreased when a trial value is deduced from the size of the most recent successful perturbation. (This is kappa_w^- in the implementation paper.)

Range: [`0`, `1`], Default: `0.333333`

**perturb_inc_fact** *(real)*: Increase factor for x-s perturbation.

The factor by which the perturbation is increased when a trial value was not sufficient - this value is used for the computation of all perturbations except for the first. (This is kappa_w^+ in the implementation paper.)

Range: [`1`, ∞], Default: `8`

**perturb_inc_fact_first** *(real)*: Increase factor for x-s perturbation for very first perturbation.

The factor by which the perturbation is increased when a trial value was not sufficient - this value is used for the computation of the very first perturbation and allows a different value for the first perturbation than that used for the remaining perturbations. (This is bar_kappa_w^+ in the implementation paper.)

Range: [`1`, ∞], Default: `100`

**print_eval_error** *(string)*: Switch to enable printing information about function evaluation errors into the GAMS listing file.

Default: `yes`

Values: `no`, `yes`

**print_frequency_iter** *(integer)*: Determines at which iteration frequency the summarizing iteration output line should be printed.

Summarizing iteration output is printed every print_frequency_iter iterations, if at least print_frequency_time seconds have passed since last output.

Range: [`1`, ∞], Default: `1`

**print_frequency_time** *(real)*: Determines at which time frequency the summarizing iteration output line should be printed.

Summarizing iteration output is printed if at least print_frequency_time seconds have passed since last output and the iteration number is a multiple of print_frequency_iter.

Default: `0`

**print_info_string** *(string)*: Enables printing of additional info string at end of iteration output.

This string contains some insider information about the current iteration. For details, look for "Diagnostic Tags" in the Ipopt documentation.

Default: `no`

Values:
- `no`: don't print string
- `yes`: print string at end of each iteration output

**print_level** *(integer)*: Output verbosity level.

Sets the default verbosity level for console output. The larger this value the more detailed is the output.

Range: [`0`, `12`], Default: `5`

**print_timing_statistics** *(string)*: Switch to print timing statistics.

If selected, the program will print the CPU usage (user time) for selected tasks.

Default: `no`

Values:
- `no`: don't print statistics
- `yes`: print all timing statistics

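A typical way to thin out the Ipopt log on long runs is to combine these output options in the option file. A sketch (one `name value` pair per line, values illustrative):

```
print_level          4
print_frequency_iter 10
print_eval_error     no
```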
**probing_cuts** *(integer)*: Frequency k (in terms of nodes) for generating probing_cuts cuts in branch-and-cut.

See option `2mir_cuts` for the meaning of k.

Range: [`-100`, ∞], Default: `0`

**problem_print_level** *(integer)*: Output level for problem manipulation code in Couenne

Range: [`-2`, `12`], Default: `2`

**pseudocost_mult** *(string)*: Multipliers of pseudocosts for estimating and updating the estimation of the bound

Default: `interval_br_rev`

Values:
- `infeasibility`: infeasibility returned by object
- `interval_br`: width of the interval between bound and branching point
- `interval_br_rev`: similar to interval_br, reversed
- `interval_lp`: width of the interval between bound and current lp point
- `interval_lp_rev`: similar to interval_lp, reversed
- `projectdist`: distance between current LP point and resulting branches' LP points

**pseudocost_mult_lp** *(string)*: Use distance between LP points to update multipliers of pseudocosts after simulating branching

Default: `no`

Values: `no`, `yes`

**pump_for_minlp** *(string)*: whether to run the feasibility pump heuristic for MINLP

Default: `no`

Values: `no`, `yes`

**quadrilinear_decomp** *(string)*: type of decomposition for quadrilinear terms (see work by Cafieri, Lee, Liberti)

Default: `rAI`

Values:
- `bi+tri`: bilinear, THEN trilinear term: x5 = ((x1 x2) x3 x4)
- `hier-bi`: hierarchical decomposition: x5 = ((x1 x2) (x3 x4))
- `rai`: recursive decomposition in bilinear terms (as in Ryoo and Sahinidis): x5 = (((x1 x2) x3) x4)
- `tri+bi`: trilinear and bilinear term: x5 = (x1 (x2 x3 x4))

**quality_function_balancing_term** *(string)*: The balancing term included in the quality function for centrality.

This determines whether a term is added to the quality function that penalizes situations where the complementarity is much smaller than dual and primal infeasibilities. (Only used if option "mu_oracle" is set to "quality-function".)

Default: `none`

Values:
- `cubic`: Max(0, Max(dual_inf, primal_inf)-compl)^3
- `none`: no balancing term is added

**quality_function_centrality** *(string)*: The penalty term for centrality that is included in quality function.

This determines whether a term is added to the quality function to penalize deviation from centrality with respect to complementarity. The complementarity measure here is the xi in the Loqo update rule. (Only used if option "mu_oracle" is set to "quality-function".)

Default: `none`

Values:
- `cubed-reciprocal`: complementarity * the reciprocal of the centrality measure cubed
- `log`: complementarity * the log of the centrality measure
- `none`: no penalty term is added
- `reciprocal`: complementarity * the reciprocal of the centrality measure

**quality_function_max_section_steps** *(integer)*: Maximum number of search steps during direct search procedure determining the optimal centering parameter.

The golden section search is performed for the quality function based mu oracle. (Only used if option "mu_oracle" is set to "quality-function".)

Default: `8`

**quality_function_norm_type** *(string)*: Norm used for components of the quality function.

(Only used if option "mu_oracle" is set to "quality-function".)

Default: `2-norm-squared`

Values:
- `1-norm`: use the 1-norm (abs sum)
- `2-norm`: use the 2-norm
- `2-norm-squared`: use the 2-norm squared (sum of squares)
- `max-norm`: use the infinity norm (max)

**quality_function_section_qf_tol** *(real)*: Tolerance for the golden section search procedure determining the optimal centering parameter (in the function value space).

The golden section search is performed for the quality function based mu oracle. (Only used if option "mu_oracle" is set to "quality-function".)

Range: [`0`, `1`], Default: `0`

**quality_function_section_sigma_tol** *(real)*: Tolerance for the section search procedure determining the optimal centering parameter (in sigma space).

The golden section search is performed for the quality function based mu oracle. (Only used if option "mu_oracle" is set to "quality-function".)

Range: [`0`, `1`], Default: `0.01`

**random_generator_seed** *(integer)*: Set seed for random number generator (a value of -1 sets seeds to time since Epoch).

Range: [`-1`, ∞], Default: `0`

**random_point_perturbation_interval** *(real)*: Amount by which starting point is perturbed when choosing to pick random point by perturbing starting point

Default: `1`

**random_point_type** *(string)*: method to choose a random starting point

Default: `Jon`

Values:
- `andreas`: perturb the starting point of the problem within a prescribed interval
- `claudia`: perturb the starting point using the perturbation radius suffix information
- `jon`: choose random point uniformly between the bounds

**read_solution_file** *(string)*: Read a file with the optimal solution to test if the algorithm cuts it.

For debugging purposes only.

Default: `no`

Values: `no`, `yes`

**recalc_y** *(string)*: Tells the algorithm to recalculate the equality and inequality multipliers as least square estimates.

This asks the algorithm to recompute the multipliers whenever the current infeasibility is less than recalc_y_feas_tol. Choosing yes might be helpful in the quasi-Newton option. However, each recalculation requires an extra factorization of the linear system. If a limited memory quasi-Newton option is chosen, this is used by default.

Default: `no`

Values:
- `no`: use the Newton step to update the multipliers
- `yes`: use least-square multiplier estimates

**recalc_y_feas_tol** *(real)*: Feasibility threshold for recomputation of multipliers.

If recalc_y is chosen and the current infeasibility is less than this value, then the multipliers are recomputed.

Default: `1e-06`

**redcost_bt** *(string)*: Reduced cost bound tightening

This bound reduction technique uses the reduced costs of the LP in order to infer better variable bounds.

Default: `yes`

Values: `no`, `yes`

**reduce_split_cuts** *(integer)*: Frequency k (in terms of nodes) for generating reduce_split_cuts cuts in branch-and-cut.

See option `2mir_cuts` for the meaning of k.

Range: [`-100`, ∞], Default: `0`

**red_cost_branching** *(string)*: Apply Reduced Cost Branching (instead of the Violation Transfer) – MUST have vt_obj enabled

Default: `no`

Values:
- `no`: Use Violation Transfer with \(\sum \|\pi_i a_{ij}\|\)
- `yes`: Use Reduced cost branching with \(\|\sum \pi_i a_{ij}\|\)

**reformulate_print_level** *(integer)*: Output level for reformulating problems in Couenne

Range: [`-2`, `12`], Default: `0`

**replace_bounds** *(string)*: Indicates if all variable bounds should be replaced by inequality constraints

This option must be set for the inexact algorithm.

Default: `no`

Values:
- `no`: leave bounds on variables
- `yes`: replace variable bounds by inequality constraints

**required_infeasibility_reduction** *(real)*: Required reduction of infeasibility before leaving restoration phase.

The restoration phase algorithm is performed until a point is found that is acceptable to the filter and the infeasibility has been reduced by at least the fraction given by this option.

Range: [`0`, `1`], Default: `0.9`

**residual_improvement_factor** *(real)*: Minimal required reduction of residual test ratio in iterative refinement.

If the improvement of the residual test ratio made by one iterative refinement step is not better than this factor, iterative refinement is aborted.

Default: `1`

**residual_ratio_max** *(real)*: Iterative refinement tolerance

Iterative refinement is performed until the residual test ratio is less than this tolerance (or until "max_refinement_steps" refinement steps are performed).

Default: `1e-10`

**residual_ratio_singular** *(real)*: Threshold for declaring linear system singular after failed iterative refinement.

If the residual test ratio is larger than this value after failed iterative refinement, the algorithm pretends that the linear system is singular.

Default: `1e-05`

**resolve_on_small_infeasibility** *(real)*: If a locally infeasible problem is infeasible by less than this, resolve it with initial starting point.

Default: `0`

**resto_failure_feasibility_threshold** *(real)*: Threshold for primal infeasibility to declare failure of restoration phase.

If the restoration phase is terminated because of the "acceptable" termination criteria and the primal infeasibility is smaller than this value, the restoration phase is declared to have failed. The default value is 1e2*tol, where tol is the general termination tolerance.

Default: `0`

**resto_penalty_parameter** *(real)*: Penalty parameter in the restoration phase objective function.

This is the parameter rho in equation (31a) in the Ipopt implementation paper.

Default: `1000`

**resto_proximity_weight** *(real)*: Weighting factor for the proximity term in restoration phase objective.

This determines how the parameter zeta in equation (29a) in the implementation paper is computed. zeta here is resto_proximity_weight*sqrt(mu), where mu is the current barrier parameter.

Default: `1`

**rho** *(real)*: Value in penalty parameter update formula.

Range: [`0`, `1`], Default: `0.1`

**sdp_cuts** *(integer)*: The frequency (in terms of nodes) at which Couenne SDP cuts are generated.

Range: [`-99`, ∞], Default: `0`

**sdp_cuts_fillmissing** *(string)*: Create fictitious auxiliary variables to fill non-fully dense minors. Can make a difference when Q has at least one zero term.

Default: `no`

Values:
- `no`: Do not create auxiliaries; simply use Fourier-Motzkin to substitute a missing auxiliary y_ij with inequalities that use bounds and the definition y_ij = x_i x_j. Advantage: limits the creation of auxiliaries, so the reformulation stays small.
- `yes`: Create (at the beginning) auxiliaries that are linearized (through McCormick) and used within an SDP cut. This allows tighter cuts although it increases the size of the reformulation and hence of the linear relaxation.

**sdp_cuts_neg_ev** *(string)*: Only use negative eigenvalues to create SDP cuts.

Default: `yes`

Values:
- `no`: use all eigenvalues regardless of their sign
- `yes`: exclude all non-negative eigenvalues

**sdp_cuts_num_ev** *(integer)*: The number of eigenvectors of matrix X to be used to create SDP cuts.

Set to -1 to indicate that all n eigenvectors should be used. Eigenvalues are sorted in non-decreasing order, hence selecting 1 will provide cuts on the most negative eigenvalue.

Range: [`-1`, ∞], Default: `-1`

**sdp_cuts_sparsify** *(string)*: Make cuts sparse by greedily reducing X one column at a time before extracting eigenvectors.

Default: `no`

Values: `no`, `yes`

**second_perc_for_cutoff_decr** *(real)*: The percentage used to compute cutoff_decr dynamically when the coefficient of variance is greater than the threshold.

Range: [-∞, ∞], Default: `-0.05`

**setup_pseudo_frac** *(real)*: Proportion of strong branching list that has to be taken from most-integer-infeasible list.

Range: [`0`, `1`], Default: `0.5`

**sigma_max** *(real)*: Maximum value of the centering parameter.

This is the upper bound for the centering parameter chosen by the quality function based barrier parameter update. (Only used if option "mu_oracle" is set to "quality-function".)

Default: `100`

**sigma_min** *(real)*: Minimum value of the centering parameter.

This is the lower bound for the centering parameter chosen by the quality function based barrier parameter update. (Only used if option "mu_oracle" is set to "quality-function".)

Default: `1e-06`

**skip_corr_if_neg_curv** *(string)*: Skip the corrector step in negative curvature iteration (unsupported!).

The corrector step is not tried if negative curvature has been encountered during the computation of the search direction in the current iteration. This option is only used if "mu_strategy" is "adaptive".

Default: `yes`

Values:
- `no`: don't skip
- `yes`: skip

**skip_corr_in_monotone_mode** *(string)*: Skip the corrector step during monotone barrier parameter mode (unsupported!).

The corrector step is not tried if the algorithm is currently in the monotone mode (see also option "barrier_strategy"). This option is only used if "mu_strategy" is "adaptive".

Default: `yes`

Values:
- `no`: don't skip
- `yes`: skip

**slack_bound_frac** *(real)*: Desired minimum relative distance from the initial slack to bound.

Determines how much the initial slack variables might have to be modified in order to be sufficiently inside the inequality bounds (together with "slack_bound_push"). (This is kappa_2 in Section 3.6 of the implementation paper.)

Range: [`0`, `0.5`], Default: `0.01`

**slack_bound_push** *(real)*: Desired minimum absolute distance from the initial slack to bound.

Determines how much the initial slack variables might have to be modified in order to be sufficiently inside the inequality bounds (together with "slack_bound_frac"). (This is kappa_1 in Section 3.6 of the implementation paper.)

Default: `0.01`

**slack_move** *(real)*: Correction size for very small slacks.

Due to numerical issues or the lack of an interior, the slack variables might become very small. If a slack becomes very small compared to machine precision, the corresponding bound is moved slightly. This parameter determines how large the move should be. Its default value is mach_eps^{3/4}. (See also end of Section 3.5 in the implementation paper - but the actual implementation might be somewhat different.)

Default: `1.81899e-12`

**soc_method** *(integer)*: Ways to apply second order correction ↵

This option determins the way to apply second order correction, 0 is the method described in the implementation paper. 1 is the modified way which adds alpha on the rhs of x and s rows.

Range: [

`0`

,`1`

]Default:

`0`

**soft_resto_pderror_reduction_factor** *(real)*: Required reduction in primal-dual error in the soft restoration phase.

The soft restoration phase attempts to reduce the primal-dual error with regular steps. If the damped primal-dual step (damped only to satisfy the fraction-to-the-boundary rule) is not decreasing the primal-dual error by at least this factor, then the regular restoration phase is called. Choosing "0" here disables the soft restoration phase.

Default: `0.9999`

**solution_limit** *(integer)*: Abort after this many integer feasible solutions have been found by the algorithm.

A value of 0 deactivates the option.

Default: `maxint`

**solvetrace** *(string)*: Name of the file for writing solving progress information.

**solvetracenodefreq** *(integer)*: Frequency in number of nodes for writing solving progress information.

A value of 0 disables writing of N-lines to the trace file.

Default: `100`

**solvetracetimefreq** *(real)*: Frequency in seconds for writing solving progress information.

A value of 0.0 disables writing of T-lines to the trace file.

Default: `5`

**start_with_resto** *(string)*: Tells the algorithm to switch to the restoration phase in the first iteration.

Setting this option to "yes" forces the algorithm to switch to the feasibility restoration phase in the first iteration. If the initial point is feasible, the algorithm will abort with a failure.

Default: `no`

- `no`: don't force start in restoration phase
- `yes`: force start in restoration phase

**s_max** *(real)*: Scaling threshold for the NLP error.

(See the paragraph after Eqn. (6) in the implementation paper.)

Default: `100`

**s_phi** *(real)*: Exponent for the linear barrier function model in the switching rule.

(See Eqn. (19) in the implementation paper.)

Range: [`1`, ∞], Default: `2.3`

**s_theta** *(real)*: Exponent for the current constraint violation in the switching rule.

(See Eqn. (19) in the implementation paper.)

Range: [`1`, ∞], Default: `1.1`

**tau_min** *(real)*: Lower bound on the fraction-to-the-boundary parameter tau.

(This is tau_min in the implementation paper.) This option is also used in the adaptive mu strategy during the monotone mode.

Range: [`0`, `1`], Default: `0.99`

**theta_max_fact** *(real)*: Determines the upper bound for constraint violation in the filter.

The algorithmic parameter theta_max is determined as theta_max_fact times the maximum of 1 and the constraint violation at the initial point. Any point with a constraint violation larger than theta_max is unacceptable to the filter (see Eqn. (21) in the implementation paper).

Default: `10000`

**theta_min_fact** *(real)*: Determines the constraint violation threshold in the switching rule.

The algorithmic parameter theta_min is determined as theta_min_fact times the maximum of 1 and the constraint violation at the initial point. The switching rule treats an iteration as an h-type iteration whenever the current constraint violation is larger than theta_min (see the paragraph before Eqn. (19) in the implementation paper).

Default: `0.0001`

**time_limit** *(real)*: Set the global maximum computation time (in seconds) for the algorithm.

Default: `1000`

**tiny_element** *(real)*: Value for a tiny element in an OA cut.

An element smaller than this will be removed "cleanly", by relaxing the cut.

Default: `1e-08`

**tiny_step_tol** *(real)*: Tolerance for detecting numerically insignificant steps.

If the search direction in the primal variables (x and s) is, in relative terms for each component, less than this value, the algorithm accepts the full step without line search. If this happens repeatedly, the algorithm will terminate with a corresponding exit message. The default value is 10 times machine precision.

Default: `2.22045e-15`

**tiny_step_y_tol** *(real)*: Tolerance for quitting because of numerically insignificant steps.

If the search direction in the primal variables (x and s) is, in relative terms for each component, repeatedly less than tiny_step_tol, and the step in the y variables is smaller than this threshold, the algorithm will terminate.

Default: `0.01`

**tol** *(real)*: Desired convergence tolerance (relative).

Determines the convergence tolerance for the algorithm. The algorithm terminates successfully if the (scaled) NLP error becomes smaller than this value and the (absolute) criteria according to "dual_inf_tol", "constr_viol_tol", and "compl_inf_tol" are met. (This is epsilon_tol in Eqn. (6) in the implementation paper.) See also "acceptable_tol" as a second termination criterion. Note that some other algorithmic features also use this quantity to determine thresholds.

Default: `1e-08`
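Options such as these are typically passed to GAMS/Couenne through a solver option file. A minimal, illustrative sketch (the file name `couenne.opt` and the `*` comment style follow the usual GAMS solver option file convention; the values shown are arbitrary examples, not recommendations):

```
* couenne.opt -- illustrative only
tol             1e-6
time_limit      3600
solution_limit  5
```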

**tree_search_strategy** *(string)*: Pick a strategy for traversing the tree.

All strategies can be used in conjunction with any of the node comparison functions. Options which affect dfs-dive are max-backtracks-in-dive and max-dive-depth. The dfs-dive won't work on a nonconvex problem where the objective does not decrease down branches.

Default: `probed-dive`

- `dfs-dive`: Dive in the tree if possible, doing a depth first search. Backtrack on leaves, when a prescribed depth is attained, or when the estimate of the best possible integer feasible solution in the subtree is worse than the cutoff.
- `dfs-dive-dynamic`: Same as dfs-dive, but once enough solutions are found switch to best-bound, and if there are too many nodes switch to depth-first.
- `dive`: Dive in the tree if possible, otherwise pick the top node as sorted by the tree comparison function.
- `probed-dive`: Dive in the tree exploring two children before continuing the dive at each level.
- `top-node`: Always pick the top node as sorted by the node comparison function.

**trust_strong** *(string)*: Fathom strong branching LPs when their bound is above the cutoff.

Default: `yes`

- `no`
- `yes`

**trust_strong_branching_for_pseudo_cost** *(string)*: Whether or not to trust strong branching results for updating pseudo costs.

Default: `yes`

- `no`
- `yes`

**twoimpl_depth_level** *(integer)*: Depth of the B&B tree at which to start decreasing the chance of running this algorithm.

This has a similar behavior as log_num_obbt_per_level. A value of -1 means that generation can be done at all nodes.

Range: [`-1`, ∞], Default: `5`

**twoimpl_depth_stop** *(integer)*: Depth of the B&B tree where separation is stopped.

A value of -1 means that generation can be done at all nodes.

Range: [`-1`, ∞], Default: `20`

**two_implied_bt** *(integer)*: The frequency (in terms of nodes) at which Couenne two-implied bounds are tightened.

Range: [`-99`, ∞], Default: `0`

**two_implied_max_trials** *(integer)*: The number of iterations at each call to the cut generator.

Range: [`1`, ∞], Default: `2`

**use_auxcons** *(string)*: Use constraint-defined auxiliaries, i.e. auxiliaries w = f(x) defined by original constraints f(x) - w = 0.

Default: `yes`

- `no`
- `yes`

**use_quadratic** *(string)*: Use quadratic expressions and the related exprQuad class.

If enabled, quadratic forms are not reformulated and decomposed as a sum of auxiliary variables, each associated with a bilinear term, but rather taken as a whole expression. Envelopes for these expressions are generated through alpha-convexification.

Default: `no`

- `no`: Use an auxiliary for each bilinear term
- `yes`: Create only one auxiliary for a quadratic expression

**use_semiaux** *(string)*: Use semiauxiliaries, i.e. auxiliaries defined as w ≥ f(x) rather than w := f(x).

Default: `yes`

- `no`: Only use auxiliaries assigned with '='
- `yes`: Use auxiliaries defined by w ≤ f(x), w ≥ f(x), and w = f(x)

**variable_selection** *(string)*: Chooses the variable selection strategy.

Default: `strong-branching`

- `lp-strong-branching`: Perform strong branching with the LP approximation
- `most-fractional`: Choose the most fractional variable
- `nlp-strong-branching`: Perform strong branching with the NLP approximation
- `osi-simple`: Osi method to do simple branching
- `osi-strong`: Osi method to do strong branching
- `qp-strong-branching`: Perform strong branching with the QP approximation
- `random`: Choose the branching variable randomly
- `reliability-branching`: Use reliability branching
- `strong-branching`: Perform strong branching

**very_tiny_element** *(real)*: Value for a very tiny element in an OA cut.

The algorithm will take the risk of neglecting an element smaller than this.

Default: `1e-17`

**violated_cuts_only** *(string)*: Yes if only violated convexification cuts should be added.

Default: `yes`

- `no`
- `yes`

**warm_start** *(string)*: Select the warm start method.

This will affect the function getWarmStart(), and as a consequence the warm starting in the various algorithms.

Default: `none`

- `fake_basis`: builds a fake basis, useful for cut management in Cbc (warm start is the same as in none)
- `interior_point`: Warm start with an interior point of the direct parent
- `none`: No warm start, just start NLPs from the optimal solution of the root relaxation
- `optimum`: Warm start with the direct parent optimum
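The warm_start_* options listed next tune the initializer used together with these warm-start strategies. A hypothetical solver option file fragment combining them might look like this (illustrative values only):

```
* couenne.opt -- illustrative warm-start setup
warm_start              optimum
warm_start_init_point   yes
warm_start_bound_push   1e-4
warm_start_bound_frac   1e-4
```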

**warm_start_bound_frac** *(real)*: Same as bound_frac for the regular initializer.

Range: [`0`, `0.5`], Default: `0.001`

**warm_start_bound_push** *(real)*: Same as bound_push for the regular initializer.

Default: `0.001`

**warm_start_init_point** *(string)*: Warm-start for the initial point.

Indicates whether this optimization should use a warm start initialization, where values of primal and dual variables are given (e.g., from a previous optimization of a related problem).

Default: `no`

- `no`: do not use the warm start initialization
- `yes`: use the warm start initialization

**warm_start_mult_bound_push** *(real)*: Same as mult_bound_push for the regular initializer.

Default: `0.001`

**warm_start_mult_init_max** *(real)*: Maximum initial value for the equality multipliers.

Range: [-∞, ∞], Default: `1e+06`

**warm_start_slack_bound_frac** *(real)*: Same as slack_bound_frac for the regular initializer.

Range: [`0`, `0.5`], Default: `0.001`

**warm_start_slack_bound_push** *(real)*: Same as slack_bound_push for the regular initializer.

Default: `0.001`

**watchdog_shortened_iter_trigger** *(integer)*: Number of shortened iterations that trigger the watchdog.

If the number of successive iterations in which the backtracking line search did not accept the first trial point exceeds this number, the watchdog procedure is activated. Choosing "0" here disables the watchdog procedure.

Default: `10`

**watchdog_trial_iter_max** *(integer)*: Maximum number of watchdog iterations.

This option determines the number of trial iterations allowed before the watchdog procedure is aborted and the algorithm returns to the stored point.

Range: [`1`, ∞], Default: `3`