CONOPT is based on the standard NLP model in which all variables are continuous and all constraints are smooth with smooth first derivatives. In addition, the Jacobian (the matrix of first derivatives) is assumed to be sparse. CONOPT attempts to find a local optimum satisfying the usual Karush-Kuhn-Tucker optimality conditions.
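For reference, writing the model in the generic textbook form (minimize f(x) subject to g(x) ≤ 0 and h(x) = 0, which is the standard presentation and not CONOPT's internal representation), the Karush-Kuhn-Tucker conditions at a candidate point x* with multipliers λ and μ are:

\[
\begin{aligned}
&\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) + \sum_j \mu_j \nabla h_j(x^*) = 0 && \text{(stationarity)}\\
&g(x^*) \le 0, \quad h(x^*) = 0 && \text{(primal feasibility)}\\
&\lambda \ge 0 && \text{(dual feasibility)}\\
&\lambda_i \, g_i(x^*) = 0 \ \text{for all } i && \text{(complementary slackness)}
\end{aligned}
\]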
The nonlinear functions defining the model and their analytic derivatives are assumed to be computable with high accuracy.
Second derivatives are needed in some components of CONOPT, and models with many degrees of freedom can only be solved efficiently if second derivatives are available.
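Modeling systems such as GAMS and AMPL supply these derivatives automatically. When CONOPT is called through a lower-level interface, analytic second derivatives can be generated ahead of time, for example symbolically; a minimal sketch in Python using sympy (an illustration only, not part of any CONOPT API):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.sin(y) + x**2 * y          # made-up objective

grad = [sp.diff(f, v) for v in (x, y)]        # analytic first derivatives
H = sp.hessian(f, (x, y))                     # analytic second derivatives

# Compile the symbolic Hessian into a fast numeric function
hess = sp.lambdify((x, y), H, 'numpy')
print(hess(1.0, 2.0))                         # exact 2x2 Hessian at (1, 2)
```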
Models are assumed to be well scaled. CONOPT has an automatic scaling option, but nonlinear models are difficult to scale automatically, and good user-provided scaling is often crucial for large models.
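The effect of scaling can be seen directly on the Jacobian. A small sketch (the matrix below is hypothetical, mixing a constraint in units of about 1e6 with one in units of about 1e-3):

```python
import numpy as np

# Hypothetical Jacobian of a badly scaled model: the first constraint is
# expressed in units of ~1e6, the second in units of ~1e-3.
J = np.array([[2.0e6, 3.0e6],
              [4.0e-3, 1.0e-3]])
print(np.linalg.cond(J))        # ~1e9: numerically fragile

# Scale rows (constraints), then columns (variables), toward unit size
r = 1.0 / np.abs(J).max(axis=1, keepdims=True)
Js = r * J
c = 1.0 / np.abs(Js).max(axis=0, keepdims=True)
Js = Js * c
print(np.linalg.cond(Js))       # ~3: well conditioned
```

Choosing units so that variables and constraint residuals are of roughly unit magnitude achieves the same effect at the modeling level.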
Models with discrete variables cannot be solved by CONOPT itself. However, some modeling systems, such as AIMMS, AMPL, and GAMS, as well as the LINDO API, provide a framework around CONOPT (a Branch & Bound or an Outer Approximation algorithm) that can handle discrete variables.
Models with non-differentiable functions may be submitted to CONOPT, but CONOPT will be less reliable and may terminate at a point that is not a local optimum.
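A common remedy, independent of CONOPT, is to replace kinks with smooth approximations before submitting the model, for example

\[
|x| \approx \sqrt{x^2 + \varepsilon}, \qquad
\max(x, 0) \approx \tfrac{1}{2}\left(x + \sqrt{x^2 + \varepsilon}\right),
\qquad 0 < \varepsilon \ll 1,
\]

where ε is a small user-chosen constant; the approximation error near the kink is of order √ε.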
Dense models can also be solved with CONOPT, but computing time may be slightly higher than for algorithms using dense linear algebra.
CONOPT will usually not work well with noisy functions. In particular, nonlinear functions based on iterative solution of sub-models or numerical integration of differential equations will usually create problems for CONOPT. Derivatives computed with numerical differences are usually not sufficiently accurate.
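The core difficulty is easy to reproduce: with even tiny noise in the function values, a finite-difference derivative degrades as the step shrinks, because the noise is divided by the step. A small sketch (the noise level 1e-6 is an arbitrary stand-in for a sub-model tolerance):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, noise=1e-6):
    # A smooth function plus tiny noise, mimicking a value returned by
    # an inner iterative sub-model or an ODE integrator with a loose
    # convergence tolerance.
    return np.sin(x) + noise * rng.standard_normal()

x0, exact = 1.0, np.cos(1.0)           # true derivative of sin at x0
for h in (1e-2, 1e-4, 1e-6, 1e-8):
    fd = (f(x0 + h) - f(x0)) / h       # forward-difference estimate
    print(f"h = {h:.0e}   error = {abs(fd - exact):.1e}")
# The error is never much below ~1e-3 and grows as h shrinks:
# noise of size 1e-6 divided by a step of 1e-8 gives O(100) error.
```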
CONOPT cannot guarantee that the solution it finds is a global optimum. The user must be familiar with the theory of local vs. global solutions and judge the result accordingly. When a model has multiple local optima, or multiple local minima of the sum-of-infeasibilities objective, CONOPT may terminate at any one of these points.
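The effect is easy to demonstrate with any local solver; here SciPy's minimize stands in for a local NLP code such as CONOPT, and the objective is a made-up example with two local minima:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Two local minima: near x = -1.47 (global) and x = +1.35 (local)
    return float(x[0]**4 - 4*x[0]**2 + x[0])

for x0 in (-2.0, 2.0):
    res = minimize(f, np.array([x0]))
    print(f"start {x0:+.1f} -> x* = {res.x[0]:+.3f}, f(x*) = {res.fun:+.3f}")
```

Both runs terminate successfully, but only the start at -2.0 reaches the global optimum; the other converges to the inferior local minimum.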