Changeset 453


Timestamp:
Aug 16, 2005 6:54:47 PM
Author:
andreasw
Message:

several minor corrections

File:
1 edited

  • branches/dev/Docs/documentation.tex

    r450 r453  
    1414\setlength{\topmargin}{-0.5in}         % Top margin
    1515\renewcommand{\baselinestretch}{1.1}
    16 %\usepackage{times} % Times Roman font ?
     16\usepackage{amsfonts}
     17
     18\newcommand{\RR}{{\mathbb{R}}}
     19
    1720
    1821\begin{document}
    19 \title{
    20 \begin{large}
    21 \textbf{Introduction to Ipopt:}
    22 \end{large} \\
    23 \begin{small}
    24 A tutorial for downloading, installing, and using Ipopt. \\ Yoshiaki
    25 Kawajiri,\footnote{Department of Chemical Engineering, Carnegie Mellon
    26 University, Pittsburgh, PA, 15213, Email: kawajiri@cmu.edu} Carl
     22\title{Introduction to Ipopt:\\
     23A tutorial for downloading, installing, and using Ipopt.}
     24
     25\author{Yoshiaki
     26Kawajiri\footnote{Department of Chemical Engineering, Carnegie Mellon
     27University, Pittsburgh, PA, 15213, Email: kawajiri@cmu.edu}, Carl
    2728D. Laird\footnote{Department of Chemical Engineering, Carnegie Mellon
    28 University, Pittsburgh, PA, 15213, Email: kawajiri@cmu.edu}
    29 \end{small}
    30 }
     29University, Pittsburgh, PA, 15213, Email: kawajiri@cmu.edu}}
    3130
    3231%\date{\today}
     
    3635Ipopt (\underline{I}nterior \underline{P}oint \underline{Opt}imizer) is an open
    3736source software package for large-scale nonlinear optimization. It can
    38 be used to solve general nonlinear programming problems of the form,
     37be used to solve general nonlinear programming problems of the form
    3938\begin{eqnarray}
    40 \min_{x} &f(x) \label{obj} \\
    41 \mbox{s.t.} \;  &g^L \leq g(x) \leq g^U \\
    42                 &x^L \leq x \leq x^U, \label{bounds}
     39\min_{x} &&f(x) \label{obj} \\
     40\mbox{s.t.} \;  &&g^L \leq g(x) \leq g^U \\
     41                &&x^L \leq x \leq x^U, \label{bounds}
    4342\end{eqnarray}
    44 where $x \in \Re^n$ are the optimization variables (possibly with
    45 lower and upper bounds, $x^L$ and $x^U$), $f(x) \in \Re^1$ is the
    46 objective function, and $g(x) \in \Re^m$ are the general nonlinear
    47 constraints. The functions $f(x)$ and $g(x)$ can be linear or
    48 nonlinear and convex or non-convex. The constraints, $g(x)$ have lower
    49 and upper bounds, $g^L$ and $g^U$. Note that equality constraints of
    50 the form $g_i(x)=c$ can be specified by setting $g^L_{i}=g^U_{i}=c$.
     43where $x \in \RR^n$ are the optimization variables (possibly with
     44lower and upper bounds, $x^L\in(\RR\cup\{-\infty\})^n$ and
     45$x^U\in(\RR\cup\{+\infty\})^n$), $f:\RR^n\longrightarrow\RR$ is the
     46objective function, and $g:\RR^n\longrightarrow \RR^m$ are the general
     47nonlinear constraints.  The functions $f(x)$ and $g(x)$ can be linear
     48or nonlinear and convex or non-convex (but are assumed to be twice
     49continuously differentiable). The constraints, $g(x)$, have lower and
      50upper bounds, $g^L\in(\RR\cup\{-\infty\})^m$ and
     51$g^U\in(\RR\cup\{+\infty\})^m$. Note that equality constraints of the
     52form $g_i(x)=\bar g_i$ can be specified by setting $g^L_{i}=g^U_{i}=\bar g_i$.
    5153
    5254Ipopt implements an interior point line search filter method. The
    53 mathematical details of the algorithm can be found in the reports by
    54 the main author of Ipopt, Andreas W\"achter
     55mathematical details of the algorithm can be found in the reports
    5556\cite{AndreasPaper} \cite{AndreasThesis}.
    5657
     
    6970to look at the archives before posting a question.
    7071
    71 \section{New Ipopt versus Old Ipopt}
     72\section{History of Ipopt}
    7273The original Ipopt (Fortran version) was a product of the dissertation
    7374research of Andreas W\"achter \cite{AndreasThesis}, under Lorenz
     
    102103\begin{enumerate}
    103104\item{Create a directory to store the code}\\
    104 {\tt \$ mkdir ipopt}\\
     105{\tt \$ mkdir Ipopt}\\
    105106Note the {\tt \$} indicates the command line
    106107prompt, do not type {\tt \$}, only the text following it.
    107108\item{Download the code to the new directory}\\
    108 {\tt \$ cd ipopt; svn co https://www.coin-or.org/svn/ipopt-devel/trunk}
     109{\tt \$ cd Ipopt; svn co https://www.coin-or.org/svn/ipopt-devel/trunk}
    109110\end{enumerate}
    110111
    111112\subsection{Download External Code}
    112113Ipopt uses a few external packages that are not included in the
    113 distribution, namely ASL (the Ampl Solver Library), blas, lapack and
    114 some routines from the Harwell Subroutine Library.
    115 
    116 
    117 \subsubsection{Download ASL, blas, and lapack}
    118 Retrieving ASL, blas, and lapack is straightforward using scripts
    119 included with the ipopt distribution. These scripts download the required
    120 files from the Netlib Repository \\
    121 (http://www.netlib.org).\\
     114distribution, namely ASL (the Ampl Solver Library), BLAS, and
     115some routines from the Harwell Subroutine Library.
     116
     117Note that you only need to obtain the ASL if you intend to use Ipopt
      118from AMPL.  It is not required if you want to specify your
     119optimization problem in a programming language (C++, C, or Fortran).
     120
     121\subsubsection{Download BLAS and ASL}
     122If you have the download utility \texttt{wget} installed on your
      123system, retrieving ASL and BLAS is straightforward using scripts
      124included with the Ipopt distribution. These scripts download the
     125required files from the Netlib Repository (http://www.netlib.org).\\
     126
     127\noindent
    122128{\tt \$ cd trunk/ipopt/Extern/blas; ./get.blas}\\
    123 {\tt \$ cd trunk/ipopt/Extern/lapack; ./get.lapack}\\
    124 {\tt \$ cd trunk/ipopt/Extern/ASL; ./get.asl}\\
    125 
    126 \subsubsection{Download HSL libraries}
     129{\tt \$ cd ../ASL; ./get.asl}\\
     130
     131\noindent
     132If you don't have \texttt{wget} installed on your system, please read
     133the \texttt{INSTALL.*} files in the \texttt{trunk/ipopt/Extern/blas}
     134and \texttt{trunk/ipopt/Extern/ASL} directories for alternative
     135instructions.
     136
     137\subsubsection{Download HSL Subroutines}
    127138In addition to the IPOPT source code, two additional subroutines have
    128139to be downloaded from the Harwell Subroutine Library (HSL).  The
     
    150161\end{enumerate}
    151162
     163
    152164\section{Compiling and Installing Ipopt} \label{sec.comp_and_inst}
    153165Ipopt can be easily compiled and installed with the usual {\tt configure},
     
    179191representation that should meet the needs of most users.
    180192
    181 This tutorial will discuss four interfaces to Ipopt, the AMPL modeling
    182 language interface, and the C, C++, and Fortran code interfaces.  AMPL
    183 is a 3rd party modeling language tool that allows users to write their
    184 optimization problem in a syntax that resembles the way the problem
    185 would be written mathematically. Once the problem has been formulated
    186 in AMPL, the problem can be easily solved using the (already compiled)
    187 executable. Interfacing your problem by directly linking code requires
    188 more effort to write, but can be far more efficient for large
    189 problems.
     193This tutorial will discuss four interfaces to Ipopt, namely the AMPL
     194modeling language interface, and the C, C++, and Fortran code
     195interfaces.  AMPL is a 3rd party modeling language tool that allows
     196users to write their optimization problem in a syntax that resembles
     197the way the problem would be written mathematically. Once the problem
     198has been formulated in AMPL, the problem can be easily solved using
     199the (already compiled) executable. Interfacing your problem by
     200directly linking code requires more effort to write, but can be far
     201more efficient for large problems.
    190202
    191203We will illustrate how to use each of the four interfaces using an
     
    193205\begin{eqnarray}
    194206\min_{x \in \Re^4} &x_1 x_4 (x_1 + x_2 + x_3)  +  x_3 \label{ex_obj} \\
    195 \mbox{s.t.} \;  &x_1 x_2 x_3 x_4 \ge 25 \label{ex_ineq} \\
    196                 &x_1^2 + x_2^2 + x_3^2 + x_4^2  =  40 \label{ex_equ} \\
    197                 &1 \leq x_1, x_2, x_3, x_4 \leq 5, \label{ex_bounds}
     207\mbox{s.t.} \;  &x_1 x_2 x_3 x_4 \ge 25 \label{ex_ineq} \\
     208                &x_1^2 + x_2^2 + x_3^2 + x_4^2  =  40 \label{ex_equ} \\
     209                &1 \leq x_1, x_2, x_3, x_4 \leq 5, \label{ex_bounds}
    198210\end{eqnarray}
    199211with the starting point,
     
    209221Interfacing through the AMPL interface is by far the easiest way to
    210222solve a problem with Ipopt. The user must simply formulate the problem
    211 in AMPL syntax, and solve the problem through the AMPL
    212 environment. There are drawbacks, however. AMPL is a 3rd party package
    213 and, as such, must be appropriately licensed (a free, limited student
    214 version is available from the AMPL website,
     223in AMPL syntax, and solve the problem through the AMPL environment.
     224There are drawbacks, however. AMPL is a 3rd party package and, as
      225such, must be appropriately licensed (a free student version, limited
      226in problem size, is available from the AMPL website,
    215227www.ampl.com). Furthermore, the AMPL environment may be prohibitive
    216228for very large problems. Nevertheless, formulating the problem in AMPL
     
    223235
    224236The problem presented in equations (\ref{ex_obj}-\ref{ex_startpt}) can
    225 be solved with ipopt with the following AMPL mod file.
     237be solved with Ipopt with the following AMPL mod file.
    226238\subsubsection{hs071\_ampl.mod}
    227239\begin{verbatim}
     
    239251# specify the objective function
    240252minimize obj:
    241                 x1 * x4 * (x1 + x2 + x3) + x3;
    242        
     253                x1 * x4 * (x1 + x2 + x3) + x3;
     254       
    243255# specify the constraints
    244256s.t.
    245        
    246         inequality:
    247                 x1 * x2 * x3 * x4 >= 25;
    248                
    249         equality:
    250                 x1\^2 + x2\^2 + x3\^2 +x4\^2 = 40;
    251 
    252 # specify the starting point           
     257       
     258        inequality:
     259                x1 * x2 * x3 * x4 >= 25;
     260               
     261        equality:
     262                x1^2 + x2^2 + x3^2 +x4^2 = 40;
     263
     264# specify the starting point           
    253265let x1 := 1;
    254266let x2 := 5;
     
    273285\begin{verbatim}
    274286$ ampl
    275 > model hs071\_ampl.mod;
     287> model hs071_ampl.mod;
    276288.
    277289.
     
    298310\caption{Information Required By Ipopt}
    299311\begin{enumerate}
    300         \item Problem dimensions \label{it.prob_dim}
    301                 \begin{itemize}
    302                         \item number of variables
    303                         \item number of constraints
    304                 \end{itemize}
    305         \item Problem bounds
    306                 \begin{itemize}
    307                         \item variable bounds
    308                         \item constraint bounds
    309                 \end{itemize}
    310         \item Initial starting point
    311                 \begin{itemize}
    312                         \item Initial values for the primal $x$ variables
    313                         \item Initial values for the multipliers may
    314                                 be given, but are not required
    315                 \end{itemize}
    316         \item Problem Structure \label{it.prob_struct}
    317                 \begin{itemize}
    318                 \item number of nonzeros in the jacobian of the constraints
    319                 \item number of nonzeros in the hessian of the lagrangian
    320                 \item Structure of the Jacobian of the constraints
    321                 \item Structure of the Hessian of the Lagrangian
    322                 \end{itemize}
    323         \item Evaluation of Problem Functions \label{it.prob_eval} \\
    324         Information evaluated using a given point
    325         ($x_k, \lambda_k$ coming from Ipopt)
    326                 \begin{itemize}
    327                         \item Objective function, $f(x_k)$
    328                         \item Gradient of the objective $\nabla f(x_k)$
    329                         \item Constraint residuals, $g(x_k)$
    330                         \item Jacobian of the constraints, $\nabla g(x_k)$
    331                         \item Hessian of the Lagrangian,
    332                         $\alpha_f \nabla^2_x f(x_k) + \nabla^2_x g(x_k)^T \lambda$
    333                 \end{itemize}
     312        \item Problem dimensions \label{it.prob_dim}
     313                \begin{itemize}
     314                        \item number of variables
     315                        \item number of constraints
     316                \end{itemize}
     317        \item Problem bounds
     318                \begin{itemize}
     319                        \item variable bounds
     320                        \item constraint bounds
     321                \end{itemize}
     322        \item Initial starting point
     323                \begin{itemize}
     324                        \item Initial values for the primal $x$ variables
     325                        \item Initial values for the multipliers (only
     326                          required for a warm start option)
     327                \end{itemize}
     328        \item Problem Structure \label{it.prob_struct}
     329                \begin{itemize}
     330                \item number of nonzeros in the Jacobian of the constraints
     331                \item number of nonzeros in the Hessian of the Lagrangian
     332                \item Structure of the Jacobian of the constraints
     333                \item Structure of the Hessian of the Lagrangian
     334                \end{itemize}
     335        \item Evaluation of Problem Functions \label{it.prob_eval} \\
     336        Information evaluated using a given point
     337        ($x_k, \lambda_k, \sigma_f$ coming from Ipopt)
     338                \begin{itemize}
     339                        \item Objective function, $f(x_k)$
     340                        \item Gradient of the objective $\nabla f(x_k)$
     341                        \item Constraint residuals, $g(x_k)$
     342                        \item Jacobian of the constraints, $\nabla g(x_k)$
     343                        \item Hessian of the Lagrangian,
     344                        $\sigma_f \nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k)$
     345                \end{itemize}
    334346\end{enumerate}
    335347\label{fig.required_info}
    336348\end{figure}
    337349\vspace{0.1in}
    338 The information required by Ipopt is shown in figure
     350The information required by Ipopt is shown in Figure
    339351\ref{fig.required_info}. The problem dimensions and bounds are
    340352straightforward and come solely from the problem definition. The
     
    363375(\ref{ex_obj}-\ref{ex_bounds}).
    364376
    365 The gradient of the objective is given by,
     377The gradient of the objective is given by
    366378\begin{equation}
    367379\left[
     
    374386\right],
    375387\end{equation}
    376 the jacobian of
     388the Jacobian of
    377389the constraints is,
    378390\begin{equation}
    379391\left[
    380392\begin{array}{cccc}
    381 x_2 x_3 x_4     & x_1 x_3 x_4   & x_1 x_2 x_4   & x_1 x_2 x_3   \\
    382 2 x_1           & 2 x_2         & 2 x_3         & 2 x_4
     393x_2 x_3 x_4     & x_1 x_3 x_4   & x_1 x_2 x_4   & x_1 x_2 x_3   \\
     3942 x_1           & 2 x_2         & 2 x_3         & 2 x_4
    383395\end{array}
    384396\right].
    385397\end{equation}
    386398
    387 We need to determine the hessian of the Lagrangian.  The Lagrangian is
     399We need to determine the Hessian of the Lagrangian.  The Lagrangian is
    388400given by $f(x) + g(x)^T \lambda$ and the Hessian of the Lagrangian is
    389 technically, $ \nabla^2_x f(x_k) + \nabla^2_x g(x_k)^T \lambda$,
    390 however, so that Ipopt can ask for the hessian of the objective or the
      401technically $\nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k)$.
      402However, so that Ipopt can ask for the Hessian of the objective or the
    391403constraints independently if required, we introduce a factor
    392404($\sigma_f$) in front of the objective term. The value for $\sigma_f$
     
    407419\sigma_f \left[
    408420\begin{array}{cccc}
    409 2 x_4           & x_4           & x_4           & 2 x_1 + x_2 + x_3     \\
    410 x_4             & 0             & 0             & x_1                   \\
    411 x_4             & 0             & 0             & x_1                   \\
    412 2 x_1+x_2+x_3   & x_1           & x_1           & 0
     4212 x_4           & x_4           & x_4           & 2 x_1 + x_2 + x_3     \\
     422x_4             & 0             & 0             & x_1                   \\
     423x_4             & 0             & 0             & x_1                   \\
     4242 x_1+x_2+x_3   & x_1           & x_1           & 0
    413425\end{array}
    414426\right]
    415427+
     428\lambda_1
    416429\left[
    417430\begin{array}{cccc}
    418 0               & x_3 x_4       & x_2 x_4       & x_2 x_3       \\
    419 x_3 x_4         & 0             & x_1 x_4       & x_1 x_3       \\
    420 x_2 x_4         & x_1 x_4       & 0             & x_1 x_2       \\
    421 x_2 x_3         & x_1 x_3       & x_1 x_2       & 0
     4310               & x_3 x_4       & x_2 x_4       & x_2 x_3       \\
     432x_3 x_4         & 0             & x_1 x_4       & x_1 x_3       \\
     433x_2 x_4         & x_1 x_4       & 0             & x_1 x_2       \\
     434x_2 x_3         & x_1 x_3       & x_1 x_2       & 0
    422435\end{array}
    423 \right] \lambda_1
     436\right]
    424437+
     438\lambda_2
    425439\left[
    426440\begin{array}{cccc}
    427 2       & 0     & 0     & 0     \\
    428 0       & 2     & 0     & 0     \\
    429 0       & 0     & 2     & 0     \\
    430 0       & 0     & 0     & 2
     4412       & 0     & 0     & 0     \\
     4420       & 2     & 0     & 0     \\
     4430       & 0     & 2     & 0     \\
     4440       & 0     & 0     & 2
    431445\end{array}
    432 \right] \lambda_2
     446\right]
    433447\end{equation}
    434 where the first term comes from the hessian of the objective function, and the
    435 second and third term from the hessian of (\ref{ex_ineq}) and (\ref{ex_equ})
     448where the first term comes from the Hessian of the objective function, and the
      449second and third terms from the Hessians of (\ref{ex_ineq}) and (\ref{ex_equ}),
    436450respectively. Therefore, the dual variables $\lambda_1$ and $\lambda_2$
    437451are then the multipliers for constraints (\ref{ex_ineq}) and (\ref{ex_equ})
     
    460474(\ref{ex_obj}-\ref{ex_bounds}) using, first C++, then C, and finally
    461475Fortran. Completed versions of these examples can be found in {\tt
    462 ipopt/trunk/Examples} under {\tt hs071\_cpp}, {\tt hs071\_c}, {\tt
     476Ipopt/trunk/Examples} under {\tt hs071\_cpp}, {\tt hs071\_c}, {\tt
    463477hs071\_f}.
    464478
    465479As a user, you are responsible for coding two sections of the program
    466 that solves a problem using Ipopt, the executable ({\tt main}) and the
     480that solves a problem using Ipopt: the executable ({\tt main}) and the
    467481problem representation.  Generally, you will write an executable that
    468482prepares the problem, and then passes control over to Ipopt through an
    469483{\tt Optimize} call. In this {\tt Optimize} call, you will give Ipopt
    470484everything that it requires to call back to your code whenever it
    471 needs functions evaluated (like the objective, the jacobian, etc.).
     485needs functions evaluated (like the objective, the Jacobian, etc.).
    472486In each of the three sections that follow (C++, C, and Fortran), we
    473487will first discuss how to code the problem representation, and then
     
    500514the compiler that we are using the Ipopt namespace, and create the
    501515declaration of the {\tt HS071\_NLP} class, inheriting off of {\tt
    502 TNLP}. Have a look at the {\tt TNLP} class in {\tt IpTNLP.hpp}; you
     516  TNLP}. Have a look at the {\tt TNLP} class in {\tt IpTNLP.hpp}; you
    503517will see eight pure virtual methods that we must implement. Declare
    504 these methods in the header file.
    505 Implement each of the methods in {\tt HS071\_NLP.cpp} using the descriptions
    506 given below. In {\tt hs071\_nlp.cpp}, first include the header file for yoru class
    507 and tell the compiler that you are using the Ipopt namespace. A full version of
    508 these files can be found in the {\tt Examples/hs071\_cpp} directory.
     518these methods in the header file.  Implement each of the methods in
     519{\tt HS071\_NLP.cpp} using the descriptions given below. In {\tt
      520  HS071\_NLP.cpp}, first include the header file for your class and
     521tell the compiler that you are using the Ipopt namespace. A full
     522version of these files can be found in the {\tt Examples/hs071\_cpp}
     523directory.
    509524
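As a rough orientation, the declaration in {\tt HS071\_NLP.hpp} could look like the sketch below (the signatures are simply repeated from the method descriptions that follow; take the authoritative list of methods from {\tt IpTNLP.hpp} and the complete header from {\tt Examples/hs071\_cpp}):
\begin{verbatim}
// HS071_NLP.hpp -- sketch only; see Examples/hs071_cpp for the full version
#include "IpTNLP.hpp"

using namespace Ipopt;

class HS071_NLP : public TNLP
{
public:
  HS071_NLP();
  virtual ~HS071_NLP();

  // overloaded from TNLP (each method is described in detail below)
  virtual bool get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
                            Index& nnz_h_lag, IndexStyleEnum& index_style);
  virtual bool get_starting_point(Index n, bool init_x, Number* x,
                                  bool init_z, Number* z_L, Number* z_U,
                                  Index m, bool init_lambda, Number* lambda);
  virtual bool eval_f(Index n, const Number* x, bool new_x, Number& obj_value);
  virtual bool eval_grad_f(Index n, const Number* x, bool new_x,
                           Number* grad_f);
  virtual bool eval_g(Index n, const Number* x, bool new_x, Index m, Number* g);
  virtual bool eval_jac_g(Index n, const Number* x, bool new_x,
                          Index m, Index nele_jac, Index* iRow, Index* jCol,
                          Number* values);
  virtual bool eval_h(Index n, const Number* x, bool new_x, Number obj_factor,
                      Index m, const Number* lambda, bool new_lambda,
                      Index nele_hess, Index* iRow, Index* jCol,
                      Number* values);

  // get_bounds_info and finalize_solution are also required; their exact
  // signatures are given in IpTNLP.hpp and in the full example
};
\end{verbatim}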
    510525\paragraph{virtual bool get\_nlp\_info(Index\& n, Index\& m, Index\& nnz\_jac\_g, \\
     
    516531\item {\tt n}: (out), the number of variables in the problem (dimension of {\tt x}).
    517532\item {\tt m}: (out), the number of constraints in the problem (dimension of {\tt g}).
    518 \item {\tt nnz\_jac\_g}: (out), the number of nonzero entries in the jacobian.
    519 \item {\tt nnz\_h\_lag}: (out), the number of nonzero entries in the hessian.
     533\item {\tt nnz\_jac\_g}: (out), the number of nonzero entries in the Jacobian.
     534\item {\tt nnz\_h\_lag}: (out), the number of nonzero entries in the Hessian.
    520535\item {\tt index\_style}: (out), the style used for row/col entries in the sparse matrix
    521536format (C\_STYLE: 0-based, FORTRAN\_STYLE: 1-based).
     
    523538Ipopt uses this information when allocating the arrays that
    524539it will later ask you to fill with values. Be careful in this method
    525 since incorrect values could cause memory bugs which may be very
     540since incorrect values will cause memory bugs which may be very
    526541difficult to find.
    527542
    528 Our example problem has 4 variables (n), and two
    529 constraints (m). The jacobian for this small problem is actually dense
    530 and has 8 nonzeros (we will still represent this jacobian using the
    531 sparse matrix triplet format). The Hessian of the Lagrangian has 16
    532 nonzeros. Keep in mind that the number of nonzeros is the total number of elements
    533 that may \emph{ever} be nonzero, not just those that are nonzero at the starting
    534 point. This information is set once for the entire problem.
     543Our example problem has 4 variables (n), and two constraints (m). The
     544Jacobian for this small problem is actually dense and has 8 nonzeros
     545(we will still represent this Jacobian using the sparse matrix triplet
     546format). The Hessian of the Lagrangian has 10 ``symmetric'' nonzeros.
     547Keep in mind that the number of nonzeros is the total number of
     548elements that may \emph{ever} be nonzero, not just those that are
     549nonzero at the starting point. This information is set once for the
     550entire problem.
    535551
    536552\begin{verbatim}
    537553bool HS071_NLP::get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
    538                              Index& nnz_h_lag, IndexStyleEnum& index_style)
     554                             Index& nnz_h_lag, IndexStyleEnum& index_style)
    539555{
    540556  // The problem described in HS071_NLP.hpp has 4 variables, x[0] through x[3]
     
    544560  m = 2;
    545561
    546   // in this example the jacobian is dense and contains 8 nonzeros
     562  // in this example the Jacobian is dense and contains 8 nonzeros
    547563  nnz_jac_g = 8;
    548564
    549   // the hessian is also dense and has 16 total nonzeros, but we
     565  // the Hessian is also dense and has 16 total nonzeros, but we
    550566  // only need the lower left corner (since it is symmetric)
    551567  nnz_h_lag = 10;
     
    570586\item {\tt g\_u}: (out) the upper bounds for {\tt g}.
    571587\end{itemize}
    572 The values of {\tt n} and {\tt m} that you specified in {\tt get\_nlp\_info}
    573 are passed to you for debug checking.
    574 Setting a lower bound to a value less than {\tt nlp\_lower\_bound\_inf\_}
    575 (data member of {\tt TNLP}) will cause
     588The values of {\tt n} and {\tt m} that you specified in {\tt
     589  get\_nlp\_info} are passed to you for debug checking.  Setting a
     590lower bound to a value less than or equal to the value of the option
      591{\tt nlp\_lower\_bound\_inf} will cause
    576592Ipopt to assume no lower bound. Likewise, specifying the upper bound
    577 above {\tt nlp\_upper\_bound\_inf\_} will cause Ipopt to assume no
    578 upper bound. The variables, {\tt nlp\_lower\_bound\_inf\_} and {\tt
    579 nlp\_upper\_bound\_inf\_}, are set to $-1e^{19}$ and $1e^{19}$
    580 respectively, by default, but may be modified by changing the options
    581 {\tt nlp\_lower\_bound\_inf} and {\tt nlp\_upper\_bound\_inf} (see
    582 Section \ref{sec.options}).
     593above or equal to the value of the option {\tt nlp\_upper\_bound\_inf}
     594will cause Ipopt to assume no upper bound.  These options, {\tt
     595  nlp\_lower\_bound\_inf} and {\tt nlp\_upper\_bound\_inf}, are set to
     596$-10^{19}$ and $10^{19}$ respectively, by default, but may be modified
     597by changing the options (see Section \ref{sec.options}).
    583598
    584599In our example, the first constraint has a lower bound of $25$ and no upper
    585600bound, so we set the lower bound of constraint {\tt [0]} to $25$ and
    586 the upper bound to some number greater than $1e^{19}$. The second
     601the upper bound to some number greater than $10^{19}$. The second
    587602constraint is an equality constraint and we set both bounds to
    588603$40$. Ipopt recognizes this as an equality constraint and does not
     
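The bounds described above translate into code along the following lines; this is only a sketch (the {\tt get\_bounds\_info} signature is assumed from {\tt IpTNLP.hpp}), and the complete version is in {\tt Examples/hs071\_cpp}.
\begin{verbatim}
// sketch of get_bounds_info for the example problem; the signature is
// assumed from IpTNLP.hpp, see Examples/hs071_cpp for the full version
bool HS071_NLP::get_bounds_info(Index n, Number* x_l, Number* x_u,
                                Index m, Number* g_l, Number* g_u)
{
  // all four variables have lower bound 1 and upper bound 5
  for (Index i=0; i<4; i++) {
    x_l[i] = 1.0;
    x_u[i] = 5.0;
  }

  // the first constraint has a lower bound of 25 and no upper bound;
  // any value >= nlp_upper_bound_inf (default 1e19) means "no upper bound"
  g_l[0] = 25;
  g_u[0] = 2e19;

  // the second constraint is an equality constraint with value 40
  g_l[1] = 40;
  g_u[1] = 40;

  return true;
}
\end{verbatim}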
    626641\paragraph{virtual bool get\_starting\_point(Index n, bool init\_x, Number* x, \\
    627642              bool init\_z, Number* z\_L, Number* z\_U, Index m,
    628         bool init\_lambda, Number* lambda)}
     643        bool init\_lambda, Number* lambda)}
    629644$\;$ \\
    630645Give Ipopt the starting point before it begins iterating.
     
    634649\item {\tt x}: (out), the initial values for the primal variables, {\tt x}.
    635650\item {\tt init\_z}: (in), if true, this method must provide an initial value
    636         for the bound multipliers {\tt z\_L}, and {\tt z\_U}.
     651        for the bound multipliers {\tt z\_L}, and {\tt z\_U}.
    637652\item {\tt z\_L}: (out), the initial values for the bound multipliers, {\tt z\_L}.
    638653\item {\tt z\_U}: (out), the initial values for the bound multipliers, {\tt z\_U}.
    639654\item {\tt m}: (in), the number of constraints in the problem (dimension of {\tt g}).
    640655\item {\tt init\_lambda}: (in), if true, this method must provide an initial value
    641         for the constraint multipliers, {\tt lambda}.
     656        for the constraint multipliers, {\tt lambda}.
    642657\item {\tt lambda}: (out), the initial values for the constraint multipliers, {\tt lambda}.
    643658\end{itemize}
     
    668683  // Here, we assume we only have starting values for x, if you code
    669684  // your own NLP, you can provide starting values for the dual variables
    670   // if you wish
     685  // if you wish to use a warmstart option
    671686  assert(init_x == true);
    672687  assert(init_z == false);
     
    684699
    685700\paragraph{virtual bool eval\_f(Index n, const Number* x,
    686                 bool new\_x, Number\& obj\_value)}
     701                bool new\_x, Number\& obj\_value)}
    687702$\;$ \\
    688703Return the value of the objective function as calculated using {\tt x}.
     
    690705\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
    691706\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
    692 \item {\tt new\_x}: (in), false if an evaluation method was previously called
    693         with the same values in {\tt x}, true otherwise.
     707\item {\tt new\_x}: (in), false if any evaluation method was previously called
     708        with the same values in {\tt x}, true otherwise.
    694709\item {\tt obj\_value}: (out) the value of the objective function ($f(x)$).
    695710\end{itemize}
     
    698713any of the evaluation methods used the same $x$ values. This can be
    699714helpful when users have efficient implementations that calculate
    700 multiple outputs at once. Ipopt internally cache's results from the
     715multiple outputs at once. Ipopt internally caches results from the
    701716{\tt TNLP} and generally, this flag can be ignored.
    702717
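In our example, we ignore the {\tt new\_x} flag and simply evaluate the objective (\ref{ex_obj}); a minimal sketch is shown below (the complete version can be found in {\tt Examples/hs071\_cpp}).
\begin{verbatim}
// sketch of eval_f for the example problem; see Examples/hs071_cpp
// for the full version
bool HS071_NLP::eval_f(Index n, const Number* x, bool new_x, Number& obj_value)
{
  assert(n == 4);

  // objective: x1*x4*(x1 + x2 + x3) + x3
  obj_value = x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];

  return true;
}
\end{verbatim}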
     
    719734
    720735\paragraph{virtual bool eval\_grad\_f(Index n, const Number* x, bool new\_x,
    721         Number* grad\_f)}
     736        Number* grad\_f)}
    722737$\;$ \\
    723738Return the gradient of the objective to Ipopt, as calculated by the values in {\tt x}.
     
    725740\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
    726741\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
    727 \item {\tt new\_x}: (in), false if an evaluation method was previously called
    728         with the same values in {\tt x}, true otherwise.
     742\item {\tt new\_x}: (in), false if any evaluation method was previously called
     743        with the same values in {\tt x}, true otherwise.
    729744\item {\tt grad\_f}: (out) the array of values for the gradient of the
    730         objective function ($\nabla f(x)$).
     745        objective function ($\nabla f(x)$).
    731746\end{itemize}
    732747
    733 The gradient array is in the same order as the $x$ variables (i.e. the
     748The gradient array is in the same order as the $x$ variables (i.e.\ the
    734749gradient of the objective with respect to {\tt x[2]} should be put in
    735 {\tt grad\_f[2]}.
     750{\tt grad\_f[2]}).
    736751
    737752The boolean variable {\tt new\_x} will be false if the last call to
    738753any of the evaluation methods used the same $x$ values. This can be
     739754helpful when users have efficient implementations that calculate
    740 multiple outputs at once. Ipopt internally cache's results from the
     755multiple outputs at once. Ipopt internally caches results from the
    741756{\tt TNLP} and generally, this flag can be ignored.
    742757
     
    745760get\_nlp\_info}.
    746761
    747 In our example, we ignore the {\tt new\_x} flag and calculate the values for the
    748 gradient of the objective.
     762In our example, we ignore the {\tt new\_x} flag and calculate the
     763values for the gradient of the objective.
    749764\begin{verbatim}
    750765bool HS071_NLP::eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f)
     
    763778
    764779\paragraph{virtual bool eval\_g(Index n, const Number* x,
    765         bool new\_x, Index m, Number* g)}
     780        bool new\_x, Index m, Number* g)}
    766781$\;$ \\
    767782Give Ipopt the value of the constraints as calculated by the values in {\tt x}.
     
    769784\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
    770785\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
    771 \item {\tt new\_x}: (in), false if an evaluation method was previously called
    772         with the same values in {\tt x}, true otherwise.
     786\item {\tt new\_x}: (in), false if any evaluation method was previously called
     787        with the same values in {\tt x}, true otherwise.
    773788\item {\tt m}: (in), the number of constraints in the problem (dimension of {\tt g}).
    774789\item {\tt g}: (out) the array of constraint residuals.
     
    781796any of the evaluation methods used the same $x$ values. This can be
    782797helpful when users have efficient implementations that calculate
    783 multiple outputs at once. Ipopt internally cache's results from the
     798multiple outputs at once. Ipopt internally caches results from the
    784799{\tt TNLP} and generally, this flag can be ignored.
    785800
     
    788803{\tt get\_nlp\_info}.
    789804
     805In our example, we ignore the {\tt new\_x} flag and calculate the
      806values of the constraint functions, $g(x)$.
     807\begin{verbatim}
     808bool HS071_NLP::eval_g(Index n, const Number* x, bool new_x, Index m, Number* g)
     809{
     810  assert(n == 4);
     811  assert(m == 2);
     812
     813  g[0] = x[0] * x[1] * x[2] * x[3];
     814  g[1] = x[0]*x[0] + x[1]*x[1] + x[2]*x[2] + x[3]*x[3];
     815
     816  return true;
     817}
     818\end{verbatim}
     819
    790820\paragraph{virtual bool eval\_jac\_g(Index n, const Number* x, bool new\_x, \\
    791821                       Index m, Index nele\_jac, Index* iRow,
    792         Index *jCol, Number* values)}
     822        Index *jCol, Number* values)}
    793823$\;$ \\
    794 Return either the structure of the jacobian of the constraints, or the values for the
    795 jacobian of the constraints as calculated by the values in {\tt x}.
     824Return either the structure of the Jacobian of the constraints, or the values for the
     825Jacobian of the constraints as calculated by the values in {\tt x}.
    796826\begin{itemize}
    797827\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
    798828\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
    799 \item {\tt new\_x}: (in), false if an evaluation method was previously called
    800         with the same values in {\tt x}, true otherwise.
     829\item {\tt new\_x}: (in), false if any evaluation method was previously called
     830        with the same values in {\tt x}, true otherwise.
    801831\item {\tt m}: (in), the number of constraints in the problem (dimension of {\tt g}).
     802832\item {\tt nele\_jac}: (in), the number of nonzero elements in the
    803         jacobian (dimension of {\tt iRow}, {\tt jCol}, and {\tt values}).
    804 \item {\tt iRow}: (out), the row indices of entries in the jacobian of the constraints.
    805 \item {\tt jCol}: (out), the column indices of entries in the jacobian of the constraints.
    806 \item {\tt values}: (out), the values of the entries in the jacobian of the constraints.
     833        Jacobian (dimension of {\tt iRow}, {\tt jCol}, and {\tt values}).
     834\item {\tt iRow}: (out), the row indices of entries in the Jacobian of the constraints.
     835\item {\tt jCol}: (out), the column indices of entries in the Jacobian of the constraints.
     836\item {\tt values}: (out), the values of the entries in the Jacobian of the constraints.
    807837\end{itemize}
    808838
    809 The jacobian is the matrix of derivatives where
     839The Jacobian is the matrix of derivatives where
    810840the derivative of constraint $i$ with respect to variable $j$ is
    811841placed in row $i$ and column $j$. See Appendix \ref{app.triplet} for a discussion of
     
    813843
    814844If the {\tt iRow} and {\tt jCol} arguments are not NULL, then Ipopt
    815 wants you to fill in the structure of the jacobian (the row and column
     845wants you to fill in the structure of the Jacobian (the row and column
    816846indices only). At this time, the {\tt x} argument and the {\tt values}
    817847argument will be NULL.
    818848
    819849If the {\tt x} argument and the {\tt values} argument are not NULL,
    820 then Ipopt wants you to fill in the values of the jacobian as
     850then Ipopt wants you to fill in the values of the Jacobian as
    821851calculated from the array {\tt x} (using the same order as you used
    822852when specifying the structure). At this time, the {\tt iRow} and {\tt
     
    826856any of the evaluation methods used the same $x$ values. This can be
     827857helpful when users have efficient implementations that calculate
    828 multiple outputs at once. Ipopt internally cache's results from the
     858multiple outputs at once. Ipopt internally caches results from the
    829859{\tt TNLP} and generally, this flag can be ignored.
    830860
     
    833863values you specified in {\tt get\_nlp\_info}.
    834864
    835 In our example, the jacobian is actually dense, but we still
     865In our example, the Jacobian is actually dense, but we still
    836866specify it using the sparse format.
    837867
     
    842872{
    843873  if (values == NULL) {
    844     // return the structure of the jacobian
    845 
    846     // this particular jacobian is dense
     874    // return the structure of the Jacobian
     875
     876    // this particular Jacobian is dense
    847877    iRow[0] = 0; jCol[0] = 0;
    848878    iRow[1] = 0; jCol[1] = 1;
     
    855885  }
    856886  else {
    857     // return the values of the jacobian of the constraints
     887    // return the values of the Jacobian of the constraints
    858888   
    859889    values[0] = x[1]*x[2]*x[3]; // 0,0
     
    874904\paragraph{virtual bool eval\_h(Index n, const Number* x, bool new\_x,\\
    875905   Number obj\_factor, Index m, const Number* lambda, bool new\_lambda,\\
    876         Index nele\_hess, Index* iRow, Index* jCol, Number* values)}
     906        Index nele\_hess, Index* iRow, Index* jCol, Number* values)}
    877907$\;$ \\
    878908Return the structure of the Hessian of the Lagrangian or the values of the
     
    882912\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
    883913\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
    884 \item {\tt new\_x}: (in), false if an evaluation method was previously called
    885         with the same values in {\tt x}, true otherwise.
    886 \item {\tt obj\_factor}: (in), factor in front of the objective term in the hessian.
     914\item {\tt new\_x}: (in), false if any evaluation method was previously called
     915        with the same values in {\tt x}, true otherwise.
     916\item {\tt obj\_factor}: (in), factor in front of the objective term in the Hessian.
    887917\item {\tt m}: (in), the number of constraints in the problem (dimension of {\tt g}).
    888918\item {\tt lambda}: (in), the current values of the equality multipliers to use
    889         for each constraint in the evaluation of the hessian.
     919        for each constraint in the evaluation of the Hessian.
    892 \item {\tt new\_lambda}: (in), false if an evaluation method was previously called
    893         with the same values in {\tt lambda}, true otherwise.
    894 \item {\tt nele\_hess}: (in), the number of nonzero elements in the hessian
    895         (dimension of {\tt iRow}, {\tt jCol}, and {\tt values}.
    896 \item {\tt iRow}: (out), the row indices of entries in the hessian.
    897 \item {\tt jCol}: (out), the column indices of entries in the hessian.
    898 \item {\tt values}: (out), the values of the entries in the hessian.
     922\item {\tt new\_lambda}: (in), false if any evaluation method was previously called
     923        with the same values in {\tt lambda}, true otherwise.
     924\item {\tt nele\_hess}: (in), the number of nonzero elements in the Hessian
      925        (dimension of {\tt iRow}, {\tt jCol}, and {\tt values}).
     926\item {\tt iRow}: (out), the row indices of entries in the Hessian.
     927\item {\tt jCol}: (out), the column indices of entries in the Hessian.
     928\item {\tt values}: (out), the values of the entries in the Hessian.
    899929\end{itemize}
    900930
    901931If the {\tt iRow} and {\tt jCol} arguments are not NULL, then Ipopt
    902 wants you to fill in the structure of the hessian (the row and column
     932wants you to fill in the structure of the Hessian (the row and column
    903933indices only). In this case, the {\tt x}, {\tt lambda}, and {\tt
    904934values} arrays will be NULL.
    905935
    906936If the {\tt x}, {\tt lambda}, and {\tt values} arrays are not NULL,
    907 then Ipopt wants you to fill in the values of the hessian as
     937then Ipopt wants you to fill in the values of the Hessian as
    908938calculated using {\tt x} and {\tt lambda} (using the same order as you
    909939used when specifying the structure). In this case, the {\tt iRow} and
     
    913943false if the last call to any of the evaluation methods used the same
    914944values. This can be helpful when users have efficient implementations
    915 that caclulate multiple outputs at once. Ipopt internally cache's
      945that calculate multiple outputs at once. Ipopt internally caches
    916946results from the {\tt TNLP} and generally, this flag can be ignored.
    917947
     
    920950values you specified in {\tt get\_nlp\_info}.
    921951
    922 In our example, the hessian is dense, but we still specify it using the
    923 sparse matrix format. Because the hessian is symmetric, we only need to
     952In our example, the Hessian is dense, but we still specify it using the
     953sparse matrix format. Because the Hessian is symmetric, we only need to
    924954specify the lower left corner.
    925955\begin{verbatim}
     
    933963    // triangle only.
    934964
    935     // the hessian for this problem is actually dense
     965    // the Hessian for this problem is actually dense
    936966    Index idx=0;
    937967    for (Index row = 0; row < 4; row++) {
    938968      for (Index col = 0; col <= row; col++) {
    939         iRow[idx] = row;
    940         jCol[idx] = col;
    941         idx++;
     969        iRow[idx] = row;
     970        jCol[idx] = col;
     971        idx++;
    942972      }
    943973    }
     
    953983
    954984    values[1] = obj_factor * (x[3]);   // 1,0
    955     values[2] = 0;                   // 1,1
     985    values[2] = 0;                     // 1,1
    956986
    957987    values[3] = obj_factor * (x[3]);   // 2,0
    958     values[4] = 0;                   // 2,1
    959     values[5] = 0;                   // 2,2
     988    values[4] = 0;                     // 2,1
     989    values[5] = 0;                     // 2,2
    960990
    961991    values[6] = obj_factor * (2*x[0] + x[1] + x[2]); // 3,0
    962     values[7] = obj_factor * (x[0]);             // 3,1
    963     values[8] = obj_factor * (x[0]);             // 3,2
    964     values[9] = 0;                             // 3,3
     992    values[7] = obj_factor * (x[0]);                 // 3,1
     993    values[8] = obj_factor * (x[0]);                 // 3,2
     994    values[9] = 0;                                   // 3,3
    965995
    966996
     
    9981028\begin{itemize}
    9991029\item {\tt status}: (in), gives the status of the algorithm
    1000         as specified in {\tt IpAlgTypes.hpp},
    1001         \begin{itemize}
    1002         \item {\tt SUCCESS}: Algorithm terminated successfully
    1003         at a locally optimal point.
    1004         \item {\tt MAXITER\_EXCEEDED}: Maximum number of iterations
    1005         exceeded (can be specified by an option).
    1006         \item {\tt STOP\_AT\_TINY\_STEP}: Algorithm proceeds with very little
    1007         progress.
    1008         \item {\tt STOP\_AT\_ACCEPTABLE\_POINT}: Algorithm stopped at a point
    1009         that was converged, not to desired tolerances,
    1010         but to acceptable tolerances.
    1011         \item {\tt LOCAL\_INFEASIBILITY}: Algorithm converged to a point
    1012         of local infeasibility. Problem may be infeasible.
    1013         \item {\tt RESTORATION\_FAILURE}: Restoration phase was called, but
    1014         failed to find a more feasible point.
    1015         \item {\tt INTERNAL\_ERROR}: An unknown internal error occurred. Please
    1016         contact the Ipopt Authors through the mailing list.
    1017         \end{itemize}
     1030        as specified in {\tt IpAlgTypes.hpp},
     1031        \begin{itemize}
     1032        \item {\tt SUCCESS}: Algorithm terminated successfully
     1033        at a locally optimal point.
     1034        \item {\tt MAXITER\_EXCEEDED}: Maximum number of iterations
     1035        exceeded (can be specified by an option).
     1036        \item {\tt STOP\_AT\_TINY\_STEP}: Algorithm proceeds with very little
     1037        progress.
     1038        \item {\tt STOP\_AT\_ACCEPTABLE\_POINT}: Algorithm stopped at a point
      1039        that has converged, not to ``desired'' tolerances,
     1040        but to ``acceptable'' tolerances (see the ??? options).
     1041        \item {\tt LOCAL\_INFEASIBILITY}: Algorithm converged to a point
     1042        of local infeasibility. Problem may be infeasible.
     1043        \item {\tt RESTORATION\_FAILURE}: Restoration phase was called, but
     1044        failed to find a more feasible point.
     1045        \item {\tt INTERNAL\_ERROR}: An unknown internal error occurred. Please
     1046        contact the Ipopt Authors through the mailing list.
     1047        \end{itemize}
    10181048\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
    10191049\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
     
    10451075  printf("\n\nSolution of the primal variables, x\n");
    10461076  for (Index i=0; i<n; i++) {
    1047     printf("x[%d] = %g\n", i, x[i]);
     1077    printf("x[%d] = %e\n", i, x[i]);
    10481078  }
    10491079
    10501080  printf("\n\nSolution of the bound multipliers, z_L and z_U\n");
    10511081  for (Index i=0; i<n; i++) {
    1052     printf("z_L[%d] = %g\n", i, z_L[i]);
     1082    printf("z_L[%d] = %e\n", i, z_L[i]);
    10531083  }
    10541084  for (Index i=0; i<n; i++) {
    1055     printf("z_U[%d] = %g\n", i, z_U[i]);
     1085    printf("z_U[%d] = %e\n", i, z_U[i]);
    10561086  }
    10571087
    10581088  printf("\n\nObjective value\n");
    1059   printf("f(x*) = %g\n", obj_value);
     1089  printf("f(x*) = %e\n", obj_value);
    10601090}
    10611091\end{verbatim}
     
    10761106used in Ipopt, see Appendix \ref{app.smart_ptr}.
    10771107
    1078 Create the file {\tt MyExample.cpp} in the MyExample
    1079 directory. Include {\tt HS071\_NLP.hpp} and {\tt IpSmartPtr.hpp}, tell the
    1080 compiler to use the Ipopt namespace, and implement the {\tt main}
     1108Create the file {\tt MyExample.cpp} in the MyExample directory.
     1109Include {\tt HS071\_NLP.hpp} and {\tt IpIpoptApplication.hpp}, tell
     1110the compiler to use the Ipopt namespace, and implement the {\tt main}
    10811111function.
    10821112\begin{verbatim}
     1113#include "IpIpoptApplication.hpp"
    10831114#include "HS071_NLP.hpp"
    1084 #include "IpSmartPtr.hpp"
    1085 #include <iostream>
    10861115
    10871116using namespace Ipopt;
     
    10891118int main()
    10901119{
    1091         // use a SmartPtr to point the new HS071_NLP
    1092         SmartPtr<TNLP> mynlp = new HS071_NLP();
    1093        
    1094         // use a SmartPtr to point to new IpoptApplication
    1095         SmartPtr<IpoptApplication> app = new IpoptApplication();
    1096 
    1097         // Ask Ipopt to solve the problem
    1098         SolverReturn status = app->OptimizeTNLP(mynlp);
    1099         if (status == SUCCESSFUL) {
    1100                 std::cout << "SOLVED :)" << std::endl;
    1101                 return 0;
    1102         }
    1103         else {
    1104                 std::cout << "FAILED! :(" << std::endl;
    1105                 return -1;
    1106         }
    1107        
    1108         // As the SmartPtr's go out of scope, the reference counts will be decremented
    1109         // and the mynlp and app objects will automatically be deleted.
     1120        // use a SmartPtr to point the new HS071_NLP
     1121        SmartPtr<TNLP> mynlp = new HS071_NLP();
     1122       
     1123        // use a SmartPtr to point to new IpoptApplication
     1124        SmartPtr<IpoptApplication> app = new IpoptApplication();
     1125
     1126        // Ask Ipopt to solve the problem
     1127        SolverReturn status = app->OptimizeTNLP(mynlp);
     1128        if (status == SUCCESSFUL) {
     1129                std::cout << "SOLVED :)" << std::endl;
     1130                return 0;
     1131        }
     1132        else {
     1133                std::cout << "FAILED! :(" << std::endl;
     1134                return -1;
     1135        }
     1136       
     1137        // As the SmartPtr's go out of scope, the reference counts will be decremented
     1138        // and the mynlp and app objects will automatically be deleted.
    11101139}
    11111140\end{verbatim}
     
    11211150with the compiler and linker used on your system, you can build the
    11221151code, including the ipopt library (and other necessary libraries).  If
    1123 you are using Linux, then a sample makefile exists already that was
     1152you are using Linux/UNIX, then a sample makefile exists already that was
    11241153created by configure. Copy {\tt Examples/hs071\_cpp/Makefile} into
    11251154your {\tt MyExample} directory. This makefile was created for the
     
    12241253\bibitem{AndreasPaper}
    12251254W\"achter, A. and Biegler, L.T.:''On the Implementation of a Primal-Dual
    1226         Interior Point Filter Line Search Algorithm for Large-Scale
    1227         Nonlinear Programming'', Research Report, IBM T. J. Watson
    1228         Research Center, Yorktown, USA (2004)
     1255        Interior Point Filter Line Search Algorithm for Large-Scale
     1256        Nonlinear Programming'', Research Report, IBM T. J. Watson
     1257        Research Center, Yorktown, USA (2004)
    12291258\bibitem{AndreasThesis}
    12301259W\"achter, A.:''An Interior Point Algorithm for Large-Scale Nonlinear
    1231         Optimization with Applications in Process Engineering'',
    1232         Ph.D. Thesis, Carnegie Mellon University, Pittsburgh, USA (2002)
     1260        Optimization with Applications in Process Engineering'',
     1261        Ph.D. Thesis, Carnegie Mellon University, Pittsburgh, USA (2002)
    12331262\bibitem{MargotClassText}
    12341263Margot, F.: Course material for \textit{47852 Open Source Software for
    1235         Optimization}, Carnegie Mellon University (2005)
     1264        Optimization}, Carnegie Mellon University (2005)
    12361265\end{thebibliography}
    12371266
     
    12451274\left[
    12461275\begin{array}{ccccccc}
    1247 1.1     & 0             & 0             & 0             & 0             & 0             & 0.5 \\
    1248 0       & 1.9   & 0             & 0             & 0             & 0             & 0.5 \\
    1249 0       & 0             & 2.6   & 0             & 0             & 0             & 0.5 \\
    1250 0       & 0             & 7.8   & 0.6   & 0             & 0             & 0    \\
    1251 0       & 0             & 0             & 1.5   & 2.7   & 0             & 0     \\
    1252 1.6     & 0             & 0             & 0             & 0.4   & 0             & 0     \\
    1253 0       & 0             & 0             & 0             & 0             & 0.9   & 1.7 \\
     12761.1     & 0             & 0             & 0             & 0             & 0             & 0.5 \\
     12770       & 1.9   & 0             & 0             & 0             & 0             & 0.5 \\
     12780       & 0             & 2.6   & 0             & 0             & 0             & 0.5 \\
     12790       & 0             & 7.8   & 0.6   & 0             & 0             & 0    \\
     12800       & 0             & 0             & 1.5   & 2.7   & 0             & 0     \\
     12811.6     & 0             & 0             & 0             & 0.4   & 0             & 0     \\
     12820       & 0             & 0             & 0             & 0             & 0.9   & 1.7 \\
    12541283\end{array}
    12551284\right].
    12561285\]
    12571286
    1258 A standard dense matrix representation would need to store $7 \cdot 7{=} 49$ floating point numbers, where many entries would be zero. In triplet format, however, only the nonzero entries are stored. The triplet format records the row number, the column number, and the value of all nonzero entries in the matrix. For the matrix above, this means storing $14$ integers for the rows, $14$ integers for the columns, and $14$ floating point numbers for the values. While this does not seem like a huge space savings over the $49$ floating point numbers stored in the dense representation, for larger matrices, the space savings are very dramatic\footnote{For an $n \times n$ matrix, the dense representation grows with the the square of $n$, while the sparse representation grows linearly in the number of nonzeros.}
    1259 
    1260 In triplet format used in these Ipopt interfaces, the row and column numbers are 1-based, and the above matrix is represented by,
     1287A standard dense matrix representation would need to store $7 \cdot
     12887{=} 49$ floating point numbers, where many entries would be zero. In
     1289triplet format, however, only the nonzero entries are stored. The
     1290triplet format records the row number, the column number, and the
     1291value of all nonzero entries in the matrix. For the matrix above, this
     1292means storing $14$ integers for the rows, $14$ integers for the
     1293columns, and $14$ floating point numbers for the values. While this
     1294does not seem like a huge space savings over the $49$ floating point
     1295numbers stored in the dense representation, for larger matrices, the
     1296space savings are very dramatic\footnote{For an $n \times n$ matrix,
      1297  the dense representation grows with the square of $n$, while the
     1298  sparse representation grows linearly in the number of nonzeros.}
     1299
      1300In the triplet format used in these Ipopt interfaces, the row and column
     1301numbers are 1-based {\bf have table for both 0- and 1-based}, and the
     1302above matrix is represented by
    12611303\[
    12621304\begin{array}{ccc}
    1263 row     &       col     &       value \\
    1264 1       &       1       &       1.1     \\
    1265 1       &       7       &       0.5     \\
    1266 2       &       2       &       1.9     \\
    1267 2       &       7       &       0.5     \\
    1268 3       &       3       &       2.6     \\
    1269 3       &       7       &       0.5     \\
    1270 4       &       3       &       7.8     \\
    1271 4       &       4       &       0.6     \\
    1272 5       &       4       &       1.5     \\
    1273 5       &       5       &       2.7     \\
    1274 6       &       1       &       1.6     \\
    1275 6       &       5       &       0.4     \\
    1276 7       &       6       &       0.9     \\
    1277 7       &       7       &       1.7
     1305row     &       col     &       value \\
     13061       &       1       &       1.1     \\
     13071       &       7       &       0.5     \\
     13082       &       2       &       1.9     \\
     13092       &       7       &       0.5     \\
     13103       &       3       &       2.6     \\
     13113       &       7       &       0.5     \\
     13124       &       3       &       7.8     \\
     13134       &       4       &       0.6     \\
     13145       &       4       &       1.5     \\
     13155       &       5       &       2.7     \\
     13166       &       1       &       1.6     \\
     13176       &       5       &       0.4     \\
     13187       &       6       &       0.9     \\
     13197       &       7       &       1.7
    12781320\end{array}
    12791321\]
    12801322
     1323The individual elements of the matrix can be listed in any order, and
     1324if there are multiple items for the same nonzero position, the values
     1325provided for those positions are added.
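As an illustration, the matrix above could be passed to one of the code interfaces with three parallel arrays like the following sketch (the array names are only for illustration):
\begin{verbatim}
// 1-based triplet representation of the 7x7 example matrix above
// (iRow, jCol and values are illustrative names)
Index  iRow[14]   = {  1,   1,   2,   2,   3,   3,   4,
                       4,   5,   5,   6,   6,   7,   7 };
Index  jCol[14]   = {  1,   7,   2,   7,   3,   7,   3,
                       4,   4,   5,   1,   5,   6,   7 };
Number values[14] = { 1.1, 0.5, 1.9, 0.5, 2.6, 0.5, 7.8,
                      0.6, 1.5, 2.7, 1.6, 0.4, 0.9, 1.7 };
\end{verbatim}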
     1326
     1327MISSING: Symmetric matrices
     1328
     1329
    12811330\section{The Smart Pointer Implementation: SmartPtr}
     1331
    12821332The SmartPtr class is described in {\tt IpSmartPtr.hpp}. It is a
    12831333template class that takes care of deleting objects for us so we need
    12841334not be concerned about memory leaks. Instead of pointing to an object
    1285 with a raw C++ pointer (e.g. {\tt HS071\_NLP*}), we use a SmartPtr. Every time a
    1286 SmartPtr is set to reference an object, it increments a counter in
    1287 that object (see the ReferencedObject base class if you are
     1335with a raw C++ pointer (e.g. {\tt HS071\_NLP*}), we use a SmartPtr.
     1336Every time a SmartPtr is set to reference an object, it increments a
     1337counter in that object (see the ReferencedObject base class if you are
    12881338interested). If a SmartPtr is done with the object, either by leaving
    12891339scope or being set to point to another object, the counter is