Changeset 453
Timestamp: Aug 16, 2005 6:54:47 PM
File (1 edited):
branches/dev/Docs/documentation.tex
\setlength{\topmargin}{-0.5in} % Top margin
\renewcommand{\baselinestretch}{1.1}
\usepackage{amsfonts}

\newcommand{\RR}{{\mathbb{R}}}

\begin{document}

\title{Introduction to Ipopt:\\
A tutorial for downloading, installing, and using Ipopt.}

\author{Yoshiaki
Kawajiri\footnote{Department of Chemical Engineering, Carnegie Mellon
University, Pittsburgh, PA, 15213, Email: kawajiri@cmu.edu}, Carl
D. Laird\footnote{Department of Chemical Engineering, Carnegie Mellon
University, Pittsburgh, PA, 15213, Email: kawajiri@cmu.edu}}

%\date{\today}
…
Ipopt (\underline{I}nterior \underline{P}oint \underline{Opt}imizer) is an open
source software package for large-scale nonlinear optimization. It can
be used to solve general nonlinear programming problems of the form
\begin{eqnarray}
\min_{x} && f(x) \label{obj} \\
\mbox{s.t.} \; && g^L \leq g(x) \leq g^U \\
&& x^L \leq x \leq x^U, \label{bounds}
\end{eqnarray}
where $x \in \RR^n$ are the optimization variables (possibly with
lower and upper bounds, $x^L\in(\RR\cup\{-\infty\})^n$ and
$x^U\in(\RR\cup\{+\infty\})^n$), $f:\RR^n\longrightarrow\RR$ is the
objective function, and $g:\RR^n\longrightarrow \RR^m$ are the general
nonlinear constraints. The functions $f(x)$ and $g(x)$ can be linear
or nonlinear and convex or nonconvex (but are assumed to be twice
continuously differentiable). The constraints, $g(x)$, have lower and
upper bounds, $g^L\in(\RR\cup\{-\infty\})^m$ and
$g^U\in(\RR\cup\{+\infty\})^m$. Note that equality constraints of the
form $g_i(x)=\bar g_i$ can be specified by setting $g^L_{i}=g^U_{i}=\bar g_i$.

Ipopt implements an interior point line search filter method. The
mathematical details of the algorithm can be found in the reports
\cite{AndreasPaper,AndreasThesis}.
…
to look at the archives before posting a question.

\section{History of Ipopt}
The original Ipopt (Fortran version) was a product of the dissertation
research of Andreas W\"achter \cite{AndreasThesis}, under Lorenz
…
\begin{enumerate}
\item{Create a directory to store the code}\\
{\tt \$ mkdir Ipopt}\\
Note the {\tt \$} indicates the command line
prompt; do not type {\tt \$}, only the text following it.
\item{Download the code to the new directory}\\
{\tt \$ cd Ipopt; svn co https://www.coin-or.org/svn/ipopt-devel/trunk}
\end{enumerate}

\subsection{Download External Code}
Ipopt uses a few external packages that are not included in the
distribution, namely ASL (the Ampl Solver Library), BLAS, and
some routines from the Harwell Subroutine Library.

Note that you only need to obtain the ASL if you intend to use Ipopt
from AMPL. It is not required if you want to specify your
optimization problem in a programming language (C++, C, or Fortran).

\subsubsection{Download BLAS and ASL}
If you have the download utility \texttt{wget} installed on your
system, retrieving ASL and BLAS is straightforward using scripts
included with the Ipopt distribution. These scripts download the
required files from the Netlib Repository (http://www.netlib.org).\\

\noindent
{\tt \$ cd trunk/ipopt/Extern/blas; ./get.blas}\\
{\tt \$ cd ../ASL; ./get.asl}\\

\noindent
If you don't have \texttt{wget} installed on your system, please read
the \texttt{INSTALL.*} files in the \texttt{trunk/ipopt/Extern/blas}
and \texttt{trunk/ipopt/Extern/ASL} directories for alternative
instructions.

\subsubsection{Download HSL Subroutines}
In addition to the Ipopt source code, two additional subroutines have
to be downloaded from the Harwell Subroutine Library (HSL). The
…
\end{enumerate}

\section{Compiling and Installing Ipopt} \label{sec.comp_and_inst}
Ipopt can be easily compiled and installed with the usual {\tt configure},
…
representation that should meet the needs of most users.

This tutorial will discuss four interfaces to Ipopt, namely the AMPL
modeling language interface, and the C, C++, and Fortran code
interfaces. AMPL is a 3rd party modeling language tool that allows
users to write their optimization problem in a syntax that resembles
the way the problem would be written mathematically. Once the problem
has been formulated in AMPL, it can be easily solved using the
(already compiled) executable. Interfacing your problem by directly
linking code requires more effort to write, but can be far more
efficient for large problems.

We will illustrate how to use each of the four interfaces using an
…
\begin{eqnarray}
\min_{x \in \Re^4} & x_1 x_4 (x_1 + x_2 + x_3) + x_3 \label{ex_obj} \\
\mbox{s.t.} \; & x_1 x_2 x_3 x_4 \ge 25 \label{ex_ineq} \\
& x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40 \label{ex_equ} \\
& 1 \leq x_1, x_2, x_3, x_4 \leq 5, \label{ex_bounds}
\end{eqnarray}
with the starting point,
…
Interfacing through the AMPL interface is by far the easiest way to
solve a problem with Ipopt. The user must simply formulate the problem
in AMPL syntax, and solve the problem through the AMPL environment.
There are drawbacks, however. AMPL is a 3rd party package and, as
such, must be appropriately licensed (a free, size-limited student
version is available from the AMPL website, www.ampl.com).
Furthermore, the AMPL environment may be prohibitive for very large
problems. Nevertheless, formulating the problem in AMPL
…
The problem presented in equations (\ref{ex_obj}--\ref{ex_startpt}) can
be solved with Ipopt with the following AMPL mod file.
\subsubsection{hs071\_ampl.mod}
\begin{verbatim}
…
# specify the objective function
minimize obj:
    x1 * x4 * (x1 + x2 + x3) + x3;

# specify the constraints
s.t.
inequality:
    x1 * x2 * x3 * x4 >= 25;

equality:
    x1^2 + x2^2 + x3^2 + x4^2 = 40;

# specify the starting point
let x1 := 1;
let x2 := 5;
let x3 := 5;
let x4 := 1;
…
\begin{verbatim}
$ ampl
> model hs071_ampl.mod;
.
.
…
\caption{Information Required By Ipopt}
\begin{enumerate}
\item Problem dimensions \label{it.prob_dim}
  \begin{itemize}
  \item number of variables
  \item number of constraints
  \end{itemize}
\item Problem bounds
  \begin{itemize}
  \item variable bounds
  \item constraint bounds
  \end{itemize}
\item Initial starting point
  \begin{itemize}
  \item Initial values for the primal $x$ variables
  \item Initial values for the multipliers (only
        required for a warm start option)
  \end{itemize}
\item Problem Structure \label{it.prob_struct}
  \begin{itemize}
  \item number of nonzeros in the Jacobian of the constraints
  \item number of nonzeros in the Hessian of the Lagrangian
  \item Structure of the Jacobian of the constraints
  \item Structure of the Hessian of the Lagrangian
  \end{itemize}
\item Evaluation of Problem Functions \label{it.prob_eval} \\
      Information evaluated using a given point
      ($x_k, \lambda_k, \sigma_f$ coming from Ipopt)
  \begin{itemize}
  \item Objective function, $f(x_k)$
  \item Gradient of the objective $\nabla f(x_k)$
  \item Constraint
residuals, $g(x_k)$
  \item Jacobian of the constraints, $\nabla g(x_k)$
  \item Hessian of the Lagrangian,
        $\sigma_f \nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k)$
  \end{itemize}
\end{enumerate}
\label{fig.required_info}
\end{figure}
\vspace{0.1in}
The information required by Ipopt is shown in Figure
\ref{fig.required_info}. The problem dimensions and bounds are
straightforward and come solely from the problem definition. The
…
(\ref{ex_obj}--\ref{ex_bounds}).

The gradient of the objective is given by
\begin{equation}
\left[
\begin{array}{c}
x_1 x_4 + x_4 (x_1 + x_2 + x_3) \\
x_1 x_4 \\
x_1 x_4 + 1 \\
x_1 (x_1 + x_2 + x_3)
\end{array}
\right],
\end{equation}
the Jacobian of
the constraints is,
\begin{equation}
\left[
\begin{array}{cccc}
x_2 x_3 x_4 & x_1 x_3 x_4 & x_1 x_2 x_4 & x_1 x_2 x_3 \\
2 x_1 & 2 x_2 & 2 x_3 & 2 x_4
\end{array}
\right].
\end{equation}

We need to determine the Hessian of the Lagrangian. The Lagrangian is
given by $f(x) + g(x)^T \lambda$ and the Hessian of the Lagrangian is
technically $\nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k)$;
however, so that Ipopt can ask for the Hessian of the objective or the
constraints independently if required, we introduce a factor
($\sigma_f$) in front of the objective term.
The value for $\sigma_f$
…
\begin{equation}
\sigma_f \left[
\begin{array}{cccc}
2 x_4 & x_4 & x_4 & 2 x_1 + x_2 + x_3 \\
x_4 & 0 & 0 & x_1 \\
x_4 & 0 & 0 & x_1 \\
2 x_1+x_2+x_3 & x_1 & x_1 & 0
\end{array}
\right]
+
\lambda_1
\left[
\begin{array}{cccc}
0 & x_3 x_4 & x_2 x_4 & x_2 x_3 \\
x_3 x_4 & 0 & x_1 x_4 & x_1 x_3 \\
x_2 x_4 & x_1 x_4 & 0 & x_1 x_2 \\
x_2 x_3 & x_1 x_3 & x_1 x_2 & 0
\end{array}
\right]
+
\lambda_2
\left[
\begin{array}{cccc}
2 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 2 & 0 \\
0 & 0 & 0 & 2
\end{array}
\right]
\end{equation}
where the first term comes from the Hessian of the objective function, and the
second and third terms from the Hessians of (\ref{ex_ineq}) and (\ref{ex_equ}),
respectively. The dual variables $\lambda_1$ and $\lambda_2$
are then the multipliers for constraints (\ref{ex_ineq}) and (\ref{ex_equ})
…
(\ref{ex_obj}--\ref{ex_bounds}) using, first C++, then C, and finally
Fortran. Completed versions of these examples can be found in {\tt
Ipopt/trunk/Examples} under {\tt hs071\_cpp}, {\tt hs071\_c}, and {\tt
hs071\_f}.

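Before wiring these derivatives into one of the interfaces, it can help to evaluate them once at a known point as a sanity check. The following standalone C++ sketch is not part of the Ipopt distribution; the function name and the row-major lower-triangle triplet ordering are our own choices, made to match the order used in the C++ example later in this tutorial. It fills the 10 lower-triangle entries of the Hessian of the Lagrangian from the three matrices above:

```cpp
#include <cassert>

// Hypothetical standalone check (not from the Ipopt sources): fill the
// 10-entry lower triangle of the Hessian of the Lagrangian for the
// example problem, in row-major lower-triangle order.  sigma_f is the
// objective factor; lambda1/lambda2 are the multipliers for the
// inequality and equality constraints, respectively.
void fill_hessian_lower(const double* x, double sigma_f,
                        double lambda1, double lambda2, double* values)
{
  values[0] = sigma_f * 2.0 * x[3] + lambda2 * 2.0;            // (0,0)
  values[1] = sigma_f * x[3] + lambda1 * x[2] * x[3];          // (1,0)
  values[2] = lambda2 * 2.0;                                   // (1,1)
  values[3] = sigma_f * x[3] + lambda1 * x[1] * x[3];          // (2,0)
  values[4] = lambda1 * x[0] * x[3];                           // (2,1)
  values[5] = lambda2 * 2.0;                                   // (2,2)
  values[6] = sigma_f * (2.0 * x[0] + x[1] + x[2])
              + lambda1 * x[1] * x[2];                         // (3,0)
  values[7] = sigma_f * x[0] + lambda1 * x[0] * x[2];          // (3,1)
  values[8] = sigma_f * x[0] + lambda1 * x[0] * x[1];          // (3,2)
  values[9] = lambda2 * 2.0;                                   // (3,3)
}
```

At the starting point $x=(1,5,5,1)$ with $\sigma_f=\lambda_1=\lambda_2=1$, for instance, entry $(3,0)$ evaluates to $(2x_1+x_2+x_3) + x_2 x_3 = 12 + 25 = 37$, which is easy to confirm against the matrices by hand.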
As a user, you are responsible for coding two sections of the program
that solves a problem using Ipopt: the executable ({\tt main}) and the
problem representation. Generally, you will write an executable that
prepares the problem, and then passes control over to Ipopt through an
{\tt Optimize} call. In this {\tt Optimize} call, you will give Ipopt
everything that it requires to call back to your code whenever it
needs functions evaluated (like the objective, the Jacobian, etc.).
In each of the three sections that follow (C++, C, and Fortran), we
will first discuss how to code the problem representation, and then
…
the compiler that we are using the Ipopt namespace, and create the
declaration of the {\tt HS071\_NLP} class, inheriting from {\tt
TNLP}. Have a look at the {\tt TNLP} class in {\tt IpTNLP.hpp}; you
will see eight pure virtual methods that we must implement. Declare
these methods in the header file. Implement each of the methods in
{\tt HS071\_NLP.cpp} using the descriptions given below. In {\tt
HS071\_NLP.cpp}, first include the header file for your class and
tell the compiler that you are using the Ipopt namespace. A full
version of these files can be found in the {\tt Examples/hs071\_cpp}
directory.

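As a rough orientation before walking through the individual methods, a skeleton of such a class might look as follows. This is only a sketch: the {\tt Number}/{\tt Index} typedefs and the abstract base class here are simplified stand-ins so that the fragment is self-contained, not the real definitions from {\tt IpTNLP.hpp} (the real {\tt get\_nlp\_info} also takes an index-style argument), and only one of the eight methods is shown with a body.

```cpp
#include <cassert>

// Sketch only: Number, Index, and the base class below are simplified
// stand-ins for the real Ipopt definitions in IpTNLP.hpp, included here
// so that this fragment compiles on its own.
typedef double Number;
typedef int Index;

class TNLP_Sketch {  // stand-in for Ipopt::TNLP
public:
  virtual ~TNLP_Sketch() {}
  // the real TNLP declares eight pure virtual methods; the remaining
  // ones (get_bounds_info, get_starting_point, eval_f, eval_grad_f,
  // eval_g, eval_jac_g, eval_h) are described in the paragraphs below
  virtual bool get_nlp_info(Index& n, Index& m,
                            Index& nnz_jac_g, Index& nnz_h_lag) = 0;
};

class HS071_NLP : public TNLP_Sketch {
public:
  virtual bool get_nlp_info(Index& n, Index& m,
                            Index& nnz_jac_g, Index& nnz_h_lag)
  {
    n = 4;           // four variables x[0]..x[3]
    m = 2;           // one inequality and one equality constraint
    nnz_jac_g = 8;   // the dense 2x4 Jacobian
    nnz_h_lag = 10;  // lower triangle of the symmetric 4x4 Hessian
    return true;
  }
};
```

The executable would construct an {\tt HS071\_NLP} object and hand it to Ipopt; the dimensions it reports are exactly those discussed next.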
\paragraph{virtual bool get\_nlp\_info(Index\& n, Index\& m, Index\& nnz\_jac\_g, \\
Index\& nnz\_h\_lag, IndexStyleEnum\& index\_style)}
$\;$ \\
…
\begin{itemize}
\item {\tt n}: (out), the number of variables in the problem (dimension of {\tt x}).
\item {\tt m}: (out), the number of constraints in the problem (dimension of {\tt g}).
\item {\tt nnz\_jac\_g}: (out), the number of nonzero entries in the Jacobian.
\item {\tt nnz\_h\_lag}: (out), the number of nonzero entries in the Hessian.
\item {\tt index\_style}: (out), the style used for row/col entries in the sparse matrix
format (C\_STYLE: 0-based, FORTRAN\_STYLE: 1-based).
\end{itemize}
Ipopt uses this information when allocating the arrays that
it will later ask you to fill with values. Be careful in this method
since incorrect values will cause memory bugs which may be very
difficult to find.

Our example problem has 4 variables (n) and two constraints (m). The
Jacobian for this small problem is actually dense and has 8 nonzeros
(we will still represent this Jacobian using the sparse matrix triplet
format). The Hessian of the Lagrangian has 10 ``symmetric'' nonzeros.
Keep in mind that the number of nonzeros is the total number of
elements that may \emph{ever} be nonzero, not just those that are
nonzero at the starting point. This information is set once for the
entire problem.

\begin{verbatim}
bool HS071_NLP::get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
                             Index& nnz_h_lag, IndexStyleEnum& index_style)
{
  // The problem described in HS071_NLP.hpp has 4 variables, x[0] through x[3]
  n = 4;

  // one equality constraint and one inequality constraint
  m = 2;

  // in this example the Jacobian is dense and contains 8 nonzeros
  nnz_jac_g = 8;

  // the Hessian is also dense and has 16 total nonzeros, but we
  // only need the lower left corner (since it is symmetric)
  nnz_h_lag = 10;

  // use the C style indexing (0-based)
  index_style = TNLP::C_STYLE;

  return true;
}
\end{verbatim}
…
\item {\tt g\_u}: (out) the upper bounds for {\tt g}.
\end{itemize}
The values of {\tt n} and {\tt m} that you specified in {\tt
get\_nlp\_info} are passed to you for debug checking. Setting a
lower bound to a value less than or equal to the value of the option
{\tt nlp\_lower\_bound\_inf} will cause
Ipopt to assume no lower bound. Likewise, specifying the upper bound
above or equal to the value of the option {\tt nlp\_upper\_bound\_inf}
will cause Ipopt to assume no upper bound. These options, {\tt
nlp\_lower\_bound\_inf} and {\tt nlp\_upper\_bound\_inf}, are set to
$-10^{19}$ and $10^{19}$ respectively, by default, but may be modified
by changing the options (see Section \ref{sec.options}).

In our example, the first constraint has a lower bound of $25$ and no upper
bound, so we set the lower bound of constraint {\tt [0]} to $25$ and
the upper bound to some number greater than $10^{19}$. The second
constraint is an equality constraint and we set both bounds to
$40$. Ipopt recognizes this as an equality constraint and does not
…
\paragraph{virtual bool get\_starting\_point(Index n, bool init\_x, Number* x, \\
bool init\_z, Number* z\_L, Number* z\_U, Index m,
bool init\_lambda, Number* lambda)}
$\;$ \\
Give Ipopt the starting point before it begins iterating.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
\item {\tt init\_x}: (in), if true, this method must provide an initial value for {\tt x}.
\item {\tt x}: (out), the initial values for the primal variables, {\tt x}.
\item {\tt init\_z}: (in), if true, this method must provide an initial value
for the bound multipliers {\tt z\_L}, and {\tt z\_U}.
\item {\tt z\_L}: (out), the initial values for the bound multipliers, {\tt z\_L}.
\item {\tt z\_U}: (out), the initial values for the bound multipliers, {\tt z\_U}.
\item {\tt m}: (in), the number of constraints in the problem (dimension of {\tt g}).
\item {\tt init\_lambda}: (in), if true, this method must provide an initial value
for the constraint multipliers, {\tt lambda}.
\item {\tt lambda}: (out), the initial values for the constraint multipliers, {\tt lambda}.
\end{itemize}
…
\begin{verbatim}
  ...
  // Here, we assume we only have starting values for x; if you code
  // your own NLP, you can provide starting values for the dual variables
  // if you wish to use a warmstart option
  assert(init_x == true);
  assert(init_z == false);
  ...
\end{verbatim}
…
\paragraph{virtual bool eval\_f(Index n, const Number* x,
bool new\_x, Number\& obj\_value)}
$\;$ \\
Return the value of the objective function as calculated using {\tt x}.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
\item {\tt new\_x}: (in), false if any evaluation method was previously called
with the same values in {\tt x}, true otherwise.
\item {\tt obj\_value}: (out) the value of the objective function ($f(x)$).
\end{itemize}

The boolean variable {\tt new\_x} will be false if the last call to
any of the evaluation methods used the same $x$ values. This can be
helpful when users have efficient implementations that calculate
multiple outputs at once. Ipopt internally caches results from the
{\tt TNLP} and generally, this flag can be ignored.
…
\paragraph{virtual bool eval\_grad\_f(Index n, const Number* x, bool new\_x,
Number* grad\_f)}
$\;$ \\
Return the gradient of the objective to Ipopt, as calculated by the values in {\tt x}.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
\item {\tt new\_x}: (in), false if any evaluation method was previously called
with the same values in {\tt x}, true otherwise.
\item {\tt grad\_f}: (out) the array of values for the gradient of the
objective function ($\nabla f(x)$).
\end{itemize}

The gradient array is in the same order as the $x$ variables (i.e.\ the
gradient of the objective with respect to {\tt x[2]} should be put in
{\tt grad\_f[2]}).

The boolean variable {\tt new\_x} will be false if the last call to
any of the evaluation methods used the same $x$ values. This can be
helpful when users have efficient implementations that calculate
multiple outputs at once. Ipopt internally caches results from the
{\tt TNLP} and generally, this flag can be ignored.
…
get\_nlp\_info}.

In our example, we ignore the {\tt new\_x} flag and calculate the
values for the gradient of the objective.
\begin{verbatim}
bool HS071_NLP::eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f)
{
  assert(n == 4);

  grad_f[0] = x[0] * x[3] + x[3] * (x[0] + x[1] + x[2]);
  grad_f[1] = x[0] * x[3];
  grad_f[2] = x[0] * x[3] + 1;
  grad_f[3] = x[0] * (x[0] + x[1] + x[2]);

  return true;
}
\end{verbatim}

\paragraph{virtual bool eval\_g(Index n, const Number* x,
bool new\_x, Index m, Number* g)}
$\;$ \\
Give Ipopt the value of the constraints as calculated by the values in {\tt x}.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
\item {\tt new\_x}: (in), false if any evaluation method was previously called
with the same values in {\tt x}, true otherwise.
\item {\tt m}: (in), the number of constraints in the problem (dimension of {\tt g}).
\item {\tt g}: (out) the array of constraint residuals.
\end{itemize}
…
The boolean variable {\tt new\_x} will be false if the last call to
any of the evaluation methods used the same $x$ values. This can be
helpful when users have efficient implementations that calculate
multiple outputs at once. Ipopt internally caches results from the
{\tt TNLP} and generally, this flag can be ignored.
…
{\tt get\_nlp\_info}.

In our example, we ignore the {\tt new\_x} flag and calculate the
values of the constraints.
\begin{verbatim}
bool HS071_NLP::eval_g(Index n, const Number* x, bool new_x, Index m, Number* g)
{
  assert(n == 4);
  assert(m == 2);

  g[0] = x[0] * x[1] * x[2] * x[3];
  g[1] = x[0]*x[0] + x[1]*x[1] + x[2]*x[2] + x[3]*x[3];

  return true;
}
\end{verbatim}

\paragraph{virtual bool eval\_jac\_g(Index n, const Number* x, bool new\_x, \\
Index m, Index nele\_jac, Index* iRow,
Index *jCol, Number* values)}
$\;$ \\
Return either the structure of the Jacobian of the constraints, or the values for the
Jacobian of the constraints as calculated by the values in {\tt x}.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
\item {\tt new\_x}: (in), false if any evaluation method was previously called
with the same values in {\tt x}, true otherwise.
\item {\tt m}: (in), the number of constraints in the problem (dimension of {\tt g}).
\item {\tt nele\_jac}: (in), the number of nonzero elements in the
Jacobian (dimension of {\tt iRow}, {\tt jCol}, and {\tt values}).
\item {\tt iRow}: (out), the row indices of entries in the Jacobian of the constraints.
\item {\tt jCol}: (out), the column indices of entries in the Jacobian of the constraints.
\item {\tt values}: (out), the values of the entries in the Jacobian of the constraints.
\end{itemize}

The Jacobian is the matrix of derivatives where
the derivative of constraint $i$ with respect to variable $j$ is
placed in row $i$ and column $j$. See Appendix \ref{app.triplet} for a discussion of
…
If the {\tt iRow} and {\tt jCol} arguments are not NULL, then Ipopt
wants you to fill in the structure of the Jacobian (the row and column
indices only). At this time, the {\tt x} argument and the {\tt values}
argument will be NULL.

If the {\tt x} argument and the {\tt values} argument are not NULL,
then Ipopt wants you to fill in the values of the Jacobian as
calculated from the array {\tt x} (using the same order as you used
when specifying the structure). At this time, the {\tt iRow} and {\tt
jCol} arguments will be NULL.

The boolean variable {\tt new\_x} will be false if the last call to
any of the evaluation methods used the same $x$ values. This can be
helpful when users have efficient implementations that calculate
multiple outputs at once. Ipopt internally caches results from the
{\tt TNLP} and generally, this flag can be ignored.
…
values you specified in {\tt get\_nlp\_info}.

In our example, the Jacobian is actually dense, but we still
specify it using the sparse format.
\begin{verbatim}
bool HS071_NLP::eval_jac_g(Index n, const Number* x, bool new_x,
                           Index m, Index nele_jac, Index* iRow,
                           Index *jCol, Number* values)
{
  if (values == NULL) {
    // return the structure of the Jacobian

    // this particular Jacobian is dense
    iRow[0] = 0; jCol[0] = 0;
    iRow[1] = 0; jCol[1] = 1;
    iRow[2] = 0; jCol[2] = 2;
    iRow[3] = 0; jCol[3] = 3;
    iRow[4] = 1; jCol[4] = 0;
    iRow[5] = 1; jCol[5] = 1;
    iRow[6] = 1; jCol[6] = 2;
    iRow[7] = 1; jCol[7] = 3;
  }
  else {
    // return the values of the Jacobian of the constraints

    values[0] = x[1]*x[2]*x[3]; // 0,0
    values[1] = x[0]*x[2]*x[3]; // 0,1
    values[2] = x[0]*x[1]*x[3]; // 0,2
    values[3] = x[0]*x[1]*x[2]; // 0,3

    values[4] = 2*x[0]; // 1,0
    values[5] = 2*x[1]; // 1,1
    values[6] = 2*x[2]; // 1,2
    values[7] = 2*x[3]; // 1,3
  }

  return true;
}
\end{verbatim}

\paragraph{virtual bool eval\_h(Index n, const Number* x, bool new\_x,\\
Number obj\_factor, Index m, const Number* lambda, bool new\_lambda,\\
Index nele\_hess, Index* iRow, Index* jCol, Number* values)}
$\;$ \\
Return the structure of the Hessian of the Lagrangian or the values of the
…
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
\item {\tt new\_x}: (in), false if any evaluation method was previously called
with the same values in {\tt x}, true otherwise.
\item {\tt obj\_factor}: (in), factor in front of the objective term in the Hessian.
\item {\tt m}: (in), the number of constraints in the problem (dimension of {\tt g}).
\item {\tt lambda}: (in), the current values of the constraint multipliers to use
for each constraint in the evaluation of the Hessian.
\item {\tt new\_lambda}: (in), false if any evaluation method was previously called
with the same values in {\tt lambda}, true otherwise.
\item {\tt nele\_hess}: (in), the number of nonzero elements in the Hessian
(dimension of {\tt iRow}, {\tt jCol}, and {\tt values}).
\item {\tt iRow}: (out), the row indices of entries in the Hessian.
\item {\tt jCol}: (out), the column indices of entries in the Hessian.
\item {\tt values}: (out), the values of the entries in the Hessian.
\end{itemize}

If the {\tt iRow} and {\tt jCol} arguments are not NULL, then Ipopt
wants you to fill in the structure of the Hessian (the row and column
indices only). In this case, the {\tt x}, {\tt lambda}, and {\tt
values} arrays will be NULL.

If the {\tt x}, {\tt lambda}, and {\tt values} arrays are not NULL,
then Ipopt wants you to fill in the values of the Hessian as
calculated using {\tt x} and {\tt lambda} (using the same order as you
used when specifying the structure). In this case, the {\tt iRow} and
…
false if the last call to any of the evaluation methods used the same
values. This can be helpful when users have efficient implementations
that calculate multiple outputs at once. Ipopt internally caches
results from the {\tt TNLP} and, generally, this flag can be ignored.
…
values you specified in {\tt get\_nlp\_info}.

In our example, the Hessian is dense, but we still specify it using the
sparse matrix format. Because the Hessian is symmetric, we only need to
specify the lower left triangle.
\begin{verbatim}
…
    // triangle only.
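    // Note: for the four variables of this example, the lower
    // triangle of the symmetric Hessian has 4*(4+1)/2 = 10
    // entries, which must agree with the nele_hess value
    // returned from get_nlp_info.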

    // the Hessian for this problem is actually dense
    Index idx=0;
    for (Index row = 0; row < 4; row++) {
      for (Index col = 0; col <= row; col++) {
        iRow[idx] = row;
        jCol[idx] = col;
        idx++;
      }
    }
…
    values[1] = obj_factor * (x[3]);                 // 1,0
    values[2] = 0;                                   // 1,1

    values[3] = obj_factor * (x[3]);                 // 2,0
    values[4] = 0;                                   // 2,1
    values[5] = 0;                                   // 2,2

    values[6] = obj_factor * (2*x[0] + x[1] + x[2]); // 3,0
    values[7] = obj_factor * (x[0]);                 // 3,1
    values[8] = obj_factor * (x[0]);                 // 3,2
    values[9] = 0;                                   // 3,3

…
\begin{itemize}
\item {\tt status}: (in), gives the status of the algorithm
  as specified in {\tt IpAlgTypes.hpp}:
  \begin{itemize}
  \item {\tt SUCCESS}: Algorithm terminated successfully
    at a locally optimal point.
  \item {\tt MAXITER\_EXCEEDED}: Maximum number of iterations
    exceeded (can be specified by an option).
  \item {\tt STOP\_AT\_TINY\_STEP}: Algorithm is proceeding with very
    little progress.
  \item {\tt STOP\_AT\_ACCEPTABLE\_POINT}: Algorithm stopped at a point
    that converged, not to the ``desired'' tolerances,
    but to ``acceptable'' tolerances (see the ??? options).
  \item {\tt LOCAL\_INFEASIBILITY}: Algorithm converged to a point
    of local infeasibility. Problem may be infeasible.
  \item {\tt RESTORATION\_FAILURE}: Restoration phase was called, but
    failed to find a more feasible point.
  \item {\tt INTERNAL\_ERROR}: An unknown internal error occurred. Please
    contact the Ipopt authors through the mailing list.
  \end{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of {\tt x}).
\item {\tt x}: (in), the current values for the primal variables, {\tt x}.
…
  printf("\n\nSolution of the primal variables, x\n");
  for (Index i=0; i<n; i++) {
    printf("x[%d] = %e\n", i, x[i]);
  }

  printf("\n\nSolution of the bound multipliers, z_L and z_U\n");
  for (Index i=0; i<n; i++) {
    printf("z_L[%d] = %e\n", i, z_L[i]);
  }
  for (Index i=0; i<n; i++) {
    printf("z_U[%d] = %e\n", i, z_U[i]);
  }

  printf("\n\nObjective value\n");
  printf("f(x*) = %e\n", obj_value);
}
\end{verbatim}
…
used in Ipopt, see Appendix \ref{app.smart_ptr}.

Create the file {\tt MyExample.cpp} in the MyExample directory.
Include {\tt HS071\_NLP.hpp} and {\tt IpIpoptApplication.hpp}, tell
the compiler to use the Ipopt namespace, and implement the {\tt main}
function.
\begin{verbatim}
#include "IpIpoptApplication.hpp"
#include "HS071_NLP.hpp"

#include <iostream>

using namespace Ipopt;

int main()
{
  // use a SmartPtr to point to the new HS071_NLP
  SmartPtr<TNLP> mynlp = new HS071_NLP();

  // use a SmartPtr to point to the new IpoptApplication
  SmartPtr<IpoptApplication> app = new IpoptApplication();

  // Ask Ipopt to solve the problem
  SolverReturn status = app->OptimizeTNLP(mynlp);
  if (status == SUCCESS) {
    std::cout << "SOLVED :)" << std::endl;
    return 0;
  }
  else {
    std::cout << "FAILED! :(" << std::endl;
    return 1;
  }

  // As the SmartPtr's go out of scope, the reference counts will be
  // decremented and the mynlp and app objects will automatically be deleted.
}
\end{verbatim}
…
with the compiler and linker used on your system, you can build the
code, including the Ipopt library (and other necessary libraries). If
you are using Linux/UNIX, then a sample makefile exists already that was
created by configure. Copy {\tt Examples/hs071\_cpp/Makefile} into
your {\tt MyExample} directory. This makefile was created for the
…
\bibitem{AndreasPaper}
W\"achter, A. and Biegler, L.T.: ``On the Implementation of a Primal-Dual
  Interior Point Filter Line Search Algorithm for Large-Scale
  Nonlinear Programming'', Research Report, IBM T. J. Watson
  Research Center, Yorktown Heights, USA (2004)
\bibitem{AndreasThesis}
W\"achter, A.: ``An Interior Point Algorithm for Large-Scale Nonlinear
  Optimization with Applications in Process Engineering'', Ph.D.
  Thesis, Carnegie Mellon University, Pittsburgh, USA (2002)
\bibitem{MargotClassText}
Margot, F.: Course material for \textit{47-852 Open Source Software for
  Optimization}, Carnegie Mellon University (2005)
\end{thebibliography}
…
\[
\left[
\begin{array}{ccccccc}
1.1 & 0   & 0   & 0   & 0   & 0   & 0.5 \\
0   & 1.9 & 0   & 0   & 0   & 0   & 0.5 \\
0   & 0   & 2.6 & 0   & 0   & 0   & 0.5 \\
0   & 0   & 7.8 & 0.6 & 0   & 0   & 0   \\
0   & 0   & 0   & 1.5 & 2.7 & 0   & 0   \\
1.6 & 0   & 0   & 0   & 0.4 & 0   & 0   \\
0   & 0   & 0   & 0   & 0   & 0.9 & 1.7 \\
\end{array}
\right].
\]

A standard dense matrix representation would need to store $7 \cdot 7
= 49$ floating point numbers, where many entries would be zero. In
triplet format, however, only the nonzero entries are stored. The
triplet format records the row number, the column number, and the
value of all nonzero entries in the matrix. For the matrix above, this
means storing $14$ integers for the rows, $14$ integers for the
columns, and $14$ floating point numbers for the values. While this
does not seem like a huge space savings over the $49$ floating point
numbers stored in the dense representation, for larger matrices, the
space savings are very dramatic.\footnote{For an $n \times n$ matrix,
the dense representation grows with the square of $n$, while the
sparse representation grows linearly in the number of nonzeros.}

In the triplet format used in these Ipopt interfaces, the row and column
numbers are 1-based {\bf have table for both 0 and 1based}, and the
above matrix is represented by
\[
\begin{array}{ccc}
\mbox{row} & \mbox{col} & \mbox{value} \\
1 & 1 & 1.1 \\
1 & 7 & 0.5 \\
2 & 2 & 1.9 \\
2 & 7 & 0.5 \\
3 & 3 & 2.6 \\
3 & 7 & 0.5 \\
4 & 3 & 7.8 \\
4 & 4 & 0.6 \\
5 & 4 & 1.5 \\
5 & 5 & 2.7 \\
6 & 1 & 1.6 \\
6 & 5 & 0.4 \\
7 & 6 & 0.9 \\
7 & 7 & 1.7
\end{array}
\]

The individual elements of the matrix can be listed in any order, and
if there are multiple items for the same nonzero position, the values
provided for those positions are added.

MISSING: Symmetric matrices


\section{The Smart Pointer Implementation: SmartPtr}

The SmartPtr class is described in {\tt IpSmartPtr.hpp}. It is a
template class that takes care of deleting objects for us so we need
not be concerned about memory leaks. Instead of pointing to an object
with a raw C++ pointer (e.g.\ {\tt HS071\_NLP*}), we use a SmartPtr.
Every time a SmartPtr is set to reference an object, it increments a
counter in that object (see the ReferencedObject base class if you are
interested). If a SmartPtr is done with the object, either by leaving
scope or being set to point to another object, the counter is
decremented.