IPOPT has an option to approximate the Hessian of the Lagrangian by a limited-memory quasi-Newton method (L-BFGS). To use this feature, set the hessian_approximation option to limited-memory. In this case it is not necessary to implement the Hessian computation method eval_h in TNLP. If you are using the C or Fortran interface, you still need to provide these functions, but they should return false or IERR=1, respectively, and do not need to do anything else.
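The option can be set, for example, in the ipopt.opt options file that IPOPT reads at startup (one option per line, the option name followed by its value):

```
hessian_approximation limited-memory
```

From the C++ interface the same effect can be achieved programmatically on the IpoptApplication object, e.g. with app->Options()->SetStringValue("hessian_approximation", "limited-memory") before calling Initialize.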
In general, when second derivatives can be computed with reasonable computational effort, it is usually a good idea to use them, since IPOPT then normally converges in fewer iterations and is more robust. An exception might be the case where your optimization problem has a dense Hessian, or a large percentage of nonzero entries in the Hessian. Then the quasi-Newton approximation might be better even though the number of iterations increases, because with exact second derivatives the very large number of nonzero elements in the linear systems that IPOPT solves to compute the search direction can make the computation time per iteration significantly higher.
Since the Hessian of the Lagrangian is zero for all variables that appear only linearly in the objective and constraint functions, the Hessian approximation should take place only in the space of the nonlinear variables. By default, all variables are assumed to be nonlinear, but you can tell IPOPT explicitly which variables are nonlinear, using the get_number_of_nonlinear_variables and get_list_of_nonlinear_variables methods of the TNLP class, see Section 3.3.4. (Those methods have been implemented for the AMPL interface, so if you are using the quasi-Newton option for AMPL models, the Hessian is automatically approximated only in the space of the nonlinear variables.) Currently, those two methods are not available through the C or Fortran interface.
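As a sketch, the two overrides might look as follows for a hypothetical problem in which only the first and third variables enter nonlinearly. In real code the class derives from Ipopt::TNLP and Index comes from the IPOPT headers (IpTNLP.hpp); a plain typedef stands in here so that the fragment is self-contained.

```cpp
#include <cassert>

// Stand-in for IPOPT's integer type; in real code this comes from
// the IPOPT headers via #include "IpTNLP.hpp".
typedef int Index;

// Hypothetical TNLP subclass for a problem in which only variables
// x[0] and x[2] appear nonlinearly in the objective or constraints
// (in real code: class MyNLP : public Ipopt::TNLP, with both
// methods declared virtual).
class MyNLP
{
public:
   Index get_number_of_nonlinear_variables()
   {
      return 2;   // two variables appear nonlinearly
   }

   bool get_list_of_nonlinear_variables(Index  num_nonlin_vars,
                                        Index* pos_nonlin_vars)
   {
      // num_nonlin_vars equals the value returned above; fill
      // pos_nonlin_vars with the indices of the nonlinear variables
      // (numbered consistently with the index style of the problem).
      assert(num_nonlin_vars == 2);
      pos_nonlin_vars[0] = 0;
      pos_nonlin_vars[1] = 2;
      return true;
   }
};
```

With this information, the L-BFGS update is maintained only for the two nonlinear variables, which keeps the approximation smaller and usually improves its quality.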
Andreas Waechter 2010-12-22