Opened 19 months ago

Closed 3 months ago

#290 closed task (migrated)

objective function increasing instead of decreasing with each iteration

Reported by: kamilova Owned by: ipopt-team
Priority: highest Component: Ipopt
Version: 3.12 Severity: critical
Keywords: increasing objective function fortran Cc:

Description

I am using Ipopt for my NLP with the BFGS Hessian approximation option activated, since I have no second-order information. I based my implementation on the example provided with the Ipopt installation. I am using Fortran 90 with MA27 as my linear solver. Furthermore, I have used gradient-based scaling for the problem.
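For reference, the setup described above corresponds roughly to the following ipopt.opt options file (a sketch using option names from the Ipopt documentation; this is not the actual file from the ticket):

    # ipopt.opt -- read by Ipopt from the working directory at startup
    hessian_approximation  limited-memory   # quasi-Newton (L-BFGS) instead of an exact Hessian
    linear_solver          ma27
    nlp_scaling_method     gradient-based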

An example of the output is in the attached image.

Please let me know what other information is needed. I am very pressed for time to get this optimisation running; it seems like a simple thing, but my coding skills have betrayed me.

Attachments (1)

Screenshot.png (55.3 KB) - added by kamilova 19 months ago.
example of output for Ipopt


Change History (4)

Changed 19 months ago by kamilova

example of output for Ipopt

comment:1 follow-up: Changed 19 months ago by stefan

It seems that Ipopt struggles to make progress in primal and dual feasibility. It is OK if the objective value is not decreasing while that is happening.

You might want to enable the derivative checker to verify that your gradient implementation is correct. If the gradient checks out, then maybe try finding a better starting point.
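For example, the checker can be switched on through the options file (a sketch; the option names are from the Ipopt documentation and the tolerance value is only illustrative):

    derivative_test      first-order   # compare user gradient/Jacobian against finite differences
    derivative_test_tol  1e-4          # relative errors above this are reported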

comment:2 in reply to: ↑ 1 Changed 19 months ago by kamilova

I have the derivative checker activated and it reports errors for most of the derivatives, but only of order 1e-2. This problem was already solved with a different optimisation method, which managed to reach a suboptimal point despite the errors in the gradient, so I still thought Ipopt should get to some reasonable point.

With the previous optimisation I get objective values as low as about -9.1e-1, whereas with Ipopt I start at about -8.8e-1 and end at -6.7e-1, which is why I thought it was maximising instead of minimising.

Is there any particular reason Ipopt would show this behaviour, while an SQP optimisation (which is not the best method for this NLP, but works "sometimes") reaches a (much) better point?
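For what it's worth, the gradient can also be cross-checked outside Ipopt with central finite differences. Below is a minimal, self-contained Fortran 90 sketch; the quadratic objective is only a placeholder standing in for the actual objective and gradient routines, which are not part of this ticket:

    ! Sketch: compare an analytic gradient against central finite differences.
    program check_gradient
      implicit none
      integer, parameter :: n = 3
      double precision, parameter :: h = 1.0d-6
      double precision :: x(n), grad(n), fd(n), xp(n), xm(n)
      integer :: i

      x = (/ 1.0d0, -2.0d0, 0.5d0 /)
      call eval_grad_f(n, x, grad)             ! analytic gradient under test
      do i = 1, n
         xp = x;  xp(i) = xp(i) + h
         xm = x;  xm(i) = xm(i) - h
         fd(i) = (eval_f(n, xp) - eval_f(n, xm)) / (2.0d0 * h)
         write(*,'(a,i2,3(a,es12.4))') 'i=', i, '  analytic=', grad(i), &
              '  finite diff=', fd(i), '  abs err=', abs(grad(i) - fd(i))
      end do

    contains

      double precision function eval_f(n, x)
        integer, intent(in) :: n
        double precision, intent(in) :: x(n)
        eval_f = sum(x**2)                     ! placeholder objective
      end function eval_f

      subroutine eval_grad_f(n, x, grad)
        integer, intent(in) :: n
        double precision, intent(in) :: x(n)
        double precision, intent(out) :: grad(n)
        grad = 2.0d0 * x                       ! analytic gradient of sum(x**2)
      end subroutine eval_grad_f

    end program check_gradient

For a smooth objective, the analytic and finite-difference columns should agree to far better than the 1e-2 discrepancies reported above; errors that large usually point to a genuine bug in the gradient routine rather than rounding noise.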

Replying to stefan:

It seems that Ipopt struggles to make progress in primal and dual feasibility. It is OK if the objective value is not decreasing while that is happening.

You might want to enable the derivative checker to verify that your gradient implementation is correct. If the gradient checks out, then maybe try finding a better starting point.

Last edited 19 months ago by kamilova

comment:3 Changed 3 months ago by stefan

  • Resolution set to migrated
  • Status changed from new to closed

This ticket has been migrated to GitHub and will be followed up there: https://github.com/coin-or/Ipopt/issues/290
