Reduction constraints for the global optimization of NLPs

Research output: Contribution to journal › Article › peer-review

Abstract

Convergence of branch-and-bound algorithms for the solution of NLPs is obtained by computing successively tighter lower and upper bounds on the objective function. The lower bound is calculated by constructing a convex relaxation of the NLP. Reduction constraints are new linear constraints which are (a) linearly independent of the existing constraints; (b) redundant with respect to the original NLP formulation; (c) not redundant with respect to its convex relaxation. Thus, they can be employed to reduce the feasible region of the convex relaxation without cutting off any part of the feasible region of the original NLP.
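
As a minimal illustration of how such a constraint can arise (the RLT-style product construction below is an assumption consistent with the abstract, not necessarily the paper's exact scheme): suppose the NLP contains the linear constraint $x_1 + x_2 = 1$ and its convex relaxation linearizes each bilinear product by an added variable $w_{ij} = x_i x_j$ (e.g. via McCormick envelopes). Multiplying the linear constraint by $x_1$ and substituting the $w_{ij}$ gives

\[
x_1^2 + x_1 x_2 = x_1 \;\Longrightarrow\; w_{11} + w_{12} = x_1,
\]

a linear constraint in $(x, w)$ that is linearly independent of the original constraints, redundant for the NLP (where $w_{ij} = x_i x_j$ holds exactly), and generally violated by points of the relaxation (where the $w_{ij}$ are only constrained by their envelopes), so adding it tightens the relaxation without excluding any feasible point of the NLP.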

Original language: English
Pages (from-to): 33-41
Number of pages: 9
Journal: International Transactions in Operational Research
Volume: 11
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2004
Externally published: Yes

Keywords

  • Branch-and-bound
  • Global optimization
  • NLP
  • Valid cut
