Alternating Mixed-Integer Programming and Neural Network Training for Approximating Stochastic Two-Stage Problems
KTH, Optimization and Systems Theory. ORCID iD: 0000-0003-0299-5745
ABB Corporate Research Center, Ladenburg, Germany.
KTH, Optimization and Systems Theory. ORCID iD: 0000-0002-5415-1715
KTH, Optimization and Systems Theory. ORCID iD: 0000-0001-6352-0968
2024 (English). In: Machine Learning, Optimization, and Data Science - 9th International Conference, LOD 2023, Revised Selected Papers, Springer Nature, 2024, p. 124-139. Conference paper, Published paper (Refereed).
Abstract [en]

The presented work addresses two-stage stochastic programs (2SPs), a broadly applicable model for optimization problems with uncertain parameters and adjustable decision variables. When the adjustable, or second-stage, variables contain discrete decisions, the corresponding 2SPs are known to be NP-complete. The standard approach of forming a single-stage deterministic equivalent problem can be computationally challenging even for small instances, as the number of variables and constraints scales with the number of scenarios. To avoid forming a potentially huge mixed-integer linear programming (MILP) problem, we build upon an approach that approximates the expected value of the second-stage problem by a neural network (NN) and encodes the resulting NN into the first-stage problem. The proposed algorithm alternates between optimizing the first-stage variables and retraining the NN. We demonstrate the value of our approach on the example of computing operating points in power systems, showing that the alternating approach yields improved first-stage decisions and a tighter approximation between the expected objective and its neural network approximation.
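The alternating loop described in the abstract can be illustrated with a minimal, self-contained toy sketch. Everything below is assumed for illustration and is not the authors' implementation: a one-dimensional first-stage variable restricted to a small grid (grid enumeration stands in for the MILP step that would encode the ReLU network into the first-stage problem), a closed-form quadratic recourse function `Q` in place of the power-system second-stage problem, and a tiny hand-rolled one-hidden-layer ReLU network trained by SGD as the surrogate for the expected second-stage value.

```python
import random

random.seed(0)

# Toy second-stage value Q(x, xi); a closed form standing in for an
# expensive scenario subproblem solve (illustrative, not from the paper).
def Q(x, xi):
    return (x - xi) ** 2

scenarios = [random.gauss(2.0, 0.5) for _ in range(200)]  # sampled uncertainty
X_GRID = [i * 0.25 for i in range(21)]                    # candidate first-stage points 0.0 .. 5.0
c = 0.1                                                   # first-stage cost coefficient

# Tiny one-hidden-layer ReLU network approximating x -> E[Q(x, xi)].
H = 8
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def nn(x):
    return sum(w2[j] * max(0.0, w1[j] * x + b1[j]) for j in range(H)) + b2

def train(samples, epochs=200, lr=5e-4):
    """SGD on squared error over (x, target) pairs."""
    global b2
    for _ in range(epochs):
        for x, y in samples:
            err = nn(x) - y
            for j in range(H):
                pre = w1[j] * x + b1[j]
                if pre > 0.0:  # ReLU active: propagate gradient
                    g2, g1, gb = err * pre, err * w2[j] * x, err * w2[j]
                    w2[j] -= lr * g2
                    w1[j] -= lr * g1
                    b1[j] -= lr * gb
            b2 -= lr * err

def expected_Q(x):
    return sum(Q(x, xi) for xi in scenarios) / len(scenarios)

# Alternate: (1) retrain the NN on all evaluated points; (2) pick the
# first-stage x minimizing c*x + nn(x) over the grid (standing in for
# the MILP solve); (3) evaluate the true expected recourse at x and
# add it as a new training point.
samples = [(x, expected_Q(x)) for x in (0.0, 2.5, 5.0)]
for it in range(5):
    train(samples)
    x_star = min(X_GRID, key=lambda x: c * x + nn(x))
    samples.append((x_star, expected_Q(x_star)))
    print(f"iter {it}: x* = {x_star:.2f}, "
          f"E[Q] = {expected_Q(x_star):.3f}, NN = {nn(x_star):.3f}")
```

Each iteration anchors the surrogate at the point the "master" step actually selected, which is the mechanism by which the alternation tightens the approximation near the incumbent first-stage decision.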

Place, publisher, year, edition, pages
Springer Nature, 2024, p. 124-139.
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14506
Keywords [en]
Neural Network, Power Systems, Stochastic Optimization
National Category
Computational Mathematics
Identifiers
URN: urn:nbn:se:liu:diva-213762
DOI: 10.1007/978-3-031-53966-4_10
ISI: 001217090300010
Scopus ID: 2-s2.0-85186266492
ISBN: 9783031539657 (print)
ISBN: 9783031539664 (electronic)
OAI: oai:DiVA.org:liu-213762
DiVA, id: diva2:1959829
Conference
9th International Conference on Machine Learning, Optimization, and Data Science (LOD 2023), Grasmere, United Kingdom, September 22-26, 2023
Available from: 2024-03-13. Created: 2025-05-21.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Kronqvist, Jan; Rolfes, Jan; Zhao, Shudian
