LLMOA: A novel large language model assisted hyper-heuristic optimization algorithm
Hokkaido Univ, Japan.
Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Fayoum Univ, Egypt. ORCID iD: 0000-0001-5394-0678
Niigata Univ, Japan.
Hokkaido Univ, Japan.
2025 (English). In: Advanced Engineering Informatics, ISSN 1474-0346, E-ISSN 1873-5320, Vol. 64, article id 103042. Article in journal (Refereed). Published.
Abstract [en]

This work presents a novel approach, the large language model assisted hyper-heuristic optimization algorithm (LLMOA), tailored to address complex optimization challenges. Comprising two essential components, a high-level component and a low-level component, LLMOA leverages an LLM (i.e., Gemini) with prompt engineering in its high-level component to construct optimization sequences automatically and intelligently. Furthermore, we propose novel elite-based local search operators as low-level heuristics (LLHs), which draw inspiration from the proximate optimality principle (POP). These local search operators cooperate with well-known mutation and crossover operators from differential evolution (DE), for a total of ten efficient and versatile search operators forming the whole set of LLHs. To assess the competitiveness of LLMOA, we conducted comprehensive numerical experiments across CEC2014, CEC2020, CEC2022, and ten engineering optimization problems, benchmarking against eleven state-of-the-art optimizers. Our experimental findings and statistical analyses underscore the power and effectiveness of LLMOA. Moreover, ablation experiments reveal the pivotal role of integrating the LLM Gemini and prompt engineering as the high-level component. Conclusively, this study provides a feasible avenue to introduce LLMs to the evolutionary computation (EC) community. The research's source code is available for download at https://github.com/RuiZhong961230/LLMOA.
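The general hyper-heuristic pattern the abstract describes can be sketched as follows. This is a minimal, illustrative toy, not the authors' implementation (see their GitHub repository for that): the high-level selector here is a random stub standing in for the LLM-and-prompt-engineering component, the objective is a toy sphere function, and all operator names, parameters, and the three-operator pool are assumptions for illustration.

```python
import random

def sphere(x):
    # Toy objective: minimize the sum of squares.
    return sum(v * v for v in x)

def de_mutation(pop, i, f=0.5):
    # DE/rand/1 donor vector from three distinct individuals (not i).
    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
    return [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]

def de_crossover(target, donor, cr=0.9):
    # Binomial crossover; j_rand guarantees at least one donor gene.
    j_rand = random.randrange(len(target))
    return [d if (random.random() < cr or j == j_rand) else t
            for j, (t, d) in enumerate(zip(target, donor))]

def elite_local_search(elite, step=0.1):
    # Perturb the current best solution, reflecting the proximate
    # optimality principle: good solutions lie near other good ones.
    return [v + random.uniform(-step, step) for v in elite]

def high_level_selector(n_ops, length):
    # Stub for the high-level component: a real system would prompt an
    # LLM to construct this operator sequence.
    return [random.randrange(n_ops) for _ in range(length)]

def optimize(dim=5, pop_size=20, generations=100):
    pop = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = min(pop, key=sphere)
        sequence = high_level_selector(3, pop_size)
        for i, op in enumerate(sequence):
            if op == 0:
                trial = de_crossover(pop[i], de_mutation(pop, i))
            elif op == 1:
                trial = de_mutation(pop, i)
            else:
                trial = elite_local_search(elite)
            if sphere(trial) < sphere(pop[i]):  # greedy replacement
                pop[i] = trial
    return min(pop, key=sphere)

best = optimize()
print(sphere(best))
```

The key design point is the separation of concerns: the high-level component only emits a sequence of operator indices, so swapping the random stub for an LLM query changes nothing in the low-level evaluation loop.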

Place, publisher, year, edition, pages
Elsevier Sci Ltd, 2025. Vol. 64, article id 103042
Keywords [en]
Hyper-heuristic algorithm (HHA); Large language model (LLM); Prompt engineering; Low-level heuristics (LLHs); Evolutionary computation (EC)
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:liu:diva-211170
DOI: 10.1016/j.aei.2024.103042
ISI: 001400279400001
Scopus ID: 2-s2.0-85213863764
OAI: oai:DiVA.org:liu-211170
DiVA, id: diva2:1931518
Note

Funding Agencies|JSPS, Japan KAKENHI [21A402, 24K15098]; JST SPRING [JPMJSP2119]

Available from: 2025-01-27 Created: 2025-01-27 Last updated: 2025-01-27

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Hussien, Abdelazim
By organisation
Software and Systems
Faculty of Science & Engineering
In the same journal
Advanced Engineering Informatics
Computer graphics and computer vision

Search outside of DiVA

Google
Google Scholar
