
Large Language Models for Metaheuristic Design: Exploring Challenges, Limitations, Tools, Benchmarking and Future Opportunities

Description

Evolutionary Computation (EC) and recent advances in large language models (LLMs) are two powerful fields within computational intelligence (CI) that hold significant promise for complex optimization problems. The steady growth of EC and its increasing integration of machine learning (ML) principles, along with diverse algorithms and hybrid approaches, pose challenges for researchers, who must identify effective strategies and design targeted solutions for both combinatorial and continuous optimization in specific use cases. LLMs are transforming the way we create AI techniques and automate the discovery of new algorithms. This shift moves us beyond hyperparameter tuning and automated algorithm selection into fully automated algorithm design (AAD) and the discovery of architectures and end-to-end pipelines, effectively closing the loop between ideation and evaluation.

The tutorial will explore several ways in which these two fields are closely connected.

This tutorial will first introduce the general potential of LLMs to transform automated metaheuristic design and benchmarking, focusing on how they can support researchers in selecting suitable algorithms, explaining methods, suggesting novel hybrid techniques, generating code, and iteratively improving initially designed solutions or algorithm templates.

Another approach is represented by evolutionary search heuristics powered by LLMs, where the model repeatedly generates and refines candidate algorithms, operators, or configurations, while selected evolutionary principles (populations, selection, crossover, and mutation) guide their improvement. Several recent frameworks fully or partially illustrate this setup: an LLM proposes a population of candidate heuristics (often in code), which are evaluated on benchmark tasks, and high-performing candidates are recombined and mutated in subsequent rounds. A shared knowledge base of evaluated solutions and reusable components accumulates over time, while feedback signals steer both the evolutionary search and the orchestration of modules around the LLM. This tutorial provides an overview of this rapidly evolving landscape, including frameworks such as EASE, LLaMEA, LHNS, MCTS-AHD, PartEVO, AlphaEvolve, and FunSearch. We specifically contrast two frameworks of our own: the architecture of EASE (Effortless Algorithmic Solution Evolution) and the evolution-focused approach of LLaMEA (Large Language Model Evolutionary Algorithm). We will explore EASE as a practical, fully modular framework for iterative, closed-loop generation and evaluation; beyond algorithm code, EASE can also iteratively generate text and graphics. The tutorial will then shift its focus to the LLaMEA framework and its connection with the IOH benchmarking ecosystem, covering recent advancements including the hyperparameter optimization toolkit LLaMEA-HPO, LLaMEA-BO, and BLADE, a dedicated benchmarking suite for automated algorithm discovery.
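The evolutionary loop described above (propose, evaluate, select, mutate) can be sketched in a few lines. This is a minimal illustration, not any framework's actual API: the LLM call is stubbed out by `llm_mutate`, which merely perturbs a candidate, whereas a real system would prompt the model with the candidate's code and evaluation feedback.

```python
import random

def evaluate(candidate, benchmark):
    """Score a candidate heuristic on a toy regression benchmark (lower is better)."""
    return sum(abs(candidate(x) - y) for x, y in benchmark)

def llm_mutate(candidate):
    """Stand-in for an LLM call that rewrites a candidate.
    Here we just perturb the candidate's output by a small offset;
    a real framework would send the candidate's source and its
    benchmark feedback to the model and parse the returned code."""
    delta = random.uniform(-0.5, 0.5)
    return lambda x, c=candidate, d=delta: c(x) + d

def evolve(seed_candidates, benchmark, generations=10, survivors=2):
    """Closed-loop search: evaluate, keep the elite, refill via 'LLM' mutation."""
    population = list(seed_candidates)
    for _ in range(generations):
        ranked = sorted(population, key=lambda c: evaluate(c, benchmark))
        elite = ranked[:survivors]
        population = elite + [llm_mutate(random.choice(elite))
                              for _ in range(len(population) - survivors)]
    return min(population, key=lambda c: evaluate(c, benchmark))
```

Because the elite always survives, the best score never degrades across generations; the shared knowledge base and module orchestration mentioned above would sit around this loop, caching evaluated candidates and routing feedback.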

Participants in this tutorial will have a unique opportunity to hear from two seemingly competing teams and learn about both frameworks in one place. They will see how the teams collaborate effectively in this rapidly developing area, complementing each other's expertise to achieve greater efficiency and broader deployment of these frameworks for AAD.

We will also highlight key guardrails, including testing, analysis, and time/resource caps, as well as best practices in evaluation and benchmarking. Attendees will emerge with practical criteria for choosing between LLaMEA and EASE, methods for responsible evaluation, and steps for incorporating LLM-driven discovery into AI research.
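One of the guardrails mentioned above, a time/resource cap on generated code, can be sketched as follows. This is a minimal example under the assumption that candidates arrive as Python source strings; a production setup would add sandboxing, memory limits, and stricter isolation.

```python
import subprocess
import sys

def run_with_cap(candidate_code: str, timeout_s: float = 5.0):
    """Execute LLM-generated code in a separate process under a wall-clock cap.

    Returns (ok, output): ok is False if the process failed or exceeded
    the cap. Running in a subprocess keeps a runaway or crashing candidate
    from taking down the evaluation loop itself.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", candidate_code],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.returncode == 0, result.stdout
    except subprocess.TimeoutExpired:
        return False, "timed out"
```

In an evaluation loop, a candidate that times out or crashes simply receives the worst possible score rather than stalling the search.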

Building upon these frameworks, we will further discuss the orchestration problem - how ensembles of small and large language models can cooperatively drive algorithmic discovery.
We will also demonstrate how human-in-the-loop co-discovery mechanisms can inject domain expertise and interpretability into this process, ensuring that automated exploration remains guided, explainable, and auditable. Together, these methods establish a practical path toward LLM-driven discovery that both generates and justifies new algorithms.

Key aspects will include the role of LLMs in creating optimization algorithms specifically tuned for unique challenges and generating problem-tailored metaheuristics for global optimization. Attendees will gain a comprehensive view of the opportunities and challenges in leveraging LLMs within metaheuristics design, equipping them with insights to push the boundaries of optimization in research and industry.


Organizers

Roman Senkerik

Tomas Bata University in Zlin, A.I.Lab, Faculty of Applied Informatics, Czech Republic

Roman Senkerik is the Head of the A.I.Lab, Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlín (https://ailab.fai.utb.cz/).

His current focus is generative AI—especially LLM-driven automated design, evaluation-in-the-loop workflows, and AutoML. He is the co-architect of EASE (Effortless Algorithmic Solution Evolution), an open, modular framework that automates the creation and refinement of solutions—algorithms, code, text, and images—using LLMs and other generators. Beyond GenAI, his research advances metaheuristics, with an emphasis on adaptive strategies and parameter control in Differential Evolution, benchmarking, and applications to real-world optimization tasks. Prof. Roman Senkerik has made significant contributions to the fields of evolutionary computation and applications of artificial intelligence.

He is the author of over 50 journal papers, 250 conference papers, and several book chapters and editorial notes. He is a recognized reviewer for many leading journals in computer science and computational intelligence, and has been part of the organizing teams for tutorials, special sessions, workshops, and symposia at GECCO, IEEE WCCI, CEC, and SSCI.


Niki van Stein

Leiden University, Netherlands

Niki van Stein is an Associate Professor at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, specializing in Explainable Artificial Intelligence (XAI). Since January 2022, Dr. van Stein has led the XAI research group and is a member of the management team of the Natural Computing cluster. Her research focuses on the intersection of machine learning, LLMs, optimization, and XAI, with applications in predictive maintenance, time-series analysis, and engineering design. Dr. van Stein obtained a PhD in Computer Science from Leiden University in 2018, under the supervision of Prof. Dr. Thomas Bäck, with a thesis on data-driven modelling and optimization of industrial processes.
With over 90 peer-reviewed publications and multiple awards, including best paper recognitions at GECCO and the IEEE Symposium Series on Computational Intelligence, Dr. van Stein has made significant contributions to the fields of evolutionary computing and explainable artificial intelligence.


Adam Viktorin

Tomas Bata University in Zlin, Czech Republic

Adam Viktorin is an AI Researcher at the A.I.Lab, Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlín (https://ailab.fai.utb.cz/). His current focus is generative AI—especially LLM-driven automated design, evaluation-in-the-loop workflows, and AutoML. He is the principal architect of EASE (Effortless Algorithmic Solution Evolution), an open, modular framework that automates the creation and refinement of solutions—algorithms, code, text, and images—using LLMs and other generators. Beyond GenAI, his research advances metaheuristics, with an emphasis on adaptive strategies and parameter control in Differential Evolution, benchmarking, and applications to real-world optimization tasks. He received his Ph.D. in 2021 from Tomas Bata University in Zlín with the thesis Control Parameter Adaptation in Differential Evolution. His broader interests span machine learning, data science, and interdisciplinary applications of soft computing.


Michal Pluhacek

Center of Excellence in Artificial Intelligence, AGH University of Krakow, Poland

Michal Pluhacek is the ARTIQ project leader and a professor at AGH University of Krakow. His research interests include diverse branches of artificial intelligence, e.g. evolutionary computation, swarm intelligence, and, more recently, the applications of large language models. He has extensive international experience and numerous publications at world-leading congresses and conferences, and in respected journals. Michal Pluhacek received his Ph.D. degree in Information Technologies in 2016 with the dissertation "Modern Method of Development and Modifications of Evolutionary Computational Techniques," and was awarded the permanent associate professor title in 2023.


Thomas Bäck

Leiden Institute of Advanced Computer Science (LIACS), Leiden University, Netherlands

Thomas Bäck received his Diploma and Ph.D. degrees in Computer Science from the University of Dortmund, Germany, in 1990 and 1994, respectively. He is a Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, Netherlands, with research interests in evolutionary computation, machine learning, and their applications in sustainable industry and healthcare. He is a member of the Royal Netherlands Academy of Arts and Sciences (KNAW, 2021), an IEEE Fellow (2022), and a member of Academia Europaea (2022). He has received several awards, including the IEEE CIS Evolutionary Computation Pioneer Award (2015) and the best Ph.D. thesis award from the German Computer Science Society (1995). Dr. Bäck serves as Editor-in-Chief of the Evolutionary Computation journal and holds editorial roles with several other journals. He has co-edited major handbooks and authored notable works in evolutionary computation.