Large Language Models for and with Evolutionary Computation
Webpage: TBA (to be announced upon acceptance)
Description
Large language models (LLMs) are rapidly transforming how we create, reason about, and automate AI techniques and algorithmic discoveries. Instead of merely tuning hyperparameters or selecting among existing methods, LLMs enable fully automated algorithm design (AAD), the discovery of new architectures, and even end-to-end pipeline construction, effectively closing the loop between problem description, ideation, implementation, and evaluation in natural language. In parallel, evolutionary computation (EC) offers a powerful toolbox for exploring vast, high-dimensional, and multi-modal search spaces, evolving not only solutions but also operators, representations, and full algorithmic workflows, often under noisy, constrained, or partially known conditions. EC has a long tradition of open-ended search, robustness, and creativity in optimization and design, making it an ideal counterpart to LLM-driven generation and reasoning. Together, these developments invite a rethinking of how we design, adapt, and explain optimization and learning systems, moving from hand-crafted pipelines toward hybrid, self-improving AI design ecosystems. What connects the two?
One answer is evolutionary search heuristics powered by LLMs ("LLMs with EC"), where the model repeatedly generates and refines candidate algorithms, operators, or configurations, while selected evolutionary principles (populations, selection, crossover, and mutation) guide their improvement. Recent frameworks such as LLaMEA, FunSearch, AlphaEvolve, EASE, and EOH illustrate this setup to varying degrees: an LLM proposes a population of candidate heuristics (often in code), the candidates are evaluated on benchmark tasks, and high-performing ones are recombined and mutated in subsequent rounds. A shared knowledge base of evaluated solutions and reusable components accumulates over time, while feedback signals steer both the evolutionary search and the orchestration of modules around the LLM. In this way, EC provides structured exploration and selection pressure, and the LLM acts as a versatile generator and refiner inside a closed, iterative loop. This hybridization inverts the conventional EC paradigm and, in turn, sometimes yields high-performing and novel EC systems. Of course, there are further views on LLMs with EC, ranging from benchmarking of discovered algorithms and evolutionary prompt engineering to optimization of LLM-based frameworks and architectures, and more.
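The generate-evaluate-select loop described above can be sketched in a few lines of code. The following is a minimal, illustrative skeleton only, not the implementation of any of the named frameworks: `llm_propose` is a hypothetical stand-in for a real LLM call (which would typically generate or rewrite code), here replaced by Gaussian perturbation of a numeric configuration so that the example runs end to end.

```python
import random

def llm_propose(parent, rng):
    """Hypothetical stand-in for an LLM generating a variant of a parent candidate."""
    return [x + rng.gauss(0, 0.1) for x in parent]

def evaluate(candidate):
    """Toy benchmark: negative sphere function (higher is better)."""
    return -sum(x * x for x in candidate)

def llm_ec_loop(dim=5, pop_size=8, generations=30, seed=0):
    rng = random.Random(seed)
    # Initial population of candidate configurations.
    population = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    archive = []  # shared knowledge base of evaluated solutions
    for _ in range(generations):
        # Evaluation and selection pressure: rank candidates, keep the elite half.
        scored = sorted(population, key=evaluate, reverse=True)
        elite = scored[: pop_size // 2]
        archive.extend((evaluate(c), c) for c in elite)
        # The "LLM" refines selected parents to produce offspring.
        offspring = [llm_propose(rng.choice(elite), rng) for _ in range(pop_size - len(elite))]
        population = elite + offspring
    best = max(population, key=evaluate)
    return best, evaluate(best), archive

best, fitness, archive = llm_ec_loop()
print(f"best fitness: {fitness:.4f}")
```

In a real system, `evaluate` would run the generated heuristic on benchmark tasks, and the archive would feed back into the LLM's prompt as context for subsequent generations.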
Another approach is to use LLMs for EC. LLMs may help researchers select feasible candidates from the pool of algorithms based on user-specified goals, a description of optimization tasks, and a characterization of available features, and provide a basic description of the methods or propose novel hybrid methods. Furthermore, the models can help identify and describe distinct components suitable for adaptive enhancement or hybridization, and provide pseudo-code, implementation, and reasoning for the proposed methodology.
Ultimately, by combining both answers, LLMs have the potential to transform automated metaheuristic design and configuration as part of AAD, generating code for EC modules and iteratively improving the initially designed solutions or algorithm templates (with or without performance or other data-driven feedback).
This workshop aims to encourage innovative approaches that leverage the strengths of LLMs and EC techniques, enabling the creation of more adaptive, efficient, and scalable algorithms by integrating evolutionary mechanisms with advanced LLM capabilities. By providing a collaborative platform for researchers and practitioners, the workshop may inspire novel research directions that reshape both the AI (specifically LLM) and optimization fields through this hybridization, and foster a better understanding and explanation of how these two seemingly disparate fields are related and how knowledge of their functions and operations can be leveraged.
Topics include (but are not restricted to):
- LLM-based iterative frameworks (with evolutionary principles)
- Benchmarking of generated metaheuristics
- Evolutionary Prompt Engineering
- Optimization of LLM Architectures
- LLM-Guided Evolutionary Algorithms
- How can an EA using an LLM evolve different units of evolution, e.g. code, strings, images, multi-modal candidates?
- How can an EA using an LLM solve prompt composition or other LLM development and use challenges?
- How can an EA using an LLM integrate design explorations related to cooperation, modularity, reuse, or competition?
- How can an EA using an LLM model biology?
- How can an EA using an LLM intrinsically, or with guidance, support open-ended evolution?
- What new variants hybridizing EC and/or another search heuristic are possible and in what respects are they advantageous?
- What are new ways of using LLMs as evolutionary operators, e.g. new ways of generating variation through LLMs (as with LMX or ELM), or new ways of using LLMs for selection (as with Quality-Diversity through AI Feedback)?
- How well does an EA using an LLM scale with population size and problem complexity?
- What is the most accurate computational complexity of an EA using an LLM?
- What makes a good EA-plus-LLM benchmark?
- LLMs for (automated) generation of EC.
- Understanding, fine-tuning, and adaptation of Large Language Models for EC. How large do LLMs need to be? Are there benefits to using larger/smaller ones? Ones trained on different datasets or in different ways?
- Implementing/generating methodology for population-dynamics analysis: diversity measures, control, analysis, and visualization.
- Generating rules for EC (boundary and constraints handling strategies).
- Performance improvement, testing, and efficiency of the improved algorithms.
- Reasoning for component-wise analysis of algorithms.
- Connections between LLMs and other ML techniques for EC (reinforcement learning, AutoML)
- Generation and reasoning for parallel approaches for EC algorithms.
- Benchmarking and Comparative Studies of LLM-generated algorithms.
- Applications of LLMs and EC, including but not limited to:
- constrained optimization
- multi-objective optimization
- expensive and surrogate-assisted optimization
- dynamic and uncertain optimization
- large-scale optimization
- combinatorial/discrete optimization
Submission format
Full papers and extended abstracts:
- Full papers (8 pages + references): Subject to the ACM Open APC (see below for more information)
- Extended Abstracts (up to 4 pages): Not subject to the APC; no Open Access fee is paid by the authors. An Extended Abstract provides a summary of a work in progress, typically just enough for readers to understand the idea, scope, and potential impact. It often lacks full methodology, detailed results, or extensive references.
Important dates
- Submission opening: February 2, 2026
- Submission deadline: March 27, 2026
- Notification: April 24, 2026
- Camera-ready: May 5, 2026
- Authors' mandatory registration: May 11, 2026
ACM's new Open Access publishing model for 2026 ACM Conferences
Starting January 1, 2026, ACM will fully transition to Open Access. All ACM publications, including those from ACM-sponsored conferences, will be 100% Open Access. Authors will have two primary options for publishing Open Access articles with ACM: the ACM Open institutional model or by paying Article Processing Charges (APCs). With over 2,600 institutions already part of ACM Open, the majority of ACM-sponsored conference papers will not require APCs from authors or conferences (currently, around 76%).
Authors from institutions not participating in ACM Open will need to pay an APC to publish their papers, unless they qualify for a financial waiver. To find out whether an APC applies to your article, please consult the list of participating institutions in ACM Open and review the APC Waivers and Discounts Policy. Keep in mind that waivers are rare and are granted based on specific criteria set by ACM.
Understanding that this change could present financial challenges, ACM has approved a temporary subsidy for 2026 to ease the transition and allow more time for institutions to join ACM Open. The subsidy will offer:
- $250 APC for ACM/SIG members
- $350 for non-members
This represents a 65% discount, funded directly by ACM. Authors are encouraged to help advocate for their institutions to join ACM Open during this transition period.
This temporary subsidized pricing will apply to all conferences scheduled for 2026.
In addition, SIGEVO will provide a further subsidy of $125 for papers accepted to GECCO 2026 (and only for 2026) that are subject to APCs. This makes the final amounts to be paid:
- $125 (USD) for SIGEVO members
- $225 (USD) for non-members
It is IMPORTANT to mention that both forms of subsidy (by ACM and by SIGEVO) apply only to GECCO 2026. Moreover, it is still to be determined how the SIGEVO subsidy will be implemented, either applied directly to the APC or in other forms.
Finally, we note that APC charges apply to accepted Full Papers, but Abstracts (1-2 pages), Extended Abstracts (1-4 pages), and Tutorials are NOT APC-eligible; i.e., no APC has to be paid for these types of contributions.
ACM Authorship and Peer Review Policies on Generative AI
GECCO follows the official ACM policies on authorship and peer review, including the use of generative AI tools.
Under ACM's Authorship policy, generative AI tools and technologies cannot be listed as authors of an ACM published Work. The use of generative AI tools and technologies for assistance must be fully disclosed in the manuscript's Acknowledgments section. Authors are fully accountable for the originality, accuracy, and integrity of all submitted material.
In accordance with ACM's Peer Review policy, reviewers must not upload or share submitted manuscripts or review materials with generative AI systems. Reviewers may use generative AI or tools with the sole purpose of improving the quality and readability of reviewer reports for the author.
ACM is actively developing tools to help identify improper AI use in submissions, and GECCO may employ available detection methods. Submissions found to violate ACM policies may be rejected.
Organizers
Roman Senkerik is the Head of the A.I.Lab, Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlín (https://ailab.fai.utb.cz/).
His current focus is generative AI—especially LLM-driven automated design, evaluation-in-the-loop workflows, and AutoML. He is the co-architect of EASE (Effortless Algorithmic Solution Evolution), an open, modular framework that automates the creation and refinement of solutions—algorithms, code, text, and images—using LLMs and other generators. Beyond GenAI, his research advances metaheuristics, with an emphasis on adaptive strategies and parameter control in Differential Evolution, benchmarking, and applications to real-world optimization tasks. Prof. Roman Senkerik has made significant contributions to the fields of evolutionary computation and applications of artificial intelligence.
He is the author of over 50 journal papers, 250 conference papers, and several book chapters, as well as editorial notes. He is a recognized reviewer for many leading journals in computer science and computational intelligence. He has been part of the organizing teams for tutorials, special sessions, workshops, and symposia at GECCO, IEEE WCCI, CEC, and SSCI events.
Niki van Stein is an Associate Professor at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, specializing in Explainable Artificial Intelligence (XAI). Since January 2022, Dr. van Stein has led the XAI research group and is a member of the management team of the Natural Computing cluster. Her research focuses on the intersection of machine learning, LLMs, optimization, and XAI, with applications in predictive maintenance, time-series analysis, and engineering design. Dr. van Stein obtained a PhD in Computer Science from Leiden University in 2018, under the supervision of Prof. Dr. Thomas Bäck, with a thesis on data-driven modelling and optimization of industrial processes.
With over 90 peer-reviewed publications and multiple awards, including best paper recognitions at GECCO and the IEEE Symposium Series on Computational Intelligence, Dr. van Stein has made significant contributions to the fields of evolutionary computing and explainable artificial intelligence.