
Large Language Models for and with Evolutionary Computation

Webpage: TBA (pending acceptance)

Description

Large language models (LLMs) are rapidly transforming how we create, reason about, and automate AI techniques and algorithmic discoveries. Instead of merely tuning hyperparameters or selecting among existing methods, LLMs enable fully automated algorithm design (AAD), the discovery of new architectures, and even end-to-end pipeline construction, effectively closing the loop between problem description, ideation, implementation, and evaluation in natural language. In parallel, evolutionary computation (EC) offers a powerful toolbox for exploring vast, high-dimensional, multi-modal search spaces, evolving not only solutions but also operators, representations, and full algorithmic workflows, often under noisy, constrained, or partially known conditions. EC has a long tradition of open-ended search, robustness, and creativity in optimization and design, making it an ideal counterpart to LLM-driven generation and reasoning. Together, these developments invite a rethinking of how we design, adapt, and explain optimization and learning systems, moving from hand-crafted pipelines toward hybrid, self-improving AI design ecosystems. What connects the two?

One answer is evolutionary search heuristics powered by LLMs ("LLMs with EC"), where the model repeatedly generates and refines candidate algorithms, operators, or configurations, while evolutionary principles (populations, selection, crossover, and mutation) guide their improvement. Recent frameworks such as LLaMEA, FunSearch, AlphaEvolve, EASE, and EOH illustrate this setup, fully or in part: an LLM proposes a population of candidate heuristics (often as code), which are evaluated on benchmark tasks, and high-performing candidates are recombined and mutated in subsequent rounds. A shared knowledge base of evaluated solutions and reusable components accumulates over time, while feedback signals steer both the evolutionary search and the orchestration of modules around the LLM. In this way, EC provides structured exploration and selection pressure, while the LLM acts as a versatile generator and refiner inside a closed, iterative loop. This hybridization overturns the conventional EC paradigm and, in turn, sometimes yields high-performing and novel EC systems. Of course, there are further views on LLMs with EC, ranging from benchmarking of discovered algorithms and evolutionary prompt engineering to optimization of LLM-based frameworks and architectures, and more.
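As a concrete illustration, the following minimal Python sketch shows one possible shape of such a loop. It is not the implementation of any framework named above; llm_generate and evaluate are assumed placeholder interfaces to an LLM client and a benchmark harness, and the (mu + lambda)-style structure is only one of many choices.

    import random

    # Assumed placeholder interfaces (not the API of any framework named above):
    # llm_generate(prompt) returns the source code of a candidate heuristic;
    # evaluate(code) runs it on a benchmark suite and returns a fitness score.
    def llm_generate(prompt: str) -> str:
        raise NotImplementedError("plug in an LLM client here")

    def evaluate(code: str) -> float:
        raise NotImplementedError("plug in a benchmark harness here")

    def evolve(task: str, pop_size: int = 8, generations: int = 10, k: int = 4):
        """LLM-driven (mu + lambda)-style loop: the LLM is the variation operator."""
        # Initialization: ask the LLM for an initial population of heuristics.
        population = [llm_generate(f"Write a heuristic for: {task}")
                      for _ in range(pop_size)]
        scored = [(evaluate(code), code) for code in population]
        archive = []  # shared knowledge base of evaluated solutions

        for _ in range(generations):
            scored.sort(key=lambda sc: sc[0], reverse=True)
            parents = scored[:k]     # selection pressure: keep the top k
            archive.extend(parents)  # could be mined for reusable components
            offspring = []
            for _ in range(pop_size):
                a, b = random.sample(parents, 2)
                # Recombination and mutation are delegated to the LLM: show two
                # scored parents and ask for an improved variant.
                prompt = (f"Task: {task}\n"
                          f"Parent A (score {a[0]:.3f}):\n{a[1]}\n"
                          f"Parent B (score {b[0]:.3f}):\n{b[1]}\n"
                          "Combine their strengths into an improved heuristic.")
                child = llm_generate(prompt)
                offspring.append((evaluate(child), child))
            scored = parents + offspring  # (mu + lambda) survival
        return max(scored, key=lambda sc: sc[0])

Existing frameworks differ mainly in how the archive feeds back into prompts, how diversity is maintained, and whether richer evaluation feedback (error traces, runtime profiles) is included in the mutation prompt.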

Another approach is to use LLMs for EC. LLMs can help researchers select feasible candidates from a pool of algorithms, based on user-specified goals, a description of the optimization task, and a characterization of available problem features, and can provide a basic description of the methods or propose novel hybrid methods. Furthermore, the models can help identify and describe distinct components suitable for adaptive enhancement or hybridization, and provide pseudo-code, an implementation, and reasoning for the proposed methodology.
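A minimal sketch of this direction, assuming only a generic chat-style client (call_llm and the small algorithm pool below are hypothetical placeholders, not part of any existing tool):

    # Hypothetical helper for "LLMs for EC": turning a user goal, a task
    # description, and problem features into an algorithm-recommendation prompt.
    ALGORITHM_POOL = ["CMA-ES", "Differential Evolution", "PSO", "NSGA-II"]

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in an LLM client here")

    def recommend_algorithm(goal: str, task: str, features: dict) -> str:
        feature_lines = "\n".join(f"- {name}: {value}"
                                  for name, value in features.items())
        prompt = (
            "You are an optimization expert.\n"
            f"User goal: {goal}\n"
            f"Task description: {task}\n"
            f"Problem features:\n{feature_lines}\n"
            f"Candidate algorithms: {', '.join(ALGORITHM_POOL)}\n"
            "Recommend the most suitable candidate (or propose a hybrid), "
            "justify the choice, and give pseudo-code for any adaptive component."
        )
        return call_llm(prompt)

For instance, recommend_algorithm("minimize evaluations", "noisy 10-D multimodal function", {"dimension": 10, "noisy": True, "budget": 10000}) would return a reasoned recommendation that a researcher can inspect, refine, or hand to an iterative loop like the one sketched above.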

Ultimately, by combining both answers, LLMs have the potential to transform automated metaheuristic design, as part of AAD and configuration, by generating code for EC modules and iteratively improving initially designed solutions or algorithm templates (with or without performance or other data-driven feedback).

This workshop aims to encourage innovative approaches that leverage the strengths of both LLMs and EC techniques, enabling the creation of more adaptive, efficient, and scalable algorithms by integrating evolutionary mechanisms with advanced LLM capabilities. By providing a collaborative platform for researchers and practitioners, the workshop seeks to inspire novel research directions that could reshape both the AI (specifically LLM) and optimization fields through this hybridization, and to foster a better understanding and explanation of how these two seemingly disparate fields are related and how knowledge of their mechanisms and operations can be leveraged.

Topics of interest include (but are not restricted to):

  • LLM-based iterative frameworks (with evolutionary principles)
  • Benchmarking of generated metaheuristics
  • Evolutionary Prompt Engineering
  • Optimization of LLM Architectures
  • LLM-Guided Evolutionary Algorithms
  • How can an EA using an LLM evolve different units of evolution, e.g., code, strings, images, or multi-modal candidates?
  • How can an EA using an LLM solve prompt composition or other LLM development and use challenges?
  • How can an EA using an LLM integrate design explorations related to cooperation, modularity, reuse, or competition?
  • How can an EA using an LLM model biology?
  • How can an EA using an LLM intrinsically, or with guidance, support open-ended evolution?
  • What new variants hybridizing EC and/or another search heuristic are possible and in what respects are they advantageous?
  • What are new ways of using LLMs for evolutionary operators, e.g., new ways of generating variation through LLMs, as with LMX or ELM, or new ways of using LLMs for selection (e.g., Quality-Diversity through AI Feedback)?
  • How well does an EA using an LLM scale with population size and problem complexity?
  • What is the most accurate computational complexity of an EA using an LLM?
  • What makes a good EA-plus-LLM benchmark?
  • LLMs for (automated) generation of EC.
  • Understanding, fine-tuning, and adaptation of Large Language Models for EC. How large do LLMs need to be? Are there benefits to using larger or smaller models, or models trained on different datasets or in different ways?
  • Implementing/generating methodology for population dynamics analysis: diversity measures, control, and visualization.
  • Generating rules for EC (boundary and constraint handling strategies).
  • Performance improvement, testing, and efficiency of the improved algorithms.
  • LLM-based reasoning for component-wise analysis of algorithms.
  • Connections between LLMs and other ML techniques for EC (e.g., reinforcement learning, AutoML)
  • Generation of, and reasoning about, parallel approaches for EC algorithms.
  • Benchmarking and comparative studies of LLM-generated algorithms.
  • Applications of LLMs and EC, including (but not limited to):
    • constrained optimization
    • multi-objective optimization
    • expensive and surrogate-assisted optimization
    • dynamic and uncertain optimization
    • large-scale optimization
    • combinatorial/discrete optimization

Submission format

Full papers and extended abstracts:

  • Full papers (8 pages + references): Must cover the ACM Open APC (see below for more information)
  • Extended Abstracts (up to 4 pages): Are not subject to the APC; no fee is paid by the authors for ACM Open Access. An Extended Abstract provides a summary of a work in progress, typically just enough for readers to understand the idea, scope, and potential impact. It often lacks full methodology, detailed results, or extensive references.

Important dates

  • Submission opening: February 2, 2026
  • Submission deadline: March 27, 2026
  • Notification: April 24, 2026
  • Camera-ready: May 5, 2026
  • Authors' mandatory registration: May 11, 2026

ACM's new Open Access publishing model for 2026 ACM Conferences

Starting January 1, 2026, ACM will fully transition to Open Access. All ACM publications, including those from ACM-sponsored conferences, will be 100% Open Access. Authors will have two primary options for publishing Open Access articles with ACM: the ACM Open institutional model or by paying Article Processing Charges (APCs). With over 2,600 institutions already part of ACM Open, the majority of ACM-sponsored conference papers will not require APCs from authors or conferences (currently, around 76%).

Authors from institutions not participating in ACM Open will need to pay an APC to publish their papers, unless they qualify for a financial waiver. To find out whether an APC applies to your article, please consult the list of participating institutions in ACM Open and review the APC Waivers and Discounts Policy. Keep in mind that waivers are rare and are granted based on specific criteria set by ACM.

Understanding that this change could present financial challenges, ACM has approved a temporary subsidy for 2026 to ease the transition and allow more time for institutions to join ACM Open. The subsidy will offer:

  • $250 APC for ACM/SIG members
  • $350 APC for non-members

This represents a 65% discount, funded directly by ACM. Authors are encouraged to help advocate for their institutions to join ACM Open during this transition period.

This temporary subsidized pricing will apply to all conferences scheduled for 2026.

Additionally, SIGEVO will provide an additional subsidy of $125 for papers accepted to GECCO 2026 (and only for 2026) that are subject to APCs. This makes the final amounts to be paid:

  • $125 (USD) for SIGEVO members
  • $225 (USD) for non-members

It is IMPORTANT to mention that both forms of subsidy (by ACM and by SIGEVO) apply only to GECCO 2026. Moreover, it is still to be determined how the SIGEVO subsidy will be implemented, either directly to the APC or in other forms.

Finally, we note that APCs apply to accepted Full Papers, but Abstracts (1-2 pages), Extended Abstracts (1-4 pages), and Tutorials are NOT APC eligible; i.e., no APC has to be paid for these types of contributions.

ACM Authorship and Peer Review Policies on Generative AI

GECCO follows the official ACM policies on authorship and peer review, including the use of generative AI tools.

Under ACM's Authorship policy, generative AI tools and technologies cannot be listed as authors of an ACM published Work. The use of generative AI tools and technologies for assistance must be fully disclosed in the manuscript's Acknowledgments section. Authors are fully accountable for the originality, accuracy, and integrity of all submitted material.

In accordance with ACM's Peer Review policy, reviewers must not upload or share submitted manuscripts or review materials with generative AI systems. Reviewers may use generative AI tools for the sole purpose of improving the quality and readability of their reports for the authors.

ACM is actively developing tools to help identify improper AI use in submissions, and GECCO may employ available detection methods. Submissions found to violate ACM policies may be rejected.



Organizers

Roman Senkerik

Roman Senkerik is the Head of the A.I.Lab at the Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlín (https://ailab.fai.utb.cz/).

His current focus is generative AI, especially LLM-driven automated design, evaluation-in-the-loop workflows, and AutoML. He is the co-architect of EASE (Effortless Algorithmic Solution Evolution), an open, modular framework that automates the creation and refinement of solutions (algorithms, code, text, and images) using LLMs and other generators. Beyond GenAI, his research advances metaheuristics, with an emphasis on adaptive strategies and parameter control in Differential Evolution, benchmarking, and applications to real-world optimization tasks. He has made significant contributions to the fields of evolutionary computation and applications of artificial intelligence.

He is the author of over 50 journal papers, 250 conference papers, and several book chapters and editorial notes. He is a recognized reviewer for many leading journals in computer science and computational intelligence. He has been part of the organizing teams for tutorials, special sessions, workshops, and symposia at GECCO, IEEE WCCI, CEC, and SSCI events.

Niki van Stein

Niki van Stein is an Associate Professor at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, specializing in Explainable Artificial Intelligence (XAI). Since January 2022, Dr. van Stein has led the XAI research group and is a member of the management team of the Natural Computing cluster. Her research focuses on the intersection of machine learning, LLMs, optimization, and XAI, with applications in predictive maintenance, time-series analysis, and engineering design. Dr. van Stein obtained a PhD in Computer Science from Leiden University in 2018, under the supervision of Prof. Dr. Thomas Bäck, with a thesis on data-driven modelling and optimization of industrial processes.
With over 90 peer-reviewed publications and multiple awards, including best paper recognitions at GECCO and the IEEE Symposium Series on Computational Intelligence, Dr. van Stein has made significant contributions to the fields of evolutionary computing and explainable artificial intelligence.

Erik Hemberg

Erik Hemberg is a Research Scientist in the AnyScale Learning For All (ALFA) group at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Lab, USA. He has a PhD in Computer Science from University College Dublin, Ireland, and an MSc in Industrial Engineering and Applied Mathematics from Chalmers University of Technology, Sweden. His work focuses on developing autonomous, proactive cyber defenses that are anticipatory and adapt to counter attacks. He is also interested in automated semantic parsing of law and in data science for education and healthcare.

Una-May O’Reilly

Dr. Una-May O'Reilly is the leader of the ALFA Group at Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab. An evolutionary computation researcher for 20+ years, she is broadly interested in adversarial intelligence: the intelligence that emerges and is recruited while learning and adapting in competitive settings. Her interest has led her to study settings where security is under threat, for which she has developed machine learning algorithms that variously model the arms races of tax compliance and auditing, malware and its detection, cyber network attacks and defenses, and adversarial paradigms in deep learning. She is passionately interested in programming and genetic programming. She is a recipient of the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe and the ACM SIGEVO Award Recognizing Outstanding Achievements in Evolutionary Computation. Devoted to the field and committed to its growth, she served on the ACM SIGEVO executive board from SIGEVO's inception and held different officer positions before retiring from it in 2023. She co-founded the annual Women@GECCO workshops and has proudly watched their evolution into Women+@GECCO. She was on the founding editorial boards, and continues to serve on the editorial boards, of Genetic Programming and Evolvable Machines and ACM Transactions on Evolutionary Learning and Optimization. She has received a GECCO best paper award and a GECCO test-of-time award. She is honored to be a member of SPECIES, a member of the Julian Miller Award committee, and to chair the 2023 and 2024 committees selecting the SIGEVO Awards Recognizing Outstanding Achievements in Evolutionary Computation.

 
Pier Luca Lanzi

Pier Luca Lanzi received the Laurea degree in computer science from the Università degli Studi di Udine and the Ph.D. degree in Computer and Automation Engineering from the Politecnico di Milano. He is an associate professor at the Politecnico di Milano, Dept. of Electronics and Information. His research areas include genetic and evolutionary computation, reinforcement learning, and machine learning, with interests in applications to data mining and autonomous agents. He is a member of the editorial board of the Evolutionary Computation Journal and Editor-in-Chief of SIGEVOlution, the newsletter of SIGEVO, the ACM Special Interest Group on Genetic and Evolutionary Computation.

Michal Pluhacek

Michal Pluhacek is the ARTIQ project leader and a professor at the AGH University of Krakow. His research interests span diverse branches of artificial intelligence, e.g., evolutionary computation, swarm intelligence, and, more recently, applications of large language models. He has extensive international experience and numerous publications at world-leading congresses and conferences and in respected journals. He received his Ph.D. degree in Information Technologies in 2016 with the dissertation "Modern Method of Development and Modifications of Evolutionary Computational Techniques," and was awarded the permanent associate professor title in 2023.

 
Joel Lehman

Joel Lehman is a machine learning researcher interested in algorithmic creativity, evolutionary algorithms, artificial life, and AI for wellbeing. Most recently he was a research scientist at OpenAI co-leading the Open-Endedness team (studying algorithms that can innovate endlessly). Previously he was a founding member of Uber AI Labs, first employee of Geometric Intelligence (acquired by Uber), and a tenure track professor at the IT University of Copenhagen. He co-wrote with Kenneth Stanley a popular science book called "Why Greatness Cannot Be Planned" on what AI search algorithms imply for individual and societal accomplishment.

Tome Eftimov

Tome Eftimov is a researcher at the Computer Systems Department of the Jožef Stefan Institute, Ljubljana, Slovenia, and a visiting assistant professor at the Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje. He was a postdoctoral research fellow at Stanford University, USA, where he investigated biomedical relations and outcomes using AI methods, and a research associate at the University of California, San Francisco, investigating AI methods for extracting rheumatology concepts from electronic health records. He obtained his PhD in Information and Communication Technologies in 2018. His research interests include statistical data analysis, metaheuristics, natural language processing, representation learning, and machine learning. He has been involved in courses on probability and statistics and on statistical data analysis. His work on Deep Statistical Comparison has been presented as tutorials (e.g., IJCCI 2018, IEEE SSCI 2019, GECCO 2020, and PPSN 2020) and as invited lectures at several international conferences and universities. He has organized several AI-related workshops at high-ranked international conferences. He is the coordinator of the national project "Mr-BEC: Modern approaches for benchmarking in evolutionary computation" and actively participates in European projects.