
Evolutionary Computation and Explainable AI

Webpage: https://ecxai.github.io/ecxai/

Description

In recent years there has been increasing interest in the intersection between evolutionary computation and explainable AI. Starting from a discussion group at GECCO 2021, activity in the EC community around this topic has grown. The proposers of this ECXAI workshop organised previous iterations at GECCO 2022-2025. Each of these sessions was exceptionally well-attended, with 50-100 delegates present physically and 10-20 digitally at each iteration. At each, additional informal meet-ups attracted 15-20 attendees. The number of papers accepted for presentation at each workshop was 7 (2022), 3 (2023), 5 (2024), and 6 (2025).

This workshop aims to continue to raise the profile of this important and novel area at GECCO 2026 through discussion and demonstration. In line with previous workshops, we intend to invite a keynote speaker. The ECXAI workshop is part of a wider effort by the organisers to raise awareness in the wider EC community, most recently including a TELO special issue devoted to the topic. More information on this effort, including details of prior ECXAI workshops, can be found here: https://ecxai.github.io/ecxai/

‘Explainable AI’ is an umbrella term that covers research on methods designed to provide human-understandable explanations of the decisions made and knowledge captured by AI models. This is currently a very active research area within the AI field. Evolutionary Computation (EC) draws on concepts found in nature to drive development in evolution-based systems such as genetic algorithms and evolution strategies. As with other nature-inspired metaheuristics, such as swarm intelligence, the path to a solution is driven by stochastic processes. This creates barriers to explainability: algorithms may return different solutions when re-run from the same input, and technical descriptions of these processes often hinder end-user understanding and acceptance. Conversely, XAI methods very often require fitting some kind of model, so EC methods have the potential to play a role in this area. This workshop will focus on the bidirectional interplay between XAI and EC: how XAI can help EC research, and how EC can be used within XAI methods.

Recent growth in the adoption of black-box solutions, including EC-based methods, in domains such as medical diagnosis, manufacturing, and transport & logistics has led to greater attention being paid to generating explanations and to their accessibility for end-users. This increased attention has helped create a fertile environment for applying XAI techniques in the EC domain for both end-user- and researcher-focused explanation generation. Furthermore, many approaches to XAI in machine learning are based on search algorithms (e.g., Local Interpretable Model-Agnostic Explanations / LIME) that have the potential to draw on the expertise of the EC community. Finally, many of the broader questions (such as what kinds of explanations are most appealing or useful to end users) are faced by XAI researchers in general.
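To illustrate the connection between XAI and search, the sketch below shows the core idea behind LIME-style local surrogates: perturb an instance, query the black box, and fit a proximity-weighted linear model whose coefficients act as local feature importances. This is a minimal illustrative sketch using NumPy, not the actual LIME implementation; the function name, noise scale, and kernel width are illustrative choices, and the perturbation step is exactly the kind of sampling that EC methods could drive instead.

```python
import numpy as np

def lime_style_explanation(black_box, x, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around x (a LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance of interest with Gaussian noise
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black box at each perturbed point
    y = np.array([black_box(z) for z in Z])
    # 3. Proximity kernel: perturbations closer to x get higher weight
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Weighted least squares: surrogate coefficients = local importances
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

# Toy black box that, near x = (1, 1), depends mostly on feature 0
f = lambda v: 3.0 * v[0] + 0.1 * v[1] ** 2
imp = lime_style_explanation(f, np.array([1.0, 1.0]))
```

Here the surrogate recovers a much larger importance for feature 0 than for feature 1, matching the local behaviour of the toy function.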

From an application perspective, important questions have arisen for which XAI may be crucial: Is the system biased? Has the problem been formulated correctly? Is the solution trustworthy and fair? The goal of XAI and related research is to develop methods to interrogate AI processes with the aim of answering these questions. This can support decision-makers while also building trust in AI decision-support through more readily understandable explanations.

Submission format

Full papers and extended abstracts:

  • Full papers (8 pages + references): Must cover the ACM Open APC (see below for more information)
  • Extended Abstracts (up to 4 pages): Not subject to an APC; authors pay no fee for ACM Open Access. An Extended Abstract provides a summary of a work-in-progress, typically just enough for readers to understand the idea, scope, and potential impact. It often lacks full methodology, detailed results, or extensive references.

Important dates

  • Submission opening: February 2, 2026
  • Submission deadline: April 03, 2026 (extended from March 27, 2026)
  • Notification: April 24, 2026
  • Camera-ready: May 5, 2026
  • Author's mandatory registration: May 11, 2026

ACM's new Open Access publishing model for 2026 ACM Conferences

Starting January 1, 2026, ACM will fully transition to Open Access. All ACM publications, including those from ACM-sponsored conferences, will be 100% Open Access. Authors will have two primary options for publishing Open Access articles with ACM: the ACM Open institutional model or by paying Article Processing Charges (APCs). With over 2,600 institutions already part of ACM Open, the majority of ACM-sponsored conference papers will not require APCs from authors or conferences (currently, around 76%).

Authors from institutions not participating in ACM Open will need to pay an APC to publish their papers, unless they qualify for a financial waiver. To find out whether an APC applies to your article, please consult the list of participating institutions in ACM Open and review the APC Waivers and Discounts Policy. Keep in mind that waivers are rare and are granted based on specific criteria set by ACM.

Understanding that this change could present financial challenges, ACM has approved a temporary subsidy for 2026 to ease the transition and allow more time for institutions to join ACM Open. The subsidy will offer:

  • $250 APC for ACM/SIG members
  • $350 APC for non-members

This represents a 65% discount, funded directly by ACM. Authors are encouraged to help advocate for their institutions to join ACM Open during this transition period.

This temporary subsidized pricing will apply to all conferences scheduled for 2026.

Additionally, SIGEVO will provide a further subsidy of $125 for papers accepted to GECCO 2026 (and only for 2026) that are subject to APCs, making the final amounts to be paid:

  • $125 (USD) for SIGEVO members
  • $225 (USD) for non-members

It is IMPORTANT to note that both forms of subsidy (by ACM and by SIGEVO) apply only to GECCO 2026. Moreover, it is still to be determined how the SIGEVO subsidy will be implemented, whether as a direct reduction of the APC or in another form.

Finally, we note that APC charges apply to accepted Full Papers, whereas Abstracts (1-2 pages), Extended Abstracts (1-4 pages), and Tutorials are not APC-eligible; i.e., no APC is due for these types of contributions.

ACM Authorship and Peer Review Policies on Generative AI

GECCO follows the official ACM policies on authorship and peer review, including the use of generative AI tools.

Under ACM's Authorship policy, generative AI tools and technologies cannot be listed as authors of an ACM published Work. The use of generative AI tools and technologies for assistance must be fully disclosed in the manuscript's Acknowledgments section. Authors are fully accountable for the originality, accuracy, and integrity of all submitted material.

In accordance with ACM's Peer Review policy, reviewers must not upload or share submitted manuscripts or review materials with generative AI systems. Reviewers may use generative AI tools solely to improve the quality and readability of their reports for the authors.

ACM is actively developing tools to help identify improper AI use in submissions, and GECCO may employ available detection methods. Submissions found to violate ACM policies may be rejected.


Organizers

Jaume Bacardit

Jaume Bacardit is Professor of Artificial Intelligence at Newcastle University in the UK. He received BEng and MEng degrees in Computer Engineering and a PhD in Computer Science from Ramon Llull University, Spain, in 1998, 2000, and 2004, respectively. Bacardit's research interests include the development of machine learning methods for large-scale problems, the design of techniques to extract knowledge from and improve the interpretability of machine learning algorithms, now known as Explainable AI, and the application of these methods to a broad range of problems, mostly in biomedical domains. He leads or has led the data analytics efforts of several large interdisciplinary consortia: D-BOARD (EU FP7, €6M, focusing on biomarker identification), APPROACH (EU-IMI, €15M, focusing on disease phenotype identification), and PORTABOLOMICS (UK EPSRC, £4.3M, focusing on synthetic biology). Within GECCO he has organised several workshops (IWLCS 2007-2010, ECBDL'14), been co-chair of the EML track in 2009, 2013, 2014, 2020, and 2021, and Workshops co-chair in 2010 and 2011. He has 100+ peer-reviewed publications that have attracted 8500+ citations and an h-index of 41 (Google Scholar).

Alexander Brownlee

Alexander (Sandy) Brownlee is an Associate Professor in the Division of Computing Science and Mathematics at the University of Stirling, where he leads the Data Science and Intelligent Systems research group. His main topics of interest are in search-based optimisation methods and machine learning, with a focus on decision support tools, and applications in civil engineering, transportation and software engineering. He has published over 85 peer-reviewed papers on these topics. He has worked with several leading businesses including BT, KLM, and IES on industrial applications of optimisation and machine learning. He serves as a reviewer for several journals and conferences in evolutionary computation, civil engineering and transportation, and is currently an Editorial Board member for the journals Complex And Intelligent Systems and Journal of Scheduling. He has also been an organiser of several workshops and tutorials at GECCO and CEC on genetic improvement of software, and on explainable AI for optimisation.

Stefano Cagnoni

Stefano Cagnoni received a master's degree in electronic engineering and a Ph.D. in biomedical engineering from the University of Florence, Florence, Italy, in 1988 and 1994, respectively. He was a visiting scientist at the Massachusetts Institute of Technology, Cambridge, MA, USA, from 1993 to 1994, and then a postdoctoral fellow at the University of Florence. Since 1997, he has been with the University of Parma, Parma, Italy, where he is currently an Associate Professor of Computer Engineering. His research principally concerns applications of evolutionary algorithms to image analysis and processing, machine learning, and pattern recognition. Dr. Cagnoni is on the editorial board of the journals "Evolutionary Computation" and "Genetic Programming and Evolvable Machines," and is a member of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing. In 2009, he received the "Evostar Award" for his outstanding contribution to Evolutionary Computation.

Martin Fyvie

Martin Fyvie is a Research Fellow in Artificial Intelligence at Robert Gordon University, specialising in optimisation, explainable AI, and transparent decision-support systems for complex industrial and environmental applications. His PhD at RGU focused on developing novel methods for deriving post-hoc explanations from Genetic Algorithms and other evolutionary optimisation techniques, particularly interpreting black-box optimisation processes through high-volume algorithm trace data to analyse trade-offs and understand search behaviour. His research has expanded into applied AI, combining optimisation, data modelling, and socio-technical analysis across sectors including offshore decommissioning, net-zero skills planning, and rural mobility. As part of RGU's Complex Optimization Group, he develops optimisation and modelling solutions for real-world problems enhanced through automation, simulation, and data-driven approaches. He has collaborated with industry partners to develop low-emission operational planning tools incorporating real-world constraints and contributed to national initiatives such as Data for Net Zero. Recent work includes serving as Co-Investigator on the Horizon Europe TH2ISTLE hydrogen valley project and the Smart Mobility Solutions for Rural Transport Optimisation project in the Orkney Islands as part of the Horizon RURALITIES project.

Giovanni Iacca

Giovanni Iacca is an Associate Professor of Information Engineering at the Department of Information Engineering and Computer Science of the University of Trento, Italy, where he founded the Distributed Intelligence and Optimization Lab (DIOL). Previously, he worked as a postdoctoral researcher in Germany (RWTH Aachen, 2017-2018), Switzerland (University of Lausanne and EPFL, 2013-2016), and the Netherlands (INCAS3, 2012-2016), as well as in industry in the areas of software engineering and industrial automation. He is co-PI of the PATHFINDER-CHALLENGE project "SUSTAIN" (2022-2026). Previously, he was co-PI of the FET-Open project "PHOENIX" (2015-2019). He has received three best paper awards (D’IoT IEEE VTC-Spring 2025, EvoApps 2017, and UKCI 2012). His research focuses on computational intelligence, distributed systems, explainable AI, and analysis of biomedical data. In these fields, he co-authored more than 210 peer-reviewed publications. He is actively involved in organizing tracks and workshops at some of the top conferences on computational intelligence, and he regularly serves as a reviewer for several journals and conference committees. He is the General Chair of PPSN 2026. He is an Associate Editor for IEEE Transactions on Evolutionary Computation, Applied Soft Computing, Engineering Applications of Artificial Intelligence, Memetic Computing, Evolutionary Intelligence, and Frontiers in Robotics and AI.

David Walker

David Walker is a Senior Lecturer in Computer Science at the University of Exeter. He obtained a PhD in Computer Science in 2013 for work on visualising solution sets in many-objective optimisation. His research focuses on developing new approaches to solving optimisation problems with Evolutionary Algorithms (EAs), as well as identifying ways in which the use of evolutionary computation can be expanded within industry, and he has published journal papers in all of these areas. His recent work considers the visualisation of algorithm operation, providing a mechanism for visualising algorithm performance to simplify the selection of EA parameters, working at the interface between evolutionary computation and explainable AI. While working as a postdoctoral research associate at the University of Exeter, he developed hyper-heuristics and, more recently, investigated the use of interactive EAs in the water industry. Dr Walker's research group includes a number of PhD students working on optimisation and machine learning projects. He is active in the EC field, having run an annual workshop on visualisation within EC at GECCO since 2012, in addition to his work as a reviewer for journals such as IEEE Transactions on Evolutionary Computation, Applied Soft Computing, and the Journal of Hydroinformatics. He is a member of the IEEE Taskforce on Many-objective Optimisation.