
BBSR - Benchmarking, Benchmarks, Software, and Reproducibility

Description

The Benchmarking, Benchmarks, Software, and Reproducibility track welcomes submissions on any aspect of benchmarking, software, and reproducibility for genetic and evolutionary computation methods.

Scope:

  • Benchmarking methodologies for assessing the performance of evolutionary algorithms and related optimization techniques;
  • Benchmark problems and toolboxes for evaluating evolutionary computation methods or for training meta-learning techniques on them;
  • Statistical analysis and visualization techniques for understanding problem spaces or the performance and behavior of optimization techniques, including instance space analysis and landscape analysis;
  • Reproducibility studies that rigorously replicate published experiments and substantially shift confidence in the results of the original study;
  • Innovative software for deploying, evaluating, developing, or teaching genetic and evolutionary computation in original and unique ways.


This list is non-exhaustive, and we invite authors to contact the track chairs if in doubt about the suitability of their submission for this track.

Requirements for reproducibility studies

For reproducibility studies, the reasons for the new findings must be clearly explained to ensure a meaningful contribution distinct from the original study (e.g., different benchmarks, application scenarios, or technical or implementation differences). The submission must follow the highest reproducibility standards by providing all implementation details, input data, parameters, and hardware specifications. All artifacts must be made available in a public repository upon submission and must remain available after publication. The submission must also follow the usual standards regarding plagiarism.
Of particular interest are replicability studies, defined by the ACM as follows: "The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials (different team, different experimental setup)."

Anonymization

We acknowledge that some works fitting this track may be difficult to submit in completely anonymized form, e.g., when links to demos, data, or software are required to assess the suitability of the submission for GECCO. Whenever possible, we strongly encourage authors to use anonymous repositories (available on Zenodo and for GitHub repositories, for example). Ideally, these repositories will be deanonymized only after notification. Where it is impossible to anonymize repositories, the BBSR track allows linking resources that may reveal the authors' identity. However, even in this case, all other elements of the paper must follow the standard anonymization guidelines. In particular, we require that author names, affiliations, and acknowledgments be suppressed and that, to the maximum extent possible, references to the authors' own work be made as if the work belonged to someone else. We strongly recommend the use of the following option:

\documentclass[dvipsnames,format=sigconf,anonymous=true,review=true]{acmart}

Track Chairs


Mike Preuss

Leiden Institute of Advanced Computer Science

Mike Preuss is an associate professor at the Leiden Institute of Advanced Computer Science, most interested in using modern AI algorithms to solve practical problems, most notably in ChemAI (e.g., for retrosynthesis), but generally in contexts where human expertise and new AI methods meet. This encompasses LLM and image/video generation tools and how they can be integrated meaningfully into human workflows. Partly automated Procedural Content Generation (PCG) has long been a well-known concept in game AI and profits greatly from these new developments. Recently, Mike has also been involved with quantum games (quantum versions of board games such as Checkers) and drone research.
Mike received his PhD from TU Dortmund University, Germany, in 2013, under the supervision of Hans-Paul Schwefel, in Evolutionary Computation, namely in deriving methods for complex multimodal optimization tasks with a view to real-world applications such as the design of ship propulsion engines. In the following years, he stayed with the information systems department of WWU Münster, Germany, before starting his current position at Leiden University, where he established the Game Research Lab, bringing together topics ranging from educational games for teaching AI to (very recently) training Deep Reinforcement Learning algorithms to play Pokémon Red in a fully automated fashion. He is always looking out for new and interesting problems that can be solved by means of modern AI algorithms, in and outside of computer games.



Fabricio Olivetti de França

Federal University of ABC (UFABC), Brazil

Fabricio Olivetti de França is a Professor of Computer Science at the Federal University of ABC, current head of the Heuristics, Analysis and Learning Laboratory (HAL), and coordinator of the graduate program in computer science at the same university. His work in symbolic regression encompasses the creation of new techniques promoting interpretability and the integration of domain knowledge. He is one of the main contributors to SRBench, having helped host multiple symbolic regression competitions and develop its current version. He has co-organized the Symbolic Regression workshops at GECCO in recent years together with Gabriel Kronberger and William La Cava, and is also part of the organization of the Genetic Programming Theory and Practice workshop.