
Tutorials

List of Tutorials

Introductory Tutorials

A Gentle Introduction to Theory (for Non-Theoreticians)
  • Benjamin Doerr Ecole Polytechnique
Evolution of Neural Networks
  • Risto Miikkulainen The University of Texas at Austin and Cognizant Technology Solutions
NEW Evolutionary Computation and Games
  • Julian Togelius New York University
  • Sebastian Risi IT University of Copenhagen
  • Georgios N. Yannakakis The University of Malta
Evolutionary Computation: A Unified Approach
  • Kenneth De Jong George Mason University
Evolutionary Many-Objective Optimization
  • Hisao Ishibuchi Southern University of Science and Technology
  • Hiroyuki Sato The University of Electro-Communications, Tokyo
Evolutionary Multi-objective Optimization: Past, Present and Future
  • Kalyanmoy Deb Michigan State University, United States
Hyper-heuristics
  • John R. Woodward Queen Mary University of London
  • Daniel R. Tauritz Auburn University
Introduction to Genetic Programming
  • Una-May O'Reilly MIT
  • Erik Hemberg MIT, CSAIL
Introductory Mathematical Programming for EC
  • Ofer Shir Tel-Hai College
Learning Classifier Systems: From Principles to Modern Systems
  • Anthony Stein University of Augsburg, Germany
  • Masaya Nakata The University of Electro-Communications, Japan
Model-based Evolutionary Algorithms
  • Dirk Thierens Utrecht University, The Netherlands
  • Peter A.N. Bosman Centre for Mathematics and Computer Science, The Netherlands
Neuroevolution for Deep Reinforcement Learning Problems
  • David Ha Google Brain
Representations for Evolutionary Algorithms
  • Franz Rothlauf Johannes Gutenberg-Universtität Mainz
Runtime Analysis of Population-based Evolutionary Algorithms
  • Per Kristian Lehre University of Birmingham, UK
  • Pietro Oliveto The University of Sheffield
NEW Theoretical Foundations of Evolutionary Computation for Beginners and Veterans
  • Darrell Whitley Colorado State University

Advanced Tutorials

NEW A Hands-on Guide to Distributed Computing Paradigms for Evolutionary Computation
  • Rui Wang Research Scientist
  • Jiale Zhi Uber AI
NEW Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler
  • Thomas Baeck Leiden Universiteit, Nederland
  • Carola Doerr CNRS & Sorbonne University, Paris, France
  • Ofer Shir Tel-Hai College
  • Hao Wang Leiden University
Decomposition Multi-Objective Optimisation: Current Developments and Future Opportunities
  • Ke Li University of Exeter
  • Qingfu Zhang City University of Hong-Kong
  • Saúl Zapotecas Autonomous Metropolitan University-Cuajimalpa, México
NEW Design Principles for Matrix Adaptation Evolution Strategies
  • Hans-Georg Beyer Vorarlberg University of Applied Sciences
Dynamic Control Parameter Choices in Evolutionary Computation
  • Gregor Papa Jozef Stefan Institute
Evolutionary Computation for Digital Art
  • Frank Neumann University of Adelaide, Australia
  • Aneta Neumann University of Adelaide
NEW Fitness landscape analysis to understand and predict algorithm performance for single- and multi-objective optimization
  • Sébastien Verel Univ. Littoral Côte d'Opale
  • Bilel Derbel University of Lille
NEW Genetic improvement: Taking real-world source code and improving it using genetic programming.
  • Saemundur O. Haraldsson University of Stirling
  • John R. Woodward Queen Mary University of London
  • Markus Wagner University of Adelaide
NEW Quality-Diversity Optimization
  • Antoine Cully Imperial College London
  • Jean-Baptiste Mouret Inria
  • Stéphane Doncieux Sorbonne Université
Recent Advances in Particle Swarm Optimization Analysis and Understanding
  • Andries Engelbrecht University Of Pretoria, South Africa
  • Christopher Cleghorn University of Pretoria
NEW Replicability and Reproducibility in Evolutionary Optimization
  • Luis Paquete University of Coimbra
  • Manuel López-Ibáñez University of Manchester, UK
Semantic Genetic Programming
  • Alberto Moraglio University of Exeter, UK
  • Krzysztof Krawiec Poznan University of Technology, Poland
Sequential Experimentation By Evolutionary Algorithms
  • Thomas Baeck Leiden Universiteit, Nederland
  • Ofer Shir Tel-Hai College
Solving Complex Problems with Coevolutionary Algorithms
  • Krzysztof Krawiec Poznan University of Technology, Poland
  • Malcolm Heywood Dalhousie University
NEW Statistical Analyses for Meta-heuristic Stochastic Optimization Algorithms
  • Tome Eftimov Stanford University / Jožef Stefan Institute
NEW Theory and Practice of Population Diversity in Evolutionary Computation
  • Dirk Sudholt The University of Sheffield
  • Giovanni Squillero Politecnico di Torino
Visualization in Multiobjective Optimization
  • Bogdan Filipic Jozef Stefan Institute, Ljubljana, Slovenia
  • Tea Tušar Jožef Stefan Institute, Ljubljana, Slovenia

Specialized Tutorials

NEW Addressing Ethical Challenges within Evolutionary Computation Applications
  • Jim Torresen University of Oslo
Automated Algorithm Configuration and Design
  • Manuel López-Ibáñez University of Manchester, UK
  • Thomas Stützle IRIDIA laboratory, ULB, Belgium
NEW EA & ML, synergies and challenges
  • Giovanni Squillero Politecnico di Torino
  • Alberto Tonda UMR 782 GMPA, INRA, Thiverval-Grignon, France
NEW Evolutionary Algorithms in Biomedical Data Mining: Challenges, Solutions, and Frontiers
  • Ryan Urbanowicz University of Pennsylvania, USA
  • Moshe Sipper Ben-Gurion University
Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition
  • Mengjie Zhang Victoria University of Wellington
  • Stefano Cagnoni Universita' degli Studi di Parma, Italy
Evolutionary Computation and Machine Learning in Cryptology
  • Stjepan Picek KU Leuven, Belgium and Faculty of Electrical Engineering and Computing, Zagreb, Croatia
  • Domagoj Jakobovic University of Zagreb, Croatia
Evolutionary Computation for Feature Selection and Feature Construction
  • Bing Xue Victoria University of Wellington
  • Mengjie Zhang Victoria University of Wellington
Evolutionary Computer Vision
  • Gustavo Olague CICESE Research Center
NEW Multi-concept Optimization
  • Amiram Moshaiov Tel-Aviv University
Push
  • Lee Spector Amherst College, Hampshire College, and the University of Massachusetts, Amherst
NEW Search Based Software Engineering: challenges, opportunities and recent applications
  • Ali Ouni ETS Montreal, University of Quebec
NEW Swarm Intelligence in Cybersecurity
  • Roman Senkerik Tomas Bata University in Zlin
  • Ivan Zelinka Department of Computer Science, Faculty of Electrical Engineering and Computer Science, VŠB-TUO, Ostrava-Poruba, Czech Republic & IT4Innovations National Supercomputing Centre
NEW Theory of Estimation-of-Distribution Algorithms
  • Carsten Witt Technical University of Denmark

Introductory Tutorials

A Gentle Introduction to Theory (for Non-Theoreticians)

This tutorial addresses GECCO attendees who do not regularly use
theoretical methods in their research. For these, we give a smooth
introduction to the theory of evolutionary computation.
Complementing other introductory theory tutorials, we do not discuss
mathematical methods or particular results, but explain

- what theory of evolutionary algorithms aims at,
- how research in theory of evolutionary computation is done,
- how to interpret statements from the theory literature,
- what are some important contributions of theory to our field,
- and what are the current hot topics.

Benjamin Doerr

Benjamin Doerr is a full professor at the Ecole Polytechnique (France). He is also an adjunct professor at Saarland University (Germany). His research area is the theory of both problem-specific algorithms and randomized search heuristics such as evolutionary algorithms. Major contributions to the latter include runtime analyses of evolutionary algorithms and ant colony optimizers, as well as the further development of the drift analysis method, in particular multiplicative and adaptive drift. In the young area of black-box complexity, he proved several of the current best bounds.

Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO, served as its co-chair 2007-2009, and served again in 2014. In 2016, he chaired the Hot-off-the-Press track. He is a member of the editorial boards of "Evolutionary Computation", "Natural Computing", "Theoretical Computer Science" and "Information Processing Letters". Together with Anne Auger, he edited the book "Theory of Randomized Search Heuristics".

Evolution of Neural Networks

Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while standard value-function-based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network architecture and hyperparameters. While many such architectures are too complex to be optimized by hand, neuroevolution can be used to do so automatically. Such evolutionary AutoML can be used to achieve good deep learning performance even with limited resources, or state-of-the-art performance with more effort. It is also possible to optimize other aspects of the architecture, like its size, speed, or fit with hardware. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) methods for neural architecture search and evolutionary AutoML, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
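As a toy illustration of the simplest setting above, evolving the weights of a fixed-topology network, the following sketch evolves a tiny 2-2-1 network to fit XOR with a (1+1)-style strategy. The network size, mutation strength, and evaluation budget are arbitrary illustrative choices, not values from the tutorial.

```python
import math
import random

random.seed(0)

# A 2-2-1 feedforward network with tanh units; the genome is its 9 weights.
def forward(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # Negative squared error over the four XOR cases (higher is better)
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

# (1+1)-style neuroevolution: mutate the weight vector with Gaussian noise
# and keep the child if it is at least as fit; the topology stays fixed.
genome = [random.gauss(0, 1) for _ in range(9)]
initial = list(genome)
for _ in range(3000):
    child = [w + random.gauss(0, 0.3) for w in genome]
    if fitness(child) >= fitness(genome):
        genome = child
print(fitness(genome))
```

Evolving network topologies or construction processes, as covered in the tutorial, replaces the fixed weight vector above with a richer genome and matching variation operators.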

 

Risto Miikkulainen

Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and a CTO at Sentient Technologies, Inc. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and self-organization of the visual cortex; he is an author of over 400 articles in these research areas. He is an IEEE Fellow, recipient of the 2020 IEEE CIS EC Pioneer Award, recent awards from INNS and ISAL, as well as nine Best-Paper Awards at GECCO.

NEW Evolutionary Computation and Games

In recent years, research in AI for Games and Games for AI has enjoyed
rapid progress and a sharp rise in popularity. In this field, various
kinds of AI algorithms are tested on benchmarks based on e.g. board
games and video games, and new AI-based solutions are developed for
problems in game development and game design. This tutorial will give
an overview of key research challenges and methods of choice in
evolutionary computation applied to games. The tutorial is divided into
two parts, where the second part builds on methods and results
introduced in the first part. The first part will focus on
evolutionary computation methods for playing games, including
neuroevolution and evolutionary planning. The second part will focus
on the use of evolutionary computation for game testing and procedural
content generation, as well as player experience prediction and game adaptation.
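As a hedged illustration of the evolutionary planning idea mentioned in the first part, the sketch below applies rolling-horizon evolution to a deliberately trivial "game": an agent stepping along a line toward a goal. The horizon, mutation rate, and budget are illustrative stand-ins, not values from the tutorial.

```python
import random

random.seed(5)

GOAL, HORIZON = 10, 8  # toy game: reach position 10; actions are -1, 0, +1

def simulate(pos, plan):
    # Forward-model rollout: value is closeness to the goal (higher is better)
    for a in plan:
        pos += a
    return -abs(GOAL - pos)

def plan_action(pos, evals=200):
    """Evolve an action sequence for the current state, return its first
    action; the agent replans every step (rolling horizon)."""
    best = [random.choice([-1, 0, 1]) for _ in range(HORIZON)]
    for _ in range(evals):
        cand = [a if random.random() > 0.2 else random.choice([-1, 0, 1])
                for a in best]
        if simulate(pos, cand) >= simulate(pos, best):
            best = cand
    return best[0]

pos = 0
for _ in range(15):  # play the game, replanning at each step
    pos += plan_action(pos)
print(pos)
```

Real game-playing applications replace `simulate` with the game's forward model and use a population rather than a single hill-climbed plan.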

Julian Togelius

Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering at New York University, and co-director of the NYU Game Innovation Lab. He works on all aspects of computational intelligence and games and on selected topics in evolutionary computation and evolutionary reinforcement learning. His current main research directions involve search-based procedural content generation in games, game adaptation through player modelling, automatic game design, and fair and relevant benchmarking of game AI through competitions. He is an author of a recent textbook on Procedural Content Generation in Games and an upcoming textbook on Artificial Intelligence and Games. Togelius holds a BA from Lund University, an MSc from the University of Sussex, and a PhD from the University of Essex.

Sebastian Risi

Sebastian Risi is an Associate Professor at the IT University of Copenhagen where he is part of the Center for Computer Games Research and the Robotics, Evolution and Art Lab (REAL). His interests include computational intelligence in games, neuroevolution, evolutionary robotics and human computation. Risi completed his PhD in computer science from the University of Central Florida. He has won several best paper awards at GECCO, EvoMusArt, IJCNN, and the Continual Learning Workshop at NIPS for his work on adaptive systems, the HyperNEAT algorithm for evolving complex artificial neural networks, and music generation.

 

Georgios N. Yannakakis

Evolutionary Computation: A Unified Approach

The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. The result poses an interesting dilemma for many practitioners in the sense that, with such a wide variety of algorithms and approaches, it is often hard to see the relationships between them, assess strengths and weaknesses, and make good choices for new application areas.
This tutorial is intended to give an overview of a general EC framework that helps compare and contrast approaches, encourages crossbreeding, and facilitates intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it.
Finally, the framework is used to identify some important open issues that need further research.
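The framework idea can be hinted at with a small sketch: a generic EA loop with pluggable initialization, fitness, selection, and variation components. The instantiation below, for the OneMax problem, is purely illustrative and not taken from the tutorial.

```python
import random

random.seed(1)

def evolve(init, fitness, select, vary, generations=50):
    """Generic EA loop: initialize, then iterate parent selection,
    variation, and truncation survival. Each component is pluggable."""
    pop = init()
    for _ in range(generations):
        parents = select(pop, fitness)
        offspring = [vary(p) for p in parents]
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:len(pop)]
    return max(pop, key=fitness)

# Instantiate for OneMax: maximize the number of 1-bits in a 20-bit string
N = 20
init    = lambda: [[random.randint(0, 1) for _ in range(N)] for _ in range(10)]
fitness = lambda ind: sum(ind)
select  = lambda pop, f: random.choices(pop, weights=[f(p) + 1 for p in pop], k=10)
vary    = lambda p: [b ^ (random.random() < 1.0 / N) for b in p]  # bit-flip mutation

best = evolve(init, fitness, select, vary)
print(fitness(best))
```

Swapping in a different representation, selection scheme, or variation operator changes the algorithm without touching the loop, which is the kind of unified view the tutorial develops.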

 

Kenneth De Jong

Evolutionary Many-Objective Optimization

The goal of the tutorial is to clearly explain the difficulties of evolutionary many-objective optimization, approaches to handling those difficulties, and promising future research directions. Evolutionary multi-objective optimization (EMO) has been a very active research area in the field of evolutionary computation over the last two decades. In the EMO area, the hottest research topic is evolutionary many-objective optimization. The difference between multi-objective and many-objective optimization is simply the number of objectives: problems with four or more objectives are usually referred to as many-objective problems. This may suggest that there is no significant difference between three-objective and four-objective problems; however, increasing the number of objectives makes multi-objective problems significantly more difficult.

In the first part (Part I: Difficulties), we clearly explain not only frequently discussed, well-known difficulties, such as the weakening selection pressure towards the Pareto front and the exponential increase in the number of solutions needed to approximate the entire Pareto front, but also other hidden difficulties, such as the deterioration of the usefulness of crossover and the difficulty of evaluating the performance of solution sets. The attendees of the tutorial will learn why many-objective optimization is difficult for EMO algorithms.

After these explanations, in the second part (Part II: Approaches and Future Directions) we explain how to handle each difficulty. For example, we explain how to prevent the Pareto dominance relation from losing its selection pressure and how to prevent a binary crossover operator from losing its search efficiency. We also explain some state-of-the-art many-objective algorithms, and promising research directions are discussed in detail.
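The weakening of dominance-based selection pressure described above can be demonstrated in a few lines: as the number of objectives grows, almost all random solutions become mutually non-dominated, so Pareto dominance alone can no longer discriminate between them. The numbers of points and objectives below are arbitrary illustrative choices.

```python
import random

def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(n_points, n_obj):
    # Fraction of uniformly random points that no other point dominates
    pts = [tuple(random.random() for _ in range(n_obj)) for _ in range(n_points)]
    nd = [p for p in pts if not any(dominates(q, p) for q in pts)]
    return len(nd) / n_points

random.seed(0)
for m in (2, 5, 10):
    print(m, nondominated_fraction(200, m))
```

With 2 objectives only a few percent of random points are non-dominated; with 10 objectives nearly all of them are, which is why many-objective algorithms need selection mechanisms beyond plain dominance.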

Hisao Ishibuchi

Hiroyuki Sato

2009- Assistant Professor, Faculty of Electro-Communications, The University of Electro-Communications
2010- Assistant Professor, Graduate School of Informatics and Engineering, The University of Electro-Communications
2016- Associate Professor, Graduate School of Informatics and Engineering, The University of Electro-Communications

Evolutionary Multi-objective Optimization: Past, Present and Future

Genetic algorithms (GAs) were first formally demonstrated to solve optimization problems head-to-head with classical point-based methods in 1975 by Kenneth De Jong. Goldberg and Richardson demonstrated the use of GAs for finding multiple optimal solutions of a single-objective multi-modal function in 1987. These early studies led to the proposal of three evolutionary multi-objective optimization (EMO) algorithms during 1993-95, following a suggestion by Goldberg, which marked the beginning of the EMO field. Now, more than 7,000 papers and reports are generated every year in EMO, with more than 50% of them reporting applications outside the computer science and engineering areas. About 25% of the papers published in IEEE TEVC are in the EMO area. The most cited paper in the whole GEC field, with more than 30,000 Google Scholar citations, comes from the EMO field; three of the top five most popular papers of IEEE TEVC and four out of the five most cited papers from MIT Press's Evolutionary Computation journal are from the EMO area. There are at least three major software companies that survive on EMO algorithms. Every day, EMO attracts new researchers and practitioners into the field.
In this tutorial, we shall provide a systematic and chronological account of how the EMO field started, details of a few key EMO algorithms that made the field popular, and key applications that showcase their practical importance. We shall discuss in detail current research topics that will give newcomers directions to get started. Some of the topics covered are: evolutionary many-objective optimization, surrogate-assisted EMO, robust and reliability-based EMO, EMO with decision-making, multiobjectivization, EMO-based knowledge extraction, theoretical advancements of EMO, and others. Finally, we shall present the presenter's view of immediate and future research ideas for the field, including EMO for dynamic problems, EMO for bilevel problems, EMO for machine learning including CNN and DNN architecture search, and EMO for very large-scale problems. The tutorial will conclude by pointing to a number of resources (books, public-domain codes, key websites, and others) and by demonstrating the working of a few public-domain codes.
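As one concrete ingredient of the key EMO algorithms mentioned above, the sketch below shows non-dominated sorting, the ranking step popularized by NSGA-II, in a deliberately naive repeated-filtering form; the data points are illustrative and the quadratic bookkeeping of the real algorithm is omitted for clarity.

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse everywhere and
    different, hence strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def nondominated_sort(points):
    """Partition points into successive Pareto fronts F1, F2, ..."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]
print(nondominated_sort(pts))
```

NSGA-II combines this ranking with a crowding-distance measure to break ties within a front; both pieces are discussed among the key algorithms in the tutorial.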

Kalyanmoy Deb

Kalyanmoy Deb is Koenig Endowed Chair Professor at Department of Electrical and Computer Engineering in Michigan State University. His research interests are in evolutionary optimization and their application in multi-criterion optimization, bilevel optimization, modeling, and machine learning. He was awarded IEEE CIS EC Pioneer award, Infosys Prize, TWAS Prize in Engineering Sciences, CajAstur Mamdani Prize, Distinguished Alumni Award from IIT Kharagpur, Edgeworth-Pareto award, Bhatnagar Prize in Engineering Sciences, and Bessel Research award from Germany. He is fellow of IEEE and ASME. He has published over 520 research papers with Google Scholar citation of over 137,000 with h-index 115. More information can be found from http://www.coin-lab.org.

Hyper-heuristics

The automatic design of algorithms has been an early aim of both machine learning and AI, but has proved elusive. The aim of this tutorial is to introduce hyper-heuristics as a principled approach to the automatic design of algorithms. Hyper-heuristics are metaheuristics applied to a space of algorithms; i.e., any general heuristic method of sampling a set of candidate algorithms. In particular, this tutorial will demonstrate how to mine existing algorithms to obtain algorithmic primitives for the hyper-heuristic to compose new algorithmic solutions from, and to employ various types of genetic programming to execute the composition process; i.e., the search of program space.

This tutorial will place hyper-heuristics in the context of genetic programming - which differs in that it constructs solutions from scratch using atomic primitives - as well as genetic improvement - which takes a program as starting point and improves on it (a recent direction introduced by William Langdon).

The approach proceeds from the observation that it is possible to define an invariant framework for the core of any class of algorithms (often by examining existing human-written algorithms for inspiration). The variant components of the algorithm can then be generated by genetic programming. Each instance of the framework therefore defines a family of algorithms. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of this approach, as the template can be chosen to be any executable program and the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refining of the primitives allows for gradual and directed enlarging of the search space until convergence.

This leads to a technique for mass-producing algorithms that can be customised to the context of end-use. This is perhaps best illustrated as follows: typically a researcher might create a travelling salesperson problem (TSP) algorithm by hand. When executed, this algorithm returns a solution to a specific instance of the TSP. We will describe a method that generates TSP algorithms tuned to representative instances of interest to the end-user. This method has been applied to a growing number of domains including: data mining/machine learning; combinatorial problems including bin packing (on- and off-line), Boolean satisfiability, job shop scheduling, and exam timetabling; image recognition; black-box function optimization; wind-farm layout; and the automated design of meta-heuristics themselves (from selection and mutation operators to the overall meta-heuristic architecture).

This tutorial will provide a step-by-step guide which takes the novice through the distinct stages of automatic design. Examples will illustrate and reinforce the issues of practical application. This technique has repeatedly produced results which outperform their manually designed counterparts, and a theoretical underpinning will be given to demonstrate why this is the case. Automatic design will become an increasingly attractive proposition, as the cost of human design will only increase in line with inflation while the speed of processors increases in line with Moore's law, making automatic design attractive for industrial application. Basic knowledge of genetic programming will be assumed.
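To make the idea of searching a space of algorithms concrete, here is a hedged toy sketch: a few low-level heuristics operating on bit-strings, and a hyper-heuristic hill-climbing over *sequences* of those heuristics. The OneMax domain and all parameter values are illustrative stand-ins, not material from the tutorial.

```python
import random

random.seed(2)
N = 20  # bit-string length; OneMax stands in for a real problem domain

# Low-level heuristics: each transforms one candidate solution in place
def flip_random(s):
    s[random.randrange(N)] ^= 1

def flip_first_zero(s):  # a greedy, improving heuristic
    if 0 in s:
        s[s.index(0)] = 1

def reset_random(s):
    s[random.randrange(N)] = random.randint(0, 1)

LLH = [flip_random, flip_first_zero, reset_random]

def run_algorithm(seq, trials=5):
    """Apply a sequence of heuristic indices (an 'algorithm') to fresh
    random instances; return the mean solution quality."""
    total = 0
    for _ in range(trials):
        s = [random.randint(0, 1) for _ in range(N)]
        for h in seq:
            LLH[h](s)
        total += sum(s)
    return total / trials

# Hyper-heuristic: hill-climb in the space of heuristic sequences
seq = [random.randrange(len(LLH)) for _ in range(15)]
best_q = run_algorithm(seq)
for _ in range(200):
    cand = list(seq)
    cand[random.randrange(len(cand))] = random.randrange(len(LLH))
    q = run_algorithm(cand)
    if q >= best_q:
        seq, best_q = cand, q
print(seq, best_q)
```

The evolved sequence is an algorithm tuned to the instance distribution, which is the essence of the mass-production idea above; genetic programming generalizes the flat sequence to richer program structures.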

John R. Woodward

John R. Woodward is head of the Operational Research Group (http://or.qmul.ac.uk/) at QMUL. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, Artificial Intelligence/Machine Learning and in particular Genetic Programming. Publications are at (https://scholar.google.co.uk/citations?user=iZIjJ80AAAAJ&hl=en), and current EPSRC grants are at (https://gow.epsrc.ukri.org/NGBOViewPerson.aspx?PersonId=-485755). Public engagement articles are at (https://theconversation.com/profiles/john-r-woodward-173210/articles). He has worked in industrial, military, educational and academic settings, and been employed by EDS, CERN and RAF and three UK Universities (Birmingham, Nottingham, Stirling).

Daniel R. Tauritz

Daniel R. Tauritz is an Associate Professor in the Department of Computer Science and Software Engineering at Auburn University, a cyber consultant for Sandia National Laboratories, a Guest Scientist at Los Alamos National Laboratory (LANL), the founding director of AU's Biomemetic National Security Artificial Intelligence (BONSAI) Laboratory, founding academic director of the LANL/AU Cyber Security Sciences Institute, and the Chief Cyber AI Strategist of the Auburn Cyber Research Center. He received his Ph.D. in 2002 from Leiden University for Adaptive Information Filtering employing a novel type of evolutionary algorithm. He served previously as GECCO 2010 Late Breaking Papers Chair, GECCO 2012 & 2013 GA Track Co-Chair, GECCO 2015 ECADA Workshop Co-Chair, GECCO 2015 MetaDeeP Workshop Co-Chair, GECCO 2015 Hyper-heuristics Tutorial co-instructor, and GECCO 2015 CBBOC Competition co-organizer. For several years he has served on the GECCO GA track program committee, the Congress on Evolutionary Computation program committee, and a variety of other international conference program committees. His research interests include the design of hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.

Introduction to Genetic Programming

Genetic programming emerged in the early 1990s as one of the most exciting new evolutionary algorithm paradigms. It has rapidly grown into a thriving area of research and application. While sharing the evolution-inspired algorithmic principles of a genetic algorithm, it differs by exploiting an executable genome: genetic programming evolves a 'program' to solve a problem rather than a single solution. This tutorial introduces the basic genetic programming paradigm. It explains how the powerful capability of genetic programming derives from modular algorithmic components: executable representations such as a parse tree; variation operators that preserve syntax and explore a variable-length, hierarchical solution space; appropriately chosen programming functions; and fitness function specification. It provides demos and walks through an example of GP software.
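The components listed above, an executable parse-tree representation, syntax-preserving variation, and a fitness function, can be sketched minimally as follows. This toy (1+1)-style symbolic-regression example is illustrative and not taken from the tutorial's own demos.

```python
import random

random.seed(3)

# Parse trees as nested tuples: ('add', left, right); terminals are 'x' or constants
FUNCS = {'add': lambda a, b: a + b, 'mul': lambda a, b: a * b,
         'sub': lambda a, b: a - b}
TERMS = ['x', 1.0]

def rand_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    f = random.choice(list(FUNCS))
    return (f, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == 'x':
        return x
    if isinstance(t, float):
        return t
    f, a, b = t
    return FUNCS[f](evaluate(a, x), evaluate(b, x))

def mutate(t):
    """Syntax-preserving variation: replace a random subtree with a fresh one."""
    if not isinstance(t, tuple) or random.random() < 0.3:
        return rand_tree(2)
    f, a, b = t
    return (f, mutate(a), b) if random.random() < 0.5 else (f, a, mutate(b))

# Fitness: error against the target function f(x) = x*x + x
def error(t):
    return sum(abs(evaluate(t, x) - (x * x + x)) for x in range(-5, 6))

best = rand_tree()
for _ in range(2000):  # (1+1)-style GP hill-climber
    cand = mutate(best)
    if error(cand) <= error(best):
        best = cand
print(best, error(best))
```

A full GP system adds a population and subtree crossover, but the same three components - executable representation, syntax-preserving variation, fitness - carry over unchanged.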

 

Una-May O'Reilly

Una-May O'Reilly is leader of the AnyScale Learning For All (ALFA) group at MIT CSAIL. ALFA focuses on evolutionary algorithms, machine learning and frameworks for large scale knowledge mining, prediction and analytics. The group has projects in cyber security using coevolutionary algorithms to explore adversarial dynamics in networks and malware detection. Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society of Genetic and Evolutionary Computation, which has evolved into ACM Sig-EVO. She now serves as Vice-Chair of ACM SigEVO. She served as chair of the largest international Evolutionary Computation Conference, GECCO, in 2005.

Erik Hemberg

Erik Hemberg is a Research Scientist in the AnyScale Learning For All (ALFA) group at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Lab, USA. He has a PhD in Computer Science from University College Dublin, Ireland, and an MSc in Industrial Engineering and Applied Mathematics from Chalmers University of Technology, Sweden. He has 10 years of experience in EC, focusing on the use of programs with grammatical representations, estimation of distribution, and coevolution. His work has been applied to networks, tax avoidance, and cyber security.

Introductory Mathematical Programming for EC

Global optimization of complex models has for several decades been approached by means of formal algorithms from Mathematical Programming (MP) (often branded as Operations Research, yet strongly rooted in Theoretical CS), and has simultaneously been treated by a wide range of dedicated heuristics (frequently under the label of Soft Computing), where EC resides. The former is applicable when explicit modeling is available, whereas the latter is typically utilized for simulation- or experimentation-based optimization (but is also applicable to explicit models). These two branches complement each other, yet in practice are studied under two independent CS disciplines.
It is widely recognized nowadays that EC scholars become stronger, better-equipped researchers when obtaining knowledge of this so-called "optimization complement". In other words, the claim that EC education can do without basic MP is untenable at present times, and this tutorial aims at bridging the gap for our community's scholars.
The tutorial comprises three parts, aiming to introduce basic MP for EC audience.
The first part introduces the fundamentals of MP. It overviews mathematical optimization and outlines its taxonomy when classified by the underlying model: Convex Optimization (linear programs (pure-LP) and non-linear programs), versus Combinatorial Optimization (integer and mixed-integer linear programs (M)ILP, integer quadratic programs IQP). It then discusses some of the theoretical aspects, such as polyhedra and the duality theorem.
The second part focuses on MP in practice. The tutorial presents the principles of MP modeling, with emphasis on the roles of constraints and auxiliary/dummy decision variables in MP. It is compared to equivalent modeling for EC heuristics, which operate differently with respect to such components. It then covers selected algorithms for the major classes of problems (Dantzig's Simplex for pure-LP, Ellipsoid for convex models, and Branch-and-bound for ILP).
The third part constitutes an interactive demo session of problem-solving using IBM's CPLEX.
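The tutorial's demo uses IBM's CPLEX; as a self-contained stand-in, the sketch below illustrates the branch-and-bound idea covered in the second part on a tiny 0/1 knapsack ILP, using the greedy fractional-knapsack solution (the LP relaxation of this problem) as the upper bound. The instance data are arbitrary.

```python
# Branch-and-bound for a 0/1 knapsack ILP:
#   maximize sum(v_i * x_i)  s.t.  sum(w_i * x_i) <= CAP,  x_i in {0, 1}
values  = [60, 100, 120]
weights = [10, 20, 30]
CAP = 50

def lp_bound(i, cap):
    """Upper bound from the LP relaxation: greedy fractional knapsack
    over the remaining items i..n, sorted by value/weight ratio."""
    total = 0.0
    for v, w in sorted(zip(values[i:], weights[i:]), key=lambda p: -p[0] / p[1]):
        if w <= cap:
            total += v
            cap -= w
        else:
            total += v * cap / w  # take a fraction of the last item
            break
    return total

best = 0
def branch(i, cap, value):
    global best
    if value > best:
        best = value  # new incumbent
    if i == len(values) or value + lp_bound(i, cap) <= best:
        return  # prune: the relaxation bound cannot beat the incumbent
    if weights[i] <= cap:                       # branch x_i = 1
        branch(i + 1, cap - weights[i], value + values[i])
    branch(i + 1, cap, value)                   # branch x_i = 0

branch(0, CAP, 0)
print(best)
```

A real solver such as CPLEX applies the same bound-and-prune principle, but with simplex-based LP relaxations, cutting planes, and sophisticated branching rules.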

Ofer Shir

Ofer Shir is a Senior Lecturer at the Computer Science Department in Tel-Hai College, and a Principal Investigator at Migal-Galilee Research Institute – both located in the Upper Galilee, Israel.
Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel, and both MSc and PhD in Computer Science from Leiden University, The Netherlands (PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz, where he specialized in computer science aspects of experimental quantum systems. He then joined IBM Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics.
His current fields of interest encompass Statistical Learning in Theory and in Practice, Experimental Optimization, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Information.

Learning Classifier Systems: From Principles to Modern Systems

Learning Classifier Systems (LCSs) emerged in the late 1970s and have since attracted a lot of research attention. Originally introduced as a technique to model adaptive agents within Holland's notion of complex adaptive systems, various enhancements toward a full-fledged Machine Learning (ML) technique with an evolutionary component at its core have appeared. Nowadays, several variations exist that are capable of dealing with most modern ML tasks, including online and batch-wise supervised learning, single- and multi-step reinforcement learning, and even unsupervised learning. This great flexibility, which is due to their modular and clearly defined algorithmic structure paving the way for simple and straightforward adaptations, is unique in the field of Evolutionary Machine Learning (EML), earning the LCS paradigm an indisputable permanent place in the EML field. Despite this well-known blueprint comprising the building blocks that bring LCSs to function, gaining theoretical insights into the interplay between them has long been a crucial research topic, and it still constitutes a subject of active research.

In this tutorial, the main goal is to introduce exactly these building blocks of LCS-based EML and to conceptually develop a modern, generic Michigan-style LCS step by step from scratch. The resulting generic LCS is then brought in line with the most thoroughly investigated representative, Wilson's XCS Classifier System (or simply XCS), which serves as the reference system for the subsequent parts.
In order to provide a holistic view of LCSs, the first part of the tutorial also sketches their lineage and historical development, before quickly focusing on the more prominent Michigan-style systems.

NEW: Recent theoretical advances in XCS research will be the subject of the tutorial's second part. Attendees will become acquainted with the fundamental challenges, and with the bounds of what XCS can achieve and under which circumstances.

The third part of the tutorial is devoted to the state of the art in LCS research in terms of real-world applicability. The most recent advances that have led to modern systems such as XCSF for online function approximation or ExSTraCS for large-scale supervised data mining will thus be the subject of discussion.
The tutorial closes with a wrap-up of the distinctive potentials of LCSs and suggestions for a complementary ("traditional") ML-centric view on this unique paradigm, together with a brief review of the most recent endeavors to tackle unsolved issues, giving the audience an impression of where LCS research stands today and which questions remain open.

Anthony Stein

Anthony Stein is a postdoctoral research associate with the Institute of Computer Science at the University of Augsburg, Germany. He received his bachelor's degree (B.Sc.) in Business Information Systems from the University of Applied Sciences Augsburg in 2012. He then moved on to the University of Augsburg for his master's degree (M.Sc.) in computer science with a minor in information economics, which he received in 2014. He received his doctorate (Dr. rer. nat.) in computer science in 2019. His research is concerned with the application of AI methodology and evolutionary machine learning algorithms to complex self-adaptive and self-organizing (SASO) systems. Dr. Stein is involved in the organization of workshops on intelligent systems and evolutionary machine learning. He serves as a reviewer for international conferences and journals, including ACM GECCO and IEEE T-EVC.

Masaya Nakata

Masaya Nakata received the B.A. and M.Sc. degrees in informatics from the University of Electro-Communications, Chofu, Tokyo, Japan, in 2011 and 2013, respectively. He is a Ph.D. candidate at the University of Electro-Communications, a research fellow of the Japan Society for the Promotion of Science, Chiyoda-ku, Tokyo, Japan, and has been a visiting student at the School of Engineering and Computer Science, Victoria University of Wellington, since 2014. He was a visiting student at the Department of Electronics and Information, Politecnico di Milano, Milan, Italy, in 2013, and at the Department of Computer Science, University of Bristol, Bristol, UK, in 2014. His research interests are in evolutionary computation, reinforcement learning, and data mining, more specifically in learning classifier systems. He received the best paper award and the IEEE Computational Intelligence Society Japan Chapter Young Researcher Award at the Japanese Symposium on Evolutionary Computation 2012. He was a co-organizer of the International Workshop on Learning Classifier Systems (IWLCS) in 2015-2016.

Model-based Evolutionary Algorithms

In model-based evolutionary algorithms the variation operators are guided by the
use of a model that conveys problem-specific information so as to increase the
chances that combining the currently available solutions leads to improved
solutions. Such models can be constructed beforehand for a specific problem, or
they can be learnt during the optimization process. Well-known examples of
the latter are Estimation-of-Distribution Algorithms (EDAs), which build
probabilistic models of promising solutions and subsequently sample these
models to generate new solutions.
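As a minimal illustration of the EDA idea (the code, the OneMax fitness function, and all parameter settings below are illustrative choices, not material from the tutorial), a univariate EDA in the spirit of UMDA can be sketched in a few lines:

```python
import random

def umda_onemax(n=20, pop_size=50, select=25, generations=60, seed=1):
    """Minimal univariate EDA (UMDA-style) maximizing OneMax.

    The model is one independent Bernoulli probability per bit position,
    re-estimated each generation from the selected (fittest) solutions.
    """
    rng = random.Random(seed)
    p = [0.5] * n  # initial model: each bit is 1 with probability 0.5
    best = None
    for _ in range(generations):
        # Sample a population from the current probabilistic model.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)  # OneMax fitness = number of ones
        elite = pop[:select]
        # Re-estimate the marginal probabilities from the selected solutions,
        # clamping them to avoid premature fixation of any bit.
        p = [min(0.9, max(0.1, sum(x[i] for x in elite) / select))
             for i in range(n)]
        if best is None or sum(pop[0]) > sum(best):
            best = pop[0]
    return best

best = umda_onemax()
```

Multivariate EDAs generalize this by learning dependencies between variables (e.g., a Bayesian network or a linkage tree) instead of independent marginals.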

In general, replacing traditional crossover and mutation operators by building
and using models enables the use of machine learning techniques for automatic
discovery of problem regularities and subsequent exploitation of these
regularities, thereby enabling the design of optimization techniques that can
automatically adapt to a given problem. This is an especially useful feature
when considering optimization in a black-box setting. The use of models can
furthermore also have major implications for grey-box settings where not
everything about the problem is considered to be unknown a priori.

Successful applications include Ising spin glasses in 2D and 3D, graph
partitioning, MAXSAT, scheduling, feature construction, windfarm layouting, and
cancer radiation treatment optimization.

Although EDAs are the best-known example of model-based EAs, other, more
recent approaches exist as well. Of particular interest is the family of
Optimal Mixing EAs (which includes the Linkage Tree Genetic Algorithm and
various other GOMEA variants). The tutorial will mainly focus on these
types of MBEAs.

Dirk Thierens

Dirk Thierens is affiliated with the Department of Information and Computing Sciences at Utrecht University, the Netherlands, where he teaches courses on Evolutionary Computation and Computational Intelligence. He has been involved in genetic algorithm research since 1990. His current research interests are mainly focused on the design and application of model learning techniques to improve evolutionary search. Dirk is (or has been) a member of the Editorial Boards of the journals Evolutionary Computation, Evolutionary Intelligence, and IEEE Transactions on Evolutionary Computation, and a member of the program committees of the major international conferences on evolutionary computation. He was an elected member of the ACM SIGEVO board and contributed to the organization of several GECCO conferences: workshop co-chair (2003, 2004), track (co-)chair (2004, 2006, 2014), and Editor-in-Chief (2007).

Peter A.N. Bosman

Peter A. N. Bosman is a senior researcher in the Life Sciences research group at the Centrum Wiskunde & Informatica (CWI) (Centre for Mathematics and Computer Science) located in Amsterdam, the Netherlands. Peter was formerly affiliated with the Department of Information and Computing Sciences at Utrecht University, where he also obtained both his MSc and PhD degrees in Computer Science, more specifically on the design and application of estimation-of-distribution algorithms (EDAs). He has (co-)authored over 90 refereed publications on both algorithmic design aspects and real-world applications of evolutionary algorithms. At the GECCO conference, Peter has previously been track (co-)chair (EDA track, 2006, 2009), late-breaking-papers chair (2007), (co-)workshop organizer (OBUPM workshop, 2006; EvoDOP workshop, 2007; GreenGEC workshop, 2012-2014), (co-)local chair (2013) and general chair (2017).

Neuroevolution for Deep Reinforcement Learning Problems

In recent years, there has been a resurgence of interest in reinforcement learning (RL), particularly in the deep learning community. While much of the attention has been focused on using value-function learning approaches (e.g., Q-learning) or estimated policy-gradient approaches to train neural-network policies, little attention has been paid to Neuroevolution (NE) for policy search. The larger research community may have forgotten about previous successes of Neuroevolution [1].

Some of the most challenging reinforcement learning problems are those where reward signals are sparse and noisy. For many of these problems, we only know the outcome at the end of the task, such as whether the agent wins or loses, whether the robot arm picks up the object or not, or whether the agent has survived. Since NE requires only the final cumulative reward that an agent obtains at the end of its rollout in an environment, these are the types of problems where NE may have an advantage over traditional RL methods.
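This "only the final return matters" idea can be sketched with a simple evolution-strategy loop in plain Python. The toy black-box "episode" and all parameter values below are illustrative assumptions for the sketch, not code from the tutorial:

```python
import random

def episode_return(params, target=(0.5, -0.3, 0.8)):
    """Black-box rollout: only the final cumulative reward is observed.
    This stands in for running a full episode in a real environment."""
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def simple_es(dim=3, pop=30, sigma=0.2, lr=0.03, iters=300, seed=0):
    """OpenAI-ES-style search: perturb the policy parameters with Gaussian
    noise, weight each perturbation by the episodic return it earned, and
    move the mean parameters in that direction. No policy gradients or
    value estimates are needed, only final returns."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(iters):
        noises, returns = [], []
        for _ in range(pop):
            eps = [rng.gauss(0, 1) for _ in range(dim)]
            noises.append(eps)
            returns.append(
                episode_return([wi + sigma * e for wi, e in zip(w, eps)]))
        mean_r = sum(returns) / pop
        # Estimated search direction: return-weighted average of the noise.
        w = [wi + (lr / (pop * sigma))
             * sum((r - mean_r) * ns[i] for r, ns in zip(returns, noises))
             for i, wi in enumerate(w)]
    return w

w = simple_es()
```

Because each rollout is independent, the inner loop over the population parallelizes trivially, which is what makes this family of methods attractive for distributed cloud compute.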

In this tutorial, I show how Neuroevolution can be successfully applied to Deep RL problems to help find a suitable set of model parameters for a neural network agent. Using popular modern software frameworks for RL (TensorFlow, OpenAI Gym, pybullet, roboschool), I will apply NE to continuous control robotic tasks and show that we can obtain very good results controlling bipedal robot walkers, a Kuka robot arm for grasping tasks, the Minitaur robot, and various existing baseline locomotion tasks common in the Deep RL literature. I will show that NE can even obtain state-of-the-art results [2] compared to Deep RL methods, and highlight ways to use NE that lead to more stable and robust policies than traditional RL methods. I will also describe how to incorporate NE techniques into existing RL research pipelines, taking advantage of distributed processing on cloud compute [2, 3].

I will also discuss how to combine techniques from deep learning, such as the use of deep generative models, with Neuroevolution to solve more challenging Deep Reinforcement Learning problems that rely on high-dimensional video inputs for continuous robotics control, or for video game simulation tasks. We will look at combining model-based reinforcement learning approaches with Neuroevolution to tackle these problems, using TensorFlow, OpenAI Gym, and pybullet environments.

Lastly, we will cover developments of the past two years in which Neuroevolution has successfully incorporated concepts from Deep Learning/RL, and vice versa. I will discuss recent topics that explore the use of neural network topology evolution to find architectures that have strong inductive biases for RL tasks, and show that such architectures can work even without learning the weight parameters of the network.

[1] Risto Miikkulainen's Slides on Neuroevolution.

http://nn.cs.utexas.edu/downloads/slides/miikkulainen.ijcnn13.pdf

[2] Evolving Stable Strategies

http://blog.otoro.net/2017/11/12/evolving-stable-strategies/

[3] Evolution Strategies as a Scalable Alternative to Reinforcement Learning

https://arxiv.org/abs/1703.03864

David Ha

David is a Research Scientist at Google Brain. His research interests include representation learning, artificial creativity, and evolutionary computing. Prior to joining Google, he worked at Goldman Sachs as a Managing Director, where he ran the fixed-income trading business in Japan. He obtained undergraduate and graduate degrees in Engineering Science and Applied Math from the University of Toronto.

Representations for Evolutionary Algorithms

Successful and efficient use of evolutionary algorithms (EA) depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.

Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance. Relevant concepts are the locality and redundancy of representations.

Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations have high locality if the application of variation operators results in new solutions similar to the original ones. Representations are redundant if the number of genotypes exceeds the number of phenotypes. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if search operators favor some kinds of phenotypes.
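These notions can be made concrete with a classic comparison of standard binary versus Gray encodings of integers. The small self-contained experiment below (the encodings are standard; the code and the particular locality measures are my own choices for brevity) computes the average phenotypic jump caused by a one-bit mutation, and how often a mutation moves the phenotype by exactly 1:

```python
def binary_decode(bits):
    """Phenotype of a standard binary genotype: its integer value (MSB first)."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def gray_decode(bits):
    """Phenotype of a Gray-coded genotype (MSB first)."""
    acc, decoded = bits[0], [bits[0]]
    for b in bits[1:]:
        acc ^= b
        decoded.append(acc)
    return binary_decode(decoded)

def mutation_stats(decode, n=8):
    """Locality under one-bit mutation: average phenotypic jump, and the
    number of (genotype, bit) pairs whose mutation moves the phenotype by
    exactly 1, enumerated over all genotypes and bit positions."""
    total = small = 0
    for g in range(2 ** n):
        bits = [(g >> (n - 1 - i)) & 1 for i in range(n)]
        base = decode(bits)
        for i in range(n):
            mutant = bits[:]
            mutant[i] ^= 1
            jump = abs(decode(mutant) - base)
            total += jump
            small += (jump == 1)
    return total / (n * 2 ** n), small

jump_binary, small_binary = mutation_stats(binary_decode)
jump_gray, small_gray = mutation_stats(gray_decode)
```

For 8-bit genotypes both encodings have the same average jump (31.875), but under Gray coding far more mutations (510 versus 256 genotype-bit pairs) produce a minimal phenotypic step: locality depends on more than a single summary statistic, which is exactly why representation-operator combinations must be analysed together.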

The tutorial gives a brief overview about existing guidelines for representation design, illustrates the different aspects of representations, gives a brief overview of models describing the different aspects, and illustrates the relevance of the aspects with practical examples.

It is expected that the participants have a basic understanding of EA principles.

Franz Rothlauf

Franz Rothlauf received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively.

Since 2007, he has been Professor of Information Systems at the University of Mainz. He has published more than 90 technical papers in the context of planning and optimization, evolutionary computation, e-business, and software engineering, co-edited several conference proceedings and edited books, and is author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics". Since 2013, he has been Academic Director of the Executive MBA program at the University of Mainz.

His main research interests are the application of modern heuristics in planning and optimization systems. He is a member of the Editorial Boards of the Evolutionary Computation Journal (ECJ) and Business & Information Systems Engineering (BISE). Since 2007, he has been a member of the Executive Committee of ACM SIGEVO, and he has served as its treasurer since 2011. He has been an organizer of many workshops and tracks on heuristic optimization issues, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems" and on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.

Runtime Analysis of Population-based Evolutionary Algorithms

Populations are at the heart of evolutionary algorithms (EAs). They
provide the genetic variation which selection acts upon. A complete
picture of EAs can only be obtained if we understand their population
dynamics. A rich theory on runtime analysis (also called
time-complexity analysis) of EAs has been developed over the last 20
years. The goal of this theory is to show, via rigorous mathematical
means, how the performance of EAs depends on their parameter settings
and the characteristics of the underlying fitness landscapes.
Initially, runtime analysis of EAs was mostly restricted to simplified
EAs that do not employ large populations, such as the (1+1) EA. This
tutorial introduces more recent techniques that enable runtime
analysis of EAs with realistic population sizes.

The tutorial begins with a brief overview of the population-based EAs
that are covered by the techniques. We recall the common stochastic
selection mechanisms and how to measure the selection pressure they
induce. The main part of the tutorial covers in detail widely
applicable techniques tailored to the analysis of populations. We
discuss random family trees and branching processes, drift and
concentration of measure in populations, and level-based analyses.

To illustrate how these techniques can be applied, we consider several
fundamental questions: When are populations necessary for efficient
optimisation with EAs? What is the appropriate balance between
exploration and exploitation and how does this depend on relationships
between mutation and selection rates? What determines an EA's
tolerance for uncertainty, e.g. in form of noisy or partially
available fitness?

Per Kristian Lehre

Per Kristian Lehre is a Senior Lecturer at the University of Birmingham, UK.

He received MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU). After finishing his PhD in 2006, he held postdoctoral positions in the School of Computer Science at the University of Birmingham and at the Technical University of Denmark. From 2011 he was a Lecturer in the School of Computer Science at the University of Nottingham, until 2017, when he returned to Birmingham.

Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular, runtime analysis of population-based evolutionary algorithms. His research has won several best paper awards, including at GECCO (2013, 2010, 2009, 2006), ICSTW (2008), and ISAAC (2014). He is editorial board member of Evolutionary Computation, and associate editor of IEEE Transactions on Evolutionary Computation. He was the coordinator of the successful 2M euro EU-funded project SAGE which brought together the theory of evolutionary computation and population genetics.

 

Pietro Oliveto

NEW Theoretical Foundations of Evolutionary Computation for Beginners and Veterans

While theory in the last 10 years has largely focused on runtime analysis for polynomial-time problems, there exist more than 40 years of theoretical research in evolutionary computation, most of which has nothing to do with runtime analysis. For example, it is not widely known that the behavior of an Evolutionary Algorithm can be influenced by attractors that exist outside the space of the feasible population. The tutorial will mainly focus on the application of evolutionary algorithms to combinatorial problems.

This talk will cover some of the classic theoretical results from the field of evolutionary algorithms, as well as more general theory from operations research and mathematical methods for optimization. The tutorial will review pseudo-Boolean optimization, as well as the representation of functions as both multilinear polynomials and Fourier polynomials. It will also explain how every pseudo-Boolean optimization problem can be converted into a k-bounded form, and how, for every k-bounded pseudo-Boolean optimization problem, the location of improving moves (i.e., bit flips) can be computed in constant time, making simple mutation operators unnecessary.
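The constant-time improving-move idea rests on a simple observation: in a k-bounded function, flipping one bit can only change the subfunctions that mention that bit. A minimal sketch (the toy instance below is invented for illustration, not from the tutorial):

```python
# A k-bounded pseudo-Boolean function: a sum of subfunctions, each
# depending on at most k = 3 of the n bits.
n = 10
subfunctions = [
    ((0, 1, 2), lambda a, b, c: a * b - c),
    ((2, 3, 4), lambda a, b, c: a + b * c),
    ((5, 6),    lambda a, b: a ^ b),
    ((7, 8, 9), lambda a, b, c: a * b * c),
]

def full_eval(x):
    """Evaluate f(x) by summing all subfunctions."""
    return sum(f(*(x[i] for i in idx)) for idx, f in subfunctions)

# Index: which subfunctions mention each bit. Flipping one bit can only
# change the subfunctions listed under that bit, so the effect of a move
# needs only a constant number of subfunction evaluations (for
# bounded-occurrence instances).
touches = {i: [] for i in range(n)}
for s, (idx, _) in enumerate(subfunctions):
    for i in idx:
        touches[i].append(s)

def flip_delta(x, bit):
    """Change in f(x) caused by flipping one bit, computed from the
    touched subfunctions only, not by re-evaluating the whole function."""
    y = x[:]
    y[bit] ^= 1
    delta = 0
    for s in touches[bit]:
        idx, f = subfunctions[s]
        delta += f(*(y[i] for i in idx)) - f(*(x[i] for i in idx))
    return delta
```

In a real next-improvement local search, these deltas are stored for all one-bit flips and updated incrementally after each accepted move, so that proposing the next improving move costs amortized constant time.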

The tutorial will cover 1) No Free Lunch (NFL), 2) Sharpened No Free Lunch, and 3) Focused No Free Lunch, and how the different theoretical proofs can lead to seemingly different and even contradictory conclusions. (What many researchers know about NFL might actually be wrong.)
The tutorial will also cover the relationship between functions and representations, the space of all possible representations, and the duality between search algorithms and landscapes. This perspective is critical to understanding landscape analysis, landscape visualization, variable neighborhood search methods, memetic algorithms, and self-adaptive search methods.

Other topics include both infinite and finite models of population trajectories. The tutorial will explain both Elementary Landscapes and eigenvectors of search neighborhoods in simple terms and explain how the two are related. Example domains include classic NP-Hard problems such as Graph Coloring and MAX-kSAT.

Finally, the tutorial will also offer a cautionary critique of theory. While theoretical results are typically based on proofs, virtually all proofs are based on assumptions. And yet, theory is sometimes leveraged far beyond its supporting assumptions, which can lead to misleading claims and false conclusions. Such examples have occurred over and over again in the field of Evolutionary Computation. Every researcher in the field needs to be a wiser consumer of both theoretical and empirical results.

Darrell Whitley

Prof. Darrell Whitley has been active in Evolutionary Computation since 1986, and has published more than 200 papers. These papers have garnered more than 24,000 citations. Dr. Whitley’s H-index is 63. He introduced the first “steady state genetic algorithm” with rank based selection, published the first papers on neuroevolution, and has worked on dozens of real world applications of evolutionary algorithms. He has served as Editor-in-Chief of the journal Evolutionary Computation, and served as Chair of the Governing Board of ACM SIGEVO from 2007 to 2011.

Advanced Tutorials

NEW A Hands-on Guide to Distributed Computing Paradigms for Evolutionary Computation

Recent advances in machine learning are consistently enabled by increasing amounts of computation. When such computation is beyond the capacity of a single computer, people rely on distributed computing, which scales large, parallelizable computing loads across multiple computers for better efficiency and performance. Evolutionary algorithms (EAs) often consist of components and submodules that can be executed in a parallel or asynchronous manner, and could therefore benefit tremendously from following suitable distributed computing paradigms. However, architecting and setting up parallel algorithms to run over multiple computers is often non-trivial, which makes it difficult for researchers to efficiently utilize the computational resources available on clusters or in the cloud.

This tutorial aims to address this gap between efficient algorithm development and scalable distributed execution of algorithms. It provides a hands-on guide on distributed computing paradigms that are suitable for EAs and reinforcement learning (RL) algorithms, and introduces tools and libraries that allow researchers to easily switch between developing algorithms on a local machine and launching the execution of their algorithms in a distributed fashion across multiple machines for best performance.

The tutorial begins with a review of common computational patterns in typical EAs and RL algorithms, and then highlights the challenges that these algorithms pose to distributed computing frameworks in terms of reliability, efficiency, and user-friendliness. Next, we briefly review and compare a few classic general-purpose distributed computing frameworks (e.g., MPI and ipyparallel) and explain their limitations when applied to EAs and RL. We then dive into more recent frameworks (such as Ray and Fiber), explain how they overcome the limitations of the classic ones, review their design and usage, and compare their performance on benchmark examples.
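The core computational pattern all of these frameworks accelerate is, at bottom, a parallel map over independent fitness evaluations. Purely as an illustration of that shape of API, using only Python's standard library rather than any of the frameworks named above (the fitness function is a trivial stand-in):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(candidate):
    """Stand-in fitness rollout; in practice this is the expensive part
    (a simulation or an episode) that justifies distributing the work."""
    return sum(c * c for c in candidate)

population = [[i, i + 1, i + 2] for i in range(8)]

# The pool-map pattern: distributed frameworks expose essentially this
# API, but schedule the calls across processes or machines instead of
# local worker threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    fitnesses = list(pool.map(evaluate, population))
```

Because the evaluations share no state, the same code structure survives the move from threads to processes to a cluster; what changes is the scheduling, serialization, and fault tolerance that the frameworks handle for you.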

The second part of the tutorial is a live demo/interactive session on Fiber, a Python library for distributed computing developed and used at Uber AI to support research in evolutionary computation and RL. We will demonstrate how to install and set up Fiber, how to architect and adapt Python programs that run on a single machine so that they run in a distributed manner over multiple machines with Fiber's APIs, and how to visualize the progress and monitor the health of distributed jobs. We will also demonstrate advanced use cases of Fiber alongside an evolutionary computation framework (DEAP) and typical deep learning frameworks (PyTorch, TensorFlow, and JAX), and show examples of scaling the execution of some classic EAs (such as evolution strategies and genetic algorithms), RL algorithms, and other use cases, e.g., Population-Based Training.

 

Rui Wang

 

Jiale Zhi

Jiale Zhi is a Senior Software Engineer and Tech Lead at Uber AI in San Francisco. His area of interest is distributed computing, big data, scientific computation, evolutionary computing, and reinforcement learning. He is also interested in real-world applications of machine learning in traditional software engineering, such as reinforcement learning for computer cluster scheduling. He is the creator of the Fiber project, a scalable, distributed framework for large scale parallel computation applications. Before Uber AI, he was a Tech Lead in Uber's edge team, which manages Uber's global mobile network traffic and routing.

NEW Benchmarking and Analyzing Iterative Optimization Heuristics with IOHprofiler

IOHprofiler is a new benchmarking environment that has been developed for a highly versatile analysis of iterative optimization heuristics (IOHs) such as evolutionary algorithms, local search algorithms, model-based heuristics, etc. A key design principle of IOHprofiler is its highly modular setup, which makes it easy for its users to add algorithms, problems, and performance criteria of their choice. IOHprofiler is also useful for the in-depth analysis of the evolution of adaptive parameters, which can be plotted against fixed targets or fixed budgets. The analysis of robustness is also supported.

IOHprofiler supports all types of optimization problems and is not restricted to a particular search domain. A web-based interface for its analysis procedure is available at http://iohprofiler.liacs.nl/; the tool itself is available on GitHub (https://github.com/IOHprofiler/IOHanalyzer) and as a CRAN package (https://cran.rstudio.com/web/packages/IOHanalyzer/index.html).

The tutorial addresses all GECCO participants interested in analyzing and comparing heuristic solvers. By the end of the tutorial, the participants will know how to benchmark different solvers with IOHprofiler, which performance statistics it supports, and how to contribute to its design.

 

Thomas Baeck

Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has been head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas has more than 300 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and the Handbook of Natural Computing, and co-editor-in-chief of Springer's Natural Computing book series. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.

Carola Doerr

Carola Doerr, formerly Winzen, is a permanent CNRS researcher at Sorbonne University in Paris, France. Carola's main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers. She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to exploring the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove the superiority of dynamic parameter choices in evolutionary computation, a topic that she believes carries huge unexplored potential for the community. Carola has received several awards for her work on evolutionary computation, among them the Otto Hahn Medal of the Max Planck Society and four best paper awards at GECCO. She is/was program chair of PPSN 2020, FOGA 2019, and the theory tracks of GECCO 2015 and 2017. Carola is an editor of two special issues of Algorithmica. She is also vice chair of the EU-funded COST Action 15140 "Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice (ImAppNIO)".

Ofer Shir

Ofer Shir is a Senior Lecturer at the Computer Science Department in Tel-Hai College, and a Principal Investigator at Migal-Galilee Research Institute – both located in the Upper Galilee, Israel.
Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel, and both MSc and PhD in Computer Science from Leiden University, The Netherlands (PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz, where he specialized in computer science aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics.
His current fields of interest encompass Statistical Learning in Theory and in Practice, Experimental Optimization, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Information.

 

Hao Wang

Decomposition Multi-Objective Optimisation: Current Developments and Future Opportunities

Evolutionary multi-objective optimisation (EMO) has been a major research topic in the field of evolutionary computation for many years. It has been generally accepted that a combination of evolutionary algorithms and traditional optimisation methods should form the next generation of multi-objective optimisation solvers. As the name suggests, the basic idea of the decomposition-based technique is to transform the original complex problem into simplified subproblem(s) so as to facilitate the optimisation. Decomposition methods have been well used and studied in traditional multi-objective optimisation. MOEA/D, the multi-objective evolutionary algorithm based on decomposition, decomposes a multi-objective problem into a number of subproblems and then solves them in a collaborative manner. MOEA/D provides a very natural bridge between multi-objective evolutionary algorithms and traditional decomposition methods, and it has become a commonly used evolutionary algorithmic framework in recent years.

Within this tutorial, a comprehensive introduction to MOEA/D will be given and selected research results will be presented in more detail. More specifically, we are going to (i) introduce the basic principles of MOEA/D in comparison with the other two state-of-the-art EMO frameworks, i.e., Pareto-based and indicator-based frameworks; (ii) present a general overview of state-of-the-art MOEA/D variants and their applications; and (iii) discuss future opportunities for further developments.
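As a concrete example of the decomposition step, the widely used Tchebycheff approach turns each weight vector into one single-objective subproblem, g(x | w, z*) = max_i w_i |f_i(x) - z*_i|, where z* is the ideal point. The sketch below uses a toy bi-objective problem and solves each subproblem by grid search, purely to show how different weight vectors select different points along the Pareto front (a real MOEA/D evolves a population collaboratively instead):

```python
def tchebycheff(fvals, weights, ideal):
    """Tchebycheff scalarisation: one single-objective subproblem per
    weight vector; minimising it pushes the solution toward the part of
    the Pareto front associated with that weight vector."""
    return max(w * abs(f - z) for f, w, z in zip(fvals, weights, ideal))

def objectives(x):
    """Toy bi-objective problem: f1(x) = x^2, f2(x) = (x - 2)^2 on [0, 2]."""
    return (x * x, (x - 2.0) ** 2)

ideal = (0.0, 0.0)
# Evenly spread weight vectors, each defining one subproblem; in MOEA/D,
# neighbouring subproblems share information during the search.
weight_vectors = [(i / 4.0, 1.0 - i / 4.0) for i in range(5)]

# Solve each subproblem by brute force over a grid of x values.
grid = [i / 1000.0 * 2.0 for i in range(1001)]
front = []
for w in weight_vectors:
    best = min(grid, key=lambda x: tchebycheff(objectives(x), w, ideal))
    front.append(best)
```

The resulting solutions sweep from x = 2 (all weight on f2) down to x = 0 (all weight on f1), with intermediate weight vectors landing at trade-off points in between.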

The intended audience of this tutorial includes both novices and people familiar with EMO or MOEA/D. In particular, the tutorial is self-contained: the foundations of multi-objective optimisation and the basic working principles of EMO algorithms will be covered for those without prior experience in EMO. Open questions will be posed and highlighted for discussion in the latter part of the tutorial.

 

Ke Li

Ke Li is a Senior Lecturer (Associate Professor) in Computer Science at the Department of Computer Science, University of Exeter. He earned his PhD from City University of Hong Kong. Afterwards, he spent a year as a postdoctoral research associate at Michigan State University. Then, he moved to the UK and took the post of research fellow at University of Birmingham. His current research interests include the evolutionary multi-objective optimisation, automatic problem solving, machine learning and applications in water engineering and software engineering. Recently, he has been awarded a prestigious UKRI Future Leaders Fellowship.

Qingfu Zhang

Qingfu Zhang is a Professor at the Department of Computer Science, City University of Hong Kong. His main research interests include evolutionary computation, optimization, neural networks, data analysis, and their applications. He is currently leading the Metaheuristic Optimization Research (MOP) Group at City University of Hong Kong. Professor Zhang is an Associate Editor of the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Cybernetics. He was awarded the 2010 IEEE Transactions on Evolutionary Computation Outstanding Paper Award. He is on the Thomson Reuters 2016 and 2017 lists of highly cited researchers in computer science. He is an IEEE Fellow.

Saúl Zapotecas

Saúl Zapotecas is a visiting Professor at the Department of Applied Mathematics and Systems, Division of Natural Sciences and Engineering, Autonomous Metropolitan University, Cuajimalpa Campus (UAM-C). He received his B.Sc. in Computer Science from the Meritorious Autonomous University of Puebla (BUAP), and his M.Sc. and PhD in Computer Science from the Center for Research and Advanced Studies of the National Polytechnic Institute of Mexico (CINVESTAV-IPN). His current research interests include evolutionary computation, multi-/many-objective optimization via decomposition, and multi-objective evolutionary algorithms assisted by surrogate models.

NEW Design Principles for Matrix Adaptation Evolution Strategies

Covariance Matrix Adaptation Evolution Strategies are regarded as
state-of-the-art in evolutionary real-valued parameter optimization.
These strategies learn the covariance matrix in order to generate
suitable mutations that allow for an efficient approximation of the
optimizer of the problem considered. Only recently has it been shown
that there is no need to learn the covariance matrix: the mutation
matrix can be learned directly. That is, one can remove the covariance
matrix machinery from the Evolution Strategy (ES) without performance
degradation. As a result, the algorithms become simpler and can be
easily modified to also tackle large-scale optimization problems and
constrained optimization problems. One such ES was the winner of the
2018 CEC constrained optimization benchmark competition on the
high-dimensional problem instances.

This tutorial provides a gentle introduction to Matrix Adaptation
Evolution Strategies (MA-ES), explaining their design principles.
Building on these principles, it will be shown how the MA-ES can be
modified in order to
a) incorporate self-adaptive behavior,
b) handle large-scale optimization problems with thousands of variables,
c) treat constrained optimization problems.

The tutorial will also show some 2D and 3D graphical demonstrations of
MA-ES at work on constrained high-dimensional optimization problems
with non-linear equality constraints.
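As a concrete illustration of these design principles, here is a compact, pure-Python sketch of a (mu/mu_w, lambda)-MA-ES following the published algorithm design, using the usual CMA-ES default parameter settings. This is our own illustrative reimplementation under those assumptions, not the presenter's code.

```python
import math
import random

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def ma_es(f, y, sigma, iters=250, seed=1):
    """(mu/mu_w, lambda)-MA-ES: adapts the mutation matrix M directly,
    with no covariance matrix and no matrix decomposition."""
    rng = random.Random(seed)
    n = len(y)
    lam = 4 + int(3 * math.log(n))
    mu = lam // 2
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    total = sum(w)
    w = [wi / total for wi in w]                     # recombination weights
    mueff = 1.0 / sum(wi * wi for wi in w)
    c_s = (mueff + 2) / (n + mueff + 5)              # path learning rate
    c_1 = 2 / ((n + 1.3) ** 2 + mueff)               # rank-one rate
    c_mu = min(1 - c_1, 2 * (mueff - 2 + 1 / mueff) / ((n + 2) ** 2 + mueff))
    d_sigma = 1 + c_s + 2 * max(0.0, math.sqrt((mueff - 1) / (n + 1)) - 1)
    chi_n = math.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n * n))
    M, s = identity(n), [0.0] * n
    for _ in range(iters):
        zs = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(lam)]
        dvecs = [matvec(M, z) for z in zs]           # d = M z
        fs = [f([y[i] + sigma * d[i] for i in range(n)]) for d in dvecs]
        order = sorted(range(lam), key=lambda k: fs[k])[:mu]
        zw = [sum(w[k] * zs[order[k]][i] for k in range(mu)) for i in range(n)]
        dw = [sum(w[k] * dvecs[order[k]][i] for k in range(mu)) for i in range(n)]
        y = [y[i] + sigma * dw[i] for i in range(n)]
        s = [(1 - c_s) * s[i] + math.sqrt(mueff * c_s * (2 - c_s)) * zw[i]
             for i in range(n)]
        # M <- M (I + c1/2 (s s^T - I) + cmu/2 (sum_k w_k z_k z_k^T - I))
        A = identity(n)
        for i in range(n):
            for j in range(n):
                zz = sum(w[k] * zs[order[k]][i] * zs[order[k]][j]
                         for k in range(mu))
                eye = 1.0 if i == j else 0.0
                A[i][j] += 0.5 * c_1 * (s[i] * s[j] - eye) \
                         + 0.5 * c_mu * (zz - eye)
        M = matmul(M, A)
        norm_s = math.sqrt(sum(v * v for v in s))
        sigma *= math.exp((c_s / d_sigma) * (norm_s / chi_n - 1))
    return y, f(y)
```

On a simple sphere function the sketch converges quickly, e.g. `ma_es(lambda v: sum(x * x for x in v), [2.0, 2.0, 2.0], 1.0)`.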

 

Hans-Georg Beyer

Hans-Georg Beyer received the Diploma degree in Theoretical Electrical
Engineering from the Ilmenau Technical University, Germany, in 1982 and the
Ph.D. in physics from Bauhaus-University Weimar, Weimar, Germany, in 1989,
and the Habilitation degree in computer science from the University of
Dortmund, Dortmund, Germany, in 1997.

He was an R&D Engineer with the Reliability Physics Department,
VEB Gleichrichterwerk, Stahnsdorf, Germany, from 1982 to 1984.
From 1984 to 1989, he was a Research and Teaching Assistant and
later on Post-Doctoral Research Fellow with the Physics Department and
the Computer Science Department, Bauhaus-University Weimar. From 1990 to
1992, he was a Senior Researcher with the Electromagnetic Fields Theory
Group, Darmstadt University of Technology, Darmstadt, Germany.
From 1993 to 2004, he was with the Computer Science Department, University
of Dortmund. In 1997, he became a DFG (German Research Foundation)
Heisenberg Fellow. He was leader of a working group and a Professor of
Computer Science from 2003 to 2004. Since 2004 he has been a professor
at the Vorarlberg University of Applied Sciences, Dornbirn, Austria.
He authored the book "The Theory of Evolution Strategies"
(Heidelberg: Springer-Verlag, 2001) and authored/coauthored over 100 papers.

Dr. Beyer was the Editor-in-Chief of the MIT Press Journal "Evolutionary
Computation" from 2010 to 2016. He has been an Associate Editor for the IEEE
"Transactions on Evolutionary Computation" since 1997 and is a member of the
advisory board of the Elsevier journal "Swarm and Evolutionary Computation"
and of Springer's Natural Computing Series.

Dynamic Control Parameter Choices in Evolutionary Computation

One of the most challenging problems in solving optimization problems with evolutionary algorithms is the selection of the control parameters, which allow the behaviour of the algorithms to be adjusted to the problem at hand. Several control parameters need to be set for the search for the optimum of an objective function to be successful. Suitable control parameter values need to be found, for example, for the population size, the mutation strength, the crossover rate, the selective pressure, etc. The choice of these parameters can have a significant impact on the performance of the algorithm and thus needs to be made with care.

In the early years of evolutionary computation there was a quest to determine universally "optimal" control parameter choices. At the same time, researchers soon realized that different parameter settings can be optimal at different stages of the optimization process: at the beginning, one may want to allow a larger mutation rate to increase the chance of finding the most promising regions of the search space (the "exploration" phase), while later on, a smaller mutation rate keeps the search focused within the promising area (the "exploitation" phase). Such dynamic parameter choices are today standard in continuous optimization. The situation is quite different in discrete optimization, however, where non-static parameter choices have not yet lived up to their potential.

The ambition of this tutorial is to contribute to a paradigm change towards a more systematic use of dynamic parameter choices. To this end, we survey existing techniques to automatically select control parameter values on the fly. We will discuss both theoretical and experimental results that demonstrate the unexploited potential of non-static parameter choices. Our tutorial thereby addresses experimentally oriented and theory-oriented researchers alike.
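As a minimal sketch of such a dynamic choice (our own illustration, with illustrative update factors and bounds): a (1+1) EA on OneMax that raises its mutation rate after an improving step and slowly lowers it otherwise, in the spirit of the one-fifth success rule.

```python
import random

def self_adjusting_ea(n=50, budget=20000, seed=3, F=2.0):
    """(1+1) EA on OneMax with a success-based mutation rate:
    raise the rate on improvement, lower it slowly otherwise."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)                     # OneMax: maximise the number of ones
    r = 1.0 / n                     # mutation rate, kept within [1/n, 1/4]
    evals = 0
    for evals in range(1, budget + 1):
        y = [1 - b if rng.random() < r else b for b in x]
        fy = sum(y)
        if fy > fx:                 # success: explore more boldly
            x, fx = y, fy
            r = min(0.25, F * r)
        else:                       # failure: cool down by the factor F^(-1/4)
            r = max(1.0 / n, r * F ** -0.25)
        if fx == n:
            break
    return fx, evals
```

The asymmetric factors (multiply by F on success, by F^(-1/4) on failure) keep the rate roughly where about one in five offspring succeeds, which is the classical heuristic behind this family of schemes.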

Gregor Papa

Gregor Papa (gregor.papa@ijs.si, http://cs.ijs.si/papa/) is a Senior Researcher and Head of the Computer Systems Department at the Jožef Stefan Institute, Ljubljana, Slovenia, and an Associate Professor at the Jožef Stefan International Postgraduate School, Ljubljana, Slovenia. He received his PhD degree in Electrical Engineering from the University of Ljubljana, Slovenia, in 2002.

Gregor Papa's research interests include meta-heuristic optimisation methods and hardware implementations of high-complexity algorithms, with a focus on the dynamic setting of algorithms' control parameters. His work has been published in several international journals and conference proceedings. He has regularly organized conferences and workshops in the field of nature-inspired algorithms since 2004. He has led and participated in several national and European projects.

Gregor Papa is a member of the Editorial Board of the Automatika journal (Taylor & Francis) for the field “Applied Computational Intelligence”. He is a Consultant at the Slovenian Strategic research and innovation partnership for Smart cities and communities.

Evolutionary Computation for Digital Art

Evolutionary algorithms have been used in various ways to create or guide the creation of digital art. Artificial intelligence is substantially changing the nature of creative processes.
In this tutorial we present techniques from the thriving field of biologically inspired art. We show how evolutionary computation methods and neural networks can be used to enhance artistic creativity and lead to software systems that help users to create artistic work.

We start by providing a general introduction to the use of evolutionary computation methods for digital art and highlight different application areas. This covers different evolutionary algorithms, including genetic programming, for the creation of artistic images.
Afterwards, we discuss evolutionary algorithms to create artistic artwork in the context of image transition and animation. We show how the connection between evolutionary computation methods and a professional artistic approach finds application in digital animation and new media art, and discuss the different steps involved in using evolutionary algorithms for image transition in the creation of paintings. Furthermore, we show how neural networks can be utilised as an inspiration for creating original images.

Subsequently, we give an overview of the use of aesthetic features to evaluate digital art. The feature-based approach complements the existing evaluation through human judgments/analysis and allows digital art to be judged in a quantitative way. Finally, we outline directions for future research and discuss some open problems.

Frank Neumann

Frank Neumann received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively. He is a professor and leader of the Optimisation and Logistics Group at the School of Computer Science, The University of Adelaide, Australia. Frank has been the general chair of the ACM GECCO 2016. With Kenneth De Jong he organised ACM FOGA 2013 in Adelaide and together with Carsten Witt he has written the textbook "Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity" published by Springer. He is an Associate Editor of the journals "Evolutionary Computation" (MIT Press) and "IEEE Transactions on Evolutionary Computation" (IEEE). In his work, he considers algorithmic approaches in particular for combinatorial and multi-objective optimization problems and focuses on theoretical aspects of evolutionary computation as well as high impact applications in the areas of renewable energy, logistics, and mining.

Aneta Neumann

Aneta Neumann graduated from the Christian-Albrechts-University of Kiel, Germany in computer science and is currently undertaking her postgraduate research at the School of Computer Science, the University of Adelaide, Australia. She was a participant in the SALA 2016 and 2017 exhibitions in Adelaide and has presented invited talks at UCL London, Goldsmiths, University of London, the University of Nottingham and the University of Sheffield in 2016 and 2017. Aneta is a co-designer and co-lecturer for the EdX Big Data Fundamentals course in the Big Data MicroMasters® program. Her main research interest is understanding the fundamental link between bio-inspired computation and digital art.

NEW Fitness Landscape Analysis to Understand and Predict Algorithm Performance for Single- and Multi-Objective Optimization

Many evolutionary and general-purpose search algorithms have been proposed for solving a broad range of single- and multi-objective optimization problems. Despite their skillful design, it is still difficult to achieve a high-level fundamental understanding of why and when an algorithm can be expected to be successful. From an engineering point of view, setting up a systematic and principled methodology to select and/or design an effective search algorithm is also a challenging issue which is attracting more and more attention from the research community. In this context, fitness landscape analysis is a well-established field for understanding the relation between the structure underlying a given problem's search space and the algorithm(s), together with their components and/or parameters, being considered for tackling the problem.
Starting by presenting state-of-the-art tools for single-objective fitness landscapes, we identify their main differences and the additional properties to be addressed for a deep understanding of multi-objective fitness landscapes.
We expose and contrast the impact of fitness landscape geometries on the performance of optimization algorithms for single- and multi-objective optimization problems. A sound and concise summary of features characterizing the structure of a problem instance is identified, in particular for multi-objective optimization. We also review the fundamental principles which allow the design of new relevant features, and we show the main methodologies for sampling combinatorial and continuous search spaces.
By providing effective tools and practical examples for both single- and multi-objective fitness landscape analysis, further insights are given into the importance of ruggedness, multimodality, and objective correlation for predicting the performance of optimization algorithms on a given problem instance.
Finally, we conclude with guidelines for the design of randomized search heuristics based on the main fitness landscape features, and we identify a number of open challenges for the future of fitness landscapes and evolutionary algorithms.
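One classic landscape feature of the kind surveyed here is the autocorrelation of fitness values along a random walk, which separates smooth from rugged landscapes. The sketch below (our own illustration on two toy bit-string landscapes, not from the tutorial materials) estimates it.

```python
import random

def random_walk_fitness(f, n, steps, rng):
    """Fitness series along a random walk of single bit flips."""
    x = [rng.randint(0, 1) for _ in range(n)]
    series = [f(x)]
    for _ in range(steps):
        x[rng.randrange(n)] ^= 1
        series.append(f(x))
    return series

def autocorrelation(series, lag=1):
    """Empirical lag-k autocorrelation of a fitness series."""
    m = sum(series) / len(series)
    var = sum((s - m) ** 2 for s in series)
    cov = sum((series[i] - m) * (series[i + lag] - m)
              for i in range(len(series) - lag))
    return cov / var if var else 0.0

rng = random.Random(42)
# Smooth landscape: OneMax (fitness = number of ones in the bit string).
smooth = autocorrelation(random_walk_fitness(sum, 30, 2000, rng))

# Rugged landscape: every visited point gets an independent random fitness.
table = {}
def random_landscape(x):
    key = tuple(x)
    if key not in table:
        table[key] = rng.random()
    return table[key]
rugged = autocorrelation(random_walk_fitness(random_landscape, 30, 2000, rng))
```

High autocorrelation (close to 1) signals a smooth landscape where neighbouring solutions carry information about each other; values near 0 signal ruggedness, where local search is far less informative.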

Sébastien Verel

Sébastien Verel is a professor in Computer Science at the Université du Littoral Côte d'Opale, Calais, France, and was previously at the University of Nice Sophia-Antipolis, France, from 2006 to 2013. He received a PhD in computer science from the University of Nice Sophia-Antipolis, France, in 2005. His PhD work was related to fitness landscape analysis in combinatorial optimization. He was an invited researcher in the DOLPHIN Team at INRIA Lille Nord Europe, France, from 2009 to 2011. His research interests are in the theory of evolutionary computation, multiobjective optimization, adaptive search, and complex systems. A large part of his research is related to fitness landscape analysis. He has co-authored a number of scientific papers in international journals, book chapters, a book on complex systems, and international conference proceedings. He is also involved in the co-organization of EC summer schools, conference tracks, workshops, and a special issue on EMO at EJOR, as well as special sessions at different international conferences.

Bilel Derbel

Bilel Derbel is an associate Professor, with a habilitation to supervise research (Maître de Conférences HDR), at the Department of Computer Science at the University of Lille, France, since 2007. He received his PhD in computer science from the University of Bordeaux (LaBRI, France) in 2006. In 2007, he spent one year as an assistant professor at the University of Aix-Marseille. He is a permanent member and the vice-head of the BONUS ‘Big Optimisation aNd Ultra-Scale Computing’ research group at Inria Lille-Nord Europe and CRIStAL, CNRS. He is a co-founding member of the International Associated Lab (LIA-MODO) between Shinshu Univ., Japan, and Univ. Lille, France, on ‘Massive Optimisation and Computational Intelligence’. He has been a program committee member of evolutionary computing conferences such as GECCO, CEC, EvoOP, PPSN, and a regular reviewer for a number of reference journals in the optimisation field. He is an associate editor of the IEEE Transactions on Systems, Man, and Cybernetics: Systems. He has co-authored more than fifty scientific papers. He received best paper awards at SEAL'17 and ICDCN'11, and was nominated for the best paper award at PPSN'18 and PPSN'14. His research topics are focused on the design and analysis of combinatorial optimisation algorithms and high-performance computing. His current interests are in the design of adaptive distributed evolutionary algorithms for single- and multi-objective optimisation.

NEW Genetic improvement: Taking real-world source code and improving it using genetic programming.

Genetic Programming (GP) has been on the scene for around 25 years. Genetic Improvement (GI) is “the new kid on the block”. What does GI have to offer over GP? The operational difference is that GI deals with source code, rather than a simulation of code. In other words, GI operates directly on Java or C code, for example, whereas GP typically operates on some tiny subset of a programming language, defined by the function set and terminal set. Another fundamental difference is that GI starts with real-world software (which is in use), whereas GP typically tries to evolve programs from scratch (which are not in use).

These differences may not seem important, as we can still generate the same set of functions; however, this subtle difference opens up a vast number of new possibilities for research, and makes GI attractive for industrial applications. Furthermore, we can optimize the physical properties of code such as power consumption, size of code, bandwidth, and other non-functional properties, including execution time.

The aim of the tutorial is to

  • examine the motives for evolving source code directly, rather than a language built from a function set and terminal set which has to be interpreted after a program has been evolved
  • understand different approaches to implementing genetic improvement including operating directly on text files, and operating on abstract syntax trees
  • appreciate the new research questions that can be addressed while operating on actual source code
  • understand some of the issues regarding measuring non-functional properties such as execution time and power consumption
  • examine some of the early examples of genetic improvement and our flagship application will be the world’s first implementation of GI in a live system (this technique has found and fixed all 40 bugs in its first 6 months while operating in a medical facility)
  • understand links between GI and other techniques such as hyper-heuristics, automatic parameter tuning, and deep parameter tuning
  • highlight some of the multi-objective research where programs have been evolved that lie on the Pareto front with axes representing different non-functional properties
  • give an introduction to GI in No Time - an open source simple micro-framework for GI (https://github.com/gintool/gin).
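The core GI loop sketched above (mutate real source code, evaluate against tests, keep improvements) can be written down in a few lines. The toy example below is our own illustration, not from the tutorial or from Gin: it uses Python's ast module (ast.unparse requires Python 3.9+) and a deliberately buggy, hypothetical clamp function whose off-by-one constant the loop repairs.

```python
import ast
import random

BUGGY = "def clamp(x):\n    return max(0, min(9, x))\n"
TESTS = [(-5, 0), (3, 3), (10, 10), (42, 10)]   # spec: clamp x into [0, 10]

def fitness(src):
    """Number of test cases the program passes (0 if it does not even run)."""
    env = {}
    try:
        exec(src, env)
        return sum(env["clamp"](x) == y for x, y in TESTS)
    except Exception:
        return 0

class ConstMutator(ast.NodeTransformer):
    """Nudge integer literals up or down: a tiny GI-style mutation on the AST."""
    def __init__(self, rng):
        self.rng = rng
    def visit_Constant(self, node):
        if isinstance(node.value, int) and self.rng.random() < 0.5:
            delta = self.rng.choice([-1, 1])
            return ast.copy_location(ast.Constant(node.value + delta), node)
        return node

def genetic_improvement(src, generations=300, seed=0):
    """First-improvement hill climbing over mutated variants of the source."""
    rng = random.Random(seed)
    best, best_fit = src, fitness(src)
    for _ in range(generations):
        tree = ConstMutator(rng).visit(ast.parse(best))
        mutant = ast.unparse(ast.fix_missing_locations(tree))
        mutant_fit = fitness(mutant)
        if mutant_fit > best_fit:
            best, best_fit = mutant, mutant_fit
        if best_fit == len(TESTS):
            break
    return best, best_fit

improved, passed = genetic_improvement(BUGGY)
```

Real GI systems work on the same principle but at much larger scale: richer edit operators (delete/copy/replace statements), real regression test suites as the fitness function, and non-functional objectives alongside correctness.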

Saemundur O. Haraldsson

Saemundur O. Haraldsson is a Lecturer at the University of Stirling. He has multiple publications on Genetic Improvement, including two that have received best paper awards; in 2017’s GI and ICTS4eHealth workshops. Additionally, he co-authored the first comprehensive survey on GI which was published in 2017. He has been invited to give multiple talks on the subject, including three Crest Open Workshops and for an industrial audience in Iceland. His PhD thesis (submitted in May 2017) details his work on the world's first live GI integration in an industrial application. Saemundur has previously given a tutorial on GI at PPSN 2018.

John R. Woodward

John R. Woodward is head of the Operational Research Group (http://or.qmul.ac.uk/) at QMUL. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, Artificial Intelligence/Machine Learning and in particular Genetic Programming. Publications are at (https://scholar.google.co.uk/citations?user=iZIjJ80AAAAJ&hl=en), and current EPSRC grants are at (https://gow.epsrc.ukri.org/NGBOViewPerson.aspx?PersonId=-485755). Public engagement articles are at (https://theconversation.com/profiles/john-r-woodward-173210/articles). He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN and the RAF, and by three UK universities (Birmingham, Nottingham, Stirling).

Markus Wagner

Markus Wagner is a Senior Lecturer at the School of Computer Science, University of Adelaide, Australia. He did his PhD studies at the Max Planck Institute for Informatics in Saarbruecken, Germany, and at the University of Adelaide, Australia. For the outcomes of his studies, he received the university's Doctoral Research Medal - the first for this school.
His research topics range from mathematical runtime analysis of heuristic optimisation algorithms and theory-guided algorithm design to applications of heuristic methods to renewable energy production, professional team cycling and software engineering. So far, he has been a program committee member 30 times, and he has written over 100 articles with over 100 different co-authors. He is on SIGEVO's Executive Board and serves as the first ever Sustainability Officer. He has contributed to GECCOs as Workshop Chair and Competition Chair, and he has chaired several education-related committees within the IEEE CIS.

NEW Quality-Diversity Optimization

A fascinating aspect of natural evolution is its ability to produce a diversity of organisms that are all high performing in their niche. By contrast, the main artificial evolution algorithms are focused on pure optimization, that is, finding a single high-performing solution. 

Quality-Diversity optimization (or illumination) is a recent type of evolutionary algorithm that bridges this gap by generating large collections of diverse solutions that are all high-performing. This concept was introduced by the “Generative and Developmental Systems” community between 2011 (Lehman & Stanley, 2011) and 2015 (Mouret and Clune, 2015) with the “Novelty Search with Local Competition” and “MAP-Elites” evolutionary algorithms. The main differences with multi-modal optimization algorithms are that (1) Quality-Diversity typically works in the behavioral space (or feature space), and not in the genotypic space, and (2) Quality-Diversity attempts to fill the whole behavior space, even if the niche is not a peak in the fitness landscape. In the last 5 years, more than 75 papers have been written about quality diversity, many of them in the GECCO community (a non-exhaustive list is available at https://quality-diversity.github.io/papers).

The collections of solutions obtained by Quality-Diversity algorithms open many new applications for evolutionary computation. In robotics, they were used to create repertoires of behaviors (Cully & Mouret, 2016) and to allow robots to adapt to damage in a few minutes (Cully et al., 2015, Nature); in engineering, they can be used to propose a diversity of optimized aerodynamic shapes (Gaier et al., 2018 --- best paper of the CS track); they were also recently used in video games (Khalifa et al., 2018) and for Workforce Scheduling and Routing Problems (WSRP) (Urquhart & Hart, 2018).


This tutorial will give an overview of the various questions addressed in Quality-Diversity optimization, relying on examples from the literature. Past achievements and major contributions, as well as specific challenges in Quality-Diversity optimization, will be described. The tutorial will in particular focus on:

  • what is Quality-Diversity optimization?
  • similarities and differences with traditional evolutionary algorithms;
  • existing variants of Quality-Diversity algorithms;
  • examples of application: learning behavioral repertoires in robotics, evolving 3D shapes for aerodynamic design;
  • open questions and future challenges.
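A minimal MAP-Elites sketch makes the core mechanism concrete: keep the best solution found so far in each cell of a discretised behaviour space. The toy setup below is our own (a 2-D genome whose behaviour descriptor is the genome itself, binned onto a 10x10 grid), not from the tutorial materials.

```python
import random

def map_elites(f, descriptor, bins=10, budget=2000, seed=0):
    """Minimal MAP-Elites: one archive cell per discretised behaviour."""
    rng = random.Random(seed)
    archive = {}  # cell -> (fitness, genome)

    def cell_of(x):
        bx, by = descriptor(x)
        # map a behaviour in [-1, 1]^2 onto a bins x bins grid
        return (min(bins - 1, int((bx + 1) / 2 * bins)),
                min(bins - 1, int((by + 1) / 2 * bins)))

    def try_insert(x):
        c, fit = cell_of(x), f(x)
        if c not in archive or fit > archive[c][0]:
            archive[c] = (fit, x)           # elite replaced only if beaten

    for _ in range(100):                    # random initialisation
        try_insert([rng.uniform(-1, 1), rng.uniform(-1, 1)])
    for _ in range(budget):                 # mutate a randomly chosen elite
        _, parent = archive[rng.choice(list(archive))]
        child = [min(1.0, max(-1.0, g + rng.gauss(0, 0.2))) for g in parent]
        try_insert(child)
    return archive

# Toy task: the genome is a 2-D point, its behaviour descriptor is the point
# itself, and fitness rewards proximity to the origin.
archive = map_elites(lambda x: -(x[0] ** 2 + x[1] ** 2),
                     lambda x: (x[0], x[1]))
```

Unlike a pure optimizer, the result is not one solution but a map: every reachable behaviour cell holds its own locally best solution, which is exactly the "illumination" picture described above.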



The tutorial will effectively complement the Complex Systems track, which usually contains several papers about Quality-Diversity algorithms. For instance, 36% (4/11) of the papers accepted in the CS track in 2019 used Quality-Diversity optimization and 20% (3/15) in 2018. After the conference, the slides of the tutorial will be hosted on the quality-diversity website (https://quality-diversity.github.io/), which gathers papers and educational content related to quality-diversity algorithms. 

 

Antoine Cully

Jean-Baptiste Mouret

Dr. Jean-Baptiste Mouret is a senior researcher ("Directeur de recherche") at Inria, the French research institute dedicated to computer science and mathematics. He is currently the principal investigator of an ERC grant (ResiBots – Robots with animal-like resilience, 2015-2020). From 2009 to 2015, he was an assistant professor ("maître de conférences") at the Pierre and Marie Curie University (Paris, France). Overall, J.-B. Mouret conducts research that intertwines evolutionary algorithms, neuro-evolution, and machine learning to make robots more adaptive. His work was recently featured on the cover of Nature (Cully et al., 2015) and has received 3 GECCO best paper awards (2011, GDS track; 2017 & 2018, CS track), the "Distinguished Young Investigator in Artificial Life 2017" award, the French "La Recherche" award (2016), and the IEEE CEC "best student paper" award (2009).

Stéphane Doncieux

Stephane Doncieux is Professor in Computer Science at ISIR (Institute of Intelligent Systems and Robotics), Sorbonne University, CNRS, in Paris, France. Since January 2018, he has been deputy director of ISIR, a multidisciplinary robotics laboratory with researchers in mechatronics, signal processing, computer science and neuroscience. Until that date, he was in charge of the AMAC multidisciplinary research team (Architectures and Models of Adaptation and Cognition). He was coordinator of the DREAM FET H2020 project from 2015 to 2018 (http://robotsthatdream.eu/). His research is in cognitive robotics, with a focus on learning and adaptation with evolutionary algorithms.

Recent Advances in Particle Swarm Optimization Analysis and Understanding

The main objective of this tutorial will be to inform particle swarm optimization (PSO) practitioners of the many common misconceptions and falsehoods that are actively hindering a practitioner's successful use of PSO in solving challenging optimization problems. While the behaviour of PSO's particles has been studied both theoretically and empirically since its inception in 1995, most practitioners unfortunately have not utilized these studies to guide their use of PSO. This tutorial will provide a succinct coverage of common PSO misconceptions, with a detailed explanation of why the misconceptions are in fact false, and how they are negatively impacting results. The tutorial will also provide recent theoretical results about PSO particle behaviour from which the PSO practitioner can now make better and more informed decisions about PSO and, in particular, make better PSO parameter selections. The tutorial will focus on the following important aspects of PSO behaviour:
• Understanding why the random variables used in the velocity update should not be scalars, but rather vectors of random values
• Exploring the effects of different ways in which velocity can be initialized
• Clearing up issues with reference to velocity clamping
• The influence of social topology and different iteration strategies on performance
• Understanding PSO control parameters, and how to use them more efficiently
o The importance of parameter selection will be illustrated with an interactive demo where audience members will vote for/suggest control parameters. The relative performance ranking of these audience-selected configurations, based on popular benchmark suites, will then be given in relation to a very extensive set of possible configurations. This demo will clearly illustrate why the subsequent theoretical discussion on control parameters is so important for effective PSO use.
• Existing theoretical PSO results and what they mean to a PSO practitioner
• Roaming behaviour of PSO particles
• Understanding why PSO struggles with large-scale optimization problems
• Known stability criteria of PSO algorithms
• Effects of particle stability on PSO's performance
• How to derive new stability criteria for PSO variants and verify them
o The use of a general PSO stability theorem, in addition to a simple-to-use specialization, is demonstrated
o The newly derived specialization theorem allows for a non-restrictive relationship between PSO control coefficients, which is a first for PSO stability theory
• Control parameter tuning, and self-adaptive control parameters
• Is the PSO a local optimizer or a global optimizer?

With the knowledge presented in this tutorial a PSO practitioner will gain up to date theoretical insights into PSO behaviour and as a result be able to make informed choices when utilizing PSO.
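To illustrate the first point in the list above, here is a minimal global-best PSO sketch (our own illustration, using the commonly cited convergent parameter set w = 0.7298, c1 = c2 = 1.49618) in which every velocity component draws its own fresh random value rather than sharing one scalar per particle.

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=1,
        w=0.7298, c1=1.49618, c2=1.49618):
    """Minimal global-best PSO for minimisation on [-5, 5]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal best positions
    pf = [f(x) for x in X]                   # personal best fitness
    g = min(range(n_particles), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # a fresh random value per COMPONENT, not one scalar
                # per particle: this is the point made in the tutorial
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    G, gf = X[i][:], fx
    return G, gf

best, best_f = pso(lambda x: sum(v * v for v in x), dim=5)
```

With scalar r1, r2 per particle the update would only move particles along the line spanned by the two attractor differences; per-component randomness restores full-dimensional exploration, which is one of the misconceptions this tutorial addresses.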

Andries Engelbrecht

Andries Engelbrecht received the Masters and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999 respectively. He is Professor in Computer Science at the University of Pretoria, and is appointed as the Director of the Institute for Big Data and Data Science. He holds the position of South African Research Chair in Artificial Intelligence, and leads the Computational Intelligence Research Group. His research interests include swarm intelligence, evolutionary computation, neural networks, artificial immune systems, and the application of these paradigms to data mining, games, bioinformatics, finance, and difficult optimization problems. He has published over 350 papers in these fields and is author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence.

Christopher Cleghorn

Christopher Cleghorn received his Masters and PhD degrees in Computer Science from the University of Pretoria, South Africa, in 2013 and 2017 respectively. He is a Senior Lecturer in Computer Science at the University of Pretoria. His research interests include swarm intelligence, evolutionary computation, machine learning, and radio-astronomy, with a strong focus on theoretical research. Dr Cleghorn annually serves as a reviewer for numerous international journals and conferences in domains ranging from swarm intelligence and neural networks to mathematical optimization.

NEW Replicability and Reproducibility in Evolutionary Optimization

The reproducibility of experimental results is one of the cornerstones of experimental science, yet published experimental results are often neither replicable nor reproducible due to insufficient details, missing datasets or missing software implementations. The Association for Computing Machinery (ACM) distinguishes between “repeatability”, “replicability” and “reproducibility”, and it has instituted different badges to be attached to research articles in ACM publications depending on the level of reproducibility. The new ACM journal “Transactions on Evolutionary Learning and Optimization” (TELO) uses these badges to encourage the publication of reproducible results.

The tutorial will be structured in two main parts. The first part will introduce basic concepts in reproducible research, the motivation behind it, and potential pitfalls illustrated with real-world examples. This part will also explain in detail the ACM standards for reproducibility and how the process runs at the ACM TELO journal. The second part will describe several techniques for improving reproducibility, ranging from trivial but surprisingly effective measures to somewhat more technical and laborious ones. The techniques presented will be demonstrated step-by-step during the session.
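One of the trivial-but-effective techniques in this spirit is to seed all randomness explicitly and record provenance alongside each result. The sketch below is our own illustration (not from the tutorial): it bundles seed, result, and environment details into a record whose digest makes silent divergence between runs easy to detect.

```python
import hashlib
import json
import platform
import random
import sys

def experiment(seed):
    """A stand-in stochastic experiment: mean of 1000 uniform draws."""
    rng = random.Random(seed)      # a local RNG avoids hidden global state
    return sum(rng.random() for _ in range(1000)) / 1000

def record_run(seed):
    """Bundle the result with everything needed to replicate it."""
    result = experiment(seed)
    provenance = {
        "seed": seed,
        "result": result,
        "python": sys.version.split()[0],
        "platform": platform.platform(),
    }
    # hashing the canonicalised record gives a cheap replication check
    digest = hashlib.sha256(
        json.dumps(provenance, sort_keys=True).encode()).hexdigest()
    return provenance, digest

run1, d1 = record_run(seed=12345)
run2, d2 = record_run(seed=12345)   # a replication attempt: digests must match
```

The same pattern scales up: store the provenance record (plus code version and dependency list) next to every reported result, so a reviewer can rerun the experiment and compare digests instead of eyeballing numbers.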

Luis Paquete

Luís Paquete is Associate Professor at the Department of Informatics Engineering, University of Coimbra, Portugal, since 2007. He received his Ph.D. in Computer Science from T.U. Darmstadt, Germany, in 2005 and a M.S. in Systems Engineering and Computer Science from the University of Algarve, Portugal, in 2001. His research interest is mainly focused on exact and heuristic solution methods for multiobjective combinatorial optimization problems.

Manuel López-Ibáñez

Dr. López-Ibáñez is a lecturer in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 17 journal papers, 6 book chapters and 36 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, ant colony optimization, multi-objective optimization, pump scheduling and various combinatorial optimization problems. His current research interests are experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single and multi-objective problems. He is the lead developer and current maintainer of the irace software package for automatic algorithm configuration (http://iridia.ulb.ac.be/irace).

Semantic Genetic Programming

Semantic genetic programming is a rapidly growing trend in Genetic Programming (GP) that aims at opening the ‘black box’ of the evaluation function and at making explicit use of more information on program behavior in the search. In the most common scenario of evaluating a GP program on a set of input-output examples (fitness cases), the semantic approach characterizes a program with a vector of outputs rather than a single scalar value (fitness). Past research on semantic GP has demonstrated that the additional information obtained in this way facilitates the design of more effective search operators. In particular, exploiting the geometric properties of the resulting semantic space leads to search operators with attractive properties, which have provably better theoretical characteristics than conventional GP operators. This in turn leads to dramatic improvements in experimental comparisons.

The aim of the tutorial is to give a comprehensive overview of semantic methods in genetic programming, to illustrate in an accessible way how the formal geometric framework for program semantics can be used to design provably good mutation and crossover operators for traditional GP problem domains, and to analyze their performance rigorously (runtime analysis). A recent extension of this framework to Grammatical Evolution will also be presented. Other promising emerging approaches to semantics in GP will be reviewed. In particular, recent developments in behavioural programming and approaches that automatically acquire a multi-objective characterization of programs will be covered as well. Current challenges and future trends in semantic GP will be identified and discussed.
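The core idea can be sketched in a few lines of Python: a program's semantics is its vector of outputs on the fitness cases, and the geometric semantic crossover for real-valued programs produces an offspring whose semantics lies on the segment between the parents' semantics. This is only an illustrative sketch, with programs represented as Python callables; the function names are ours, not the tutorial's.

```python
import random

def semantics(program, fitness_cases):
    """A program's semantics: its vector of outputs on the fitness cases."""
    return [program(x) for x in fitness_cases]

def semantic_distance(s1, s2):
    # Euclidean distance in semantic space; distance to the target
    # semantics can serve directly as a fitness value.
    return sum((a - b) ** 2 for a, b in zip(s1, s2)) ** 0.5

def geometric_semantic_crossover(p1, p2):
    """Offspring whose semantics lies on the segment between the
    parents' semantics (for real-valued programs)."""
    r = random.random()
    return lambda x: r * p1(x) + (1 - r) * p2(x)
```

Because the offspring's output is a convex combination of the parents' outputs, its distance to the target semantics is never worse than that of the worse parent, which is the geometric property the tutorial exploits.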

 

Alberto Moraglio

Krzysztof Krawiec

Krzysztof Krawiec is an Associate Professor in the Institute of Computing Science at Poznan University of Technology, Poland. His primary research areas are genetic programming, semantic genetic programming, and coevolutionary algorithms, with applications in program synthesis, modeling, pattern recognition, and games. Dr. Krawiec co-chaired the European Conference on Genetic Programming in 2013 and 2014, GP track at GECCO'16, and is an associate editor of Genetic Programming and Evolvable Machines journal.

Sequential Experimentation By Evolutionary Algorithms

This tutorial addresses applications of Evolutionary Algorithms (EAs) to global optimization tasks where the objective function cannot be calculated (neither an explicit model nor a simulation exists), but rather requires a measurement/assay ("wet experiment") in the real world – e.g., in pharmaceuticals, biocatalyst design, protein expression, or quantum processes, to mention only a few.
The use of EAs for experimental optimization is placed in its historical context with an overview of the landmark studies in this area carried out in the 1960s at the Technical University of Berlin. At the same time, statistics-based Design of Experiments (DoE) methodologies, rooted in the late 1950s, constitute a gold-standard in existing laboratory equipment, and are therefore reviewed as well at an introductory level to EC audience.
The main characteristics of experimental optimization work, in comparison to optimization of simulated systems, are discussed, and practical guidelines for real-world experiments with EAs are given. For example, experimental problems can constrain the evolution due to overhead considerations, interruptions, changes of variables, missing assays, imposed population-sizes, and target assays that have different evaluation times (in the case of multiple objective optimization problems).
Selected modern-day case studies show the persistence of experimental optimization problems today. These cover experimental quantum systems, combinatorial drug discovery, protein expression, and others. These applications can throw EAs out of their normal operating envelope, and raise research questions in a number of different areas ranging across constrained EAs, multiple objective EAs, robust and reliable methods for noisy and dynamic problems, and metamodeling methods for expensive evaluations.

 

Thomas Baeck

Thomas Bäck is Full Professor of Computer Science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he is head of the Natural Computing group since 2002. He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. Thomas has more than 300 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996), Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation, and the Handbook of Natural Computing, and co-editor-in-chief of Springer's Natural Computing book series. He is also editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and the IEEE Computational Intelligence Society Evolutionary Computation Pioneer Award in 2015.

Ofer Shir

Ofer Shir is a Senior Lecturer at the Computer Science Department in Tel-Hai College, and a Principal Investigator at Migal-Galilee Research Institute – both located in the Upper Galilee, Israel.
Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel, and both MSc and PhD in Computer Science from Leiden University, The Netherlands (PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz, where he specialized in computer science aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics.
His current fields of interest encompass Statistical Learning in Theory and in Practice, Experimental Optimization, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Information.

Solving Complex Problems with Coevolutionary Algorithms

Coevolutionary algorithms (CoEAs) go beyond the conventional paradigm of evolutionary computation in having the potential to answer questions about what to evolve against (competition) and/or to establish how multi-agent behaviours can be discovered (cooperation). Competitive coevolution can be considered from the perspective of discovering tests that distinguish between the performance of candidate solutions. Cooperative coevolution implies that mechanisms are adopted for distributing fitness across more than one individual. In both variants, the evolving entities engage in interactions that affect all the engaged parties, resulting in search gradients that may be very different from those observed in conventional evolutionary algorithms, where fitness is defined externally. This allows CoEAs to model complex systems and to solve problems that are difficult, or not naturally addressed, using conventional evolution.

This tutorial will begin by establishing basic frameworks for competitive and cooperative coevolution and noting the links to related formalisms (interactive domains and test-based problems). We will identify the pathologies that potentially appear under such coevolutionary formulations (disengagement, forgetting, mediocre stable states) and present methods that address these issues. Compositional formulations will be considered, in which hierarchies of development are explicitly formed, leading to the incremental complexification of solutions. The role of system dynamics will also be reviewed with regard to providing additional insight into how design decisions, say, the formulation assumed for cooperation, impact the development of effective solutions. We will also present the concepts of coordinate systems and underlying objectives, and how they can make search/learning more effective. Other covered developments will include hybridization with local search and relationships to shaping.
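The cooperative variant described above can be made concrete with a minimal sketch: two subpopulations evolve parts of a joint solution, and each individual is scored by pairing it with a representative of the other subpopulation, so fitness is distributed across more than one individual. The toy joint fitness (two real-valued parts that should sum to 10) and all function names are our own illustrative assumptions.

```python
import random

def evaluate(joint):
    # Toy joint fitness: the two cooperating parts should sum to 10.
    x, y = joint
    return -abs(x + y - 10)

def cooperative_coevolution(pop_a, pop_b, generations=30):
    """Minimal cooperative-coevolution sketch: each subpopulation is
    ranked by pairing its members with the other subpopulation's
    current representative (its best-ranked member)."""
    for _ in range(generations):
        rep_a, rep_b = pop_a[0], pop_b[0]  # current representatives
        pop_a.sort(key=lambda a: evaluate((a, rep_b)), reverse=True)
        pop_b.sort(key=lambda b: evaluate((rep_a, b)), reverse=True)
        # Keep the better half, refill with mutated copies of survivors.
        half_a, half_b = pop_a[: len(pop_a) // 2], pop_b[: len(pop_b) // 2]
        pop_a = half_a + [a + random.gauss(0, 0.3) for a in half_a]
        pop_b = half_b + [b + random.gauss(0, 0.3) for b in half_b]
    return pop_a[0], pop_b[0]
```

Note that neither subpopulation can be ranked in isolation: the search gradient each one experiences depends on the other, which is exactly the property that distinguishes CoEAs from externally defined fitness.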

Krzysztof Krawiec

Krzysztof Krawiec is an Associate Professor in the Institute of Computing Science at Poznan University of Technology, Poland. His primary research areas are genetic programming, semantic genetic programming, and coevolutionary algorithms, with applications in program synthesis, modeling, pattern recognition, and games. Dr. Krawiec co-chaired the European Conference on Genetic Programming in 2013 and 2014, GP track at GECCO'16, and is an associate editor of Genetic Programming and Evolvable Machines journal.

Malcolm Heywood

Malcolm Heywood is a Professor of Computer Science at Dalhousie University, Canada. He has conducted research in genetic programming (GP) since 2000. He has a particular interest in scaling up the tasks that GP can potentially be applied to. His current research is attempting to appraise the utility of coevolutionary methods under non-stationary environments as encountered in streaming data applications, and coevolving agents for single and multi-agent reinforcement learning tasks. In the latter case, the goal is to coevolve behaviours for playing soccer under the RoboSoccer environment (a test bed for multi-agent reinforcement learning). Dr. Heywood is a member of the editorial board for Genetic Programming and Evolvable Machines (Springer). He was a track co-chair for the GECCO GP track in 2014 and a co-chair for the European Conference on Genetic Programming in 2015.

NEW Statistical Analyses for Meta-heuristic Stochastic Optimization Algorithms

In the era of explainable AI, comprehensively comparing the performance of stochastic optimization algorithms has become an increasingly important task. One of the most common ways to compare stochastic optimization algorithms is to apply statistical analyses. However, caveats still need to be addressed in order to acquire relevant and valid conclusions. First, such statistical analyses require sound statistical knowledge to be applied properly; this knowledge is often lacking, which leads to incorrect conclusions. Second, the standard approaches can be influenced by outliers (e.g., poor runs) or by statistically insignificant differences (solutions within some ε-neighborhood) that exist in the data.
This tutorial will provide an overview of the current approaches for analyzing algorithm performance, with special emphasis on caveats that are often overlooked. We will show how these can be easily avoided by applying simple principles that lead to Deep Statistical Comparison. The tutorial will not be based on equations, but mainly on examples through which a deeper understanding of statistics will be achieved. Examples will cover various comparison scenarios, including single- and multi-objective optimization algorithms. The tutorial will end with a demonstration of a web-service-based framework for statistical comparison of stochastic optimization algorithms.
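The two caveats, outlier runs and practically insignificant differences, can be illustrated with a toy ranking helper. This is our own simplified sketch in the spirit of those caveats, not the actual Deep Statistical Comparison procedure: medians resist outlier runs, and an ε-neighborhood merges algorithms whose differences are negligible.

```python
import statistics

def ranked_comparison(results, eps=1e-6):
    """Toy ranking helper (NOT the actual DSC procedure): rank
    algorithms by the median of their best-found values (minimization),
    merging algorithms whose medians lie within an eps-neighborhood so
    that practically insignificant differences share a rank."""
    medians = sorted((statistics.median(runs), name)
                     for name, runs in results.items())
    ranks, rank, prev = {}, 0, None
    for median, name in medians:
        if prev is None or median - prev > eps:
            rank += 1  # new group: difference exceeds the neighborhood
        ranks[name] = rank
        prev = median
    return ranks
```

A mean-based ranking would let the single poor run of algorithm "A" below push it behind "B", and a naive value comparison would split "A" and "B" into different ranks over a 0.0004 difference; the median plus ε-merging avoids both effects.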

 

Tome Eftimov

Tome Eftimov is a postdoctoral research fellow at Stanford University. He received his Ph.D. degree from the Jožef Stefan Postgraduate School, Ljubljana, Slovenia, in January 2018. Since 2014 he has been a researcher at the Computer Systems Department, Jožef Stefan Institute, Ljubljana. He is involved in courses on probability and statistics, and statistical data analysis. The work related to Deep Statistical Comparison has been presented as a tutorial (e.g., IJCCI 2018, IEEE SSCI 2019) and as an invited lecture at several international conferences and universities. His research interests include statistics, heuristic optimization, natural language processing, machine learning, and representational learning.

NEW Theory and Practice of Population Diversity in Evolutionary Computation

Divergence of character is a cornerstone of natural evolution. In contrast, evolutionary optimization processes are plagued by an endemic lack of population diversity: all candidate solutions eventually crowd the very same areas in the search space. The problem is usually labeled with the oxymoron “premature convergence” and has very different consequences in different applications, almost all deleterious. At the same time, case studies from theoretical runtime analyses irrefutably demonstrate the benefits of diversity.

This tutorial will give an introduction to the area of “diversity promotion”: we will define the term “diversity” in the context of Evolutionary Computation and show how practitioners have tried, with mixed results, to promote it.

Then, we will analyze the benefits brought by population diversity in specific contexts, namely global exploration, enhancing the power of crossover, multi-objective optimization, and dynamic optimization. To this end, we will survey recent results from rigorous runtime analysis on selected problems. The presented analyses rigorously quantify the performance of evolutionary algorithms in the light of population diversity, laying the foundation for a rigorous understanding of how search dynamics are affected by the presence or absence of diversity and the introduction of diversity mechanisms.
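As a concrete starting point for "defining diversity", one of the most common measures for bitstring populations is the mean pairwise Hamming distance, which is exactly zero when the whole population has crowded onto a single point. A minimal sketch (function names are ours):

```python
from itertools import combinations

def hamming(a, b):
    # Number of positions at which two bitstrings differ.
    return sum(x != y for x, y in zip(a, b))

def population_diversity(population):
    """Mean pairwise Hamming distance: a common diversity measure for
    bitstring populations. It reaches 0 exactly when all candidate
    solutions coincide, i.e. under premature convergence."""
    pairs = list(combinations(population, 2))
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)
```

Runtime analyses of diversity mechanisms typically track how quantities like this one evolve over time, which is what makes rigorous statements about "presence or absence of diversity" possible.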

Dirk Sudholt

Dirk obtained his Diplom (Master's) degree in 2004 and his PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. He has held postdoc positions at the International Computer Science Institute in Berkeley, California, working in the Algorithms group led by Prof. Richard M. Karp, and at the University of Birmingham, UK, working with Prof. Xin Yao. Since January 2012 he has been a Lecturer at the University of Sheffield, UK, leading the newly established Algorithms research group.

His research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms and swarm intelligence algorithms like ant colony optimization and particle swarm optimization. He is an editorial board member of Evolutionary Computation and Natural Computing and receives funding from the EU's Future and Emerging Technologies scheme (SAGE project). He has more than 70 refereed publications in international journals and conferences, including 8 best paper awards at leading conferences, GECCO and PPSN. He has given 9 tutorials at ThRaSH, WCCI/CEC, GECCO, and PPSN.

Giovanni Squillero

Giovanni Squillero received his M.S. and Ph.D. in computer science in 1996 and 2001, respectively. He is an assistant professor at Politecnico di Torino, Torino, Italy. His research interests mix the whole spectrum of bio-inspired metaheuristics with electronic CAD, and selected topics in computational intelligence, games, and multi-agent systems. His activities focus on developing techniques able to achieve "good" solutions while requiring an "acceptable" amount of resources, with main applications in real, industrial problems. Squillero is a member of the IEEE Computational Intelligence Society Games Technical Committee. He organized the EvoHOT workshops on evolutionary hardware optimization techniques, and he is currently a member of the editorial board of Genetic Programming and Evolvable Machines. He is the coordinator of EvoApplications for 2016.
<http://www.cad.polito.it/~squillero/cv_squillero.pdf>

Visualization in Multiobjective Optimization

Visualization in evolutionary multiobjective optimization is relevant in many aspects, such as estimating the location, range, and shape of a solution set approximating the Pareto front (known as an approximation set), assessing conflicts and trade-offs between objectives, selecting preferred solutions, monitoring the progress or convergence of an optimization run, assessing the relative performance of different algorithms, and problem understanding through landscape analysis.

This tutorial will provide an overview of representative methods used in multiobjective optimization for visualizing: (1) individual approximation sets resulting from a single algorithm run, (2) multiple approximation sets stemming from repeated runs, and (3) multiobjective problem landscapes. The methods will be organized according to our recently proposed taxonomy that builds on the nature of the visualized data and the properties of visualization methods.

The methods for visualizing approximation sets will be analyzed according to a methodology that uses a list of requirements for visualization methods and benchmark approximation sets in a similar way as performance metrics and benchmark test problems are used for comparing optimization algorithms. The methods for visualizing multiobjective problem landscapes will be demonstrated on a collection of biobjective problems of increasing difficulty.
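Before any of these visualization methods can be applied, the objective vectors produced by a run must be reduced to the nondominated subset, i.e. the approximation set itself. A minimal sketch for the minimization case (function names are our own):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def approximation_set(points):
    """Filter a collection of objective vectors down to its
    nondominated subset, the approximation set that visualization
    methods then display."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The quadratic scan is fine for illustration; dedicated nondominated-sorting algorithms are used when archives grow large.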

Bogdan Filipic

Bogdan Filipic is a senior researcher and head of the Computational Intelligence Group at the Department of Intelligent Systems of the Jozef Stefan Institute, Ljubljana, Slovenia, and associate professor of Computer Science at the Jozef Stefan International Postgraduate School. He received his Ph.D. degree in Computer Science from the University of Ljubljana. His research interests are in stochastic optimization, evolutionary computation and intelligent data analysis. He focuses on evolutionary multiobjective optimization, including result visualization, constraint handling and the use of surrogate models. He is also active in promoting evolutionary computation in practice and has led optimization projects for the steel industry, car manufacturing and energy management. He co-chaired the biennial BIOMA conference from 2004 to 2012, and served as the general chair of PPSN 2014. He was a guest lecturer at the University of Oulu, Finland, and the VU University Amsterdam, The Netherlands, and has given tutorials at recent CEC and GECCO conferences.

Tea Tušar

Tea Tusar is a research fellow at the Department of Intelligent Systems of the Jozef Stefan Institute in Ljubljana, Slovenia. She was awarded the PhD degree in Information and Communication Technologies by the Jozef Stefan International Postgraduate School for her work on visualizing solution sets in multiobjective optimization. She has completed a one-year postdoctoral fellowship at Inria Lille in France where she worked on benchmarking multiobjective optimizers. Her research interests include evolutionary algorithms for singleobjective and multiobjective optimization with emphasis on visualizing and benchmarking their results and applying them to real-world problems.

Specialized Tutorials

NEW Addressing Ethical Challenges within Evolutionary Computation Applications

Robots and artificial intelligence have demonstrated effective contributions to an increasing number of domains. At the same time, an increasing number of people – in the general public as well as in research – have started to consider potential ethical challenges related to the development and use of such technology. There are also initiatives across countries, such as the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG), whose general objective is to support the implementation of the European Strategy on Artificial Intelligence (https://ec.europa.eu/digital-single-market/en/artificial-intelligence). This talk will give an overview of the most commonly expressed ethical challenges and the ways being undertaken to reduce their impact, drawing on the findings of an earlier review (https://www.frontiersin.org/articles/10.3389/frobt.2017.00075/full) supplemented with recent work and initiatives.

Among the most important challenges are those related to privacy, safety and security. Countermeasures can be taken, first, at design time; second, when a user decides where and when to apply a system; and third, when a system is in use in its environment. In the latter case, there will be a need for the system itself to perform some ethical reasoning if operating in autonomous mode. We are currently undertaking research in various projects where these challenges appear. The tutorial will introduce some examples from our own and others' work and show how the challenges can be addressed from both a technical and a human side, with special attention to problems relevant when working with evolutionary computation. Ethical issues should not be seen only as challenges but also as new research opportunities contributing to more useful services and systems.

Jim Torresen

Jim Torresen is a professor at University of Oslo where he leads the Robotics and Intelligent Systems research group. He received his M.Sc. and Dr.ing. (Ph.D) degrees in computer architecture and design from the Norwegian University of Science and Technology, University of Trondheim in 1991 and 1996, respectively. He has been employed as a senior hardware designer at NERA Telecommunications (1996-1998) and at Navia Aviation (1998-1999). Since 1999, he has been a professor at the Department of Informatics at the University of Oslo (associate professor 1999-2005). Jim Torresen has been a visiting researcher at Kyoto University, Japan for one year (1993-1994), four months at Electrotechnical laboratory, Tsukuba, Japan (1997 and 2000) and a visiting professor at Cornell University, USA for one year (2010-2011).

His research interests at the moment include artificial intelligence, ethical aspects of AI and robotics, machine learning, and robotics, applied to complex real-world applications. Several novel methods have been proposed. He has published over 200 scientific papers in international journals, books and conference proceedings, and has given 10 tutorials and a number of invited talks at international conferences and research institutes. He serves on the program committees of more than ten international conferences, is an associate editor of three international scientific journals, and is a regular reviewer for a number of other international journals. He has also acted as an evaluator for proposals in EU FP7 and Horizon2020 and is currently project manager/principal investigator in four externally funded research projects/centres. He is a member of the Norwegian Academy of Technological Sciences (NTVA) and the National Committee for Research Ethics in Science and Technology (NENT), where he is a member of a working group on research ethics for AI.
More information and a list of publications can be found here: http://www.ifi.uio.no/~jimtoer

Automated Algorithm Configuration and Design

Most optimization algorithms, including evolutionary algorithms and metaheuristics, as well as general-purpose solvers for integer or constraint programming, often have many parameters that need to be properly designed and tuned to obtain the best results on a particular problem. Automatic (offline) algorithm design methods help algorithm users determine the parameter settings that optimize the performance of the algorithm before it is actually deployed. Moreover, automatic offline algorithm design methods may potentially lead to a paradigm shift in algorithm design, because they enable algorithm designers to explore much larger design spaces than traditional trial-and-error and experimental design procedures allow. Thus, algorithm designers can focus on inventing new algorithmic components, combine them in flexible algorithm frameworks, and let the final algorithm design decisions be taken by automatic algorithm design techniques for specific application contexts.

This tutorial is structured into two main parts. In the first part, we will give an overview of the algorithm design and tuning problem, review recent methods for automatic algorithm design, and illustrate the potential of these techniques using recent, notable applications from the presenters' and other researchers' work. The second part of the tutorial will focus on a detailed discussion of more complex scenarios, including multi-objective problems, anytime algorithms, heterogeneous problem instances, and the automatic generation of algorithms from algorithm frameworks. The focus of this second part is, hence, on practical but challenging applications of automatic algorithm design. The second part will also demonstrate how to tackle algorithm design tasks using our irace software (http://iridia.ulb.ac.be/irace), which implements the iterated racing procedure for automatic algorithm design. We will provide a practical step-by-step guide on using irace for the typical algorithm design scenario.
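To make the offline configuration setting concrete, here is a deliberately naive configurator sketch. It is plain random search over the design space, not irace's iterated racing, and all names (target runner, parameter space, instances) are illustrative stand-ins for the corresponding irace concepts.

```python
import random

def configure(target_runner, param_space, instances, budget=100):
    """Naive offline configurator (random search, NOT iterated racing):
    sample configurations from the design space and keep the one with
    the best mean cost over the training instances."""
    best_conf, best_cost = None, float("inf")
    for _ in range(budget):
        # Sample one configuration from the (here: categorical) space.
        conf = {p: random.choice(vals) for p, vals in param_space.items()}
        # Evaluate it on every training instance and average the costs.
        cost = sum(target_runner(conf, inst) for inst in instances) / len(instances)
        if cost < best_cost:
            best_conf, best_cost = conf, cost
    return best_conf
```

Racing methods improve on this by discarding poor configurations early, after only a few instances, instead of spending the full evaluation budget on every sampled configuration.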

Manuel López-Ibáñez

Dr. López-Ibáñez is a lecturer in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 17 journal papers, 6 book chapters and 36 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, ant colony optimization, multi-objective optimization, pump scheduling and various combinatorial optimization problems. His current research interests are experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single and multi-objective problems. He is the lead developer and current maintainer of the irace software package for automatic algorithm configuration (http://iridia.ulb.ac.be/irace).

Thomas Stützle

Thomas Stützle is a senior research associate of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium. He received the Diplom (German equivalent of M.S. degree) in business engineering from the Universität Karlsruhe (TH), Karlsruhe, Germany in 1994, and his PhD and his habilitation in computer science both from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is the co-author of two books, "Stochastic Local Search: Foundations and Applications" and "Ant Colony Optimization", and he has extensively published in the wider area of metaheuristics, including 20 edited proceedings or books, 8 journal special issues, and more than 190 journal articles, conference articles and book chapters, many of which are highly cited. He is associate editor of Computational Intelligence, Swarm Intelligence, and Applied Mathematics and Computation and on the editorial board of seven other journals including Evolutionary Computation and Journal of Artificial Intelligence Research. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. In fact, for more than a decade he has been interested in automatic algorithm configuration and design methodologies, and he has contributed to some effective algorithm configuration techniques such as F-race, Iterated F-race and ParamILS. His 2002 GECCO paper on "A Racing Algorithm For Configuring Metaheuristics" (joint work with M. Birattari, L. Paquete, and K. Varrentrapp) received the 2012 SIGEVO impact award.

NEW EA & ML, synergies and challenges

While Machine Learning (ML) techniques have enjoyed growing popularity in recent years, the role of Evolutionary Algorithms in this field is still marginal, quite a surprising fact considering how deeply the origins of the two fields are related.

In this tutorial we present success stories of EAs exploited in specific ML tasks, such as feature selection, adversarial ML, and white-box modeling, as well as the renowned area of neuroevolution. We show how similar concepts appear in both fields under different names.

At the same time, we show well-known and emerging challenges that EAs need to overcome to become widely adopted in ML, for instance limited ability to scale and a general distrust toward stochasticity.

Finally, we point out opportunities arising for new research lines that play to the strengths of EAs, such as potential improvements over currently used optimization techniques, and the capability to go beyond simple model fitting, creating solutions that expand beyond the boundaries of the training data.
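Feature selection, one of the success stories mentioned above, is easy to sketch as an EA: individuals are bit masks over the feature set, and any model-quality estimate can serve as fitness. The sketch below is our own illustration; the scoring function (e.g. cross-validated accuracy of a model trained on the masked features) is assumed to be supplied by the user.

```python
import random

def ga_feature_selection(score, n_features, pop_size=20, generations=30):
    """GA sketch for feature selection: evolve bit masks over features,
    maximizing a user-supplied quality estimate `score(mask)`."""
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n_features)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            i = random.randrange(n_features)
            child[i] ^= 1                         # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)
```

Unlike greedy forward or backward selection, the population explores many feature subsets in parallel, which helps with the complex multivariate interactions that single-feature rankings miss.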

Giovanni Squillero

Giovanni Squillero is an associate professor of computer science at Politecnico di Torino, Department of Control and Computer Engineering. Nowadays Squillero's research mixes the whole spectrum of bio-inspired metaheuristics, computational intelligence, and selected topics from machine learning; in more down-to-earth research lines, he develops approximate optimization techniques able to achieve acceptable solutions with a limited amount of resources, tackling industrial problems, mostly related to electronic CAD. As of October 2019, he was credited as an author in 3 books, 33 journal articles, 10 book chapters, and 143 papers in conference proceedings; he is also listed among the editors in 15 volumes. Squillero is a Senior Member of the IEEE and serves in the IEEE Computational Intelligence Society Games Technical Committee; he is a member of the editorial board of Genetic Programming and Evolvable Machines and a member of the executive board of SPECIES, the Society for the Promotion of Evolutionary Computation in Europe and its Surroundings. Squillero was the program chair of the European Conference on the Applications of Evolutionary Computation in 2016 and 2017, and he is now a member of the EvoApplications steering committee. In 2018 he co-organized EvoML, the workshop on Evolutionary Machine Learning at PPSN; in 2016 and 2017, MPDEA, the workshop on Measuring and Promoting Diversity in Evolutionary Algorithms at GECCO; and from 2004 to 2014, EvoHOT, the Workshops on Evolutionary Hardware Optimization Techniques.

Alberto Tonda

Alberto Tonda received his PhD in 2010, from Politecnico di Torino, Torino, Italy, with a thesis on real-world applications of evolutionary computation. After post-doctoral experiences on the same topics at the Institut des Systèmes Complexes of Paris and INRIA Saclay, France, he is now a permanent researcher at INRA, the French National Institute for Research in Agriculture and Agronomy. His current research topics include semi-supervised modeling of food processes, and stochastic optimization of processes for the industry.

NEW Evolutionary Algorithms in Biomedical Data Mining: Challenges, Solutions, and Frontiers

Evolutionary algorithms offer an open-ended, nature-inspired set of strategies for solving real-world problems that sets them apart from other methods found in machine learning and optimization. The domain of biomedical data mining must confront many unique challenges (e.g. data scale, noise, bias, data completeness, complex multivariate associations, heterogeneity, and the need for interpretability). This tutorial will highlight these challenges, paired with solutions that have been proposed and adopted by the research community, and will identify promising cutting-edge evolutionary computation strategies expected to benefit biomedical analysis applications in the future. It will seek to distinguish challenges that are generalizable to the broader field of EC from those that are specific to subfamilies of methodological development and application (e.g. genetic algorithms, genetic programming, co-evolutionary algorithms, learning classifier systems, neuro-evolution, evolutionary programming, differential evolution). Themes for discussion will include, but not be limited to: stochasticity, parameter optimization, representability vs. evolvability, interpretability, evaluation metrics, maintaining diversity, generations and elitism, selection, genetic operators, defining fitness, and confounding the objective with the objective function. These broader EC themes will be punctuated with a variety of specific examples of application within the broader biomedical research domain.

Ryan Urbanowicz

Dr. Urbanowicz is an Assistant Professor of Informatics in the Department of Biostatistics, Epidemiology, and Informatics at the Perelman School of Medicine of the University of Pennsylvania, PA, USA. He is also a Senior Fellow in the Institute for Biomedical Informatics. His educational background is interdisciplinary, at the intersection of biology, engineering, computer science, and biostatistics. He completed a PhD in genetics (with a focus on computational biology) at Dartmouth College, preceded by a Master's and Bachelor's of Biological Engineering at Cornell University. Dr. Urbanowicz's current research focuses on the development, evaluation, and application of bioinformatics, artificial intelligence, and machine learning methods in biomedical and clinical problems. His past research has focused largely on the development and application of evolutionary rule-based machine learning methods (most specifically, Learning Classifier Systems). Dr. Urbanowicz has published more than 30 original peer-reviewed papers and has led the development of three machine learning software packages (ExSTraCS, GAMETES, and ReBATE). Additionally, he pioneered statistical and visualization strategies to make knowledge discovery a practical reality for these types of algorithms, which were previously viewed mainly as black-box classification/prediction machines. In 2017, he co-authored the book Introduction to Learning Classifier Systems, which was well received by the evolutionary computation community. Over the last 8 years he has served as a workshop organizer and/or tutorial presenter at GECCO on the topics of Learning Classifier Systems, Evolutionary Rule-based Machine Learning, and Standards in Benchmarking Evolutionary Algorithms, and has received two GECCO best-paper awards.

 

Moshe Sipper

Moshe Sipper is a professor of computer science at Ben-Gurion University of the Negev, Israel. He received the B.A. degree from the Technion — Israel Institute of Technology, and the M.Sc. and Ph.D. degrees from Tel Aviv University, all in computer science. During the years 1995–2001 he was a senior researcher at the Swiss Federal Institute of Technology in Lausanne. Since 2016 he has been a visiting professor at the Computational Genetics Laboratory, Perelman School of Medicine, University of Pennsylvania. His current research focuses on evolutionary computation, machine learning, and artificial intelligence. At some point or other he also did research in the following areas: bio-inspired computing, cellular automata, cellular computing, artificial self-replication, embryonic electronics, evolvable hardware, artificial life, artificial neural networks, fuzzy logic, and robotics.

Dr. Sipper has published close to 200 publications including three research-related books: Evolved to Win, Machine Nature: The Coming Age of Bio-Inspired Computing, and Evolution of Parallel Cellular Machines: The Cellular Programming Approach. He has supervised 25 graduate students. He is an associate editor (and area editor for games) of the journal Genetic Programming and Evolvable Machines, and was an associate editor of the journals: IEEE Transactions on Evolutionary Computation, IEEE Transactions on Computational Intelligence and AI in Games, and Memetic Computing. He organized and chaired several conferences and has served on the program committees of over 120. He has also served as a reviewer for 40 journals and funding agencies and taught numerous basic and advanced courses in computer science, both undergraduate and graduate.

Dr. Sipper won the 2015 IEEE CIS Outstanding TCIAIG Paper Award, the 2008 BGU Toronto Prize for Academic Excellence in Research, the 1999 EPFL Latsis Prize, and 6 HUMIE Awards — Human-Competitive Results Produced by Genetic and Evolutionary Computation (Gold, 2013; Gold, 2011; Bronze, 2009; Bronze, 2008; Silver, 2007; Bronze, 2005).

Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition

The intertwining disciplines of image analysis, signal processing and pattern recognition are major fields of computer science, computer engineering and electrical and electronic engineering, with past and ongoing research covering a full range of topics and tasks, from basic research to a huge number of real-world industrial applications.

Among the techniques studied and applied within these research fields, evolutionary computation (EC), including evolutionary algorithms, swarm intelligence and other paradigms, is playing an increasingly relevant role. Recently, evolutionary deep learning has also attracted considerable attention in these fields. The terms Evolutionary Image Analysis and Signal Processing and Evolutionary Computer Vision are more and more commonly accepted as descriptors of a clearly defined research area and family of techniques and applications. This has also been favoured by the recent availability of computing hardware and systems such as GPUs and grid/cloud/parallel computing systems, whose architectures and computation paradigms fit EC algorithms extremely well, alleviating the intrinsically heavy computational burden imposed by such techniques and allowing even for real-time applications.

The tutorial will introduce the general framework within which Evolutionary Image Analysis, Signal Processing and Pattern Recognition can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include edge detection, segmentation, object tracking, object recognition, motion detection, and image classification and recognition. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, evolutionary multi-objective optimisation, and memetic/hybrid paradigms. In particular, we will discuss the detection of relevant sets of features for classification based on an information-theoretic approach derived from complex systems analysis. We will focus on the use of evolutionary deep learning ideas for image analysis; this includes automatically learning the architectures, parameters and transfer functions of convolutional neural networks (and of autoencoders and genetic programming, if time allows). The use of GPU boxes for real-time/fast object classification will also be discussed. We will show how such EC techniques can be effectively applied to image analysis and signal processing problems and provide promising results.
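The evolutionary deep learning loop sketched above can be illustrated with a toy (1+1)-style search over CNN hyperparameters. Everything here is a hypothetical stand-in: the search space is invented for illustration, and `surrogate_fitness` replaces the validation accuracy that a real system would obtain by training each candidate network.

```python
import random

random.seed(1)

# Hypothetical hyperparameter space for a small CNN (illustration only).
SPACE = {
    "n_conv_layers": [1, 2, 3, 4],
    "filters": [8, 16, 32, 64],
    "kernel": [3, 5, 7],
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4],
}

def surrogate_fitness(cfg):
    # Toy stand-in for validation accuracy; a real evolutionary deep
    # learning system would train the network encoded by cfg instead.
    score = -abs(cfg["n_conv_layers"] - 3)
    score -= abs(cfg["filters"] - 32) / 32.0
    score -= abs(cfg["learning_rate"] - 1e-2) * 10
    return score

def mutate(cfg):
    # Resample one randomly chosen gene (hyperparameter).
    child = dict(cfg)
    gene = random.choice(list(SPACE))
    child[gene] = random.choice(SPACE[gene])
    return child

def evolve_config(steps=200):
    # (1+1)-style loop: keep the child whenever it is at least as fit.
    parent = {g: random.choice(v) for g, v in SPACE.items()}
    for _ in range(steps):
        child = mutate(parent)
        if surrogate_fitness(child) >= surrogate_fitness(parent):
            parent = child
    return parent

best = evolve_config()
```

In a real setting the costly fitness evaluation (training each network) dominates the run time, which is why GPU and grid/cloud infrastructure matters so much for this line of work.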

Mengjie Zhang

Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of the IEEE, and Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, a member of the Faculty of Graduate Research Board at the University, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science.
His research is mainly focused on artificial intelligence (AI), machine learning and big data, particularly on evolutionary computation and learning (using genetic programming, particle swarm optimisation and learning classifier systems), feature selection/construction and big-dimensionality reduction, computer vision and image processing, job shop scheduling and resource allocation, multi-objective optimisation, classification with unbalanced and missing data, and evolutionary deep learning and transfer learning. Prof. Zhang has published over 500 research papers in refereed international journals and conferences in these areas. He has been serving as an associate editor or editorial board member for over ten international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, IEEE Transactions on Emerging Topics in Computational Intelligence, ACM Transactions on Evolutionary Learning and Optimisation, the Evolutionary Computation Journal (MIT Press), Genetic Programming and Evolvable Machines (Springer), Applied Soft Computing, Natural Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He has been involved in major AI and EC conferences such as GECCO, IEEE CEC, EvoStar, IJCAI, PRICAI, PAKDD, AusAI, IEEE SSCI and SEAL as a chair, and has also been serving as a steering committee member and a program committee member for over 100 international conferences. Since 2007, he has been listed as one of the top ten (currently No. 4) genetic programming researchers worldwide by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html).
Prof. Zhang is the immediate past Chair of the IEEE CIS Intelligent Systems Applications Technical Committee, the IEEE CIS Emergent Technologies Technical Committee and the IEEE CIS Evolutionary Computation Technical Committee, a vice-chair of the IEEE CIS Task Force on Evolutionary Feature Selection and Construction, a vice-chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.

Stefano Cagnoni

Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he was a PhD student and then a post-doc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004.

Recent research grants include: co-management of a project funded by the Italian Railway Network Society (RFI) aimed at developing an automatic inspection system for train pantographs; a "Marie Curie Initial Training Network" grant for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing; and a grant from "Compagnia di S. Paolo" on "Bioinformatic and experimental dissection of the signalling pathways underlying dendritic spine function".

He was Editor-in-Chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. Since 1999, he has been chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, now a track of the EvoApplications conference. Since 2005, he has co-chaired MedGEC, the workshop on medical applications of evolutionary computation at GECCO. He has co-edited special issues of journals dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and is a member of the Editorial Boards of the journals "Evolutionary Computation" and "Genetic Programming and Evolvable Machines".

He has been awarded the "Evostar 2009 Award", in recognition of the most outstanding contribution to Evolutionary Computation.

Evolutionary Computation and Machine Learning in Cryptology

In recent years, the interplay between artificial intelligence (AI) and security is becoming more prominent and important. This comes naturally because of the need to improve security in a more automated way. One specific domain of security that steadily receives more AI applications is cryptology. There, we already see how AI techniques can improve implementation attacks, attacks on PUFs, hardware Trojan detection, etc.
This tutorial first gives a brief introduction to areas where AI (we concentrate on machine learning and evolutionary algorithms) is used to solve security problems.
Next, we focus on problems coming from the cryptology domain.
We look at several realistic crypto problems successfully tackled with EAs and machine learning and discuss why those problems are suitable to apply such techniques. Some representative examples of the problems we cover are the evolution of Boolean functions and S-boxes, machine learning and EA attacks on Physically Unclonable Functions, machine learning for side-channel attacks, EA/machine learning to improve fault injection, hardware Trojan detection/prevention/insertion, attacks on logic locking, neuroevolution applications, etc.
We finish this tutorial with a discussion about how experiences in solving problems in AI could help to solve problems in security and vice versa. To that end, we emphasize a collaborative approach for both communities.

Stjepan Picek

Stjepan Picek finished his PhD in 2015 as a double doctorate under the supervision of Lejla Batina, Elena Marchiori (Radboud University Nijmegen, The Netherlands) and Domagoj Jakobovic (Faculty of Electrical Engineering and Computing, Croatia). His research covered cryptology and evolutionary computation (EC) techniques, and resulted in the thesis "Evolutionary Computation in Cryptology".
Currently, Stjepan is a postdoctoral researcher at KU Leuven, Belgium, as part of the COSIC group, where he continues his research on applications of EC in the field of cryptology. His research topics include evolutionary computation, cryptology, and machine learning.
Prior to that, Stjepan worked in industry and government.
He regularly publishes papers in both evolutionary computation and cryptographic conferences and journals.
Besides that, he is a member of several professional societies (ACM, IEEE, IACR).

 

Domagoj Jakobovic

Domagoj Jakobovic received his Ph.D. degree in 2005 at the Faculty of Electrical Engineering and Computing, University of Zagreb, on the subject of generating scheduling heuristics with genetic programming. He is currently a full professor at the Department of Electronics, Microelectronics, Computer and Intelligent Systems at the University of Zagreb. His research interests include evolutionary algorithms, optimization methods, and parallel algorithms. His most notable contributions are in the areas of machine-supported scheduling, optimization problems in cryptography, parallelization, and improvement of evolutionary algorithms. He has published more than 90 papers, led several research projects, and serves as a reviewer for many international journals and conferences. He has supervised four doctoral theses and more than 150 bachelor and master theses.

Evolutionary Computation for Feature Selection and Feature Construction

In data mining/big data and machine learning, many real-world problems, such as bio-data classification and biomarker detection, image analysis, and text mining, often involve a large number of features/attributes. However, not all features are essential, since many are redundant or even irrelevant, and the useful features are typically not equally important. Using all features for classification or other data mining tasks typically does not produce good results, owing to the high dimensionality and the large search space. This problem can be addressed by feature selection, which selects a small subset of the original (relevant) features, or by feature construction, which creates a smaller set of high-level features from the original low-level features.

Feature selection and construction are very challenging tasks due to the large search space and feature interactions. Exhaustive search for the best feature subset of a given dataset is practically impossible in most situations. A variety of heuristic search techniques have been applied to feature selection and construction, but most existing methods still suffer from stagnation in local optima and/or high computational cost. Owing to their global search potential and heuristic guidance, evolutionary computation techniques such as genetic algorithms, genetic programming, particle swarm optimisation, ant colony optimisation, differential evolution and evolutionary multi-objective optimisation have recently been used for feature selection and construction for dimensionality reduction, and have achieved great success. Many of these methods select/construct only a small number of important features, produce higher accuracy, and generate small models that are easy to understand/interpret and efficient to run on unseen data. Evolutionary computation techniques have now become an important means of handling big-dimensionality issues where feature selection and construction are required.


The tutorial will introduce the general framework within which evolutionary feature selection and construction can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include bio-data classification and biomarker detection, image analysis and pattern classification, symbolic regression, network security and intrusion detection, and text mining. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, differential evolution, ant colony optimisation, artificial bee colony optimisation, and evolutionary multi-objective optimisation. We will show how such evolutionary computation techniques (with a focus on particle swarm optimisation and genetic programming) can be effectively applied to feature selection/construction and dimensionality reduction and provide promising results.
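The evolutionary feature selection described above can be sketched with a simple genetic algorithm over bit-masks, where each individual marks which features to keep. Note that this is a minimal illustration, not one of the tutorial's methods: the `RELEVANT` set and the fitness function are hypothetical stand-ins for the classifier accuracy a real wrapper approach would measure.

```python
import random

random.seed(0)

N_FEATURES = 20
RELEVANT = {1, 4, 7, 12}  # hypothetical "truly useful" features

def fitness(mask):
    # Toy stand-in for classifier accuracy: reward selecting relevant
    # features, with a small parsimony penalty on subset size.
    hits = sum(1 for i in RELEVANT if mask[i])
    return hits - 0.05 * sum(mask)

def evolve_masks(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)    # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(N_FEATURES)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_mask = evolve_masks()
selected = sorted(i for i, bit in enumerate(best_mask) if bit)
```

A real wrapper would train and validate a classifier inside `fitness`, which is exactly the computational cost that motivates the parsimony pressure and the global-search methods covered in the tutorial.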

Bing Xue

Bing Xue received her PhD degree in 2014 at Victoria University of Wellington (VUW), New Zealand. She is now an Associate Professor at VUW and a member of the Evolutionary Computation Research Group there. Her research focuses mainly on evolutionary computation, machine learning and data mining, particularly evolutionary computation for feature selection, feature construction, dimensionality reduction, symbolic regression, multi-objective optimisation, bioinformatics and big data. Bing is currently leading the strategic research direction on evolutionary feature selection and construction in the Evolutionary Computation Research Group at VUW, and has been organising special sessions and issues on evolutionary computation for feature selection and construction. She is the Chair of the IEEE CIS Task Force on Evolutionary Computation for Feature Selection and Construction, Chair of the IEEE CIS Data Mining and Big Data Analytics Technical Committee, and Vice-Chair of the IEEE CIS Task Force on Evolutionary Deep Learning and Applications. She has served as Chair of the Evolutionary Machine Learning Track at GECCO 2019, Chair of Women@GECCO 2018, guest editor, associate editor or editorial board member for international journals, and program chair, special session chair and symposium/special session organiser for a number of international conferences, as well as a reviewer for top international journals and conferences in the field.

Mengjie Zhang

Mengjie Zhang is a Fellow of the Royal Society of New Zealand, a Fellow of the IEEE, and Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, a member of the Faculty of Graduate Research Board at the University, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science.
His research is mainly focused on artificial intelligence (AI), machine learning and big data, particularly on evolutionary computation and learning (using genetic programming, particle swarm optimisation and learning classifier systems), feature selection/construction and big-dimensionality reduction, computer vision and image processing, job shop scheduling and resource allocation, multi-objective optimisation, classification with unbalanced and missing data, and evolutionary deep learning and transfer learning. Prof. Zhang has published over 500 research papers in refereed international journals and conferences in these areas. He has been serving as an associate editor or editorial board member for over ten international journals, including IEEE Transactions on Evolutionary Computation, IEEE Transactions on Cybernetics, IEEE Transactions on Emerging Topics in Computational Intelligence, ACM Transactions on Evolutionary Learning and Optimisation, the Evolutionary Computation Journal (MIT Press), Genetic Programming and Evolvable Machines (Springer), Applied Soft Computing, Natural Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He has been involved in major AI and EC conferences such as GECCO, IEEE CEC, EvoStar, IJCAI, PRICAI, PAKDD, AusAI, IEEE SSCI and SEAL as a chair, and has also been serving as a steering committee member and a program committee member for over 100 international conferences. Since 2007, he has been listed as one of the top ten (currently No. 4) genetic programming researchers worldwide by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html).
Prof. Zhang is the immediate past Chair of the IEEE CIS Intelligent Systems Applications Technical Committee, the IEEE CIS Emergent Technologies Technical Committee and the IEEE CIS Evolutionary Computation Technical Committee, a vice-chair of the IEEE CIS Task Force on Evolutionary Feature Selection and Construction, a vice-chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.

Evolutionary Computer Vision

This tutorial, based on the book "Evolutionary Computer Vision", published by Springer in the Natural Computing series, presents the subject under the umbrella of goal-driven vision. The author explains the theory and application of evolutionary computer vision, a new paradigm in which challenging vision problems can be approached using the techniques of evolutionary computing. By merging evolutionary computation with mathematical optimization, the methodology defines fitness functions and representations for vision problems and enables the automatic creation of emergent visual behaviors, achieving excellent results.
In the first part of the tutorial, the author surveys the literature in a concise form, defines the relevant terminology, and offers historical and philosophical motivations for the fundamental research problems in the field. The second part of the tutorial focuses on implementing evolutionary algorithms that solve given problems using working programs in the primary areas of low-, intermediate- and high-level computer vision.

Gustavo Olague

Gustavo Olague was born in Chihuahua, Chih., México, in 1969. He received the B.S. and M.S. degrees in industrial and electronics engineering from the Instituto Tecnológico de Chihuahua (ITCH), in 1992 and 1995, respectively, and the Ph.D. degree in computer vision, graphics, and robotics from the Institut Polytechnique de Grenoble (INPG) and the Institut National de Recherche en Informatique et Automatique (INRIA) in France. He is currently a Professor with the Department of Computer Science, Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), México, and the Director of the EvoVisión Research Team. He is also an Adjunct Professor of engineering with the Universidad Autonóma de Chihuahua (UACH). He has authored over 100 conference proceedings papers and journal articles, and co-edited special issues of Pattern Recognition Letters, Evolutionary Computation (MIT Press), and Applied Optics (OSA). He has authored the book Evolutionary Computer Vision (Springer) in the Natural Computing Series. His main research interests are evolutionary computing and computer vision. He is a member of the editorial teams of IEEE Access and Neural Computing and Applications (Springer), and served as Co-Chair of the Real-World Applications track at the main international evolutionary computing conference, GECCO (the ACM SIGEVO Genetic and Evolutionary Computation Conference), in 2012 and 2013.
He has received numerous distinctions, among them the Talbert Abrams Award (first honorable mention, 2003), presented by the American Society for Photogrammetry and Remote Sensing (ASPRS) for authorship and recording of current and historical engineering and scientific developments in photogrammetry; Best Paper Awards at major conferences such as GECCO, EvoIASP (European Workshop on Evolutionary Computation in Image Analysis, Signal Processing, and Pattern Recognition), and EvoHOT (European Workshop on Evolutionary Hardware Optimization); and, twice, the Bronze Medal at the Humies (the GECCO award for Human-Competitive results produced by genetic and evolutionary computation).

NEW Multi-concept Optimization

The main goal is to introduce Multi-Concept Optimization (MCO) to the GECCO community of researchers and practitioners. This goal will be achieved by way of the following seven tasks:
1. Providing background on what a conceptual solution is (as evident from the conceptual design stage of the engineering design process)
2. Describing academic and real-life examples of conceptual solutions
3. Defining what MCO is and how it differs from the traditional definition of an optimization problem. The definitions will include both single- and multi-objective MCO
4. Explaining the significance of MCO as an optimization methodology, which is useful for the following three main reasons:
a. Supporting the selection of conceptual solutions (e.g., 1-3)
b. An alternative approach to multi-modal optimization (see explanation in 4)
c. A unique approach to design space exploration (e.g., 5)
5. Describing evolutionary algorithms for MCO and their benchmarking (e.g., 3, 5-6)
6. Describing the application of MCO to a real-life problem (joint work with Israel Aerospace Industries)
7. Discussing the research needs concerning evolutionary computation for MCO
Indicative Bibliography:
1 Mattson C. A., Messac, A. Pareto frontier based concept selection under uncertainty with visualization. Opt. and Eng., 6; p. 85–115, 2005.
2 Avigad, G., and Moshaiov, A. Set-based concept selection in multi-objective problems: optimality versus variability approach. J. of Eng. Design, Vol. 20, No. 3, pp. 217-242, 2009.
3 Avigad, G. and Moshaiov, A. Interactive evolutionary multiobjective search and optimization of set-based concepts. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39(4), pp.1013-1027, 2009.
4 Moshaiov, A. The Paradox of Multimodal Optimization: Concepts vs. Species in Single and Multi-objective Problems. Proceedings of the IEEE Congress on Evolutionary Computation, 2016.
5 Moshaiov, A., Snir, A., and Samina, B. Concept-based evolutionary exploration of design spaces by a resolution-relaxation-Pareto approach, Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1845-1852, 2015.
6 Farhi, E. and Moshaiov, A. Window-of-Interest-based Multi-objective Evolutionary Search for Satisficing Concepts. Proceedings of the IEEE Conference on Systems, Man and Cybernetics, 2017.

 

Amiram Moshaiov

Amiram Moshaiov is a faculty member of the School of Mechanical Engineering and a member of the Sagol School of Neuroscience at Tel-Aviv University. During the 1980s he was a faculty member at MIT, USA.
He is an Associate Editor of IEEE Transactions on Emerging Topics in Computational Intelligence and of the Journal of Memetic Computing. In addition, he is a reviewer for many other scientific journals.
Moshaiov was a member of the Management Board of the European Network of Excellence in Robotics. He is currently a member of the Working Group on Artificial Life and Complex Adaptive Systems of IEEE and of the EURO Working Group on Multicriteria Decision Aiding.
He is or has been a program committee member and/or associate editor for many international conferences, including: The IEEE Int. Conf. on Systems, Man, and Cybernetics, The IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, The IEEE Congress on Evolutionary Computation, The IEEE World Congress on Computational Intelligence, The IEEE Sym. on Artificial Life, The IEEE Sym. on Comp. Intelligence for Security and Defense Applications, The IEEE Sym. on Comp. Intelligence in Multi-criteria Decision Making, The Int. Conference on Parallel Problem Solving from Nature, The International Conf. on Simulated Evolution And Learning, The European Robotic Symposium, The Int. IFAC Symposium on Robot Control, The Int. Symposium on Tools and Methods of Competitive Engineering, The Int. Conf. on Engineering Design, The Int. Conference on Mechatronics, The IEEE Int. Conference on Control Applications, and The IEEE Int. Conference on Computational Cybernetics.

His research interests are in methods such as: Computational Intelligence including Evolutionary Computation, Artificial Neural Networks, Fuzzy Logic and their hybridizations, Interactive Evolutionary Computation, Multi-criteria Decision Making, Multi-Objective Optimization and Adaptation, and Multi-Objective Games.
He is interested in application areas such as: Engineering Design, Planning, Operation Research, Behavioral and Cognitive Robotics, Mechatronics, Control, Bio-Mechanics, Complex Adaptive Systems, Cybernetics and Artificial Life (Bio-Plausible Simulations), Computer Vision, Data Science, Big Data and Defense (air, land, sea, and cyber).

Push

Push is a general purpose, multi-type, Turing complete programming language for programs that evolve in an evolutionary computation system. Initially developed in 2000, it has been used for a range of research projects and applications, including the production of human-competitive results and state-of-the-art work on general program synthesis.

Push and systems that evolve Push programs (such as PushGP) are available in a variety of host languages. However, the core concepts of Push are simple enough that programmers should also be able to build their own Push systems relatively quickly, and to incorporate them into their own evolutionary computation systems. This tutorial will present examples and best practices that will help participants to use Push effectively in their own work, whether they use existing libraries or develop their own.

Push has unusually simple syntax, which makes it particularly easy to randomly generate and vary valid programs. It nonetheless supports rich data and control structures, which in most other languages involve syntax restrictions that can be difficult to maintain during evolutionary change. Furthermore, the data and control structures used by a Push program can themselves emerge through the evolutionary process.

This tutorial will provide a detailed introduction to the Push programming language, and will demonstrate the use of the PushGP genetic programming system with open source libraries in Python (PyshGP) and Clojure (Clojush). Participants will be invited to install and interact with these libraries during the tutorial.

Among the features of Push that will be illustrated are those that support the evolution of programs that use multiple types, iteration, recursion, and modularity to solve problems that may involve multiple tasks. Push-based "autoconstructive evolution" systems, in which evolutionary methods co-evolve with solutions to problems, will also be briefly described.
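The typed-stack execution model can be made concrete with a minimal interpreter sketch (a simplified illustration written for this description, not the API of PyshGP or Clojush). Literals go to the stack of their type; instructions pop their arguments from the appropriate stacks and push results; and, crucially for evolution, an instruction without enough arguments is simply a no-op, so every token sequence is a valid program:

```python
def run_push(program):
    """Execute a flat Push-like program over two typed stacks."""
    stacks = {"INTEGER": [], "BOOLEAN": []}
    for token in program:
        if isinstance(token, bool):  # check bool before int: True is an int in Python
            stacks["BOOLEAN"].append(token)
        elif isinstance(token, int):
            stacks["INTEGER"].append(token)
        elif token == "INTEGER.+" and len(stacks["INTEGER"]) >= 2:
            b, a = stacks["INTEGER"].pop(), stacks["INTEGER"].pop()
            stacks["INTEGER"].append(a + b)
        elif token == "INTEGER.DUP" and stacks["INTEGER"]:
            stacks["INTEGER"].append(stacks["INTEGER"][-1])
        elif token == "INTEGER.=" and len(stacks["INTEGER"]) >= 2:
            b, a = stacks["INTEGER"].pop(), stacks["INTEGER"].pop()
            stacks["BOOLEAN"].append(a == b)
        # unrecognised or under-supplied instructions fall through as no-ops
    return stacks

# (2 3 INTEGER.+) leaves 5 on the integer stack; duplicating it and
# comparing against a literal 5 pushes True onto the boolean stack.
result = run_push([2, 3, "INTEGER.+", "INTEGER.DUP", 5, "INTEGER.="])
# result["INTEGER"] == [5], result["BOOLEAN"] == [True]
```

Full Push adds many more types, nested (parenthesized) programs, and an EXEC stack that holds the program itself, which is what enables the iteration, recursion, and modularity described above.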

 

Lee Spector

Lee Spector is a Visiting Professor of Computer Science at Amherst College, a Professor of Computer Science at Hampshire College, and an adjunct professor and member of the graduate faculty in the College of Information and Computer Sciences at the University of Massachusetts, all in Amherst, Massachusetts. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer), and a member of the editorial board of Evolutionary Computation (published by MIT Press). He is also a member of the SIGEVO executive committee, and he was named a Fellow of the International Society for Genetic and Evolutionary Computation. He has won several other awards and honors, including two gold medals in the Human Competitive Results contest of the Genetic and Evolutionary Computation Conference, and the highest honor bestowed by the National Science Foundation for excellence in both teaching and research, the NSF Director's Award for Distinguished Teaching Scholars.

NEW Swarm Intelligence in Cybersecurity

Information play in today's life crucial role. Thus, the topic of the cybersecurity and especially malicious code (also known as malware) become to be more important today, since such malicious codes are not directed only at the computer systems, but also at other electronic devices that may be open to attack and interfere with our private lives and contain personal or valuable data. Currently, Artificial Intelligence (AI) started to play a significant role in cybersecurity threads identification and defense. Evolutionary algorithms and swarm intelligence belonging to the AI paradigm has been successfully and widely used for solving various real-world problems. One domain with many open problems is cybersecurity. Recent developments in AI techniques point to a high potential for both cybersecurity defenders and cybercriminals. Antimalware solutions may utilize intelligent techniques to detect and prevent cyber threats, and at the same time, these intelligent techniques can be used for cybercriminal activities. This tutorial bridge the gap between the cybersecurity and AI community with the special emphasis on how intelligent techniques can be used to create and upgrade concepts of malicious code as a background for the future effective antimalware solutions. The tutorial covers two main parts.
The tutorial starts with a brief introduction to cybersecurity, including online threat monitoring demonstrations and general concepts and principles of viruses, malware, and cybernetic weapons (e.g., Stuxnet). The second part examines the mutual fusion of swarm intelligence/algorithms with both malware and antimalware as a very likely near-future threat. We will discuss the choice of appropriate swarm intelligence techniques, visualization methods, and analyses for various case studies, and evaluate the importance of that choice. The tutorial finishes with a general overview of developments in the interconnected cybersecurity and AI fields, experiences with real-world experiments from our lab, live demos, an exhibition of an experimental swarm bot, and a future concept of swarm antimalware.
The tutorial is based on the current state of the art, as well as on our original research and experiments published in various journals, leading conferences, and books.
The tutorial is designed for the general GECCO audience; advanced or expert knowledge of cybersecurity is not expected.

 

Roman Senkerik

Ivan Zelinka

Ivan Zelinka is currently working at the Technical University of Ostrava (VSB-TU), Faculty of Electrical Engineering and Computer Science. He graduated successively from the Technical University in Brno (1995, MSc.), UTB in Zlin (2001, PhD), again the Technical University in Brno (2004, assoc. prof.), and VSB-TU (2010, professor). Before his academic career, he was employed as a TELECOM technician, a computer specialist (HW+SW), and a computer and LAN supervisor at a commercial bank.
During his career at UTB, he proposed and launched 7 different lecture courses. He has also been invited to lecture at numerous universities in different EU countries, and has served as a keynote speaker at the Global Conference on Power, Control and Optimization in Bali, Indonesia (2009), the Interdisciplinary Symposium on Complex Systems in Halkidiki, Greece (2011), IWCFTA 2012 in Dalian, China, ICAISC in Poland, and INTELS in Russia. His field of expertise is mainly unconventional algorithms and cybersecurity.
He has been the responsible supervisor of 3 fundamental research grants from the Czech grant agency GAČR and co-supervisor of the FRVŠ grant Laboratory of Parallel Computing. He has also worked on numerous grants and two EU projects, as a team member (FP5 - RESTORM) and as supervisor of the Czech team (FP7 - PROMOEVO), and has supervised international research (funded by the TACR agency) focused on the security of mobile devices (Czech - Vietnam).
Currently, he is a professor at the Department of Computer Science and, in total, he has supervised more than 40 MSc. and 25 Bc. diploma theses. Ivan Zelinka also supervises doctoral students, including students from abroad.
He received the Siemens Award for his PhD thesis, as well as an award from the journal Software News for his book on artificial intelligence. Ivan Zelinka is a member of the British Computer Society, Editor-in-Chief of the Springer book series Emergence, Complexity and Computation (http://www.springer.com/series/10624), a member of the editorial board of Saint Petersburg State University Studies in Mathematics, and a member of several international program committees of various conferences and international journals. He is the author of journal articles as well as books in Czech and English, and one of the three founders of the IEEE SMC Technical Committee on Big Data (http://ieeesmc.org/about-smcs/history/2014-archives/44-about-smcs/history/2014/technical-committees/204-big-data-computing/). He is also the head of the research group NAVY (http://navy.cs.vsb.cz).

NEW Theory of Estimation-of-Distribution Algorithms

Estimation-of-distribution algorithms (EDAs) are general metaheuristics for optimization that represent a more recent and popular alternative to classical approaches like evolutionary algorithms. In a nutshell, EDAs typically do not directly evolve populations of search points but build probabilistic models of promising solutions by repeatedly sampling and selecting points from the underlying search space. However, until recently the theoretical knowledge of the working principles of EDAs was relatively limited.

In the last few years, significant progress has been made in the theoretical understanding of EDAs. This tutorial provides an up-to-date overview of the most commonly analyzed EDAs and the most important theoretical results in this area, ranging from convergence to runtime analyses. Different algorithms and objective functions, including optimization under uncertainty (i.e., noise), are covered. The tutorial will present typical benchmark functions and tools relevant to the theoretical analysis. It will be demonstrated that some EDAs are very sensitive to their parameter settings and that their runtime can be optimized for very different choices of learning parameters. Altogether, the tutorial will make the audience familiar with the state of the art in the theory of EDAs and with the most useful techniques for their analysis. We will conclude with open problems and directions for future research.
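To make the sample-select-update loop of an EDA concrete, here is a minimal Python sketch of a univariate EDA (in the style of UMDA) optimizing the classic OneMax benchmark. The function names and parameter values (population size, selection size, border values) are illustrative assumptions, not settings taken from the tutorial.

```python
# Minimal sketch of a univariate EDA (UMDA-style) on the OneMax benchmark.
# Parameter choices here are illustrative only.
import random

def onemax(x):
    return sum(x)  # fitness: number of ones in the bit string

def umda(n=20, pop_size=50, mu=25, generations=100):
    # probabilistic model: one marginal frequency per bit, initially 0.5
    p = [0.5] * n
    for _ in range(generations):
        # sample a population from the current model
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        # select the mu best individuals (truncation selection)
        pop.sort(key=onemax, reverse=True)
        selected = pop[:mu]
        # update the model from the marginal bit frequencies of the
        # selected points, with borders at 1/n and 1-1/n to avoid
        # premature fixation of any frequency at 0 or 1
        for i in range(n):
            freq = sum(ind[i] for ind in selected) / mu
            p[i] = min(max(freq, 1.0 / n), 1.0 - 1.0 / n)
    return p

model = umda()
# after enough generations the frequencies concentrate near the upper border
print(sum(model) / len(model))
```

The borders on the frequencies illustrate exactly the kind of parameter choice whose effect on runtime the theoretical analyses in the tutorial quantify.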

 

Carsten Witt

Carsten Witt is an associate professor at the Technical University of Denmark. He received his diploma and Ph.D. in Computer Science from the Technical University of Dortmund in 2000 and 2004, respectively. Carsten's main research interests are the theoretical aspects of randomized search heuristics, in particular evolutionary algorithms, ant colony optimization and estimation-of-distribution algorithms. He has given tutorials about the theoretical background of bioinspired search heuristics at several previous GECCOs. Carsten Witt is a member of the steering committee of the international Theory of Randomized Search Heuristics (ThRaSH) workshop, which he co-organized in 2011 and 2016, and a member of the editorial boards of Evolutionary Computation and Theoretical Computer Science. He was track (co-)chair of the GECCO Theory Track in 2010, 2011 and 2014 and co-organizer of FOGA 2017.