Neuroevolution
{{Short description|Form of artificial intelligence}}
{{Distinguish|Evolution of nervous systems|Neural development|Neural Darwinism}}
Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANN), parameters, and rules.{{Cite news|url=https://oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning|title=Neuroevolution: A different kind of deep learning|last=Stanley|first=Kenneth O.|date=2017-07-13|work=O'Reilly Media|access-date=2017-09-04|language=en}} It is most commonly applied in artificial life, general game playing{{cite journal|last1=Risi |first1=Sebastian|last2=Togelius|first2=Julian |title= Neuroevolution in Games: State of the Art and Open Challenges |journal=IEEE Transactions on Computational Intelligence and AI in Games |volume=9|pages=25–41|year= 2017 |arxiv=1410.7326|doi=10.1109/TCIAIG.2015.2494596|s2cid=11245845}} and evolutionary robotics. The main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network's performance at a task. For example, the outcome of a game (i.e., whether one player won or lost) can be easily measured without providing labeled examples of desired strategies. Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use backpropagation (gradient descent on a neural network) with a fixed topology.
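The core loop described above — evaluate a population of networks by a single fitness score, select the best, and mutate — can be sketched in a few lines. The following is a toy illustration (network size, population size, and mutation rate are all invented for the example), evolving the weights of a fixed 2-2-1 network to approximate XOR using only an aggregate performance measure, with no backpropagation:

```python
# Minimal neuroevolution sketch: evolve the weights of a fixed-topology
# feedforward network (2-2-1, tanh units) using only a fitness score.
# All hyperparameters here are illustrative choices, not a standard recipe.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

N_WEIGHTS = 4 + 2 + 2 + 1  # hidden weights + hidden biases + output weights + bias

def forward(w, x):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def fitness(w):
    # Only a scalar measure of task performance is needed -- no gradients.
    return -np.mean((forward(w, X) - y) ** 2)

pop = rng.normal(0, 1, size=(50, N_WEIGHTS))
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-10:]]                   # truncation selection
    parents = elite[rng.integers(0, 10, size=50)]
    pop = parents + rng.normal(0, 0.1, size=parents.shape)  # Gaussian mutation
    pop[0] = elite[-1]                                      # elitism: keep the best

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```

Note that the fitness function sees only the network's overall error, mirroring how a game outcome can drive evolution without labeled examples of correct moves.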
==Features==
Many neuroevolution algorithms have been defined. One common distinction is between algorithms that evolve only the strength of the connection weights for a fixed network topology (sometimes called conventional neuroevolution), and algorithms that evolve both the topology of the network and its weights (called TWEANNs, for Topology and Weight Evolving Artificial Neural Network algorithms).
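The distinction can be made concrete with a toy genome class. In the sketch below (an invented, simplified illustration — real TWEANNs such as NEAT add innovation numbers, crossover, and speciation), a weight perturbation alone would be conventional neuroevolution, while the structural mutation that splits a connection with a new hidden node is what makes the encoding topology-evolving:

```python
# Toy TWEANN-style genome: a direct encoding listing every connection
# explicitly, plus a structural mutation that grows the topology.
import math
import random

random.seed(1)

class Genome:
    """Nodes 0 and 1 are inputs, node 2 is the output."""

    def __init__(self):
        self.nodes = [0, 1, 2]
        self.conns = {(0, 2): random.uniform(-1, 1),
                      (1, 2): random.uniform(-1, 1)}  # (src, dst) -> weight

    def mutate(self):
        if random.random() < 0.8:
            # conventional neuroevolution would stop here: weights only
            key = random.choice(list(self.conns))
            self.conns[key] += random.gauss(0, 0.5)
        else:
            # TWEANN-style structural mutation: split a connection in two,
            # inserting a hidden node that initially preserves behaviour
            (src, dst), w = random.choice(list(self.conns.items()))
            new = max(self.nodes) + 1
            self.nodes.insert(self.nodes.index(dst), new)  # keep topological order
            del self.conns[(src, dst)]
            self.conns[(src, new)] = 1.0
            self.conns[(new, dst)] = w

    def activate(self, x):
        vals = {0: x[0], 1: x[1]}
        for n in self.nodes:          # nodes are kept in topological order
            if n in vals:
                continue
            total = sum(w * vals[src]
                        for (src, dst), w in self.conns.items() if dst == n)
            vals[n] = math.tanh(total)
        return vals[2]

g = Genome()
for _ in range(30):
    g.mutate()
out = g.activate([1.0, 0.0])
```

Because each split initially routes the old signal through the new node unchanged, structural growth does not immediately disrupt behaviour — a design choice borrowed from NEAT's add-node mutation.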
A separate distinction can be made between methods that evolve the structure of ANNs in parallel to their parameters (those applying standard evolutionary algorithms) and those that develop them separately (through memetic algorithms).{{cite book |doi=10.1007/978-3-540-87700-4_61 |chapter=Countering Poisonous Inputs with Memetic Neuroevolution |title=Parallel Problem Solving from Nature – PPSN X |series=Lecture Notes in Computer Science |year=2008 |last1=Togelius |first1=Julian |last2=Schaul |first2=Tom |last3=Schmidhuber |first3=Jürgen |last4=Gomez |first4=Faustino |volume=5199 |pages=610–619 |isbn=978-3-540-87699-1 }}
==Comparison with gradient descent==
{{further|Gradient descent}}
Most neural networks use gradient descent rather than neuroevolution. However, around 2017 researchers at Uber stated they had found that simple structural neuroevolution algorithms were competitive with sophisticated modern industry-standard gradient-descent deep learning algorithms, in part because neuroevolution was found to be less likely to get stuck in local minima. In Science,
journalist Matthew Hutson speculated that part of the reason neuroevolution is succeeding where it had failed before is due to the increased computational power available in the 2010s.{{cite journal |last1=Hutson |first1=Matthew |title=Artificial intelligence can 'evolve' to solve problems |journal=Science |date=11 January 2018 |doi=10.1126/science.aas9715 }}
It can be shown that there is a correspondence between neuroevolution and gradient descent.{{cite journal |last1=Whitelam |first1=Stephen |last2=Selin |first2=Viktor |last3=Park |first3=Sang-Won |last4=Tamblyn |first4=Isaac |title=Correspondence between neuroevolution and gradient descent |journal=Nature Communications |date=2 November 2021 |volume=12 |issue=1 |pages=6317 |doi=10.1038/s41467-021-26568-2 |pmid=34728632 |pmc=8563972 |arxiv=2008.06643 |bibcode=2021NatCo..12.6317W }}
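The flavour of this correspondence can be illustrated numerically: averaging Gaussian weight perturbations, weighted by the fitness change they cause, yields an estimate of the gradient, so an evolution strategy of this kind behaves like noisy gradient descent. The sketch below (an illustrative demonstration on a simple quadratic, not the construction from the cited paper) compares the mutation-based estimate with the analytic gradient of f(w) = Σw², which is 2w:

```python
# Numerical illustration: a Gaussian-mutation gradient estimate
# converges to the true gradient of a simple objective.
import numpy as np

rng = np.random.default_rng(42)
f = lambda w: np.sum(w ** 2)      # toy fitness landscape; true gradient is 2w

w = np.array([1.0, -2.0, 0.5])
sigma, n = 0.01, 20000            # mutation scale and number of mutants
eps = rng.normal(0, 1, size=(n, w.size))

# Average fitness differences of mutants, projected back onto the
# mutation directions: E[(f(w + sigma*e) - f(w)) * e] / sigma ~ grad f(w)
grad_es = np.mean([(f(w + sigma * e) - f(w)) * e for e in eps],
                  axis=0) / sigma
grad_true = 2 * w
print(grad_es, grad_true)
```

As the population size grows and the mutation scale shrinks, the estimate approaches the analytic gradient, which is the intuition behind the formal correspondence.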
==Direct and indirect encoding==
Evolutionary algorithms operate on a population of genotypes (also referred to as genomes). In neuroevolution, a genotype is mapped to a neural network phenotype that is evaluated on some task to derive its fitness.
In direct encoding schemes the genotype directly maps to the phenotype. That is, every neuron and connection in the neural network is specified directly and explicitly in the genotype. In contrast, in indirect encoding schemes the genotype specifies indirectly how that network should be generated.{{Citation |last1=Kassahun|first1=Yohannes|last2=Sommer|first2=Gerald|last3=Edgington|first3=Mark|last4=Metzen|first4=Jan Hendrik|last5=Kirchner|first5=Frank|date=2007|contribution=Common genetic encoding for both direct and indirect encodings of networks|title=Genetic and Evolutionary Computation Conference |publisher=ACM Press|pages=1029–1036|citeseerx=10.1.1.159.705}}
Indirect encodings are often used to achieve several aims:{{citation|last1=Gauci|first1=Jason|last2=Stanley|first2=Kenneth O. |contribution=Generating Large-Scale Neural Networks Through Discovering Geometric Regularities |title=Genetic and Evolutionary Computation Conference|year=2007 |location=New York, NY |publisher=ACM |contribution-url=https://eplex.cs.ucf.edu/papers/gauci_gecco07.pdf}}{{Cite book|title=Neural Network Synthesis Using Cellular Encoding And The Genetic Algorithm.|last1=Gruau|first1=Frédéric|last2=I|first2=L'universite Claude Bernard-lyon|last3=Doctorat|first3=Of A. Diplome De|last4=Demongeot|first4=M. Jacques|last5=Cosnard|first5=Examinators M. Michel|last6=Mazoyer|first6=M. Jacques|last7=Peretto|first7=M. Pierre|last8=Whitley|first8=M. Darell|date=1994|citeseerx = 10.1.1.29.5939}}{{Cite journal|last1=Clune|first1=J.|last2=Stanley|first2=Kenneth O.|last3=Pennock|first3=R. T.|last4=Ofria|first4=C.|date=June 2011|title=On the Performance of Indirect Encoding Across the Continuum of Regularity|journal=IEEE Transactions on Evolutionary Computation|volume=15|issue=3|pages=346–367|doi=10.1109/TEVC.2010.2104157|issn=1089-778X|citeseerx=10.1.1.375.6731|s2cid=3008628}}{{cite journal |last1=Risi |first1=Sebastian |last2=Stanley |first2=Kenneth O. |title=An Enhanced Hypercube-Based Encoding for Evolving the Placement, Density, and Connectivity of Neurons |journal=Artificial Life |date=October 2012 |volume=18 |issue=4 |pages=331–363 |doi=10.1162/ARTL_a_00071 |pmid=22938563 |s2cid=3256786 |url=https://stars.library.ucf.edu/cgi/viewcontent.cgi?article=4196&context=facultybib2010 |doi-access=free }}
- modularity and other regularities;
- compression of phenotype to a smaller genotype, providing a smaller search space;
- mapping the search space (genome) to the problem domain.
===Taxonomy of embryogenic systems for indirect encoding===
Traditionally indirect encodings that employ artificial embryogeny (also known as artificial development) have been categorised along the lines of a grammatical approach versus a cell chemistry approach.{{cite journal |last1=Stanley |first1=Kenneth O. |last2=Miikkulainen |first2=Risto |title=A Taxonomy for Artificial Embryogeny |journal=Artificial Life |date=April 2003 |volume=9 |issue=2 |pages=93–130 |doi=10.1162/106454603322221487 |pmid=12906725 |s2cid=2124332 }} The former evolves sets of rules in the form of grammatical rewrite systems. The latter attempts to mimic how physical structures emerge in biology through gene expression. Indirect encoding systems often use aspects of both approaches.
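The grammatical approach can be sketched with a minimal rewrite system: the genotype is a set of production rules, and repeatedly rewriting a start symbol "grows" a description of the network over several developmental steps. The rule set below is an invented example; a real cellular-encoding system would attach instructions to each rule for creating connections as cells divide:

```python
# Minimal grammatical development sketch: a rewrite rule doubles each
# network node at every developmental step, growing a phenotype
# description from a single start symbol.
RULES = {
    "N": ["N", "N"],   # a node divides into two child nodes
    # symbols without a rule rewrite to themselves (terminals)
}

def rewrite(symbols, rules):
    out = []
    for s in symbols:
        out.extend(rules.get(s, [s]))
    return out

phenotype = ["N"]
for step in range(3):          # three developmental steps
    phenotype = rewrite(phenotype, RULES)
print(phenotype)               # 2**3 = 8 nodes grown from one symbol
```

Mutating the rules rather than the grown structure is what gives grammatical encodings their compactness: a single rule change can alter every place the rule is applied.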
Stanley and Miikkulainen propose a taxonomy for embryogenic systems that is intended to reflect their underlying properties. The taxonomy identifies five continuous dimensions, along which any embryogenic system can be placed:
- Cell (neuron) fate: the final characteristics and role of the cell in the mature phenotype. This dimension counts the number of methods used for determining the fate of a cell.
- Targeting: the method by which connections are directed from source cells to target cells. This ranges from specific targeting (source and target are explicitly identified) to relative targeting (e.g., based on locations of cells relative to each other).
- Heterochrony: the timing and ordering of events during embryogeny. Counts the number of mechanisms for changing the timing of events.
- Canalization: how tolerant the genome is to mutations (brittleness). Ranges from requiring precise genotypic instructions to a high tolerance of imprecise mutation.
- Complexification: the ability of the system (including evolutionary algorithm and genotype to phenotype mapping) to allow complexification of the genome (and hence phenotype) over time. Ranges from allowing only fixed-size genomes to allowing highly variable length genomes.
==Examples==
Examples of neuroevolution methods, such as NEAT, HyperNEAT, and EANT/EANT2, are listed under "See also" below. Those with direct encodings are necessarily non-embryogenic.
==See also==
- Automated machine learning (AutoML)
- Evolutionary computation
- NeuroEvolution of Augmenting Topologies (NEAT)
- HyperNEAT (a generative version of NEAT)
- Evolutionary Acquisition of Neural Topologies (EANT/EANT2)
==References==
{{reflist|30em}}
==External links==
- {{Cite web|url=https://beacon-center.org/blog/2012/08/13/evolution-101-neuroevolution/|title=Evolution 101: Neuroevolution {{!}} BEACON|website=beacon-center.org|language=en-US|access-date=2018-01-14}}
- {{Cite web|url=https://nn.cs.utexas.edu/keyword?neuroevolution|title=NNRG Areas - Neuroevolution|website=nn.cs.utexas.edu|publisher=University of Texas |access-date=2018-01-14}} (has downloadable papers on NEAT and applications)
- {{Cite web|url=https://sharpneat.sourceforge.net/|title=SharpNEAT Neuroevolution Framework|website=sharpneat.sourceforge.net|language=en|access-date=2018-01-14}} A mature open-source neuroevolution project implemented in C#/.NET.
- [https://ANNEvolve.sourceforge.net ANNEvolve is an Open Source AI Research Project] (downloadable source code in C and Python with a tutorial and miscellaneous writings and illustrations)
- {{Cite web|url=https://siebel-research.de/evolutionary_learning/|title=Nils T Siebel – EANT2 – Evolutionary Reinforcement Learning of Neural Networks|website=siebel-research.de|access-date=2018-01-14}} Web page on evolutionary learning with EANT/EANT2 (information and articles on EANT/EANT2 with applications to robot learning)
- [https://nerd.x-bot.org/ NERD Toolkit.] The Neurodynamics and Evolutionary Robotics Development Toolkit. A free, open source software collection for various experiments on neurocontrol and neuroevolution. Includes a scriptable simulator, several neuro-evolution algorithms (e.g. ICONE), cluster support, visual network design and analysis tools.
- {{Cite web|url=https://github.com/CorticalComputer|title=CorticalComputer (Gene)|website=GitHub|access-date=2018-01-14}} Source code for the DXNN Neuroevolutionary system.
- {{Cite web|url=https://eplex.cs.ucf.edu/ESHyperNEAT|title=ES-HyperNEAT Users Page|website=eplex.cs.ucf.edu|language=en|access-date=2018-01-14}}
{{Evolutionary computation}}
{{Neuroscience}}
Category:Evolutionary algorithms
Category:Artificial neural networks