Memetic algorithm
In computer science and operations research, a memetic algorithm (MA) is an extension of the traditional genetic algorithm (GA). It uses a local search technique to reduce the likelihood of premature convergence.
Memetic algorithms represent one of the growing areas of research in evolutionary computation. The term MA is now widely used to denote the synergy of an evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. MAs are also often referred to in the literature as Baldwinian evolutionary algorithms (EAs), Lamarckian EAs, cultural algorithms, or genetic local search.
Introduction
Inspired by both Darwinian principles of natural evolution and Dawkins' notion of a meme, the term "memetic algorithm" was introduced by Pablo Moscato in his technical report in 1989, where he viewed MA as a form of population-based hybrid genetic algorithm coupled with an individual learning procedure capable of performing local refinements. The metaphorical parallels, on the one hand to Darwinian evolution and, on the other hand, between memes and domain-specific heuristics, are captured within memetic algorithms, yielding a methodology that balances well between generality and problem specificity. This two-stage nature makes them a special case of dual-phase evolution.
In a more diverse context, memetic algorithms are now used under various names, including hybrid evolutionary algorithms, Baldwinian evolutionary algorithms, Lamarckian evolutionary algorithms, cultural algorithms, and genetic local search. In the context of complex optimization, many different instantiations of memetic algorithms have been reported across a wide range of application domains, in general converging to high-quality solutions more efficiently than their conventional evolutionary counterparts.
In general, using the ideas of memetics within a computational framework is called "memetic computing" or "memetic computation" (MC).
With MC, the traits of Universal Darwinism are more appropriately captured. Viewed in this light, MA is a more constrained notion of MC: it covers the area of MC dealing with evolutionary algorithms that are married to deterministic refinement techniques for solving optimization problems. MC extends the notion of memes to cover conceptual entities of knowledge-enhanced procedures or representations.
The development of MAs
1st generation
The first generation of MA refers to hybrid algorithms, a marriage between a population-based global search and a cultural evolutionary stage. Although this first generation of MA encompasses characteristics of cultural evolution in the search cycle, it may not qualify as a true evolving system according to Universal Darwinism, since all the core principles of inheritance/memetic transmission, variation, and selection are missing. This suggests why the term MA stirred up criticism and controversy among researchers when first introduced.
Pseudo code:
Procedure Memetic Algorithm
    Initialize: Generate an initial population;
    while Stopping conditions are not satisfied do
        Evaluate all individuals in the population.
        Evolve a new population using stochastic search operators.
        Select the subset of individuals that should undergo the individual improvement procedure.
        for each individual in the selected subset do
            Perform individual learning using meme(s).
            Proceed with Lamarckian or Baldwinian learning.
        end for
    end while
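As an illustration, here is a minimal runnable sketch of this first-generation scheme in Python. The OneMax objective, the bit-flip hill-climbing meme, the operator choices, and all parameter values are illustrative assumptions rather than part of any canonical MA; learning here is Lamarckian, i.e., the refined genotype replaces the original.

import random

def fitness(bits):
    # Toy objective (OneMax): count the 1-bits; any fitness function works here.
    return sum(bits)

def hill_climb(bits, steps=10):
    # Meme: single-bit-flip hill climbing, the individual learning procedure.
    for _ in range(steps):
        trial = bits[:]
        trial[random.randrange(len(trial))] ^= 1
        if fitness(trial) >= fitness(bits):
            bits = trial
    return bits

def memetic_algorithm(n_bits=30, pop_size=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evolve a new population using stochastic search operators:
        # tournament selection, one-point crossover, bit-flip mutation.
        def tournament():
            return max(random.sample(population, 3), key=fitness)
        offspring = []
        for _ in range(pop_size):
            parent_a, parent_b = tournament(), tournament()
            cut = random.randrange(1, n_bits)
            child = parent_a[:cut] + parent_b[cut:]
            if random.random() < 0.1:
                child[random.randrange(n_bits)] ^= 1
            # Lamarckian learning: the refined genotype replaces the original.
            offspring.append(hill_climb(child))
        population = offspring
    return max(population, key=fitness)

best = memetic_algorithm()
print(fitness(best), best)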
2nd generation
Multi-meme, hyper-heuristic, and meta-Lamarckian MA are referred to as second-generation MA, exhibiting the principles of memetic transmission and selection in their design. In multi-meme MA, the memetic material is encoded as part of the genotype. Subsequently, the decoded meme of each respective individual/chromosome is used to perform a local refinement. The memetic material is then transmitted through a simple inheritance mechanism from parent to offspring. In hyper-heuristic and meta-Lamarckian MA, on the other hand, the pool of candidate memes compete, based on their past merits in generating local improvements, through a reward mechanism that decides which meme is selected for future local refinements. Memes with a higher reward have a greater chance of being replicated or copied. For a review of second-generation MA, i.e., MA considering multiple individual learning methods within an evolutionary system, the reader is referred to the literature.
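The reward mechanism can be made concrete with a small sketch in the spirit of meta-Lamarckian MA. The two toy memes, the objective, and the additive credit-assignment rule below are assumptions for illustration, not the scheme of any particular paper.

import random

def fitness(x):
    # Toy objective: minimize the sum of squares (expressed as maximization).
    return -sum(v * v for v in x)

def meme_small_step(x):
    # Meme 1: nudge one coordinate by a small amount; keep the move if better.
    y = x[:]
    y[random.randrange(len(y))] += random.uniform(-0.1, 0.1)
    return y if fitness(y) > fitness(x) else x

def meme_large_step(x):
    # Meme 2: the same move with a ten-times-larger step size.
    y = x[:]
    y[random.randrange(len(y))] += random.uniform(-1.0, 1.0)
    return y if fitness(y) > fitness(x) else x

memes = {"small": meme_small_step, "large": meme_large_step}
rewards = {name: 1.0 for name in memes}  # optimistic initial reward

def select_meme():
    # Roulette-wheel choice biased by accumulated reward: memes with a
    # higher reward have a greater chance of being replicated.
    total = sum(rewards.values())
    pick = random.uniform(0, total)
    acc = 0.0
    for name, weight in rewards.items():
        acc += weight
        if pick <= acc:
            return name
    return name

solution = [random.uniform(-5, 5) for _ in range(4)]
for _ in range(200):
    chosen = select_meme()
    before = fitness(solution)
    solution = memes[chosen](solution)
    # Credit assignment: reward the meme by the improvement it produced.
    rewards[chosen] += max(0.0, fitness(solution) - before)
print(rewards, fitness(solution))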
3rd generation
Co-evolution and self-generating MAs may be regarded as third-generation MA, where all three principles satisfying the definition of a basic evolving system are considered. In contrast to second-generation MA, which assumes that the memes to be used are known a priori, third-generation MA utilizes a rule-based local search to supplement candidate solutions within the evolutionary system, thus capturing regularly repeated features or patterns in the problem space.
Some design notes
The frequency and intensity of individual learning directly define the degree of evolution against individual learning in the MA search, for a given fixed, limited computational budget. Clearly, more intense individual learning provides a greater chance of convergence to local optima, but it limits the amount of evolution that may be expended without incurring excessive computational resources. Therefore, care should be taken when setting these two parameters to balance the available computational budget and achieve maximum search performance. When only a portion of the population undergoes learning, the issue of which subset of individuals to improve needs to be considered to maximize the utility of the MA search. Last but not least, each individual learning procedure/meme favors a different neighborhood structure, hence the need to decide which meme or memes to use for the optimization problem at hand.
How often should individual learning be applied?
One of the first issues pertinent to memetic algorithm design is how often individual learning should be applied, i.e., the individual learning frequency. In one case, the effect of individual learning frequency on MA search performance was considered, and various configurations of the individual learning frequency at different stages of the MA search were investigated. Conversely, it was shown elsewhere that it may be worthwhile to apply individual learning to every individual if the computational complexity of the individual learning is relatively low.
On which solutions should individual learning be used?
On the issue of selecting appropriate individuals among the EA population that should undergo individual learning, fitness-based and distribution-based strategies were studied for adapting the probability of applying individual learning to the population of chromosomes in continuous parametric search problems, with Land extending the work to combinatorial optimization problems. Bambha et al. introduced a simulated heating technique for systematically integrating parameterized individual learning into evolutionary algorithms to achieve maximum solution quality.
How long should individual learning be run?
Individual learning intensity is the amount of computational budget allocated to an iteration of individual learning, i.e., the maximum computational budget allowable for individual learning to expend on improving a single solution.
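The three design parameters discussed so far can be gathered into one short sketch of an MA learning phase. The fitness-based eligibility rule and the default values are assumptions for illustration; local_search could be any meme, such as the hill climber in the earlier sketch.

import random

def apply_individual_learning(population, fitness, local_search,
                              learn_prob=0.3, subset_frac=0.5, budget=20):
    # learn_prob  ~ individual learning frequency (how often learning fires),
    # subset_frac ~ which individuals learn (here a fitness-based rule: only
    #               the best fraction of the population is eligible),
    # budget      ~ individual learning intensity (iterations per solution).
    ranked = sorted(population, key=fitness, reverse=True)
    cutoff = int(len(ranked) * subset_frac)
    new_population = []
    for rank, individual in enumerate(ranked):
        if rank < cutoff and random.random() < learn_prob:
            individual = local_search(individual, steps=budget)
        new_population.append(individual)
    return new_population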
What individual learning method or meme should be used for a particular problem or individual?
In the context of continuous optimization, individual learning exists in the form of local heuristics or conventional exact enumerative methods. Examples of individual learning strategies include hill climbing, the Simplex method, Newton/quasi-Newton methods, interior point methods, the conjugate gradient method, line search, and other local heuristics. Note that most common individual learning methods are deterministic. In combinatorial optimization, on the other hand, individual learning methods commonly exist in the form of heuristics that are tailored to the specific problem of interest. Typical heuristic procedures and schemes include k-gene exchange, edge exchange, first-improvement, and many others.
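As an illustration of a problem-tailored combinatorial meme, a first-improvement 2-opt edge exchange for the travelling salesman problem might look as follows. The tour representation (a list of city indices) and the symmetric distance matrix dist are assumptions; plugged into the earlier sketches, this function would serve as the meme for a TSP instance.

def two_opt_first_improvement(tour, dist):
    # First-improvement 2-opt edge exchange for the TSP: scan pairs of edges
    # and apply the first exchange that shortens the tour, then stop.
    n = len(tour)
    for i in range(n - 1):
        # Skip the wrap-around edge when it would be adjacent to edge i.
        for j in range(i + 2, n - (1 if i == 0 else 0)):
            a, b = tour[i], tour[i + 1]
            c, d = tour[j], tour[(j + 1) % n]
            # Replacing edges (a, b) and (c, d) with (a, c) and (b, d)
            # reverses the segment between them.
            if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                return tour
    return tour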
Applications
Memetic algorithms have been successfully applied to a multitude of real-world problems. Although many people employ techniques closely related to memetic algorithms, alternative names such as hybrid genetic algorithms are also used; conversely, some authors describe their memetic techniques simply as genetic algorithms. Researchers have used memetic algorithms to tackle many classical NP-hard problems, among them: graph partitioning, multidimensional knapsack, travelling salesman problem, quadratic assignment problem, set cover problem, minimal graph coloring, maximum independent set problem, bin packing problem, and generalized assignment problem.
More recent applications include business analytics and data science, training of artificial neural networks, pattern recognition, robotic motion planning, beam orientation, circuit design, electric service restoration, medical expert systems, single-machine scheduling, automatic timetabling, manpower scheduling, nurse rostering optimisation, processor allocation, maintenance scheduling, the multidimensional knapsack problem, VLSI design, clustering of gene expression profiles, feature/gene selection, and multi-class, multi-objective feature selection.
Recent activities in memetic algorithms
- IEEE Workshop on Memetic Algorithms. Program Chairs: Jim Smith, University of the West of England, U.K.; Yew-Soon Ong, Nanyang Technological University, Singapore; Steven Gustafson, University of Nottingham, U.K.; Meng Hiot Lim, Nanyang Technological University, Singapore; Natalio Krasnogor, University of Nottingham, U.K.
- Memetic Computing Journal, first issue appeared in January 2009.
- IEEE World Congress on Computational Intelligence (WCCI 2008), Hong Kong, Special Session on Memetic Algorithms.
- Special Issue on Emerging Trends in Soft Computing: Memetic Algorithms, Soft Computing Journal, Completed & In Press, 2008.
- IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, Special Session on Memetic Algorithms.
- Memetic algorithms identified by Thomson Scientific's Essential Science Indicators as an Emerging Front Research Area.
- Special Issue on Memetic Algorithms, IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 37, No. 1, February 2007.
- Recent Advances in Memetic Algorithms, Series: Studies in Fuzziness and Soft Computing, Vol. 166, 2005.
- Special Issue on Memetic Algorithms, Evolutionary Computation, Fall 2004, Vol. 12, No. 3: v-vi.