Genetic Operations

We now know how individuals can be selected based on their fitness. The next part to consider is what happens to individuals once they are selected. One of three things can occur at this stage to produce a new individual for insertion in the population: they can be reproduced exactly, combined with another individual, or mutated randomly. Let us examine these three operations in further detail.


Reproduction involves making an exact copy of the individual and placing it into the population. The goal of reproduction is to increase the prevalence of individuals which have proven themselves fit to solve the problem. This gives good individuals a greater chance of being preserved, reducing the risk of losing fit individuals.
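As a minimal sketch, assuming individuals are GP trees represented as nested Python lists (e.g. ['+', 'x', 1]) — a representation chosen purely for illustration — reproduction is just a copy:

```python
import copy

def reproduction(individual):
    """Return an exact copy of the individual for reinsertion.

    A deep copy prevents the new individual from sharing mutable
    subtrees with the original, so later mutations of one cannot
    silently alter the other.
    """
    return copy.deepcopy(individual)
```

The deep copy matters because most GP operations build offspring by splicing tree references; sharing structure between two population members would let one operation corrupt both.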

Reproduction generally does not account for many of the genetic operations, because we would ideally like to see progress in the population, and progress cannot occur without change. If the proportion of reproduction operations is too high, a few decent individuals can quickly overwhelm the population, eliminating diversity and preventing further improvement. Reproduction is important, but too much of it can cripple a GP run. With that in mind, let us spend some more time discussing the genetic operations which actually bring about change.


Crossover involves two individuals, combining genetic material from both parents to produce a new individual. There are two common ways to determine which individuals undergo crossover. The first and more common method is to select both parents based on fitness, ensuring that the genetic material from both parents comes from fit individuals. An alternative, which does have merit, is to select the first individual based on fitness and the second at random. This tends to increase the diversity of the population while maintaining some selective pressure.
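Both selection schemes can be sketched in a few lines. This is an illustrative sketch, not a fixed API: it assumes fitness-proportionate selection for the fitness-based choice, though any selection method from the previous section would work equally well.

```python
import random

def select_parents(population, fitnesses, rng=random, second_random=False):
    """Pick two parents for crossover.

    The first parent is always chosen by fitness-proportionate
    selection. The second is chosen the same way by default, or
    uniformly at random when second_random is True (the
    diversity-preserving variant described above).
    """
    first = rng.choices(population, weights=fitnesses, k=1)[0]
    if second_random:
        second = rng.choice(population)
    else:
        second = rng.choices(population, weights=fitnesses, k=1)[0]
    return first, second
```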

Regardless of how two individuals are chosen, some crossover operation actually needs to combine genetic material from both parents. The most common crossover procedure is known as Subtree Crossover, shown below:

Subtree Crossover:

Subtree crossover is by far the most common genetic operation used in work on GP. It was described in Koza's 1992 book and has been used subsequently throughout the literature.

Subtree crossover operates on two parent individuals, referred to here as the father and the mother. A random subtree is selected from each parent. The father's subtree is replaced by the mother's subtree, and the resulting tree is the offspring.
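The procedure above can be sketched as follows, again assuming trees are nested Python lists (e.g. ['+', 'x', 1]); the helper names are illustrative, not from any particular GP library.

```python
import random

def all_points(tree, path=()):
    """Yield the path (a tuple of child indices) to every node."""
    yield path
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from all_points(child, path + (i,))

def get_subtree(tree, path):
    """Return the subtree reached by following path."""
    for i in path:
        tree = tree[i]
    return tree

def replace_subtree(tree, path, new_subtree):
    """Return a copy of tree with the subtree at path replaced."""
    if not path:
        return new_subtree
    copy = list(tree)
    copy[path[0]] = replace_subtree(copy[path[0]], path[1:], new_subtree)
    return copy

def subtree_crossover(father, mother, rng=random):
    """Replace a random subtree of father with a random subtree of mother."""
    father_point = rng.choice(list(all_points(father)))
    mother_point = rng.choice(list(all_points(mother)))
    donated = get_subtree(mother, mother_point)
    return replace_subtree(father, father_point, donated)
```

Note that replace_subtree copies along the path rather than mutating in place, so both parents survive unchanged and can participate in further operations.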

Subtree crossover is simple in principle, but still an extremely powerful genetic operation. The vast majority of the genetic operations occurring in most GP runs are subtree crossovers. In fact, many practitioners support using only subtree crossover. The choice is up to you.

There are several lesser-used crossover operations that appear throughout the literature. You could easily come up with your own crossover operations. For instance, consider the following crossover operation:

Arity-2 Combination:

This operation takes two parent trees and randomly chooses a function of arity 2 from the set of all available nodes. The two parents then become the child nodes of the randomly chosen function. This operation is very rarely, if ever, used in GP work, but I have found that it does confer modest performance increases when used sparingly. Further research should be done on the subject, however.
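Arity-2 combination is short enough to sketch in full. The nested-list tree representation and the particular function set here are assumptions for illustration:

```python
import random

# Assumed function set: every entry takes exactly two children.
ARITY_2_FUNCTIONS = ['+', '-', '*', '/']

def arity2_combination(parent_a, parent_b, rng=random):
    """Join two whole parent trees under a random arity-2 function.

    The offspring's root is the chosen function; the parents become
    its left and right children.
    """
    root = rng.choice(ARITY_2_FUNCTIONS)
    return [root, parent_a, parent_b]
```

One consequence of this design is that offspring are always deeper than either parent, which is one reason to use the operation sparingly.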

Crossover takes advantage of the fact that both individuals chosen to participate likely contain components of the ideal solution. By randomly combining subtrees from the individuals, new combinations of proven techniques can be discovered that may be more effective at solving the problem at hand.


Mutation operates on a single individual from the population. The changes made to the individual represent an addition of new genetic material to the gene pool. For this reason, mutation operations tend to preserve or increase the diversity of the population. They do this at the expense of tried-and-true genetic material. Because the new material is completely untested, mutation operations often end up decreasing the fitness of an individual. For this reason, they tend to be used less than crossover. They are, however, still important to a successful GP run.

Subtree Mutation:

Subtree mutation can be thought of as crossover between an individual from the population and a newly generated individual. A mutation point is chosen and the subtree connected to that point is removed. It is then replaced with a newly generated subtree. The new subtree is usually constructed via the grow method, with a maximum depth near the depth of the tree being replaced.
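A sketch of subtree mutation, assuming nested-list trees and an assumed function/terminal set; the helper names are illustrative:

```python
import random

FUNCTIONS = ['+', '-', '*']      # assumed function set, all arity 2
TERMINALS = ['x', 0, 1, 2]       # assumed terminal set

def grow(max_depth, rng=random):
    """Grow method: pick functions or terminals freely until
    max_depth forces a terminal."""
    if max_depth == 0 or rng.random() < 0.5:
        return rng.choice(TERMINALS)
    return [rng.choice(FUNCTIONS),
            grow(max_depth - 1, rng), grow(max_depth - 1, rng)]

def depth(tree):
    """Depth of a tree; a lone terminal has depth 0."""
    return 1 + max(map(depth, tree[1:])) if isinstance(tree, list) else 0

def all_points(tree, path=()):
    yield path
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from all_points(child, path + (i,))

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace_subtree(tree, path, new):
    if not path:
        return new
    copy = list(tree)
    copy[path[0]] = replace_subtree(copy[path[0]], path[1:], new)
    return copy

def subtree_mutation(tree, rng=random):
    """Replace a random subtree with a freshly grown one whose
    maximum depth is near the depth of the subtree it replaces."""
    point = rng.choice(list(all_points(tree)))
    old = get_subtree(tree, point)
    return replace_subtree(tree, point, grow(depth(old) + 1, rng))
```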

Node Replacement Mutation:

Node replacement mutation picks a mutation point from the individual. A new node with the same signature (arity, return type, etc.) is selected from the node factory and the node at the mutation point is replaced with the new node. Node replacement is quite simple, and might change something like (2 + x) to (2 - x).
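Node replacement can be sketched as below. The node sets here, grouped by arity, are an assumed stand-in for a full node factory with richer signatures:

```python
import random

# Nodes grouped by signature; the only signatures in this sketch
# are arity 2 (functions) and arity 0 (terminals).
NODES_BY_ARITY = {2: ['+', '-', '*'], 0: ['x', 0, 1, 2]}

def all_points(tree, path=()):
    yield path
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from all_points(child, path + (i,))

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace_subtree(tree, path, new):
    if not path:
        return new
    copy = list(tree)
    copy[path[0]] = replace_subtree(copy[path[0]], path[1:], new)
    return copy

def node_replacement(tree, rng=random):
    """Swap the node at a random point for another node with the
    same arity, keeping any children in place."""
    point = rng.choice(list(all_points(tree)))
    node = get_subtree(tree, point)
    if isinstance(node, list):                        # function node
        new_f = rng.choice(NODES_BY_ARITY[2])
        return replace_subtree(tree, point, [new_f] + node[1:])
    return replace_subtree(tree, point, rng.choice(NODES_BY_ARITY[0]))
```

Because the replacement has the same arity, the tree's shape never changes; only a single label does, e.g. ['+', 2, 'x'] may become ['-', 2, 'x'].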

Constant Mutation:

Constant mutation works by selecting some node with an associated constant value and changing that value. The actual behavior and effect of constant mutation varies greatly from problem to problem, and only problems that use ephemeral random constants can take advantage of it. Still, when applicable, constant mutation can be very powerful for fine-tuning solutions. Constant mutation might change something like (2 * x) to (2.5 * x).
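A sketch of constant mutation on nested-list trees. Perturbing the constant with Gaussian noise is one common choice, assumed here for illustration; other schemes (for example, replacing the constant with a fresh random value) work too:

```python
import random

def constant_mutation(tree, rng=random, sigma=1.0):
    """Add Gaussian noise to one randomly chosen numeric constant."""
    def constant_points(tree, path=()):
        # Paths to numeric leaves only; functions and variables skip.
        if isinstance(tree, list):
            for i, child in enumerate(tree[1:], start=1):
                yield from constant_points(child, path + (i,))
        elif isinstance(tree, (int, float)):
            yield path
    points = list(constant_points(tree))
    if not points:
        return tree                     # no constants to mutate
    point = rng.choice(points)
    def perturb(tree, path):
        if not path:
            return tree + rng.gauss(0, sigma)
        copy = list(tree)
        copy[path[0]] = perturb(copy[path[0]], path[1:])
        return copy
    return perturb(tree, point)
```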

Shrink Mutation:

Shrink mutation was created with the intent of reducing the complexity of populations. Shrink works by selecting a random mutation point and removing the subtree associated with that point. A terminal is then selected from the node factory and inserted into the empty space in the tree. Shrink mutation proves effective in reducing complexity, particularly by removing large subtrees of ineffective code. Shrink might change something like (0*3 + x) to (0 + x).
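A sketch of shrink mutation on nested-list trees; the terminal set is an assumption standing in for the node factory:

```python
import random

TERMINALS = ['x', 0, 1, 2]   # assumed terminal set

def all_points(tree, path=()):
    yield path
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from all_points(child, path + (i,))

def replace_subtree(tree, path, new):
    if not path:
        return new
    copy = list(tree)
    copy[path[0]] = replace_subtree(copy[path[0]], path[1:], new)
    return copy

def shrink_mutation(tree, rng=random):
    """Replace a random subtree with a single terminal, so the
    result is never larger than the original."""
    if not isinstance(tree, list):
        return tree                     # already a lone terminal
    point = rng.choice(list(all_points(tree)))
    return replace_subtree(tree, point, rng.choice(TERMINALS))
```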

Hoist Mutation:

Hoist mutation was also created to reduce complexity. Hoist works by replacing an individual with a randomly selected subtree from the individual. Hoist is also effective in removing large segments of ineffective code. For instance, you could imagine hoist changing something like (3*x + 0) to (3*x).
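Hoist mutation is the simplest of the lot to sketch, again on nested-list trees:

```python
import random

def all_subtrees(tree):
    """Yield every subtree, including the whole tree itself."""
    yield tree
    if isinstance(tree, list):
        for child in tree[1:]:
            yield from all_subtrees(child)

def hoist_mutation(tree, rng=random):
    """Replace the whole individual with one of its own subtrees.

    The result can never be larger than the original, which is why
    hoist is effective at cutting away ineffective outer code.
    """
    return rng.choice(list(all_subtrees(tree)))
```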

A quick note on operation points: throughout this section, we discussed randomly selecting subtrees or points of operation. These points are generally not selected with a uniform distribution. Note that in many trees, the number of terminals is greater than the number of functions, and that little change actually occurs when the selected point is a terminal. For this reason, there is generally a parameter, the function selection probability (often 0.9), which controls the probability that a function is forced to be selected as the operation point. This promotes more significant genetic manipulations.
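Biased point selection can be sketched as follows, assuming nested-list trees where internal nodes are functions and leaves are terminals:

```python
import random

def pick_point(tree, rng=random, p_function=0.9):
    """Pick an operation point, forcing a function node with
    probability p_function (0.9 is the typical value).

    Falls back to whatever kind of node exists when the tree has
    no function nodes (i.e. it is a lone terminal).
    """
    def paths(tree, path=()):
        yield path, isinstance(tree, list)
        if isinstance(tree, list):
            for i, child in enumerate(tree[1:], start=1):
                yield from paths(child, path + (i,))
    points = list(paths(tree))
    functions = [p for p, is_func in points if is_func]
    terminals = [p for p, is_func in points if not is_func]
    if functions and rng.random() < p_function:
        return rng.choice(functions)
    return rng.choice(terminals)
```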

The reproduction operation increases the likelihood that well-fit individuals will be selected for future genetic operations. The crossover operations combine historically successful genetic code in new combinations. And the mutation operations introduce new genetic material to the gene pool. No single operation is more effective in all situations; it is a complex interplay among them that accounts for the change in the most successful GP runs.

To this point in the guide, we have discussed several aspects of GP in isolation. Now it is time to revisit and expand upon the GP flowchart and really figure out how GP does what it does. Time to put it all together.