Genetic Algorithm Optimisation

Optimisation strategies rely on determining the "best design" according to some pre-specified criteria within a potentially large parameter space. Deterministic methods employ a search strategy heading in the direction of improved design performance from a point within the space. This point is usually a good design arrived at through an intuitive or empirical design approach.

The various deterministic methods differ mainly in how they construct the search direction vector in the parameter space. In these methods the iterative updating of the trial solution vector requires not only the value of the objective function (the design quality criterion), but also the derivatives of this function with respect to the design parameters.
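As a point of contrast with the GA approach described below, the following is a minimal sketch of such a gradient-based update using finite-difference derivative estimates; the objective function, step size and target values are illustrative assumptions rather than anything drawn from a particular design problem.

    import numpy as np

    def gradient_step(f, x, step=0.01, eps=1e-6):
        """One iteration of a simple gradient-based search: the search
        direction is built from finite-difference estimates of the
        derivatives of the objective f with respect to each parameter."""
        grad = np.zeros_like(x)
        for i in range(len(x)):
            dx = np.zeros_like(x)
            dx[i] = eps
            grad[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
        # Step the trial solution in the direction of improved performance
        # (minimisation here, so we move against the gradient).
        return x - step * grad

    # Illustrative smooth, uni-modal objective: distance from a target design.
    f = lambda x: np.sum((x - np.array([0.3, 0.7])) ** 2)
    x = np.array([0.0, 0.0])          # the initial 'good' seed design
    for _ in range(500):
        x = gradient_step(f, x)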

These methods perform well for uni-modal optimisation problems with easily characterisable objective functions and smoothly varying optimisation criteria. They will, however, only search for the closest extremum of the objective function, which means that the initial seed design has to be somewhere near the global optimum. They also frequently fail for stiff problems (low-gradient objective functions), problems involving discrete design elements, and multi-parametric design tasks. Many problems in electromagnetics are of this nature, having objective functions that are highly non-linear, stiff, multi-extremal and non-differentiable.

GA methods are stochastic in nature and differ from deterministic methods in a number of important respects. Firstly, the GA operates not on a single design but on a large number of designs at once, which randomly cover the entirety of the design space. This means that the absolute convergence of the GA is not guaranteed, and convergence is slower than for local searches, but the method is far more robust and much less likely to converge to a local optimum. In addition, the random generation of designs within the space of all designs means that the GA can arrive at counter-intuitive solutions. This is unlikely with gradient-based methods, which are started from a 'good' design.

Secondly, GAs use the value of the objective function to estimate fitness and do not rely on the calculation of gradients. This is important for difficult optimisation tasks where derivatives are unavailable or unreliable.

Thirdly, GAs can operate on either binary-coded or continuous variables. This makes them ideal for discrete problems (for example sparse array design), and they can even be used to select suitable constructional elements from a database by binary-encoding an element identifier. Continuous variables can either be discretised (represented by a binary number to some chosen resolution) or handled directly by a continuous-parameter implementation of the GA. These methods have been applied to a wide range of design problems.

The basic implementation of the GA is as follows. The parameters governing the device behaviour are defined within their minimum and maximum limits. For binary-coded algorithms these variables are digitised into a binary number to a suitable resolution. The device design can then be characterised by a single binary number which is the concatenation of each of the device parameters. For example, say a device has three parameters, each encoded to 4-bit resolution. All possible device designs can then be represented by a 12-bit binary number, 4 bits for each parameter.
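As a concrete illustration of this encoding, the sketch below quantises three parameters to 4 bits each and concatenates them into a 12-bit chromosome; the minimum and maximum limits used are arbitrary placeholders, not values from any particular device.

    BITS = 4                                           # resolution per parameter
    LIMITS = [(0.0, 1.0), (0.0, 10.0), (-5.0, 5.0)]    # assumed min/max limits

    def encode(params):
        """Quantise each parameter to BITS bits and concatenate the genes
        into a single 12-bit chromosome (a string of '0' and '1')."""
        chromosome = ""
        for value, (lo, hi) in zip(params, LIMITS):
            level = round((value - lo) / (hi - lo) * (2**BITS - 1))
            chromosome += format(level, f"0{BITS}b")
        return chromosome

    def decode(chromosome):
        """Recover approximate parameter values from the chromosome."""
        params = []
        for i, (lo, hi) in enumerate(LIMITS):
            level = int(chromosome[i * BITS:(i + 1) * BITS], 2)
            params.append(lo + level / (2**BITS - 1) * (hi - lo))
        return params

    c = encode([0.25, 6.0, -1.0])   # a 12-bit string: one 4-bit gene per parameter
    print(c, decode(c))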

Each 4-bit parameter specification is termed a gene, and the 12-bit device specification, made up of the 3 genes, is termed a chromosome. A population of device designs can then be generated either randomly, by the random generation of a number of chromosomes, or with some predetermination of 'good' designs by seeding the initial population with genes expected to perform well. The objective function, or quality measure, is then calculated for each member of this initial population, and the members are ranked according to their performance as measured by the objective function.
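A minimal sketch of this initialisation and ranking step is given below. The population size is an assumed value, and the objective function is a deliberately trivial placeholder (counting 1 bits); in a real design task it would decode the chromosome and evaluate the resulting device, for instance through an electromagnetic simulation.

    import random

    POP_SIZE = 50
    CHROMOSOME_BITS = 12        # 3 genes of 4 bits, as in the example above

    def random_population():
        """Random initial population; known 'good' seed designs could be
        inserted here instead of (or as well as) random chromosomes."""
        return ["".join(random.choice("01") for _ in range(CHROMOSOME_BITS))
                for _ in range(POP_SIZE)]

    def objective(chromosome):
        """Placeholder quality measure. In practice this would decode the
        chromosome and score the corresponding device design."""
        return chromosome.count("1")

    population = random_population()
    # Rank members according to performance as measured by the objective function.
    ranked = sorted(population, key=objective, reverse=True)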

Three GA operators are then applied sequentially to produce a new generation of chromosomes (a code sketch of all three operators follows Step 3):

Step 1 Selection. The fittest chromosomes are selected preferentially, on the basis of their objective function values, to go forward into the next generation. This operator is responsible for the convergence of the algorithm as it eliminates weak genetic material (bad designs).

Step 2 Crossover. The fittest chromosomes which survive selection pair and mate in a crossover operator. Pairs of fit chromosomes are chosen randomly and split at a random location in their genetic code. The left segment of the first chromosome is attached to the right segment of the second chromosome and vice versa, giving two new chromosomes. These chromosomes inherit the good genetic material of their predecessors. The crossover operator is the main search tool of the GA, since it combines genetic information known to be useful into new designs.

Step 3 Mutation. With a low probability, chromosomes in the new generation are selected for mutation. In this operator a bit of the chromosome is selected, again randomly, and is flipped from 1 to 0, or from 0 to 1, depending on its initial state. This operator maintains genetic diversity in a mature population and prevents the GA from getting stuck in a local optimum.
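Continuing the sketch above, the three operators might be implemented as follows. Tournament selection is used here purely as one common way of selecting preferentially by fitness, and the crossover and mutation probabilities are assumed values, not ones prescribed in the text.

    CROSSOVER_RATE = 0.8        # assumed probabilities
    MUTATION_RATE = 0.02

    def select(population, k=2):
        """Step 1: preferential selection of fit chromosomes
        (a simple k-way tournament)."""
        return max(random.sample(population, k), key=objective)

    def crossover(parent_a, parent_b):
        """Step 2: single-point crossover. Split both parents at a random
        location and exchange the segments to give two new chromosomes."""
        if random.random() > CROSSOVER_RATE:
            return parent_a, parent_b
        point = random.randrange(1, len(parent_a))
        return (parent_a[:point] + parent_b[point:],
                parent_b[:point] + parent_a[point:])

    def mutate(chromosome):
        """Step 3: with low probability, flip individual bits (1 -> 0 or 0 -> 1)."""
        return "".join(("1" if bit == "0" else "0")
                       if random.random() < MUTATION_RATE else bit
                       for bit in chromosome)

    def next_generation(population):
        """Apply selection, crossover and mutation to breed a new generation."""
        children = []
        while len(children) < len(population):
            a, b = select(population), select(population)
            for child in crossover(a, b):
                children.append(mutate(child))
        return children[:len(population)]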

These processes are repeated iteratively until the objective function of the population is found not to vary with successive generations. At this point the majority of genetic material in the population will be very similar, disregarding mutations, with all chromosomes close to optimal as defined by the objective function.
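Putting the previous sketches together, the overall iteration and this stopping test might look as follows; the convergence tolerance and the cap on the number of generations are assumed values, and the test tracks the best objective value in the population as one simple way of detecting that the population has stopped improving.

    def run_ga(max_generations=200, tol=1e-9):
        """Breed successive generations until the best objective value in
        the population stops varying from one generation to the next."""
        population = random_population()
        best_prev = objective(max(population, key=objective))
        for _ in range(max_generations):
            population = next_generation(population)
            best = objective(max(population, key=objective))
            if abs(best - best_prev) <= tol:
                break               # population has effectively converged
            best_prev = best
        return max(population, key=objective)

    best_design = run_ga()
    print(best_design, objective(best_design))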