****optimize evolution

Description

This section describes the options available when the evolution type optimizer engine is used. In that case, an evolutionary (or genetic) algorithm is used for the optimization. Evolution methods search the space of variables using random jumps. Even though such jumps are oriented so as to be more efficient than a completely random search, evolution still needs around 1000 calculations of the function to get near the optimum. The advantage of evolution is that it can escape local optima in the search space.
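
As an illustration only, the Python sketch below shows the kind of search loop described above: a population of candidate points improved by recombination and random mutation within a fixed evaluation budget. It is a toy sketch, not Z-set's evolution engine; the function evolve and its arguments are hypothetical, with pop_size, budget and prob_muta merely echoing the parameters documented further down.

```python
import random

def evolve(f, bounds, pop_size=50, budget=1000, prob_muta=0.2):
    """Minimize f over the box 'bounds' (a list of (low, high) pairs)."""
    def random_point():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    pop = [random_point() for _ in range(pop_size)]
    scores = [f(x) for x in pop]
    evals = pop_size

    while evals < budget:
        # "Oriented" random jump: recombine two existing individuals ...
        a, b = random.sample(range(pop_size), 2)
        child = [(xa + xb) / 2.0 for xa, xb in zip(pop[a], pop[b])]
        # ... then mutate each coordinate with probability prob_muta.
        for i, (lo, hi) in enumerate(bounds):
            if random.random() < prob_muta:
                child[i] = random.uniform(lo, hi)
        score = f(child)
        evals += 1
        # Steady-state replacement: the child replaces the current worst
        # individual whenever it improves on it.
        worst = max(range(pop_size), key=lambda j: scores[j])
        if score < scores[worst]:
            pop[worst], scores[worst] = child, score

    best = min(range(pop_size), key=lambda j: scores[j])
    return pop[best], scores[best]
```

Because no gradient is ever computed, a loop of this kind is insensitive to local features of the function, which is what allows it to escape local optima at the price of many function evaluations.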

A useful application of the evolution method is to generate a starting point for other optimization methods. This is because the evolution method is not sensitive to the convexity of the problem, which makes it possible to tackle optimization problems that are numerically ill-conditioned, that diverge, that have discontinuous functions, …

evolution is a steady-state genetic algorithm that handles continuous variables. It can handle constraints using a penalization scheme (cf. the introduction of the optimization in Z-set, penalty). There is a rule of thumb when changing the parameters of evolution: some parameters make the search more random, i.e. more robust but also less efficient, while others increase the speed of convergence but simultaneously increase the probability of getting trapped far from the global optimum. The tradeoff is clear, and the default parameters are meant to be a reasonable compromise between randomness and efficiency. Based on their experience, users may decide to emphasize one aspect or the other of evolution’s search: population_size and prob_muta should be increased to increase randomness.
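
To make the penalization idea concrete, here is a minimal Python sketch in which violated constraints are added to the objective, each weighted by its own multiplier. The exact scheme used by evolution is the one described in the introduction of the optimization documentation; the sign convention below (g_i(x) <= 0 meaning the constraint is satisfied) and the helper name penalized_objective are assumptions of this sketch.

```python
def penalized_objective(f, constraints, lag_mult):
    """Return a fitness combining f with one penalty term per constraint.

    constraints: list of functions g_i, with g_i(x) <= 0 meaning satisfied
    (an assumption of this sketch); lag_mult: one weight per constraint.
    """
    def fitness(x):
        violation = sum(lam * max(g(x), 0.0)
                        for lam, g in zip(lag_mult, constraints))
        return f(x) + violation
    return fitness

# Example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1,
# written as g(x) = 1 - x0 - x1 <= 0, with a unit multiplier.
fit = penalized_objective(lambda x: x[0] ** 2 + x[1] ** 2,
                          [lambda x: 1.0 - x[0] - x[1]],
                          lag_mult=[1.0])
```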

The following files are generated by a run of evolution:

case.evo.log

contains various data about the convergence of the run.

case.evo.gnut

is an executable file that will automatically plot (with gnuplot) the main results found in case.evo.log.

case.evo.best

contains the best solution found at the end of the run.

case.evo.gid

contains “good ideas”, that is to say good solutions that are different from each other, classified into two groups: feasible solutions, which satisfy the constraints, and infeasible solutions.

Parameters

Parameters of evolution are given in the **convergence section. They may be taken from the following:

population_size

Sets the population size of the algorithm, i.e., the number of points that are processed by the algorithm. Default value is 50.

prob_Xover

Sets the probability of crossover (a number between 0. and 1.). Default value is 0.8.

prob_muta

Sets the probability of mutation (a number between 0. and 1.). Default value is 0.2.

max_nb_analyse

Sets the maximum number of function calculations: the search is stopped after max_nb_analyse calculations of the function. Default value is 700.

max_no_progress

Sets the maximum number of function calculations without improvement: the search is stopped after max_no_progress calculations of the function without improvement. The default value is very large, which makes this criterion inactive.

no_records

Sets the number of outputs recorded in the file case.evo.log. This is the number of points that will then be plotted by the command case.evo.gnut. Default value is 30.

no_feas_pts_res, no_ifeas_pts_res

evolution also collects points that might not be the best overall, but might be the best of their kind. Those points are to be found at the end of the run in the file case.evo.gid. They are divided into two groups, the feasible and the infeasible points. no_feas_pts_res sets the number of feasible points that are collected, and no_ifeas_pts_res the number of infeasible points. The algorithm that collects those points makes sure that they are different, based on a distance metric that can be set with the command neighborhood_dist. If two points are separated by a distance smaller than neighborhood_dist, they are considered to be close and only one of them is kept in case.evo.gid. The default value of neighborhood_dist is $\frac{1}{20}\sqrt{\sum_{i=0}^{n}\left(x_{i,\max}-x_{i,\min}\right)^{2}}$.
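
A minimal sketch of this collection mechanism is given below, assuming the candidate points are supplied best-first and that the default distance is one twentieth of the diagonal of the variable bounds, as in the formula above; the helper names are hypothetical.

```python
import math

def default_neighborhood_dist(bounds):
    """1/20 of the diagonal of the variable bounds (see the formula above)."""
    return math.sqrt(sum((hi - lo) ** 2 for lo, hi in bounds)) / 20.0

def collect_distinct(points, max_points, dist):
    """Keep at most max_points points that are pairwise farther apart than dist.

    'points' is assumed to be sorted from best to worst, so the best
    representative of each neighborhood is the one that is kept.
    """
    kept = []
    for x in points:
        if len(kept) == max_points:
            break
        if all(math.dist(x, y) > dist for y in kept):
            kept.append(x)
    return kept
```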

lag_mult

$v_1 \ldots v_m$, where $m$ is the number of constraints. This command sets the Lagrange multipliers at the beginning of the run. Lagrange multipliers apply to each constraint separately, and they are used here as penalties. If a constraint is difficult to satisfy, the corresponding Lagrange multiplier should be increased. The Lagrange multipliers are automatically changed by the optimizer, and they will eventually represent the real Lagrange multipliers after convergence. Default values are 1.

lag_mult_step

Sets the amount of change of the Lagrange multipliers after each evaluation of the function. The Lagrange multipliers are updated at each function evaluation according to

$$\lambda_i^{k+1} = \lambda_i^{k} + \texttt{lag\_mult\_step} \times g_i, \qquad i = 1, \ldots, m$$
$$\lambda_i^{k+1} = \max\!\left(\lambda_i^{k+1},\, 0\right)$$

where $\lambda_i^{k}$ is the Lagrange multiplier of the $i$-th constraint at the $k$-th iteration, and $g_i$ is the value of the $i$-th constraint of the best solution known so far. The larger lag_mult_step, the quicker the Lagrange multipliers change. Default value is 0.1.
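
For reference, the update above translates directly into a small helper; the function name update_lag_mult is hypothetical, and the sign convention (g_i > 0 for a violated constraint) is the one implied by the growth of the multipliers.

```python
def update_lag_mult(lag_mult, g_best, lag_mult_step=0.1):
    """One update of the penalties after a function evaluation.

    g_best: values g_i of the constraints at the best solution known so far
    (g_i > 0 meaning the constraint is violated); the multipliers grow while
    a constraint remains violated and are never allowed to become negative.
    """
    return [max(lam + lag_mult_step * g, 0.0)
            for lam, g in zip(lag_mult, g_best)]
```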