****optimize#
Description#
This keyword marks the start of an optimization command sequence, which will be terminated by the next ****-level command.
Syntax#
The syntax is as follows:
****optimize type
    ***files              …
    ***values             …
    ***shell              …
    ***zrun               …
    ***compare            …
  [ ***enforce            … ]
  [ ***function           … ]
  [ ***convergence        … ]
  [ ***evaluate           … ]
  [ ***constraint         … ]
  [ ***linear_constraint  … ]
****return
Different optimizer types are available by substituting the appropriate keyword for type. The available methods are:

- the Nelder-Mead (aka simplex) method (slow, but it gets there)
- the Levenberg-Marquardt method (the classic for identification)
- an augmented Lagrangian dual method for constrained optimization
- an evolutionary stochastic method (slow, but escapes local minima)
Anatomy of an optimization#
All the optimization methods rely on a centralized method of evaluating the error function and modifying the simulation using the variables being optimized. Error functions are computed as a weighted sum of reference-simulation comparisons specified using the ***compare commands. The variables of the optimization are specified in the ***values section, and must correspond to one or more entries in the "optimization" files listed in ***files. These optimization files are in fact templates for real files, which alter the simulation results of the commands listed in the ***shell or ***zrun entries. The optimizer thus constructs new input files for the simulations, with the current optimization variables inserted in the appropriate locations.
The figure below shows a basic diagram of the interaction between the optimizer and any number of sub-simulations which generate the data to be compared with experimental results.
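To make this data flow concrete, the following Python sketch mimics a single error-function evaluation, assuming the ?name placeholder convention and the Zrun command of the example at the end of this page. The helper names (build_input, error_function) and the simple weighted sum are illustrative assumptions, not the optimizer's actual implementation.

import re
import subprocess

def build_input(template_path, output_path, values):
    # Substitute each ?name placeholder in a template file (***files)
    # with the current value of the corresponding ***values variable.
    text = open(template_path).read()
    for name, value in values.items():
        # The \b boundary prevents ?K from also matching inside ?K2.
        text = re.sub(r"\?" + re.escape(name) + r"\b", str(value), text)
    open(output_path, "w").write(text)

def error_function(values, comparisons, weights):
    # One evaluation: rebuild the input file from its template, rerun the
    # simulation (***zrun), then accumulate the weighted reference-simulation
    # differences declared under ***compare. Each entry of `comparisons` is
    # assumed to be a callable returning one scalar residual.
    build_input("in738.tmpl", "in738", values)
    subprocess.run(["Zrun", "-Q", "-S", "simulate"], check=True)
    return sum(w * compare() for w, compare in zip(weights, comparisons))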
In general (hopefully) the number of simulation-experiment comparisons will make the problem over-constrained; it is then the optimizer's job to find the best fit. If certain experiments are "important", the user can apply weights to the comparison functions to make them more dominant in the function evaluation. Remember that the best, most robust results will be found with many diverse experimental conditions. Repeated experiments which merely provide a slight variation in response should be avoided, while complex experiments with many interacting effects should be emphasized.
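As a point of reference, a weighted least-squares objective of the kind described above can be written as \( F(\mathbf{x}) = \sum_i w_i \sum_j \bigl( y^{\mathrm{sim}}_{ij}(\mathbf{x}) - y^{\mathrm{exp}}_{ij} \bigr)^2 \), where \( \mathbf{x} \) collects the ***values variables, \( w_i \) is the weight of comparison \( i \), and the inner sum runs over the data points of that simulation-experiment pair. The exact norm and normalization actually applied depend on the ***function and ***compare options chosen.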
Recommendations#
Some comments follow which are purely the personal preferences of the author, but which can help ahead of time with the management of typical real-life optimization projects. A fact of life is that optimization involves many trials of different models and simulations, and overall rapid iteration towards the solution, both by the code and by the user. It is also relatively easy to find oneself in a situation where, while lightly "tweaking" some parameters, the last best solution gets lost. It is really important to maintain control of the changes in input files, models, coefficients, etc. in order to safeguard against losing valuable time. Some hints are therefore:
Use a hierarchical structure for the different stages of the analysis, and keep the files in each directory specific to one task. That is, if you want to modify the initial values dramatically, or continue from a previous solution, make a new directory and copy the files into it before running. A typical directory structure could be:
BasicIdentification/
    exp_files/
        ..
    trial1/
        Makefile
        simulation.inp
        levenberg.inp
        simplex.inp
    trial2/
        Makefile
        simulation.inp
        levenberg.inp
        simplex.inp
    continue2/
        Makefile
        simulation.inp
        levenberg.inp
and so on.
A Makefile can ease the work of cleaning out a lot of secondary files, but more importantly it is a very good way to clean those files without the risk of deleting important files by mistake. I (RF) always set up a makefile more or less as follows:
all :

clean :
	Zclean -a
	rm -f *.best
Remember to use tabs for the command lines of each target. To clean the directory, one then issues:
make clean
It is helpful to keep the experiment (reference) files in a centralized location, as shown above, but instead of using absolute path names, use relative ones. In trial2, for example, an experiment file can be accessed as ../exp_files/exp1.dat (this works the same on Win32 platforms). In this way the project can be moved without everything breaking. To start a whole new iteration, one then just copies the BasicIdentification directory elsewhere and the work starts.
Example#
This example is from EXAMPLES-MAT/Identify-in738/STEP1. First is the template file for the material definition, in738.tmpl:
*coefficients
young 149650.0000
n ?n
K ?K
n2 ?n2
K2 ?K2
C1 ?C1
D1 ?D1
R0 80.0
***return
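In this template, each ?name entry (for example ?K or ?n) is a placeholder which the optimizer replaces with the current value of the corresponding variable declared under ***values. The optimization input file itself is then: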
****optimize levenberg_marquardt
***convergence
perturb 0.06
lambda0 5.0
iter 60
***zrun Zrun -Q -S simulate
***files in738
***values
**auto_init_from_file levenberg.best
K 700. min 600. max 5000.
n 4.47 min 1.2 max 20.
K2 500. min 200. max 5000.
n2 35. min 3. max 75.
C1 290. min 3.0 max 800.
D1 600. min 5. max 5000.
***compare
g_file_file tenx2.test 1 2 ../EXP/tenx2.exp 1 4
g_file_file tenx3.test 1 2 ../EXP/tenx3.exp 1 4
g_file_file tx6x2x6.test 1 2 ../EXP/tx6x2x6.exp 1 4
g_file_file tx6x3x4.test 1 2 ../EXP/tx6x3x4.exp 1 4
g_file_file cr410.test 1 2 ../EXP/cr410.exp 1 3
g_file_file frlx3.test 1 2 ../EXP/frlx3.exp 1 4
****return
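Each ***compare line above pairs two columns of a simulation output file (here, columns 1 and 2 of the .test files, presumably abscissa and ordinate) with two columns of an experiment file. Conceptually, such a comparison amounts to interpolating one curve onto the abscissa of the other and measuring the difference, roughly as in the Python sketch below; the interpolation scheme and norm used by the actual g_file_file comparator may differ, and the function and its 1-based column arguments are only an illustration.

import numpy as np

def curve_difference(sim_file, sim_xcol, sim_ycol, exp_file, exp_xcol, exp_ycol):
    # Load the two curves; np.loadtxt assumes plain numeric columns,
    # and the column indices are 1-based as in the ***compare lines.
    sim = np.loadtxt(sim_file)
    exp = np.loadtxt(exp_file)
    x_sim, y_sim = sim[:, sim_xcol - 1], sim[:, sim_ycol - 1]
    x_exp, y_exp = exp[:, exp_xcol - 1], exp[:, exp_ycol - 1]
    # Interpolate the simulated response onto the experimental abscissa
    # (np.interp expects an increasing x_sim) and return a simple
    # sum-of-squares difference.
    y_sim_on_exp = np.interp(x_exp, x_sim, y_sim)
    return float(np.sum((y_sim_on_exp - y_exp) ** 2))

# e.g. curve_difference("tenx2.test", 1, 2, "../EXP/tenx2.exp", 1, 4)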