Introduction

Description

Version 8.0 of Z-set introduces a generalized optimization module which may be used to identify material coefficients, optimize geometry, etc. The optimizer modifies tokenized parameters in different user-specified ASCII files in order to minimize the combined error of a variety of tests. Each test will generally consist of one or more shell scripts or sub-processes used for “simulation” of the test, and a method of comparison with experimental data, reference data, or an optimal condition. No explicit assumption is made about the nature of the simulation method, so these simulations may be run using the Z-set programs internally or by other means. The real strength of this approach is that one can obtain the best comprehensive approximation to many data sets, even if they are theoretically over-constrained.

The optimization problem and its solution method are defined in an input text file, similar to those for the other main program types. The optimizer uses a template file to define the parameters to be modified.
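To fix ideas, the loop performed at each optimizer evaluation can be sketched in Python as below. The token syntax, the file names (problem.templ, run_test.sh, simulation.dat, reference.dat) and the comparison data are hypothetical placeholders, not actual Z-set conventions; the sketch only illustrates the template/simulation/comparison cycle described above.

```python
import subprocess
import numpy as np

def evaluate(params, template="problem.templ", target="problem.inp"):
    """One optimizer evaluation: substitute parameters, simulate, compare.

    Hypothetical sketch: token names, file names and comparison data
    are placeholders, not the actual Z-set conventions.
    """
    # 1. Replace each token in the template ASCII file by its current value.
    text = open(template).read()
    for token, value in params.items():
        text = text.replace(token, repr(value))
    open(target, "w").write(text)

    # 2. Run the "simulation" (any shell script or sub-process will do).
    subprocess.run(["./run_test.sh", target], check=True)

    # 3. Compare simulation output with the reference (experimental) data.
    sim = np.loadtxt("simulation.dat")
    ref = np.loadtxt("reference.dat")
    return sim - ref  # residual vector returned to the optimizer
```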

Basic Concepts and Notation

Optimization problems are formulated in Z-set in the standard form,

(499)\[\begin{split}\begin{aligned} \mbox{minimize} \qquad & {\cal F}(x) & = \frac{1}{2}\sum_{i=1}^N w_i (f(x,t_i)- y(t_i))^2 \\ & & = \frac{1}{2}\sum_{i=1}^N w_i (f_i(x)- y_i)^2 \\ \mbox{by changing} \qquad & x \in S \\ \mbox{such that} \qquad & g_j(x) \le 0, \quad j=1,\ldots,n_g \end{aligned}\end{split}\]

\({\cal F}\) is the scalar cost function, \(g\) is a vector of constraints, \(x\) is the set of parameters to be optimized, \(t_i\) tags the experiment (experiment number, time, …), and \(w_i\) is the weight associated with experiment \(i\). A point \(x\) (set of parameters) such that \(g_j(x) \le 0~,~j=1,n_g\) is called feasible (otherwise it is infeasible). Z-set’s optimizers are primarily meant for parameter identification of material behaviors. Therefore, the default cost function \({\cal F}\) is the least-square distance between experiments and simulations, and constraints \(g\) are used to bound parameters and/or to relate them to each other. Of course, other general types of optimization problems can be addressed.
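As a concrete illustration of the default least-square cost in (499), a minimal Python sketch follows; the callable f, the synthetic data and the uniform weights are illustrative assumptions.

```python
import numpy as np

def cost(x, f, y, w):
    """Weighted least-squares cost F(x) = 1/2 * sum_i w_i (f_i(x) - y_i)^2."""
    r = f(x) - y                    # residuals f_i(x) - y_i
    return 0.5 * np.sum(w * r**2)

# Example: a linear model f(x, t) = x[0] + x[1]*t against synthetic data.
t = np.linspace(0.0, 1.0, 5)        # t_i tags the experiments
y = 1.0 + 2.0 * t                   # reference values y_i
w = np.ones_like(t)                 # weights w_i
print(cost(np.array([1.0, 2.0]), lambda x: x[0] + x[1] * t, y, w))  # -> 0.0
```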

There are two main categories of optimization methods: local and global optimizers. Global optimizers seek \(x^*\) such that

(500)\[{\cal F}(x^*) < {\cal F}(x), \quad \forall x \in S\]

Local optimizers look for \(x^*\) such that

(501)\[{\cal F}(x^*) < {\cal F}(x), \quad \forall x ~\mbox{such that}~ \|x - x^*\| < \epsilon\]

Typically, local methods iterate from one set of variables \(x\) in the search space \(S\) to another based on information gathered in a neighborhood of \(x\). Zeroth order optimizers use exclusively the values of \({\cal F}\) and \(g\). First order methods additionally use \(\mathrm{grad}\,{\cal F}\) and \(\mathrm{grad}\,g\); second order methods also use \(\mathrm{hessian}\,{\cal F}\) and \(\mathrm{hessian}\,g\) (or approximations of the gradients and Hessians). Global optimizers typically rely on pseudo-stochastic transitions in the search space in order to be able to escape local optima (we do not consider optimizers based on enumeration of all possible local optima). In practice, an important difference between global and local optimizers is that global optimizers are slower to converge, but offer greater guarantees on the quality of the solution produced. In many cases, convergence of global optimizers is so slow that a solution cannot be found in a reasonable time.
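As an illustration of how first order information can be obtained when analytical derivatives are not available, the sketch below approximates \(\mathrm{grad}\,{\cal F}\) by forward finite differences (the step size h is a typical, not prescribed, value):

```python
import numpy as np

def grad_fd(F, x, h=1e-6):
    """Forward-difference approximation of grad F at x (n+1 evaluations of F)."""
    F0 = F(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h                  # perturb one parameter at a time
        g[i] = (F(xp) - F0) / h
    return g

# Check on F(x) = x1^2 + 3*x2^2: the exact gradient at (1, 1) is (2, 6).
print(grad_fd(lambda x: x[0]**2 + 3.0 * x[1]**2, np.array([1.0, 1.0])))
```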

Optimization methods can also be divided into methods that explicitly handle the constraints of (499) (SQP) and those that do not (Levenberg-Marquardt, simplex, evolutionary algorithm). Penalizing \({\cal F}\) is a simple way of transforming a constrained optimization problem into an unconstrained one:

(502)\[\mbox{minimize}~~{\cal F}_p(x) = {\cal F}(x) + \sum_{i=1}^{n_g} p_i \max(0, g_i(x))^{\alpha}\]

where \(p\) is a vector of (positive) penalty parameters. The exponent \(\alpha\) is usually taken larger than 1 (typically 2) for gradient based optimization methods, in order to ensure differentiability. The tricky aspect of penalization is choosing \(p\): if \(p\) is too small, the solution of the optimization problem will not satisfy the constraints; conversely, if \(p\) is taken too large, convergence to the optimum may be difficult. Optimizers that explicitly handle constraints do not require the user to specify \(p\). Further details on optimization in mechanics can be found in [U12].
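A minimal sketch of the penalized cost (502) with the usual choice \(\alpha = 2\); the test functions F and g and the penalty value are illustrative assumptions:

```python
import numpy as np

def penalized_cost(x, F, g, p, alpha=2.0):
    """F_p(x) = F(x) + sum_i p_i * max(0, g_i(x))**alpha."""
    violations = np.maximum(0.0, np.asarray(g(x)))  # only violated constraints count
    return F(x) + np.sum(p * violations**alpha)

# Example: minimize x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
F = lambda x: float(x[0]**2)
g = lambda x: np.array([1.0 - x[0]])
p = np.array([100.0])               # large enough for the constraint to bite
print(penalized_cost(np.array([0.5]), F, g, p))   # 0.25 + 100*0.5**2 = 25.25
```

With this large penalty the infeasible point \(x = 0.5\) is heavily penalized; with a very small \(p\) (say 0.01) the minimum of the penalized cost would remain infeasible, illustrating the sensitivity to \(p\) discussed above.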

Four principal optimizers are available in Z-set: Levenberg-Marquardt, simplex, SQP, and an evolutionary algorithm. They span the different categories of optimizers mentioned above. A brief description of the principle of each can be found in the relevant sections.

Execution procedure

```
% Zrun -o problem
```