# Tutorials
## Linear and nonlinear cube with holes
This first tutorial is an introduction to parallel computing with Z-set. The archive contains three directories:
- MESH contains the initial mesh file;
- INP is your working directory;
- REF serves as a reference.
1. Go to the MESH directory and visualize the original mesh `cube.geo`.
2. Go back to the INP directory and read the file `split.inp`.
3. Run the command `Zrun -m split.inp`. This command generates various files: the file `para.cu` and the folder `para-pcus` contain information about the decomposition, and the subdomain meshes are in the `para-pmeshes` directory.
4. Visualize the subdomain meshes in Zmaster.
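   After the split, the INP directory should contain something like the following layout (a sketch based on the names above; any additional generated files are omitted):

   ```
   INP/
   ├── split.inp
   ├── para.cu          # decomposition information
   ├── para-pcus/       # decomposition data
   └── para-pmeshes/    # one mesh per subdomain
   ```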
5. Read the file `linear.inp`, analyze it, and look for the differences induced by the parallel computation. With the help of the documentation, describe the simulation.
6. Launch the parallel computation `linear.inp`. This step can be done in various ways, and you have to adapt it to your supercomputer architecture. If you are using your own machine, you can simply use the command `Zrun -mpimpi 8 linear.inp`. Otherwise, you may have to reserve an interactive compute node or write a slurm (bsub, qsub, …) script. The file `linear.slurm` is an example for the Spiro supercomputer; a generic sketch is given below.
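   A minimal sketch of what such a SLURM script might look like (the job name, task count, and wall time are placeholders; refer to `linear.slurm` and your cluster's documentation for the real settings):

   ```bash
   #!/bin/bash
   #SBATCH --job-name=linear    # placeholder job name
   #SBATCH --ntasks=8           # one MPI task per subdomain (assumption)
   #SBATCH --time=00:30:00      # placeholder wall time

   # Launch Z-set in MPI mode on 8 subdomains, as on a local machine
   Zrun -mpimpi 8 linear.inp
   ```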
7. The parallel computation generates one file (`msg`, `ut`, `node`, …) per subdomain. Most of the message logs are in the `linear-001.msg` file, where you can find various information about the convergence of the parallel solver:
   - the convergence history of the residual of the iterative solver (`ratio`);
   - the norm of the displacement jump at convergence;
   - the number of rigid body motions (see `OPERATOR RBM` and `Found x rigid modes` in each msg file).

   Analyze this information.
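   A quick way to pull this information out of the msg files from the shell (the grep patterns are guesses based on the strings quoted above; the exact log wording may differ):

   ```bash
   # Residual history of the iterative solver
   grep -i "ratio" linear-001.msg

   # Rigid body modes detected in each subdomain
   grep -i "rigid" linear-*.msg
   ```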
8. Do the post-processing, for example with the command `Zrun -mpimpi 8 -pp linear.inp`.
9. Visualize the results in Zmaster.
10. Do the same computation without parallelism and compare the results.
11. Play with the number of subdomains, the number of threads per subdomain, and the solver options. Analyze the convergence, the solution, and the computational time.
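   Assuming a POSIX shell, wall-clock times of the different runs can be compared with the `time` built-in (not part of Z-set):

   ```bash
   time Zrun -mpimpi 8 linear.inp   # parallel run on 8 subdomains
   time Zrun linear.inp             # sequential run (assuming Zrun with no option runs sequentially)
   ```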
12. Do the previous steps again for the nonlinear computation `nonlinear.inp`.
## Parallel computation with external parameter
This second tutorial introduces how to run a parallel computation that needs a parameter coming from a sequential computation. The archive contains three directories:
- MESH contains the initial mesh file;
- INP is your working directory;
- REF serves as a reference.
1. Go to the MESH directory and visualize the original mesh `cube.geo`.
2. Go back to the INP directory and read the file `thermal.inp`. Launch the sequential thermal computation. Visualize the temperature field. This field will be used as an external parameter for the parallel computation.
3. Read the file `split.inp` and generate the subdomain decomposition.
4. Read the script `split_results.py` and launch it. This script splits the sequential result according to the subdomain decomposition.
5. Visualize the temperature field on the subdomain meshes with Zmaster. Compare it with the sequential temperature field.
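   Taken together, the preparation stage of steps 2–4 might look like this (assuming `Zrun thermal.inp` runs the sequential computation and that `split_results.py` is a plain Python script; adapt to your setup):

   ```bash
   Zrun thermal.inp           # sequential thermal computation
   Zrun -m split.inp          # subdomain decomposition
   python split_results.py    # split the thermal result per subdomain
   ```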
6. Read the file `mechanical.inp`, analyze it, and look for the differences induced by the parallel computation. With the help of the documentation, describe the simulation.
7. Launch the parallel computation `mechanical.inp`. This step can be done in various ways, and you have to adapt it to your supercomputer architecture. Analyze the log messages.
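   As in the first tutorial, on your own machine the launch might simply be (8 subdomains assumed):

   ```bash
   Zrun -mpimpi 8 mechanical.inp
   ```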
8. Do the post-processing, for example with the command `Zrun -mpimpi 8 -pp mechanical.inp`.
9. Visualize the results in Zmaster.
10. Do the same computation without parallelism and compare the results.
11. Play with the number of subdomains, the number of threads per subdomain, and the solver options. Analyze the convergence, the solution, and the computational time.
12. Try to perform the thermal computation in parallel and adapt the previous steps accordingly.
## Parallel computation with ill-conditioned systems
This third tutorial exhibits the influence of heterogeneity on the convergence of the solvers and shows how to use AMPFETI when needed. The archive contains three directories:
- MESH contains the initial mesh file;
- INP is your working directory;
- REF serves as a reference.
1. Go to the MESH directory and visualize the original mesh `composite.geo`.
2. Read the file `split.inp` and generate the subdomain decomposition.
3. Read the files `feti.inp` and `ampfeti.inp` and analyze the differences. With the help of the documentation, describe the simulations.
4. Launch the parallel computations `feti.inp` and `ampfeti.inp`. This step can be done in various ways, and you have to adapt it to your supercomputer architecture. Analyze the log messages and compare the convergence of the two methods.
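   On your own machine, the two runs might be launched as in the previous tutorials (8 subdomains assumed):

   ```bash
   Zrun -mpimpi 8 feti.inp
   Zrun -mpimpi 8 ampfeti.inp
   ```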
5. Visualize the results in Zmaster.
6. Play with the jump of Young's modulus between the inclusions and the matrix, and analyze the influence of the heterogeneity on both solvers.
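   The contrast is set by the elasticity coefficients in the material definitions; below is a minimal sketch of the kind of isotropic elasticity block to look for (the behavior name and values are illustrative, not taken from the tutorial files):

   ```
   ***behavior linear_elastic
    **elasticity isotropic
      young 200000.   % raise or lower this value for the inclusions
      poisson 0.3
   ***return
   ```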
7. Play with the number of subdomains, the number of threads per subdomain, and the solver options. Analyze the convergence, the solution, and the computational time.
## How to use advanced_detection (TBD)
This fourth tutorial exhibits the critical point of local kernel detection in FETI-based methods. It shows how to use `advanced_detection` when `mumps` is no longer able to detect the correct null space. The archive contains three directories:
- MESH contains the initial mesh file;
- INP is your working directory;
- REF serves as a reference.
## How to put MPCs on a single subdomain (TBD)
This fifth tutorial shows how to play with the graph partitioner to keep MPCs local. The archive contains three directories:
- MESH contains the initial mesh file;
- INP is your working directory;
- REF serves as a reference.