Tutorials#

Linear and nonlinear cube with holes#

This first tutorial is an introduction to parallel computing with Z-set. The archive contains three directories:

  • MESH contains the initial mesh file;

  • INP is your working directory;

  • REF serves as a reference.

  1. Go to the MESH directory and visualize the original mesh cube.geo.

  2. Go back to the INP directory and read the file split.inp.

  3. Run the command Zrun -m split.inp. This command generates various files: the file para.cu and the folder para-pcus contain information about the decomposition, and the subdomain meshes are in the para-pmeshes directory.
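     A minimal session might look like the following (a sketch; the listing only shows the files and folders named above):

        cd INP
        Zrun -m split.inp
        ls para.cu para-pcus para-pmeshes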

  4. Visualize the subdomain meshes in Zmaster.

  5. Read the file linear.inp, analyze it and look for the differences induced by the parallel computation. With the help of the documentation, describe the simulation.

  6. Launch the parallel computation linear.inp. This step can be done in various ways, depending on your supercomputer architecture.

    • If you are using your own machine, you can simply use the command Zrun -mpimpi 8 linear.inp.

    • Otherwise, you may have to reserve an interactive compute node or to write a Slurm (bsub, qsub, …) script. The file linear.slurm is an example for the Spiro supercomputer; a minimal sketch is given below.
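     A minimal Slurm script might look like the following (a sketch: the job name, task count and time limit are illustrative, and any environment setup needed by your site must be added; it would be submitted with sbatch linear.slurm):

        #!/bin/bash
        #SBATCH --job-name=linear     # illustrative job name
        #SBATCH --ntasks=8            # one MPI task per subdomain
        #SBATCH --time=00:30:00       # illustrative time limit
        # load your site's Z-set environment here if needed
        Zrun -mpimpi 8 linear.inp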

  7. The parallel computation generates one file of each type (msg, ut, node, …) per subdomain. Most of the message logs are in the linear-001.msg file. In this file, you will find various information about the convergence of the parallel solver (a few extraction commands are sketched after this list):

    • The convergence history of the residual of the iterative solver (ratio);

    • The norm of displacement jump at convergence;

    • The number of rigid body motions (see OPERATOR RBM and Found x rigid modes in each msg file).
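     To skim these quantities without opening the whole log, standard text tools are enough (a sketch; the exact wording of the log lines may differ between versions):

        grep ratio linear-001.msg          # residual history of the iterative solver
        grep "rigid modes" linear-*.msg    # rigid body motions detected in each subdomain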

  8. Analyze this information.

  9. Run the post-processing, for example with the command Zrun -mpimpi 8 -pp linear.inp.

  10. Visualize the results in Zmaster.

  11. Do the same computation without parallelism and compare the results.
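      A sequential run is simply the same command without the MPI options (a sketch):

        Zrun linear.inp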

  12. Play with the number of subdomains, the number of threads per subdomain and the solver options. Analyze the convergence, the solution and the computational time.

  13. Repeat the previous step for the nonlinear computation nonlinear.inp.

Parallel computation with external parameter#

This second tutorial introduces how to perform a parallel computation that needs a parameter coming from a sequential computation. The archive contains three directories:

  • MESH contains the initial mesh file;

  • INP is your working directory;

  • REF serves as a reference.

  1. Go to the MESH directory and visualize the original mesh cube.geo.

  2. Go back to the INP directory and read the file thermal.inp. Launch the sequential thermal computation. Visualize the temperature field. This field will be used as an external parameter for the parallel computation.
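     A plain sequential run should be enough here (a sketch):

        Zrun thermal.inp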

  3. Read the file split.inp and generate the subdomain decomposition.

  4. Read the script split_results.py and launch it. This script splits the sequential result according to the subdomain decomposition.
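     For example (a sketch; your installation may require a specific Python interpreter):

        python split_results.py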

  5. Visualize the temperature field on the subdomain meshes with Zmaster. Compare it with the sequential temperature field.

  6. Read the file mechanical.inp, analyze it and look for the differences induced by the parallel computation. With the help of the documentation, describe the simulation.

  7. Launch the parallel computation mechanical.inp. This step can be done in various ways, depending on your supercomputer architecture.

  8. Analyze the log messages.

  9. Run the post-processing, for example with the command Zrun -mpimpi 8 -pp mechanical.inp.

  10. Visualize the results in Zmaster.

  11. Do the same computation without parallelism and compare the results.

  12. Play with the number of subdomains, the number of threads per subdomain and the solver options. Analyze the convergence, the solution and the computational time.

  13. Try to perform the thermal computation in parallel and adapt the previous steps accordingly.

Parallel computation with ill conditioned systems#

This third tutorial exhibits the influence of heterogeneity on the convergence of the solvers. It shows how to use AMPFETI when needed. The archive contains three directories:

  • MESH contains the initial mesh file;

  • INP is your working directory;

  • REF serves as a reference.

  1. Go to the MESH directory and visualize the original mesh composite.geo.

  2. Read the file split.inp and generate the subdomain decomposition.

  3. Read the files feti.inp and ampfeti.inp and analyze the differences. With the help of the documentation, describe the simulations.

  4. Launch the parallel computations feti.inp and ampfeti.inp. This step can be done in various ways, depending on your supercomputer architecture.

  5. Analyze the log messages and compare the convergence of the methods.
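     Assuming the same per-subdomain naming as in the first tutorial (feti-001.msg, ampfeti-001.msg), a quick comparison can be sketched as:

        grep -c ratio feti-001.msg ampfeti-001.msg   # number of iterations logged by each method
        grep ratio ampfeti-001.msg | tail -n 5       # last residual ratios of the AMPFETI run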

  6. Visualize the results in Zmaster.

  7. Play with the jump in Young's modulus between the inclusions and the matrix, and analyze the influence of the heterogeneity on both solvers.

  8. Play with the number of subdomains, the number of threads per subdomain and the solver options. Analyze the convergence, the solution and the computational time.

How to use advanced_detection (TBD)#

This fourth tutorial exhibits the critical point of local kernel detection in FETI-based methods. It shows how to use advanced_detection when mumps is no longer able to detect the correct null space. The archive contains three directories:

  • MESH contains the initial mesh file;

  • INP is your working directory;

  • REF serves as a reference.

How to put MPCs on a single subdomain (TBD)#

This fifth tutorial shows how to play with the graph partitioner to keep MPCs (multi-point constraints) local. The archive contains three directories:

  • MESH contains the initial mesh file;

  • INP is your working directory;

  • REF serves as a reference.

How to use acceleration for evolutionary problems (TBD)#