Computational Science and High-Performance Computing
The term HPC (High-Performance Computing) describes the scientific field that brings together applied mathematics, in particular the optimization of algorithms, and computer science, more precisely the exploitation of parallelism in both hardware and software.
I am interested in the whole process of parallel numerical simulation.
Mathematical modeling of physical problems by equations, followed by discretization, so as to obtain a linear system Ax = b.
In particular, the DGM (Discontinuous Galerkin Method) leads to algorithms that are well suited to HPC.
Solve the system Ax = b, either directly, e.g. by factorizing the matrix, or iteratively with solvers, whether sequential or parallel.
A notable development is AMG (the Algebraic MultiGrid method), which, used as a solver or as a preconditioner, can outperform pure Krylov methods in efficiency.
Exploit the compute capabilities of the underlying architecture while making efficient use of the cache hierarchy.
x86 processors, for instance, offer vectorization through SIMD (Single Instruction, Multiple Data) instructions.
Run a code on several compute nodes at the same time; the usual implementation is hybrid MPI/OpenMP, with MPI between nodes and OpenMP threads within each node.
A newer paradigm, task-based parallelism, lets a runtime schedule the work and manage thread binding.
Use of accelerators
Offload part of the computation from the CPU so as to reduce simulation time, e.g. through GPGPU programming.
More recently, the MIC (Many Integrated Core) architecture of the Intel Xeon Phi has promised comparable performance without requiring CUDA.
Software engineering tools
- Mesh partitioning with (Par)Metis or (PT-)Scotch
- Visualization and data post-processing through ParaView
- Parallel Input/Output and Big Data solutions
- Memory and parallel execution profiling with VTune
- Software versioning with Git or SVN
- Smart and portable compilation with CMake
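On the last point, a minimal CMakeLists.txt sketch for a parallel simulation code (the project and target names are placeholders):

```cmake
# Hypothetical project layout: sources under src/, MPI required.
cmake_minimum_required(VERSION 3.12)
project(simulation C)

find_package(MPI REQUIRED)

add_executable(solver src/main.c)
# The imported MPI::MPI_C target carries the right include paths
# and link flags, whatever MPI implementation is installed.
target_link_libraries(solver PRIVATE MPI::MPI_C)
```

The same out-of-source build then works unchanged across machines and compilers, which is the portability CMake is meant to provide.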