GASPI Tutorial @ Inria Bordeaux : Efficient parallel programming with GASPI – Wednesday January 28th, 2015 – Room : Ada Lovelace

In this tutorial we present an asynchronous dataflow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the MPI programming model. GASPI, which stands for Global Address Space Programming Interface, is a PGAS API. It is designed as a C/C++/Fortran library and focuses on three key objectives: scalability, flexibility and fault tolerance. To achieve its improved scaling behaviour, GASPI relies on asynchronous dataflow with remote completion rather than on bulk-synchronous message exchanges. GASPI follows a single/multiple program, multiple data (SPMD/MPMD) approach and offers a small yet powerful API (see also http://www.gaspi.de and http://www.gpi-site.com). GASPI is successfully used in academic and industrial simulation applications. Hands-on sessions (in C and Fortran) will allow participants to immediately test and understand the basic constructs of GASPI. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.
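To give a flavour of the "asynchronous dataflow with remote completion" idea mentioned above, the following is a minimal C sketch using the GPI-2 implementation of GASPI: rank 0 performs a one-sided write into a memory segment on rank 1 and fuses it with a notification, so rank 1 only waits for the notification rather than posting a matching receive. This is an illustrative sketch, not tutorial material from the course; it assumes GPI-2 is installed and the program is started on at least two ranks with its launcher (e.g. gaspi_run). Segment id, sizes and offsets are arbitrary choices for the example.

```c
/* Sketch: GASPI one-sided write with remote completion (GPI-2).
 * Assumes launch on >= 2 ranks via the GPI-2 launcher (gaspi_run). */
#include <GASPI.h>
#include <stdlib.h>

int main (int argc, char *argv[])
{
  gaspi_proc_init (GASPI_BLOCK);

  gaspi_rank_t rank, num;
  gaspi_proc_rank (&rank);
  gaspi_proc_num (&num);

  /* Create a 1 KiB segment of globally accessible memory on every rank. */
  const gaspi_segment_id_t seg = 0;
  gaspi_segment_create (seg, 1024, GASPI_GROUP_ALL,
                        GASPI_BLOCK, GASPI_MEM_INITIALIZED);

  if (rank == 0 && num > 1)
  {
    /* One-sided write of 256 bytes into rank 1's segment, fused with a
       notification: the target needs no receive call (remote completion). */
    gaspi_write_notify (seg, 0,      /* local segment, local offset   */
                        1,           /* target rank                   */
                        seg, 0,      /* remote segment, remote offset */
                        256,         /* size in bytes                 */
                        0, 1,        /* notification id, value        */
                        0,           /* queue                         */
                        GASPI_BLOCK);
    gaspi_wait (0, GASPI_BLOCK);     /* local completion on queue 0   */
  }
  else if (rank == 1)
  {
    /* Wait for the notification; once it arrives, the data is in place. */
    gaspi_notification_id_t first;
    gaspi_notify_waitsome (seg, 0, 1, &first, GASPI_BLOCK);

    gaspi_notification_t old;
    gaspi_notify_reset (seg, first, &old);
  }

  gaspi_proc_term (GASPI_BLOCK);
  return EXIT_SUCCESS;
}
```

Because the write and the notification travel on the same queue, the notification is only visible at the target after the data has arrived; this is what allows GASPI applications to overlap communication with computation instead of synchronising in bulk.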

Agenda

09:30-10:00 Welcome coffee
10:00-10:45 General introduction to GASPI
10:45-12:00 One sided communication in GASPI
12:00-13:30 Lunch
13:30-14:00 Memory segments in GASPI
14:00-15:30 Data Flow in GASPI
15:30-15:45 Coffee break
15:45-16:30 Data Flow in GASPI (continued)
16:30-16:45 Collectives and Passive Communication
16:45-17:00 Questions and Answers

Trainers

Dr. Christian Simmendinger, T-Systems Solutions for Research GmbH
Dr. Mirko Rahn, Fraunhofer ITWM
Dr. Daniel Gruenewald, Fraunhofer ITWM

Local organizers

Emmanuel Agullo, INRIA HiePACS team
Luc Giraud, INRIA HiePACS team
Raymond Namyst, INRIA Runtime team

Funding

This tutorial is funded by the European Exascale initiative in the context of the Exa2ct ( http://www.exa2ct.eu/ ) project.
