Parallelism, from everyday devices to HPC
Parallel processor architectures are now used on virtually every computing platform, from smartphones and embedded devices to high-performance computing (HPC) machines. This evolution has an impact on every application, as the potential performance gains open the way to new usages, new scientific progress, and industrial innovations. The downside, however, is the difficulty of developing algorithms and codes able to exploit such parallelism efficiently.
Fast evolution, challenging complexity
Ever-increasing core counts and increasingly dense processor sockets need to be fed with ever-growing amounts of parallel work. Specialized cores (accelerators, big.LITTLE architectures) introduce heterogeneity. Deepening hierarchies of memory layers (cache levels) and of hardware parallelism (instructions, vectors, threads) call for suitably structured algorithms. Programming modern architectures thus requires daunting levels of expertise, and the expensive optimization effort involved can quickly be nullified by the fast pace of hardware evolution.
Static Optimizations and Runtime Methods
To tackle the complexity challenge of parallelism, a coordinated set of programming tools and techniques is necessary before (compiler), during (runtime), and after (analysis) program execution. Team STORM aims at combining strengths along these three directions.
Team STORM Research Directions
- High level domain specific languages
- Runtime systems for heterogeneous, manycore platforms
- Analysis and performance feedback tools