Presentation

Exciting Updates: The End of HiePACS and the Introduction of the Concace and Topal Project-Teams!

We’re thrilled to share some exciting news about the future of our projects! After months of hard work and dedication, HiePACS has come to a close, and we’re delighted to announce the launch of two new initiatives: Concace and Topal.

End of HiePACS

After 12 years of hard work, HiePACS has successfully achieved its
objective: efficiently performing frontier simulations arising from
challenging research and industrial multiscale applications. We thank
everyone who supported us along the way.

Introduction of the Concace and Topal project-teams

But the journey doesn’t end here! We’re excited to introduce the
Concace and Topal project-teams, two new ventures that carry on the
goal of tackling large-scale simulations. Concace will take advantage
of modern development tools and languages to design high-level
expressions of complex parallel numerical algorithms, enabling richer
composability. Topal will develop tools and optimizations for high
performance applications and learning.

Thank you for being a part of our journey, and we can’t wait to embark
on this new chapter with you!

HiePACS (2010-2022)

HiePACS was a joint project-team with Bordeaux INP, the University of Bordeaux and CNRS (LaBRI UMR 5800). It was created on 1 January 2010 and was led by Luc Giraud.

The HiePACS project has concluded, marking an important milestone in our journey. Its research directions are now carried forward by the Concace and Topal project-teams.

Project-Team Presentation of HiePACS (2010-2022)

An important force that has continued to drive HPC has been the focus on frontier milestones: technical goals that symbolize the next stage of progress in the field. In the 1990s, the HPC community sought to achieve computing at a teraflop rate; today the leading architectures compute at a petaflop rate. General-purpose petaflop supercomputers are available, and some communities are already in the early stages of thinking about what computing at the exaflop level will look like in the early 2020s. For application codes to sustain a petaflop and more in the next few years, hundreds of thousands of processor cores or more will be needed, regardless of processor technology. Currently, only a few HPC simulation codes scale easily to this regime, and major code development efforts are critical to realize the potential of these new systems. Scaling to a petaflop and beyond will involve improving physical models, mathematical modelling and super-scalable algorithms, and will require paying particular attention to the acquisition, management and visualization of huge amounts of scientific data.
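
As a rough back-of-envelope check of that core count (the per-core rate below is an illustrative assumption, not a figure from the original text): at a sustained rate of roughly 10 gigaflop/s per core, one petaflop corresponds to

\[
\frac{10^{15}\ \text{flop/s}}{10^{10}\ \text{flop/s per core}} = 10^{5}\ \text{cores},
\]

i.e. on the order of a hundred thousand cores, consistent with the estimate above.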

In this context, the purpose of the HiePACS project is to efficiently perform frontier simulations arising from challenging research and industrial multiscale applications. Solving these challenging problems requires a multidisciplinary approach involving applied mathematics, computational science and computer science. In applied mathematics, it essentially involves advanced numerical schemes. In computational science, it involves massively parallel computing and the design of highly scalable algorithms and codes to be executed on future petaflop (and beyond) platforms. Through this approach, HiePACS intends to contribute to every step, from the design of new, more scalable, more robust and more accurate high performance numerical schemes to the optimized implementation of the associated algorithms and codes on very high performance supercomputers.

  • Methodological and algorithmic research
    • High performance computing on next generation architectures
    • High performance solvers for linear algebra problems
      • Sparse direct solvers for heterogeneous platforms
      • Hybrid direct/iterative solvers based on algebraic domain decomposition
      • Hybrid solvers combining multigrid methods and direct solvers
      • Linear Krylov solvers (see the illustrative sketch after this list)
      • Eigensolvers
    • High performance Fast Multipole Method for N-body problems
    • Algorithmics for load balancing of complex simulations
  • Frontier simulations arising from challenging academic and industrial research
    • Material Physics
    • Application customers of high performance linear algebra solvers
    • Co-design applications
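
To give a concrete flavor of the linear Krylov solvers listed above, here is a minimal sketch of the conjugate gradient method, one of the simplest Krylov methods, applied to a symmetric positive definite system Ax = b. This is an illustrative Python/NumPy toy, not code from the HiePACS software stack; production solvers add preconditioning, restarts and parallel data distribution.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimal conjugate gradient for a symmetric positive definite A.

    Illustrative sketch only; real high performance Krylov solvers
    work on distributed sparse matrices with preconditioning.
    """
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # converged on residual norm
            break
        p = r + (rs_new / rs_old) * p  # next A-conjugate direction
        rs_old = rs_new
    return x

# Usage on a small SPD system; the result matches np.linalg.solve(A, b).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))
```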

International and Industrial Relations

  • International collaborations
    • Lawrence Berkeley National Laboratory (USA)
    • King Abdullah University of Science and Technology (Saudi Arabia)
    • University of Colorado at Denver (USA)
    • University of Minnesota (USA)
    • University of Tennessee, ICL (USA)
    • Stanford University (USA)
  • Industrial collaborations
    • CEA (Cadarache, CESTA, Île-de-France, Saclay)
    • Airbus Group Innovations – Airbus Defence and Space
    • EDF
    • TOTAL
