Jose Fonseca: Thursday 11 July at 11:00, A415 Inria Paris. The accurate simulation of variably saturated flow in porous media is a valuable component in understanding the physical processes occurring in many water resources problems. Such simulations require expensive and extensive computations, and efficient usage of the latest high-performance parallel computing systems becomes a necessity. The simulation software ParFlow has been shown to have excellent solver scalability for up to 16k processes. In order to scale the code to the full size of current petascale systems, we have reorganized its mesh subsystem to use state-of-the-art mesh refinement and partitioning algorithms provided by the parallel software library p4est. Evaluating the scalability and performance of our modified version of ParFlow, we demonstrate weak and strong scaling to over 458k processes of the Juqueen supercomputer at the Jülich Supercomputing Centre. In the first part of the talk we will briefly present the algorithmic approach employed to couple both libraries. The enhanced scalability results of ParFlow's modified version were obtained for uniform meshes, i.e., without explicitly exploiting the adaptive mesh refinement (AMR) capabilities of p4est. We will close this first part by presenting our current efforts to enable the usage of locally refined meshes in ParFlow within an AMR framework. In that case, the finite difference (FD) method used by ParFlow will require modifications to deal correctly with elements of different sizes. Mixed finite elements (MFE), on the other hand, are better suited to AMR. It is known that the cell-centered FD method used in ParFlow may be reinterpreted as an MFE discretization using Raviart-Thomas elements of lowest order. We conclude this talk by presenting a block preconditioner for saddle point problems arising from an MFE discretization that retains its robustness in the case of locally refined meshes.
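For orientation, the saddle point structure behind the last point can be sketched in the generic lowest-order Raviart-Thomas setting for Darcy flow (standard notation, not necessarily the speaker's exact formulation):

```latex
% Mixed formulation of Darcy flow: find (u, p) with u = -K \nabla p and
% \nabla\cdot u = f. A Raviart-Thomas discretization with flux basis
% functions \phi_i and pressure basis functions \psi_k yields
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u_h \\ p_h \end{pmatrix}
=
\begin{pmatrix} g \\ f \end{pmatrix},
\qquad
A_{ij} = \int_\Omega K^{-1}\,\phi_j \cdot \phi_i, \quad
B_{kj} = -\int_\Omega (\nabla\cdot\phi_j)\,\psi_k .
```

A typical block preconditioner is the block-diagonal operator diag(A, S), where S approximates the Schur complement B A^{-1} B^T; robustness with respect to local refinement then amounts to keeping the quality of these two blocks mesh-independent.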
Quanling Deng: Thursday 6 June at 11:00, A415 Inria Paris. The well-known generalized-alpha method is an unconditionally stable and second-order accurate time-integrator that gives the user control over the numerical dissipation. The method encompasses a wide range of time-integrators, such as the Newmark method, the HHT-alpha method by Hilber, Hughes, and Taylor, and the WBZ-alpha method by Wood, Bossak, and Zienkiewicz. The talk starts with the simplest time-integrators, the forward/backward Euler schemes, then introduces Newmark's idea, followed by the ideas of Chung and Hulbert on the generalized-alpha method. For parabolic equations, we show that the generalized-alpha method also includes the BDF-2 and the second-order dG time-integration scheme. The focus of the talk is to introduce two ideas to generalize the method further to higher orders while maintaining the features of unconditional stability and dissipation control. We will show third-order (for parabolic equations) and 2n-order (for hyperbolic equations) accurate schemes with numerical validations. The talk closes with the introduction of a variational-splitting framework for these time-integrators. As a consequence, the splitting schemes reduce the computational costs significantly (to linear cost) for multi-dimensional problems.
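As a concrete entry point to the family of integrators mentioned above, here is a minimal sketch (ours, not the speaker's code) of the Newmark scheme in its average-acceleration variant (gamma = 1/2, beta = 1/4) for a scalar undamped oscillator; the generalized-alpha method adds the alpha_m, alpha_f evaluation shifts on top of these same displacement/velocity updates.

```python
# Newmark time integrator for the scalar oscillator  u'' + omega^2 u = 0.
# With gamma = 1/2, beta = 1/4 (average acceleration) the scheme is
# unconditionally stable and second-order accurate.

def newmark(omega, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    u, v = u0, v0
    a = -omega**2 * u            # initial acceleration from the ODE
    for _ in range(n_steps):
        # predictors built from quantities known at step n
        u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # solve a_{n+1} = -omega^2 u_{n+1} with u_{n+1} = u_pred + dt^2 beta a_{n+1}
        a_new = -omega**2 * u_pred / (1.0 + (dt * omega)**2 * beta)
        u = u_pred + dt**2 * beta * a_new
        v = v_pred + dt * gamma * a_new
        a = a_new
    return u, v

if __name__ == "__main__":
    omega = 2.0
    u, v = newmark(omega, u0=1.0, v0=0.0, dt=0.05, n_steps=200)
    # the energy (1/2) v^2 + (1/2) omega^2 u^2 is conserved by this variant
    print(0.5 * v**2 + 0.5 * omega**2 * u**2)
```

For a linear undamped problem the average-acceleration variant conserves the discrete energy exactly, which is one way to see its unconditional stability; dissipative members of the generalized-alpha family instead damp the high frequencies in a controlled way.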
Menel Rahrah: Friday 12 April at 14:30, A315 Inria Paris. Fast, High Volume Infiltration (FHVI) is a new method to quickly infiltrate large amounts of fresh water into the shallow subsurface. This infiltration method would be of great value for the effective storage of rainwater underground during periods of extreme precipitation. To describe FHVI, a model for aquifers is considered in which water is injected. Water injection induces changes in the pore pressure and deformations in the soil. Furthermore, the interaction between the mechanical deformations and the flow of water gives rise to a change in porosity and permeability, which results in nonlinearity of the mathematical problem. Assuming that the deformations are very small, Biot's theory of linear poroelasticity is used to determine the local displacement of the skeleton of a porous medium, as well as the fluid flow through the pores. The resulting problem requires careful numerical treatment to suppress possible nonphysical oscillations. Therefore, a stabilised Galerkin finite element method based on Taylor-Hood elements is developed. Subsequently, the impact of mechanical oscillations and pressure pulses on the amount of water that can be injected into the aquifer is investigated. In addition, a parameter uncertainty quantification is applied using Monte Carlo techniques and statistical analysis, to quantify the impact of variation in the parameters (such as the unknown oscillatory modes and the soil characteristics) on the model output. Since the assumption that the deformations are very small can be violated by imposing large mechanical oscillations, the difference between the linear and the nonlinear poroelasticity equations is analysed in a moving finite element framework using Picard iterations.
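For reference, the linear poroelasticity system referred to above can be written in a standard quasi-static form (our notation, not necessarily the speaker's):

```latex
% Quasi-static Biot equations for displacement u and pore pressure p:
-\nabla\cdot\bigl(2\mu\,\varepsilon(u) + \lambda\,(\nabla\cdot u)\,I\bigr)
  + \alpha\,\nabla p = f,
\qquad
\partial_t\!\Bigl(\frac{p}{M} + \alpha\,\nabla\cdot u\Bigr)
  - \nabla\cdot\Bigl(\frac{\kappa}{\mu_f}\,\nabla p\Bigr) = s,
```

with epsilon(u) the symmetric gradient, lambda and mu the Lamé parameters, alpha the Biot-Willis coefficient, M the Biot modulus, kappa the permeability, and mu_f the fluid viscosity. The nonphysical pressure oscillations mentioned above typically appear in the low-permeability or small-time-step regime, which is what motivates the stabilised Taylor-Hood discretization.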
Mathématiques Radicalement Élémentaires (Radically Elementary Mathematics), Claude Lobry. Lecture 1 (Thursday 4 April, 10:30-12:00, Simone Iff A415, registration). In a first part (about 1h) I present the axiomatic system I.S.T. proposed by Nelson in 1977; in this system the expressions "x is infinitely large" ("infinitely small") have a precise (formal) mathematical meaning and can be manipulated as the intuitive sense of these expressions dictates. The outline of this part is: Axiomatics (Idealization, Standardization, Transfer); first applications: an alternative to functions of real variables, the "pointillés" (dotted lines); the existence theorem for solutions of a differential equation; relative consistency of I.S.T. with respect to Z.F.C. This (somewhat formal) work will let us become familiar with the concepts. At this point a little history is useful. I will explain how the "infinitesimals" of Leibniz, still defended by Lazare Carnot around 1800, were driven out (in the name of rigor) over the course of the nineteenth century in favor of epsilontics (∀ε∃η···), codified later (early twentieth century) in the Z.F.C. formalism, to the point that by about 1960 the dominant ideology among mathematicians was that "rigorous mathematics" = "mathematics formalizable in Z.F.C." From this one (erroneously) deduces that there can be no rigorous mathematics using infinitesimals. The dogma was called into question around 1960 by Robinson, who proposed nonstandard analysis (NSA) in a version formalized within Z.F.C. NSA was then conceived as a (possibly simpler) technique for proving traditional statements. Lectures 2 and 3 (Friday 5 April, 10:30-12:00 and 14:00-15:30, Simone Iff A415, registration). In the 1980s Nelson and Reeb proposed a research program consisting in not translating back into traditional language the statements obtained via NSA. This is what I call radically elementary mathematics. Before commenting on this program I propose…
Patrik Daniel: Monday 18 March at 14:00, A315 Inria Paris. We propose new practical adaptive refinement algorithms for conforming hp-finite element approximations of elliptic problems. We consider the use of both exact and inexact solvers within the established framework of adaptive methods consisting of four concatenated modules: SOLVE, ESTIMATE, MARK, REFINE. The strategies are driven by guaranteed equilibrated flux a posteriori error estimators. Namely, for an inexact approximation obtained by an (arbitrary) iterative algebraic solver, bounds for the total, the algebraic, and the discretization errors are provided. The nested hierarchy of hp-finite element spaces is crucially exploited for the algebraic error upper bound, which in turn allows us to formulate sharp stopping criteria for the algebraic solver. Our hp-refinement criterion hinges on solving two local residual problems posed on patches of elements around marked vertices selected by a bulk-chasing criterion; these respectively emulate h-refinement and p-refinement. One particular feature of our approach is that we derive a computable real number which gives a guaranteed bound on the ratio of the (unknown) energy error in the next adaptive loop step with respect to the present one (i.e. on the error reduction factor). Numerical experiments are presented to validate the proposed adaptive strategies. We investigate the accuracy of our bound on the error reduction factor, which turns out to be excellent, with effectivity indices close to the optimal value of one. In practice, we observe asymptotic exponential convergence rates, in both the exact and inexact algebraic solver settings. Finally, we also provide a theoretical analysis of the proposed strategies. We prove that under some additional assumptions on the h- and p-refinements, including the interior node property and sufficient p-refinements, the computable reduction factors are indeed bounded by a generic constant strictly smaller than one.
This implies the convergence of the…
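The four-module loop above can be illustrated by a deliberately simplified 1D sketch (ours, not the speaker's algorithm): piecewise-linear interpolation of a function with reduced regularity, a per-element error indicator, bulk-chasing (Dörfler) marking, and bisection refinement.

```python
# Minimal SOLVE / ESTIMATE / MARK / REFINE loop on a 1D mesh.
# "Solving" is just piecewise-linear interpolation of f; the estimator is
# the interpolation error sampled at element midpoints; marking is the
# bulk-chasing (Doerfler) criterion; refinement bisects marked elements.
import math

def f(x):
    return math.sqrt(x)  # reduced regularity at x = 0 drives local refinement

def estimate(mesh):
    # per-element midpoint deviation between f and its linear interpolant
    return [abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b)))
            for a, b in zip(mesh[:-1], mesh[1:])]

def mark(etas, theta=0.5):
    # bulk chasing: smallest set of elements carrying a theta-fraction
    # of the total estimated error
    order = sorted(range(len(etas)), key=lambda i: -etas[i])
    total, acc, marked = sum(etas), 0.0, set()
    for i in order:
        marked.add(i)
        acc += etas[i]
        if acc >= theta * total:
            break
    return marked

def refine(mesh, marked):
    new = []
    for i, (a, b) in enumerate(zip(mesh[:-1], mesh[1:])):
        new.append(a)
        if i in marked:
            new.append(0.5 * (a + b))  # bisect marked element
    new.append(mesh[-1])
    return new

mesh = [0.0, 0.5, 1.0]
for _ in range(10):                      # the adaptive loop
    mesh = refine(mesh, mark(estimate(mesh)))
print(len(mesh), max(estimate(mesh)))
```

Running the loop concentrates the elements near x = 0, where the square root loses regularity: this graded-mesh behaviour is the h-analogue of the local decisions that the hp-criterion above makes by solving the two patch problems.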
Thibault Faney, Soleiman Yousef: Thursday 14 February at 15:00, A415 Inria Paris. Numerical simulations are an important tool for better understanding and predicting the behavior of complex physical systems. In the case of thermodynamic equilibria, these systems are modeled mainly by systems of nonlinear equations. Solving these systems after discretization requires iterative methods that start from an approximate initial solution. The goal of the work presented here is to build a statistical model, via machine learning, that replaces this heuristic initial solution with an approximation of the solution based on a database of computation results. The target simulator is a three-phase flash simulator (aqueous, liquid, and gas) at constant pressure and temperature. Starting from the initial molar fractions of the chemical elements, we compute the phases present at equilibrium, as well as their respective volume fractions and the molar fractions of each chemical species in each of the phases present. We formulate the problem as a sequential supervised learning problem: first a classification of the phases present at equilibrium, then a regression on the volume and molar fractions. Results on various test cases of increasing complexity in the number of chemical compounds show very good agreement between the statistical model and the solution provided by the simulator. The key ingredients are the choice of the training database (design of experiments) and the use of nonlinear learning methods.
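The two-stage pipeline (classify the phase split, then regress the fractions conditioned on the predicted class) can be sketched with toy data and a deliberately naive nearest-neighbour model; everything here, data included, is illustrative, and stands in for the nonlinear learners trained on simulator output that the talk describes.

```python
# Two-stage supervised pipeline: (1) classify which phases are present,
# (2) regress a phase fraction, conditioned on the predicted class.
# A 1-nearest-neighbour rule stands in for the nonlinear learners.
import math

def nearest(x, examples):
    # examples: list of (features, target); return target of closest features
    return min(examples, key=lambda e: math.dist(x, e[0]))[1]

# toy training data: feature = overall composition, class = phases present,
# regression target = fraction of the first phase (all values invented)
train_class = [((0.9, 0.1), "liquid"), ((0.1, 0.9), "gas"),
               ((0.5, 0.5), "liquid+gas")]
train_frac = {"liquid": [((0.9, 0.1), 1.0)],
              "gas": [((0.1, 0.9), 0.0)],
              "liquid+gas": [((0.5, 0.5), 0.5), ((0.6, 0.4), 0.7)]}

def predict(x):
    phases = nearest(x, train_class)        # stage 1: classification
    frac = nearest(x, train_frac[phases])   # stage 2: per-class regression
    return phases, frac

print(predict((0.52, 0.48)))  # -> ('liquid+gas', 0.5)
```

The point of the sequential structure is that the regression only ever sees samples of the class predicted in stage 1, mirroring the fact that the equilibrium fractions are only meaningful once the set of present phases is known.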
Camilla Fiorini: Thursday 31 January at 14:30, A415 Inria Paris. Sensitivity analysis (SA) is the study of how changes in the input of a model affect the output. Standard SA techniques for PDEs, such as the continuous sensitivity equation method, call for the differentiation of the state variable. However, if the governing equations are hyperbolic PDEs, the state can be discontinuous, and this generates Dirac delta functions in the sensitivity. We aim at defining and approximating numerically a system of sensitivity equations which remains valid when the state is discontinuous: to do that, one can define a correction term to be added to the sensitivity equations, starting from the Rankine-Hugoniot conditions which govern the state across a shock. We detail this procedure in the case of the Euler equations. Numerical results show that the standard Godunov and Roe schemes fail to produce accurate results because of the underlying numerical diffusion. An anti-diffusive numerical method is then successfully proposed.
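For a scalar conservation law the mechanism can be sketched as follows (our notation; the talk treats the full Euler system, and the complete relation also carries terms in the sensitivity of the shock position multiplying traces of the spatial derivatives of the state):

```latex
% state U(x,t;a) solves U_t + F(U)_x = 0 with a shock at x_s(t;a)
% moving at speed sigma. Rankine-Hugoniot condition:
\sigma\,[\![U]\!] = [\![F(U)]\!].
% Formally differentiating with respect to the parameter a
% (U_a = sensitivity, sigma_a = sensitivity of the shock speed):
\sigma_a\,[\![U]\!] + \sigma\,[\![U_a]\!] = [\![F'(U)\,U_a]\!].
```

This differentiated jump relation is what the correction term enforces: without it, the Dirac contributions hidden in U_a are mishandled by schemes whose numerical diffusion smears the shock.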
Gregor Gantner: Thursday 7 February at 11:00, A415 Inria Paris. The CAD standard for spline representation in 2D or 3D relies on tensor-product splines. To allow for adaptive refinement, several extensions have emerged, e.g., analysis-suitable T-splines, hierarchical splines, or LR-splines. All these concepts have been studied via numerical experiments, but there exists only little literature concerning the thorough analysis of adaptive isogeometric methods. The work investigates linear convergence of the weighted-residual error estimator (or equivalently: energy error plus data oscillations) of an isogeometric finite element method (IGAFEM) with truncated hierarchical B-splines. Optimal convergence was independently proved in [2, 3]. In our work, we employ hierarchical B-splines and propose a refinement strategy to generate a sequence of refined meshes and corresponding discrete solutions. Usually, CAD provides only a parametrization of the boundary ∂Ω instead of the domain Ω itself. The boundary element method, which we consider in the second part of the talk, circumvents this difficulty by working only on the CAD-provided boundary mesh. In 2D, our adaptive algorithm steers the mesh-refinement and the local smoothness of the ansatz functions. We proved linear convergence of the employed weighted-residual estimator at optimal algebraic rate in [5, 6]. In 3D, we consider an adaptive IGABEM with hierarchical splines and prove linear convergence of the estimator at optimal rate. REFERENCES: A. Buffa and C. Giannelli, Adaptive isogeometric methods with hierarchical splines: error estimator and convergence. Math. Mod. Meth. Appl. S., Vol. 26, 2016. A. Buffa and C. Giannelli, Adaptive isogeometric methods with hierarchical splines: Optimality and convergence rates. Math. Mod. Meth. Appl. S., Vol. 27, 2017. G. Gantner, D. Haberlik, and D. Praetorius, Adaptive IGAFEM with optimal convergence rates: Hierarchical B-splines. Math. Mod. Meth. Appl. S., Vol. 27, 2017. G. Gantner, Adaptive…
Amina Benaceur: Wednesday 5 December at 11:00 am, A315 Inria Paris. This thesis introduces three new developments of the reduced basis method (RB) and the empirical interpolation method (EIM) for nonlinear problems. The first contribution is a new methodology, the Progressive RB-EIM (PREIM), which aims at reducing the cost of the phase during which the reduced model is constructed, without compromising the accuracy of the final RB approximation. The idea is to gradually enrich the EIM approximation and the RB space, in contrast to the standard approach where both constructions are separate. The second contribution is related to the RB for variational inequalities with nonlinear constraints. We employ an RB-EIM combination to treat the nonlinear constraint. Also, we build a reduced basis for the Lagrange multipliers via a hierarchical algorithm that preserves the non-negativity of the basis vectors. We apply this strategy to elastic frictionless contact for non-matching meshes. Finally, the third contribution focuses on model reduction with data assimilation. A dedicated method has been introduced in the literature to combine numerical models with experimental measurements. We extend the method to a time-dependent framework using a POD-greedy algorithm in order to build accurate reduced spaces for all the time steps. Besides, we devise a new algorithm that produces better reduced spaces while minimizing the number of measurements required for the final reduced problem.
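The greedy flavour of these constructions (pick the worst-approximated snapshot, enrich the basis, repeat) can be sketched in a few lines; this is a generic illustration of greedy basis enrichment, not the PREIM or POD-greedy algorithm of the thesis.

```python
# Greedy construction of a reduced basis from a set of snapshot vectors:
# repeatedly add the snapshot worst approximated by the current basis;
# Gram-Schmidt keeps the basis orthonormal.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def residual(v, basis):
    # component of v orthogonal to span(basis)
    r = list(v)
    for b in basis:
        c = dot(r, b)
        r = [x - c * y for x, y in zip(r, b)]
    return r

def greedy_basis(snapshots, tol=1e-10, max_size=10):
    basis = []
    while len(basis) < max_size:
        # surrogate error indicator: projection error of each snapshot
        errs = [math.sqrt(dot(r, r))
                for r in (residual(s, basis) for s in snapshots)]
        worst = max(range(len(errs)), key=errs.__getitem__)
        if errs[worst] <= tol:
            break  # every snapshot is captured by the current basis
        r = residual(snapshots[worst], basis)
        n = math.sqrt(dot(r, r))
        basis.append([x / n for x in r])  # enrich with normalized residual
    return basis

snaps = [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0]]
print(len(greedy_basis(snaps)))  # the three snapshots span a 2D space
```

In the actual RB setting the exact projection error is replaced by an a posteriori error estimator, which is what makes the offline phase affordable; PREIM's progressive enrichment interleaves this loop with the EIM construction instead of running the two separately.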
Zhaonan Dong: Wednesday 9 January at 11 am, A415 Inria Paris. PDE models are often characterised by local features such as solution singularities/layers and domains with complicated boundaries. These special features make the design of accurate numerical solutions challenging, or require huge amounts of computational resources. One way of achieving complexity reduction of the numerical solution for such PDE models is to design novel numerical methods which support general meshes consisting of polygonal/polyhedral elements, such that local features of the model can be resolved efficiently by adaptive choices of such general meshes. In this talk, we will review the recently developed hp-version symmetric interior penalty discontinuous Galerkin (dG) finite element method for the numerical approximation of PDEs on general computational meshes consisting of polygonal/polyhedral (polytopic) elements. The key feature of the proposed dG method is that the stability and the hp-version a priori error bound are derived based on a specific choice of the interior penalty parameters which allows for edge/face degeneration. Moreover, under certain practical mesh assumptions, the proposed dG method was proven to be able to incorporate very general polygonal/polyhedral elements with an arbitrary number of faces. By utilising general shaped elements, the dG method offers great flexibility in designing adaptive algorithms that refine or coarsen general polytopic elements; this is especially useful for convection-dominated problems, in which boundary and interior layers may appear and require many degrees of freedom to resolve. Finally, we will present several numerical examples through different classes of linear PDEs to highlight the practical performance of the proposed method.
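For orientation, the symmetric interior penalty bilinear form for the model problem -Δu = f reads as follows (standard notation; the hp- and polytope-aware choice of the penalty function σ, robust under face degeneration, is the subject of the talk):

```latex
% T = mesh of polytopic elements K, F = set of mesh faces;
% [[.]] denotes the jump and {{.}} the average across a face
B(u,v) = \sum_{K\in\mathcal{T}} \int_{K} \nabla u\cdot\nabla v
 \;-\; \sum_{F\in\mathcal{F}} \int_{F}
   \bigl( \{\!\{\nabla u\}\!\}\cdot[\![v]\!]
        + \{\!\{\nabla v\}\!\}\cdot[\![u]\!] \bigr)
 \;+\; \sum_{F\in\mathcal{F}} \int_{F} \sigma\,[\![u]\!]\cdot[\![v]\!],
```

where, in the classical simplicial hp-setting, the penalty σ scales like p²/h on each face; on polytopic elements with degenerating faces, that classical scaling is exactly what must be redesigned to keep the method stable.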