April 4-5 – Claude Lobry: Radically Elementary Mathematics

Radically Elementary Mathematics, Claude Lobry. Lecture 1 (Thursday 4 April, 10:30-12:00, Simone Iff A415, registration). In a first part (about 1h) I present the axiomatic system I.S.T. proposed by Nelson in 1977; in this system the expressions x is infinitely large (infinitely small) have a precise (formal) mathematical meaning and can be manipulated as the intuitive sense of the expressions in bold type dictates. The outline of this part is: Axiomatics: Idealization, Standardization, Transfer. First applications: an alternative to functions of real variables, the "pointillés" (dotted lines); the existence theorem for solutions of a differential equation. Relative consistency of I.S.T. with respect to Z.F.C. This (somewhat formal) work will have familiarized us with the concepts. At this point a little history is useful. I will explain how the "infinitesimals" of Leibniz, still defended by Lazare Carnot around 1800, were banished (in the name of rigor) during the 19th century in favor of epsilontics (∀ε∃η···), codified later (at the beginning of the 20th century) in the Z.F.C. formalism, to the point that by about 1960 the dominant ideology among mathematicians was that "rigorous mathematics" = "mathematics that can be formalized in Z.F.C." From this it was (erroneously) concluded that no rigorous mathematics using infinitesimals can exist. The dogma was called into question around 1960 by Robinson, who proposed nonstandard analysis (NSA) in a version formalized within Z.F.C. NSA was then conceived as a (possibly simpler) technique for proving traditional statements. Lectures 2 and 3 (Friday 5 April, 10:30-12:00 and 14:00-15:30, Simone Iff A415, registration). In the 1980s Nelson and Reeb proposed a research program consisting in not translating back into traditional language the statements obtained via NSA. This is what I call radically elementary mathematics. Before commenting on this program I propose…
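For the reader's convenience, here is a minimal sketch of the three I.S.T. axiom schemes as they are usually stated in the literature (the notation is ours, not taken from the announcement); within I.S.T., "x is infinitely large" for a real x then simply means |x| > r for every standard real r:

```latex
% The three axiom schemes of I.S.T. (Internal Set Theory), added to Z.F.C.
% \forall^{st}/\exists^{st} quantify over standard sets only;
% \forall^{st\,fin} over standard finite sets;
% "internal" = a formula not mentioning the predicate "standard".

% (I) Idealization (A internal):
\forall^{st\,fin} z\ \exists x\ \forall y \in z\ A(x,y)
   \iff \exists x\ \forall^{st} y\ A(x,y)

% (S) Standardization (A arbitrary, possibly external):
\forall^{st} x\ \exists^{st} y\ \forall^{st} z\
   \bigl( z \in y \iff z \in x \wedge A(z) \bigr)

% (T) Transfer (A internal, with standard parameters t_1,\dots,t_k):
\forall^{st} t_1 \cdots \forall^{st} t_k\
   \bigl( \forall^{st} x\ A(x,t_1,\dots,t_k) \implies \forall x\ A(x,t_1,\dots,t_k) \bigr)
```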


March 18 – Patrik Daniel: Adaptive hp-finite elements with guaranteed error contraction and inexact multilevel solvers

Patrik Daniel: Monday 18 March at 14:00, A315 Inria Paris. We propose new practical adaptive refinement algorithms for conforming hp-finite element approximations of elliptic problems. We consider the use of both exact and inexact solvers within the established framework of adaptive methods consisting of four concatenated modules: SOLVE, ESTIMATE, MARK, REFINE. The strategies are driven by guaranteed equilibrated-flux a posteriori error estimators. Namely, for an inexact approximation obtained by an (arbitrary) iterative algebraic solver, bounds on the total, the algebraic, and the discretization errors are provided. The nested hierarchy of hp-finite element spaces is crucially exploited for the algebraic error upper bound, which in turn allows us to formulate sharp stopping criteria for the algebraic solver. Our hp-refinement criterion hinges on solving two local residual problems posed on patches of elements around marked vertices selected by a bulk-chasing criterion; they respectively emulate h-refinement and p-refinement. One particular feature of our approach is that we derive a computable real number which gives a guaranteed bound on the ratio of the (unknown) energy error in the next adaptive loop step with respect to the present one (i.e., on the error reduction factor). Numerical experiments are presented to validate the proposed adaptive strategies. We investigate the accuracy of our bound on the error reduction factor, which turns out to be excellent, with effectivity indices close to the optimal value of one. In practice, we observe asymptotic exponential convergence rates, in both the exact and inexact algebraic solver settings. Finally, we also provide a theoretical analysis of the proposed strategies. We prove that under some additional assumptions on the h- and p-refinements, including the interior node property and sufficient p-refinements, the computable reduction factors are indeed bounded by a generic constant strictly smaller than one. This implies the convergence of the…
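A minimal sketch of the adaptive loop described above, with the estimator-based stopping criterion for the inexact solver (all helper names — solve_inexact, estimate, mark_vertices, refine_hp — are hypothetical placeholders, not the authors' code):

```python
# Sketch of one adaptive loop in the SOLVE -> ESTIMATE -> MARK -> REFINE
# pattern. All helpers are hypothetical placeholders, not a real API.

def adaptive_loop(mesh, tol, theta=0.5, gamma_alg=0.1):
    while True:
        # SOLVE: run the iterative algebraic solver until the algebraic
        # error estimate is a small fraction of the discretization error
        # estimate (a "sharp stopping criterion" balancing both sources).
        u_h = solve_inexact(
            mesh,
            stop=lambda eta_alg, eta_dis: eta_alg <= gamma_alg * eta_dis,
        )

        # ESTIMATE: guaranteed equilibrated-flux a posteriori estimate of
        # the total error, with one contribution per mesh element.
        eta_total, eta_local = estimate(mesh, u_h)
        if eta_total <= tol:
            return u_h, mesh

        # MARK: bulk-chasing (Doerfler) criterion -- select vertices whose
        # patches carry at least a theta-fraction of the total estimate.
        marked = mark_vertices(mesh, eta_local, theta)

        # REFINE: for each marked patch, solve two local residual problems
        # emulating h- and p-refinement, and apply the better option.
        mesh = refine_hp(mesh, marked, u_h)
```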


February 14 – Thibault Faney, Soleiman Yousef: Acceleration of a thermodynamic equilibrium simulator by machine learning

Thibault Faney, Soleiman Yousef: Thursday 14 February at 15:00, A415 Inria Paris. Numerical simulations are an important tool for better understanding and predicting the behavior of complex physical systems. In the case of thermodynamic equilibria, these systems are mainly modeled by systems of nonlinear equations. Solving these systems after discretization requires iterative methods that rely on an approximate initial solution. The goal of the work presented here is to build a statistical model, obtained by machine learning, that replaces this heuristic initial solution with an approximation of the solution based on a database of previous computation results. The target simulator is a three-phase flash simulator (aqueous, liquid, and gas) at constant pressure and temperature. From the initial mole fractions of the chemical elements, it computes the phases present at equilibrium, as well as their respective volume fractions and the mole fractions of each chemical species in each of the phases present. We formulate the problem as a two-stage supervised learning problem: first a classification of the phases present at equilibrium, then a regression on the volume and mole fractions. Results on several test cases of increasing complexity in the number of chemical compounds show very good agreement between the statistical model and the solution provided by the simulator. The key ingredients are the choice of the training database (design of experiments) and the use of nonlinear learning methods.
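A minimal sketch of such a two-stage pipeline (assumed data layout and a hypothetical loader, not the authors' code): classify which phases coexist at equilibrium, then regress the fractions for the predicted phase configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# X: initial mole fractions of the chemical elements, shape (n_samples, n_elements)
# y_phase: integer label encoding the set of phases present at equilibrium
# y_frac: volume fractions and per-phase mole fractions at equilibrium
X, y_phase, y_frac = load_flash_database()  # hypothetical loader

# Stage 1: classification of the phase configuration.
clf = RandomForestClassifier().fit(X, y_phase)

# Stage 2: one regressor per phase configuration, since the fraction
# targets are only defined for the phases actually present.
reg = {
    label: RandomForestRegressor().fit(X[y_phase == label],
                                       y_frac[y_phase == label])
    for label in np.unique(y_phase)
}

def initial_guess(x):
    """Statistical replacement for the heuristic initial solution."""
    label = clf.predict(x.reshape(1, -1))[0]
    return label, reg[label].predict(x.reshape(1, -1))[0]
```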


January 31 – Camilla Fiorini: Sensitivity analysis for systems of hyperbolic PDEs with discontinuous solutions: the case of the Euler equations

Camilla Fiorini: Thursday 31 January at 14:30, A415 Inria Paris. Sensitivity analysis (SA) is the study of how changes in the input of a model affect the output. Standard SA techniques for PDEs, such as the continuous sensitivity equation method, call for the differentiation of the state variable. However, if the governing equations are hyperbolic PDEs, the state can be discontinuous, and this generates Dirac delta functions in the sensitivity. We aim at defining and numerically approximating a system of sensitivity equations which remains valid when the state is discontinuous: to do so, one can derive a correction term to be added to the sensitivity equations starting from the Rankine-Hugoniot conditions, which govern the state across a shock. We detail this procedure in the case of the Euler equations. Numerical results show that the standard Godunov and Roe schemes fail to produce accurate results because of the underlying numerical diffusion. An anti-diffusive numerical method is then successfully proposed.
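To make the mechanism concrete, here is the standard computation in the simplified setting of a scalar conservation law (our own illustration; the talk treats the full Euler system):

```latex
% State u(x,t;a) depending on a parameter a, with flux f(u;a):
\partial_t u + \partial_x f(u;a) = 0 .
% Where u is smooth, differentiating formally in a gives the sensitivity
% equation for u_a := \partial u / \partial a:
\partial_t u_a + \partial_x \bigl( f_u(u;a)\, u_a + f_a(u;a) \bigr) = 0 .
% Across a shock at x_s(t;a), writing u as a jump [u] = u^+ - u^- riding
% on H(x - x_s) shows that u_a contains the Dirac term
-\,[u]\ \partial_a x_s\ \delta_{x_s} ,
% and differentiating the Rankine-Hugoniot condition
f(u^+;a) - f(u^-;a) = \sigma\,(u^+ - u^-)
% with respect to a yields the shock-speed sensitivity from which the
% correction term for the sensitivity system is built.
```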


February 7 – Gregor Gantner: Optimal adaptivity for isogeometric finite and boundary element methods

Gregor Gantner: Thursday 7 February at 11 am, A415 Inria Paris. The CAD standard for spline representation in 2D or 3D relies on tensor-product splines. To allow for adaptive refinement, several extensions have emerged, e.g., analysis-suitable T-splines, hierarchical splines, or LR-splines. All these concepts have been studied via numerical experiments, but only little literature exists concerning the thorough analysis of adaptive isogeometric methods. The work [1] investigates linear convergence of the weighted-residual error estimator (or, equivalently, energy error plus data oscillations) of an isogeometric finite element method (IGAFEM) with truncated hierarchical B-splines. Optimal convergence was independently proved in [2, 3]. In [3], we employ hierarchical B-splines and propose a refinement strategy to generate a sequence of refined meshes and corresponding discrete solutions. Usually, CAD provides only a parametrization of the boundary ∂Ω instead of the domain Ω itself. The boundary element method, which we consider in the second part of the talk, circumvents this difficulty by working only on the CAD-provided boundary mesh. In 2D, our adaptive algorithm steers the mesh-refinement and the local smoothness of the ansatz functions. We proved linear convergence of the employed weighted-residual estimator at optimal algebraic rate in [5, 6]. In 3D, we consider an adaptive IGABEM with hierarchical splines and prove linear convergence of the estimator at optimal rate; see [6]. REFERENCES [1] A. Buffa and C. Giannelli, Adaptive isogeometric methods with hierarchical splines: error estimator and convergence. Math. Models Methods Appl. Sci., Vol. 26, 2016. [2] A. Buffa and C. Giannelli, Adaptive isogeometric methods with hierarchical splines: optimality and convergence rates. Math. Models Methods Appl. Sci., Vol. 27, 2017. [3] G. Gantner, D. Haberlik, and D. Praetorius, Adaptive IGAFEM with optimal convergence rates: hierarchical B-splines. Math. Models Methods Appl. Sci., Vol. 27, 2017. [4] G. Gantner, Adaptive…
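For orientation, the weighted-residual error estimator mentioned above has, for the Poisson model problem, the following standard form (a textbook recollection, not taken from the talk; u_h is the discrete solution, T a mesh element):

```latex
% Weighted-residual estimator for -\Delta u = f:
\eta(T)^2 \;=\; h_T^2\,\| f + \Delta u_h \|_{L^2(T)}^2
   \;+\; h_T\,\| [\partial_n u_h] \|_{L^2(\partial T \cap \Omega)}^2 ,
\qquad
\eta^2 \;=\; \sum_{T \in \mathcal{T}} \eta(T)^2 .
```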


December 5 – Amina Benaceur: Model reduction for nonlinear thermics and mechanics

Amina Benaceur: Wednesday 5 December at 11:00 am, A315 Inria Paris. This thesis introduces three new developments of the reduced basis method (RB) and the empirical interpolation method (EIM) for nonlinear problems. The first contribution is a new methodology, the Progressive RB-EIM (PREIM), which aims at reducing the cost of the phase during which the reduced model is constructed, without compromising the accuracy of the final RB approximation. The idea is to gradually enrich the EIM approximation and the RB space, in contrast to the standard approach where both constructions are separate. The second contribution is related to the RB for variational inequalities with nonlinear constraints. We employ an RB-EIM combination to treat the nonlinear constraint. Also, we build a reduced basis for the Lagrange multipliers via a hierarchical algorithm that preserves the non-negativity of the basis vectors. We apply this strategy to elastic frictionless contact with non-matching meshes. Finally, the third contribution focuses on model reduction with data assimilation. A dedicated method has been introduced in the literature to combine numerical models with experimental measurements. We extend the method to a time-dependent framework using a POD-greedy algorithm in order to build accurate reduced spaces for all the time steps. Besides, we devise a new algorithm that produces better reduced spaces while minimizing the number of measurements required for the final reduced problem.
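For context, the EIM that PREIM progressively enriches has the following standard form (a textbook recollection, not specific to the thesis):

```latex
% A parametrized nonlinearity \gamma(\cdot;\mu) is replaced by the interpolant
\gamma(x;\mu) \;\approx\; \sum_{j=1}^{M} \varphi_j(\mu)\, q_j(x),
% where the basis functions q_j and the interpolation ("magic") points
% x_1,\dots,x_M are selected greedily, and the coefficients solve
\sum_{j=1}^{M} \varphi_j(\mu)\, q_j(x_i) \;=\; \gamma(x_i;\mu),
\qquad i = 1,\dots,M,
% a lower-triangular system by construction of the greedy selection.
```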


January 9 – Zhaonan Dong: hp-Version Discontinuous Galerkin Methods on Polygonal and Polyhedral Meshes

Zhaonan Dong: Wednesday 9 January at 11 am, A415 Inria Paris. PDE models are often characterised by local features such as solution singularities/layers and domains with complicated boundaries. These special features make the design of accurate numerical solutions challenging, or require a huge amount of computational resources. One way of reducing the complexity of the numerical solution for such PDE models is to design novel numerical methods which support general meshes consisting of polygonal/polyhedral elements, such that local features of the model can be resolved efficiently by adaptive choices of such general meshes. In this talk, we will review the recently developed hp-version symmetric interior penalty discontinuous Galerkin (dG) finite element method for the numerical approximation of PDEs on general computational meshes consisting of polygonal/polyhedral (polytopic) elements. The key feature of the proposed dG method is that the stability and the hp-version a priori error bound are derived based on a specific choice of the interior penalty parameters which allows for edge/face degeneration. Moreover, under certain practical mesh assumptions, the proposed dG method was proven to admit very general polygonal/polyhedral elements with an arbitrary number of faces. Thanks to the use of generally shaped elements, the dG method offers great flexibility in designing adaptive algorithms which refine or coarsen general polytopic elements; this is especially relevant for convection-dominated problems, in which boundary and interior layers may appear and require many degrees of freedom to resolve. Finally, we will present several numerical examples for different classes of linear PDEs to highlight the practical performance of the proposed method.
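For reference, the symmetric interior penalty dG bilinear form has, for the Poisson problem, the following standard shape (a textbook recollection; the talk's contribution concerns how the penalty is chosen on polytopic meshes with degenerating edges/faces):

```latex
% SIPG bilinear form for -\Delta u = f:
B(u,v) \;=\; \sum_{K \in \mathcal{T}} \int_K \nabla u \cdot \nabla v
 \;-\; \sum_{F \in \mathcal{F}} \int_F \Bigl(
      \{\!\{ \nabla u \}\!\} \cdot [\![ v ]\!]
    + \{\!\{ \nabla v \}\!\} \cdot [\![ u ]\!]
    - \sigma_F\, [\![ u ]\!] \cdot [\![ v ]\!] \Bigr),
% with, in the hp-setting, a penalty typically scaling like
\sigma_F \;\sim\; \frac{p^2}{h_F} .
```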


IFPEN / Inria meeting

26 November 2018, Jacques-Louis Lions 2 room, Building C, ground floor, Inria, 2 rue Simone Iff, Paris 12ème. How to get there. Program:
9:45-10:15: Welcome coffee
10:15-10:30: Amendment to the IFPEN/INRIA framework contract (2018-2022): addition of the new theme "Artificial Intelligence and data science" (Van Bui-Tran)
10:30-11:00: Domain decomposition methods via the dedicated language FreeFem++ (Frédéric Nataf)
11:00-11:30: Goal-oriented a posteriori error estimation for conforming and nonconforming approximations with inexact solvers (Soleiman Yousef)
11:30-12:00: Reduced-basis method for two-phase Darcean flows (Sébastien Boyaval)
12:00-12:30: Study and simulation of a nonlinear advective-diffusive stratigraphic model with moving boundaries (Huy Quang Tran)
12:30-14:00: Lunch: Crêperie Paris Breizh, 177 avenue Daumesnil, 75012 Paris
14:00-14:30: A posteriori error estimates and adaptive stopping criteria for a compositional two-phase flow with nonlinear complementarity constraints (Jad Dabaghi)
14:30-15:00: Adaptive resolution of linear systems based on error estimators (Zakariae Jorti)
15:00-15:30: A kinetic model for reactive transport (Bastien Hamlat)
15:30-15:45: Coffee break
15:45-16:15: Strategies to improve the robustness of the Newton algorithm used in solving the Richards equation (Guillaume Enchéry)
16:15-16:20: Closing of the day (Martin Vohralík)


December 13 – Maxime Breden: An introduction to a posteriori validation techniques, illustrated on the Navier-Stokes equations

Maxime Breden: Thursday 13 December at 11 am, A415 Inria Paris. The aim of a posteriori validation techniques is to obtain mathematically rigorous and quantitative existence theorems, using numerical simulations. Given an approximate solution, the general strategy is to combine a posteriori estimates with analytical ones to apply a fixed point theorem, which then yields the existence of a true solution in an explicit neighborhood of the approximate one. In the first part of the talk, I’ll present the main ideas in more detail, and describe the general framework in which they are applicable. In the second part, I’ll then focus on a specific example and explain how to validate a posteriori periodic solutions of the Navier-Stokes equations with a Taylor-Green type of forcing. This is joint work with Jan Bouwe van den Berg, Jean-Philippe Lessard and Lennaert van Veen.
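A common form of the underlying fixed-point argument (our summary of the standard "radii polynomial" setup, not of the talk itself):

```latex
% To solve F(x) = 0, take a numerical approximation \bar{x} and an
% injective approximate inverse A of DF(\bar{x}), and set
T(x) \;=\; x - A\,F(x) .
% Combining a posteriori and analytical estimates, compute bounds
\| T(\bar{x}) - \bar{x} \| \le Y,
\qquad
\sup_{x \in \overline{B}(\bar{x},\,r)} \| DT(x) \| \le Z(r).
% If Y + r\,Z(r) \le r and Z(r) < 1 for some r > 0, then T is a
% contraction mapping \overline{B}(\bar{x},r) into itself, so a true
% zero of F exists in this explicit neighborhood of \bar{x}.
```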


Axioms of adaptivity — mini-course by C. Carstensen

Tuesday 11 September — Thursday 13 September at 9am (three 2-hour sessions), A415 Inria Paris. Files for the participants (available only during the course): AoA1 AoA2 AoA3. Mini-course by C. Carstensen (Humboldt-Universität zu Berlin, Germany): The lecture series on the optimal rates of adaptive mesh-refining algorithms in computational PDEs provides an introduction to the mathematics of optimal rates based on the standard Dörfler marking in a collective refinement strategy. The focus is on a thorough insight into the mathematics for the simplest meaningful setting, with elementary tools like the trace inequality, the inverse estimate, and several forms of the triangle and Cauchy inequalities. Solely four axioms guarantee the optimality in terms of the error estimators outlined in Part 1 of the lectures. This general framework covers a huge part of the existing literature on optimal rates of adaptive schemes and is exemplified for the 2D Poisson model problem on polygonal domains. Part 2 gives the outline of the proof of optimal rates with linear convergence and a comparison lemma as Stevenson's key argument for the optimality. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. The local discrete efficiency of the error estimator is neither needed to prove convergence nor utilised for the quasi-optimal convergence behaviour of the error estimator. Efficiency exclusively characterises the approximation classes involved in terms of the best-approximation error and data resolution, and so the upper bound on the optimal marking parameters does not depend on any efficiency constant. A general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator. Part 3 discusses the lowest-order conforming finite element method based on triangles and provides proofs of the stability, reduction, and discrete localised reliability of the explicit residual-based…
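The four axioms can be recalled schematically from the axioms-of-adaptivity literature as follows (notation ours: 𝒯 a mesh, 𝒯̂ any refinement of it, u and û the associated discrete solutions, δ a distance, η(𝒯, S) the estimator restricted to a subset S of elements):

```latex
% (A1) stability on non-refined elements:
\bigl| \eta(\hat{\mathcal{T}}, \mathcal{T} \cap \hat{\mathcal{T}})
     - \eta(\mathcal{T}, \mathcal{T} \cap \hat{\mathcal{T}}) \bigr|
  \;\le\; \Lambda_1\, \delta(\hat{u}, u)
% (A2) reduction on refined elements (0 < q < 1):
\eta(\hat{\mathcal{T}}, \hat{\mathcal{T}} \setminus \mathcal{T})
  \;\le\; q\, \eta(\mathcal{T}, \mathcal{T} \setminus \hat{\mathcal{T}})
        + \Lambda_2\, \delta(\hat{u}, u)
% (A3) discrete reliability:
\delta(\hat{u}, u) \;\le\; \Lambda_3\, \eta(\mathcal{T}, \mathcal{T} \setminus \hat{\mathcal{T}})
% (A4) quasi-orthogonality along the adaptive sequence (u_k)_k:
\sum_{k=\ell}^{\ell+N} \bigl( \delta(u_{k+1}, u_k)^2 - \varepsilon\, \eta_k^2 \bigr)
  \;\le\; \Lambda_4\, \eta_\ell^2 .
```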
