Amina Benaceur: Wednesday 5 December at 11:00 am, A315 Inria Paris. This thesis introduces three new developments of the reduced basis (RB) method and the empirical interpolation method (EIM) for nonlinear problems. The first contribution is a new methodology, the Progressive RB-EIM (PREIM), which aims at reducing the cost of the phase during which the reduced model is constructed, without compromising the accuracy of the final RB approximation. The idea is to gradually enrich the EIM approximation and the RB space, in contrast to the standard approach where both constructions are separate. The second contribution concerns the RB method for variational inequalities with nonlinear constraints. We employ an RB-EIM combination to treat the nonlinear constraint, and we build a reduced basis for the Lagrange multipliers via a hierarchical algorithm that preserves the non-negativity of the basis vectors. We apply this strategy to elastic frictionless contact with non-matching meshes. Finally, the third contribution focuses on model reduction with data assimilation. A dedicated method has been introduced in the literature to combine numerical models with experimental measurements. We extend it to a time-dependent framework using a POD-greedy algorithm in order to build accurate reduced spaces for all time steps. In addition, we devise a new algorithm that produces better reduced spaces while minimizing the number of measurements required for the final reduced problem.
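The greedy construction at the heart of the EIM can be sketched in a few lines. The Python fragment below is an illustrative, standard EIM (not the PREIM variant introduced in the thesis): given a matrix of snapshots of a nonlinear term, it alternately selects the worst-approximated snapshot and the "magic point" where its interpolation residual is largest. All function and variable names are ours.

```python
import numpy as np

def eim(snapshots, n_basis):
    """Greedy EIM: build an interpolation basis Q and magic points p
    from a snapshot matrix of shape (n_points, n_samples)."""
    # start from the snapshot with the largest sup-norm
    j = np.argmax(np.max(np.abs(snapshots), axis=0))
    g = snapshots[:, j]
    p = [int(np.argmax(np.abs(g)))]
    Q = [g / g[p[0]]]                         # normalise so Q_0(p_0) = 1
    for _ in range(1, n_basis):
        Qm = np.column_stack(Q)
        B = Qm[p, :]                          # interpolation matrix B[k, l] = Q_l(p_k)
        coeffs = np.linalg.solve(B, snapshots[p, :])
        residual = snapshots - Qm @ coeffs    # EIM error of every snapshot
        j = np.argmax(np.max(np.abs(residual), axis=0))
        r = residual[:, j]                    # worst-approximated snapshot
        i_new = int(np.argmax(np.abs(r)))     # its new magic point
        p.append(i_new)
        Q.append(r / r[i_new])
    return np.column_stack(Q), p
```

By construction the interpolation matrix B is lower triangular with unit diagonal, so the interpolation problem at the magic points is always well posed.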
Zhaonan Dong: Wednesday 9 January at 11 am, A415 Inria Paris. PDE models are often characterised by local features such as solution singularities/layers and domains with complicated boundaries. These special features make the design of accurate numerical solutions challenging, or require a huge amount of computational resources. One way of reducing the complexity of the numerical solution for such PDE models is to design novel numerical methods which support general meshes consisting of polygonal/polyhedral elements, so that local features of the model can be resolved efficiently by adaptive choices of such general meshes. In this talk, we will review the recently developed hp-version symmetric interior penalty discontinuous Galerkin (dG) finite element method for the numerical approximation of PDEs on general computational meshes consisting of polygonal/polyhedral (polytopic) elements. The key feature of the proposed dG method is that stability and hp-version a priori error bounds are derived based on a specific choice of the interior penalty parameters which allows for degenerating edges/faces. Moreover, under certain practical mesh assumptions, the proposed dG method was proven to accommodate very general polygonal/polyhedral elements with an arbitrary number of faces. Thanks to the use of generally shaped elements, the dG method offers great flexibility in designing adaptive algorithms that refine or coarsen general polytopic elements; this is especially useful for convection-dominated problems, in which boundary and interior layers may appear and require many degrees of freedom to resolve. Finally, we will present several numerical examples on different classes of linear PDEs to highlight the practical performance of the proposed method.
Maxime Breden: Thursday 13 December at 11 am, A415 Inria Paris. The aim of a posteriori validation techniques is to obtain mathematically rigorous and quantitative existence theorems, using numerical simulations. Given an approximate solution, the general strategy is to combine a posteriori estimates with analytical ones to apply a fixed point theorem, which then yields the existence of a true solution in an explicit neighborhood of the approximate one. In the first part of the talk, I’ll present the main ideas in more detail, and describe the general framework in which they are applicable. In the second part, I’ll then focus on a specific example and explain how to validate a posteriori periodic solutions of the Navier-Stokes equations with a Taylor-Green type of forcing. This is joint work with Jan Bouwe van den Berg, Jean-Philippe Lessard and Lennaert van Veen.
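To give a concrete flavour of the fixed-point strategy described above, here is a schematic Newton-Kantorovich-type check for a scalar equation f(x) = 0. It is only a sketch in floating-point arithmetic, under assumptions we state explicitly; a genuine computer-assisted proof, as in the talk, works in function spaces and uses interval arithmetic to make every bound rigorous.

```python
def validate_root(f, df, K, x_bar, r):
    """A posteriori check: does the ball B(x_bar, r) contain a true zero
    of f?  K must be an analytical Lipschitz bound for df on that ball.
    We test whether T(x) = x - A*f(x) is a contraction mapping the ball
    into itself (floating-point sketch, not a rigorous interval proof)."""
    A = 1.0 / df(x_bar)                 # approximate inverse of the derivative
    Y = abs(A * f(x_bar))               # defect bound: |T(x_bar) - x_bar|
    # contraction bound on the ball: |T'(x)| <= |1 - A df(x_bar)| + |A| K r
    Z = abs(1.0 - A * df(x_bar)) + abs(A) * K * r
    # Banach fixed-point theorem applies if T maps the ball into itself
    # (Y + Z*r <= r) and is a contraction (Z < 1)
    return Y + Z * r <= r and Z < 1.0
```

With f(x) = x² - 2, the check succeeds around x̄ ≈ 1.41421356 for r = 10⁻⁶, proving (modulo rounding) that a true zero of f lies within 10⁻⁶ of the approximate one; it rightly fails if r is taken smaller than the defect.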
Simon Lemaire: Thursday 16 April at 3 pm, A415 Inria Paris. We are interested in physical settings presenting an interface between a classical (positive) material and a (negative) metamaterial, in such a way that the coefficients of the model change sign in the domain. We study, in the “elliptic” case, the numerical approximation of such sign-changing problems. We introduce a new numerical method, based on domain decomposition and optimization, that we prove to be convergent as soon as, for a given right-hand side, the problem admits a unique solution. The proof of convergence does not rely on any symmetry assumption on the mesh family with respect to the sign-changing interface. In that respect, it provides a more convenient alternative to T-coercivity based approximation in the situations when the latter is applicable, whereas it constitutes a new paradigm in the situations when it is not. We illustrate our findings on a comprehensive set of test cases.
Thirupathi Gudi: Tuesday 20 February at 3 pm, A415 Inria Paris. In this talk, we review some approaches for formulating the Dirichlet boundary control problem and then present a new energy-space based approach. We show that this new approach allows high regularity for both the optimal control and the optimal state. Using the optimality conditions at the continuous level, we propose a finite element method for the numerical solution and derive error estimates. We present some numerical experiments to illustrate the method.
Franz Chouly: Thursday 15 February at 2 pm, A415 Inria Paris. In the first part of this talk, we will present a residual-based a posteriori error estimate for contact problems in small strain elasticity, discretized with finite elements and Nitsche’s method. Upper and lower bounds are established under a saturation assumption. These theoretical results will be illustrated by some numerical experiments (joint work with Mathieu Fabre, Patrick Hild, Jérôme Pousin and Yves Renard). In the second part of this talk, we will present preliminary results on goal-oriented error estimates for soft-tissue biomechanics, still under small strain assumptions. The performance of the Dual Weighted Residual method will be assessed for two simplified scenarios involving tongue muscular activation and contraction of the arterial wall. Open mathematical questions and the potential interest of such a methodology for computational biomechanics will be discussed (joint work with Stéphane Bordas, Marek Bucki, Michel Duprez, Vanessa Lleras, Claudio Lobos, Alexei Lozinski, Pierre-Yves Rohan and Satyendra Tomar).
Sébastien Furic: Thursday 30 November at 2 pm, A415 Inria Paris. This presentation aims at introducing the concept of System-Level Physical Model, with some emphasis on model construction and simulation. In particular, the soundness of models and the meaning and solution of the associated differential equations will be discussed.
Hend Benameur: Thursday 2 November at 11 am, A415 Inria Paris. We are interested in some inverse problems in porous media: parameter estimation, fracture identification and well location. All these problems are formulated as optimization problems. The main and common tool in the developed techniques is the gradient of a convenient function. An adaptive parameterization algorithm is developed, implemented and applied to the estimation of scalar and vector parameters in porous media, where both the values of the parameters and the shapes of the hydrogeological zones are unknown. The main tool in the adaptive parameterization approach is a refinement indicator: once the identification problem is set as the minimization of an objective function, the question is what the effect on this function would be of allowing a discontinuity of the parameter at some candidate location. Refinement indicators answer this question. Since fractures are characterized by discontinuities, the idea is to extend these indicators to locate fractures: we define fracture indicators and proceed in an iterative way in order to identify fractures in porous media. The topological sensitivity analysis method has been recognized as a promising tool for solving topology optimization problems. It consists in providing an asymptotic expansion of a shape functional with respect to the size of a small hole created inside the domain. To solve the inverse problem where both the parameterization and the well locations are unknown, we incorporate the topological gradient approach into the adaptive parameterization algorithm; results are promising.
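The refinement-indicator idea can be sketched in a deliberately simplified, discrete form: for a piecewise-constant parameter attached to cells, the first-order effect on the objective of allowing a jump across a candidate cut of a zone is measured by the sum of the per-cell gradient components on one side of the cut. The array-based setting and all names below are ours, for illustration only.

```python
import numpy as np

def refinement_indicator(cell_grad, mask):
    """First-order refinement indicator for splitting a zone along the
    candidate cut 'mask' (boolean array over the zone's cells):
    the absolute value of the summed objective gradients on one side.
    A large value suggests the cut captures a real parameter jump."""
    return abs(np.sum(cell_grad[mask]))

def best_cut(cell_grad, candidate_masks):
    """Rank candidate cuts of a zone and return the most promising one."""
    scores = [refinement_indicator(cell_grad, m) for m in candidate_masks]
    return int(np.argmax(scores)), scores
```

In an adaptive loop, only the zone/cut pair with the largest indicator is actually refined before re-solving the minimization, which keeps the number of unknown parameters small.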
Peter Minev: Tuesday 10 October at 11 am, A415 Inria Paris. The presentation will focus on two classes of recently developed splitting schemes for the Navier-Stokes equations. The first class is based on the classical artificial compressibility (AC) method. The original method proposed by J. Shen in 1995 reduces the solution of the incompressible Navier-Stokes equations to a set of two or three parabolic problems in 2D and 3D, respectively. Unfortunately, its accuracy is limited to first order in time and can be extended further only if the resulting scheme involves an elliptic problem for the velocity vector. Recently, together with J.L. Guermond (Texas A&M University), we proposed a scheme that extends the AC method to any order in time, using a bootstrapping approach to the incompressibility constraint that essentially requires solving only a set of parabolic equations for the velocity. The conditioning of the corresponding linear systems is therefore O(Δt h^-2). This is generally better than solving a parabolic equation for the velocity and an elliptic equation for the pressure, as required by the various projection schemes that are perhaps the most popular approach at present. Besides, the bootstrapping algorithm allows one to achieve any order in time, subject to some initialization conditions, in contrast to the projection methods, whose accuracy seems to be essentially limited to second order for the velocity in the L2 norm. The second class of methods is based on a novel approach to the Navier-Stokes equations that reformulates them in terms of stress variables. It was developed in a recent paper together with P. Vabishchevich (Russian Academy of Sciences). The main advantage of such an approach becomes clear when it is applied to fluid-structure interaction problems, since in that case the problems for the fluid and the structure, both written in terms of stress variables,…
Théophile Chaumont: Monday 18 September at 1 pm, A415 Inria Paris. Time-harmonic wave propagation problems are costly to solve numerically since the corresponding PDE operators are not strongly elliptic, and as a result, discretization methods might become unstable. Specifically, the finite element solution is quasi-optimal (almost as good as the best approximation the finite element space can provide) only under restrictive assumptions on the mesh size. If the mesh size is too large, stability is lost, and the finite element solution can become completely inaccurate even when the best approximation remains accurate. This phenomenon is called the “pollution effect” and becomes more pronounced at higher frequencies. For wave propagation problems in homogeneous media, it is known that high-order finite element methods are less sensitive to the pollution effect. For this reason, they are employed in a wide range of applications, as the corresponding linear systems are smaller and easier to solve. In this talk, we investigate the use of high-order finite element methods to solve wave propagation problems in highly heterogeneous media. Since the heterogeneities of the medium can exhibit small-scale features, we consider “non-fitting” meshes, which are not aligned with the physical interfaces of the medium; instead, the parameters defining the medium of propagation can be discontinuous inside each element. We propose a convergence analysis and draw two main conclusions: (i) the asymptotic convergence rate of the proposed finite element method is suboptimal due to the lack of regularity of the solution inside each cell; (ii) the pollution effect is greatly reduced by increasing the order of discretization. We illustrate our main conclusions with geophysical application benchmarks. These examples confirm that higher-order methods are more efficient than linear finite elements.
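The pollution effect can be made concrete in one dimension. For the P1 finite element discretization (with consistent mass matrix) of the 1D Helmholtz equation -u'' - k²u = 0 on a uniform mesh of size h, the discrete solution oscillates with a perturbed wavenumber given by the classical dispersion relation cos(k_h h) = (1 - (kh)²/3)/(1 + (kh)²/6). The sketch below (our notation; a standard textbook computation, not specific to the talk) shows that at a fixed resolution kh the absolute phase error k - k_h grows linearly with k, so the error accumulated over a fixed domain grows with frequency.

```python
import numpy as np

def discrete_wavenumber(k, h):
    """Discrete wavenumber k_h of the P1 FEM for -u'' - k^2 u = 0 on a
    uniform 1D mesh of size h, from the dispersion relation
    cos(k_h h) = (1 - (kh)^2/3) / (1 + (kh)^2/6)  (valid for small kh)."""
    t = k * h
    c = (1.0 - t**2 / 3.0) / (1.0 + t**2 / 6.0)
    return np.arccos(c) / h

# phase error at fixed points-per-wavelength: k - k_h ~ k (kh)^2 / 24,
# i.e. proportional to k even though the relative error is constant
```

This is precisely why increasing the mesh resolution proportionally to the frequency is not enough, and why higher-order (or higher-resolution) discretizations are needed at large k.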