Jad Dabaghi: Thursday 15 June at 3pm, A415 Inria Paris. We propose an adaptive inexact version of a class of semi-smooth Newton methods. As a model problem, we consider the system of variational inequalities describing the contact between two membranes and its finite element discretization. Any iterative linearization algorithm, such as Newton-min or Newton-Fischer-Burmeister, is taken into account, as well as any iterative linear algebraic solver. We prove an a posteriori error estimate between the exact solution and the approximate solution which is valid at any step of the linearization and algebraic resolution. This estimate is based on discretization and algebraic flux reconstructions, where the latter is obtained on a hierarchy of nested meshes. The estimate distinguishes the discretization, linearization, and algebraic components of the error and allows us to formulate adaptive stopping criteria for both solvers. Numerical experiments for the semi-smooth Newton-min algorithm in combination with the GMRES solver confirm the efficiency of the method.
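The Newton-min idea referred to above can be illustrated on a generic complementarity problem min(x, F(x)) = 0. The sketch below is only a minimal illustration of that linearization principle (with a direct linear solve standing in for an inexact GMRES solve); the function names and the toy data are illustrative, not taken from the talk.

```python
import numpy as np

def newton_min(F, JF, x0, tol=1e-10, maxit=50):
    """Semi-smooth Newton-min for the complementarity problem
    min(x, F(x)) = 0 (componentwise).

    At each step, the components where the min picks x are frozen and a
    Newton step is taken on the piecewise-smooth residual."""
    x = x0.copy()
    for k in range(maxit):
        Fx = F(x)
        r = np.minimum(x, Fx)            # semi-smooth residual
        if np.linalg.norm(r) < tol:
            return x, k
        active = x <= Fx                 # rows where the residual equals x
        J = np.where(active[:, None], np.eye(len(x)), JF(x))  # generalized Jacobian
        x = x - np.linalg.solve(J, r)    # an inexact/iterative solve could be used here
    return x, maxit

# Toy example: linear complementarity problem min(x, Ax - b) = 0 with A SPD.
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, -1.0])
x, nit = newton_min(lambda x: A @ x - b, lambda x: A, np.zeros(2))
print(x, nit)  # expected solution [0.5, 0.]
```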
Martin Eigel: Thursday 1 June at 10:45 am, A415 Inria Paris. We consider a class of linear PDEs with stochastic coefficients which depend on a countably infinite number of random parameters. As an alternative to classical Monte Carlo sampling techniques, a functional discretisation of the stochastic space in generalised polynomial chaos may lead to significantly improved (optimal) convergence rates. However, when employed in the context of Galerkin methods, the arising algebraic systems are very high-dimensional and quickly become computationally intractable. This is a prime example of the curse of dimensionality, with an exponential growth of complexity which makes model reduction techniques inevitable. In the first part, we discuss two approaches for this: (1) a posteriori adaptivity and exploitation of the sparsity of the solution, and (2) low-rank compression in a hierarchical tensor format. In the second part, the low-rank discretisation is used as an efficient representation of the stochastic model for Bayesian inversion. This is an important application in Uncertainty Quantification where one is interested in determining the (statistics of the) parameters of the model based on a set of noisy measurements. In contrast to popular sampling techniques such as MCMC, we derive an explicit representation of the posterior densities. The examined sampling-free Bayesian inversion is adaptive in all discretisation parameters. Moreover, convergence of the method is shown.
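For readers unfamiliar with the terminology, a generalised polynomial chaos discretisation expands the parametric solution in tensorized orthonormal polynomials of the random parameters. The block below recalls the standard (truncated) textbook form, not the talk's specific construction; the multi-index set Lambda is what is selected adaptively or compressed in a low-rank format.

```latex
% Generic truncated generalised polynomial chaos expansion of the parametric solution
% u(x, y), with y = (y_1, y_2, ...) the countably many random parameters and Lambda a
% finite set of finitely supported multi-indices, selected adaptively:
\[
  u(x, y) \approx \sum_{\alpha \in \Lambda} u_\alpha(x)\, P_\alpha(y),
  \qquad
  P_\alpha(y) = \prod_{m \ge 1} p_{\alpha_m}(y_m),
\]
% where the p_k are univariate polynomials orthonormal with respect to the law of y_m.
```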
Joscha Gedicke: Thursday 1 June at 10 am, A415 Inria Paris. We extend the Hodge decomposition approach for the cavity problem of two-dimensional time-harmonic Maxwell’s equations to include the impedance boundary condition, with anisotropic electric permittivity and sign-changing magnetic permeability. We derive error estimates for a P_1 finite element method based on the Hodge decomposition approach and develop a residual-type a posteriori error estimator. We show that adaptive mesh refinement leads empirically to smaller errors than uniform mesh refinement for numerical experiments that involve metamaterials and electromagnetic cloaking. The well-posedness of the cavity problem when both electric permittivity and magnetic permeability can change sign is also discussed and verified for the numerical approximation of a flat lens experiment.
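The Hodge decomposition mentioned above rests on splitting a two-dimensional vector field into a gradient and a rotated gradient of scalar potentials, which can then be approximated with P_1 elements. The schematic form below omits the coefficient-dependent weightings used in the actual formulation with anisotropic and sign-changing material parameters.

```latex
% Schematic 2D Hodge (Helmholtz) decomposition underlying the approach:
% u is split into a gradient part and a rotated-gradient part, and the scalar
% potentials are then approximated by P_1 finite elements.
\[
  u = \nabla \varphi + \mathbf{curl}\, \psi,
  \qquad
  \mathbf{curl}\, \psi = \begin{pmatrix} \partial_y \psi \\ -\partial_x \psi \end{pmatrix}.
\]
```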
Quang Duc Bui: Thursday 1 June at 11:30 am, A415 Inria Paris. The parareal method is a numerical method for solving time-evolution problems in parallel, which uses two propagators: the coarse one – fast but inaccurate – and the fine one – slow but more accurate. Instead of running the fine propagator on the whole time interval, we divide it into small subintervals, on which the fine propagator can be run in parallel to obtain the desired solution, with the help of the coarse propagator and through parareal iterations. Furthermore, each local subproblem can be solved by an iterative method, and instead of running this local iterative method until convergence, one may perform only a few of its iterations during each parareal iteration. The propagators then become much cheaper but lose much of their accuracy, and we hope that convergence will still be achieved across the parareal iterations. Here, we propose to couple parareal with a well-known iterative method – Schwarz Waveform Relaxation (SWR) – with only a few SWR iterations in the fine propagator and with a simple coarse propagator deduced from the backward Euler method. We present the analysis of this coupled method for the 1D advection-reaction-diffusion equation, for which the convergence is at least linear. We also give numerical illustrations for 1D and 2D parabolic equations, which show that the convergence is much faster in practice.
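For orientation, the plain parareal correction iteration combines a sequential coarse sweep with fine solves that are independent across subintervals. The Python sketch below shows this generic iteration on a scalar toy problem; it is not the coupled Parareal-SWR scheme of the talk, and the propagator choices are illustrative.

```python
import numpy as np

def parareal(F, G, u0, t, K):
    """Plain parareal iteration.  F and G propagate a state from t[n] to t[n+1];
    F is the fine (accurate) solver, G the coarse (cheap) one."""
    N = len(t) - 1
    U = np.array([u0] * (N + 1), dtype=float)
    for n in range(N):                       # initial coarse sweep (sequential)
        U[n + 1] = G(U[n], t[n], t[n + 1])
    for k in range(K):
        Fine = np.array([F(U[n], t[n], t[n + 1]) for n in range(N)])  # parallel in n
        Unew = U.copy()
        for n in range(N):                   # sequential correction sweep
            Unew[n + 1] = G(Unew[n], t[n], t[n + 1]) + Fine[n] - G(U[n], t[n], t[n + 1])
        U = Unew
    return U

# Toy test: u' = -u; coarse = one backward Euler step, fine = 100 backward Euler substeps.
be = lambda u, t0, t1, m=1: u / (1.0 + (t1 - t0) / m) ** m
t = np.linspace(0.0, 1.0, 11)
U = parareal(lambda u, a, b: be(u, a, b, 100), be, 1.0, t, K=5)
print(U[-1], np.exp(-1.0))
```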
Quanling Deng: Tuesday 16 May at 3 pm, A415 Inria Paris. Isogeometric analysis (IgA) is a powerful numerical tool that unifies finite element analysis (FEA) and computer-aided design (CAD). Within the framework of FEA, IgA uses as basis functions those employed in CAD, which can exactly represent various complex geometries. These basis functions are the B-splines or, more generally, the Non-Uniform Rational B-Splines (NURBS), and they lead to an approximation with global continuity of order up to $p-1$, where $p$ is the order of the underlying polynomial, which in turn delivers more robustness and higher accuracy than finite elements. We apply IgA to wave propagation and structural vibration problems to study their dispersion and spectrum properties. The dispersion and spectrum analyses are unified in the form of a Taylor expansion for the eigenvalue errors. By optimally blending two standard Gaussian quadrature schemes for the integrals corresponding to the stiffness and mass, the dispersion error of IgA is minimized. The blended schemes yield two extra orders of convergence (superconvergence) in the eigenvalue errors, while the eigenfunction errors are of optimal convergence order. To analyze the eigenvalue and eigenfunction errors, the Pythagorean eigenvalue theorem (Strang and Fix, 1973) is generalized to establish an equality among the eigenvalue, eigenfunction (in L2 and energy norms), and quadrature errors.
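The classical identity being generalized is the Pythagorean eigenvalue theorem of Strang and Fix; in its standard form, without quadrature errors, it reads as follows (standard notation, recalled here for context).

```latex
% Pythagorean eigenvalue theorem (Strang and Fix, 1973), standard form:
% for an exact eigenpair (lambda, u) and its Galerkin approximation (lambda_h, u_h),
% both normalized in L^2, with ||v||_E^2 = a(v, v) the energy norm,
\[
  \lambda_h - \lambda
  = \| u - u_h \|_E^2 - \lambda\, \| u - u_h \|_{L^2}^2,
  \qquad \| u \|_{L^2} = \| u_h \|_{L^2} = 1.
\]
```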
Sarah Ali Hassan: Thursday 11 May at 3 pm, A415 Inria Paris. In this work we develop a posteriori error estimates and stopping criteria for domain decomposition (DD) methods with optimized Robin transmission conditions on the interface. We analyse the steady diffusion equation discretized by the mixed finite element (MFE) method, as well as the heat equation discretized by the MFE method in space and the discontinuous Galerkin scheme in time. For the heat equation, a global-in-time domain decomposition method is used, allowing for different time steps in different subdomains. We bound the error between the exact solution of the PDE and the approximate numerical solution at each iteration of the domain decomposition algorithm. Different error components (domain decomposition, space discretization, time discretization) are distinguished, which allows us to define efficient stopping criteria for the DD algorithm. The estimates are based on reconstruction techniques for pressures and fluxes. Numerical experiments illustrate the theoretical findings.
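Stopping criteria of this kind typically balance the DD error component against the discretization components, so that the iteration is stopped as soon as further DD iterations can no longer improve the overall error. The sketch below illustrates that logic only; the estimator routines, the balancing parameter gamma, and the loop structure are placeholders rather than the work's actual algorithm.

```python
# Sketch of an adaptive stopping criterion for a DD iteration: stop once the
# DD error estimator is dominated by the discretization estimators.
# dd_step and estimators are hypothetical callables, gamma a user parameter.
gamma = 0.1

def solve_with_dd_stopping(dd_step, estimators, max_iter=100):
    solution = None
    for it in range(max_iter):
        solution = dd_step()                           # one Robin-Robin DD iteration
        eta_dd, eta_space, eta_time = estimators(solution)
        if eta_dd <= gamma * (eta_space + eta_time):   # DD error no longer dominant
            break
    return solution
```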
Ivan Yotov: Tuesday 6 June at 11 am, A415 Inria Paris. We discuss mixed finite element approximations for the Biot system of poroelasticity. We employ an elasticity formulation with weak stress symmetry and three fields – stress, displacement, and rotation – as well as a mixed velocity-pressure Darcy formulation. The method is reduced to a cell-centered scheme for the displacement and the pressure, using the multipoint flux mixed finite element method for flow and the recently developed multipoint stress mixed finite element method for elasticity. The methods utilize the Brezzi-Douglas-Marini spaces for velocity and stress and a trapezoidal-type quadrature rule for integrals involving velocity, stress, and rotation, which allows for local flux, stress, and rotation elimination. We perform stability and error analysis and present numerical experiments illustrating the convergence of the method and its performance for modeling flows in deformable reservoirs. This is joint work with Ilona Ambartsumyan and Eldar Khattatov, University of Pittsburgh.
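For reference, the quasi-static Biot system in its standard strong form is recalled below (displacement u, pore pressure p); the five-field mixed formulation of the talk additionally keeps the poroelastic stress, the rotation, and the Darcy velocity z as separate unknowns. Symbols follow common usage and are not necessarily the talk's notation.

```latex
% Quasi-static Biot system (standard strong form): displacement u, pore pressure p,
% Darcy velocity z, Biot coefficient alpha, storage coefficient s_0, permeability K.
\[
  -\nabla \cdot \bigl( \sigma(u) - \alpha\, p\, I \bigr) = f,
  \qquad
  \sigma(u) = 2\mu\, \varepsilon(u) + \lambda\, (\nabla \cdot u)\, I,
\]
\[
  \partial_t \bigl( s_0\, p + \alpha\, \nabla \cdot u \bigr) + \nabla \cdot z = g,
  \qquad
  z = -K \nabla p.
\]
```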
Jannelle Hammond: Thursday 13 April at 3 pm, A315 Inria Paris. With increased pollutant emissions and exposure due to mass urbanization worldwide, air quality measurement campaigns and epidemiology studies on air pollution and health effects have become increasingly common to estimate individual exposures and evaluate their association with various illnesses. As air pollution concentrations are known to be highly heterogeneous, sophisticated physically based air quality models (AQMs), in particular CFD-based models, can provide spatially rich approximations and enable better estimates of individual exposure. In this work we investigate reduced basis (RB) methods to diminish the resolution cost of advanced AQMs developed for concentration evaluation at urban scales. These models depend on varying parameters, including meteorological conditions and pollutant emissions, often unknown at the micro scale. RB methods use approximation spaces made of suitable samples of solutions of AQMs governed by parameterized partial differential equations (PDEs) to rapidly construct accurate and computationally efficient approximations. A key to this technique is the decomposition of the computational work into an offline and an online stage. The RB functions used to build the approximation spaces, and all expensive parameter-independent terms, are computed "offline" once and stored, whereas inexpensive parameter-dependent quantities are evaluated "online" for each new value of the parameters. However, the decomposition of the matrices into offline-online pieces requires modifying the calculation code, an intrusive procedure, which in some situations is impractical. In this work, we extend the Parameterized-Background Data-Weak (PBDW) method to physically based AQMs. We will generate a sample of solutions from physical AQMs with varying meteorological conditions and pollution emissions to build the RB approximation space and combine it with experimental observations to improve pollutant concentration estimations, with the goal of collaboration with an epidemiology exposure assessment team at the University of California-Berkeley. The goal…
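The offline-online decomposition described above is easiest to see for an affinely parameterized operator A(mu) = sum_q theta_q(mu) A_q: the reduced matrices are projected once offline, and only a small system is assembled and solved online for each new parameter value. The Python sketch below illustrates this splitting on toy data; all names and sizes are illustrative and unrelated to the AQM codes discussed in the talk.

```python
import numpy as np

def offline(A_terms, V):
    """Project each parameter-independent matrix once: expensive, done offline."""
    return [V.T @ Aq @ V for Aq in A_terms]

def online(A_r_terms, thetas, mu, b_r):
    """Assemble and solve the small reduced system for a new parameter: cheap."""
    A_r = sum(theta(mu) * Aq_r for theta, Aq_r in zip(thetas, A_r_terms))
    return np.linalg.solve(A_r, b_r)

# Toy illustration with synthetic data.
n, r = 200, 5
A_terms = [np.eye(n), np.diag(np.linspace(1.0, 2.0, n))]
thetas = [lambda mu: 1.0, lambda mu: mu]
V, _ = np.linalg.qr(np.random.rand(n, r))   # reduced basis (random here, for shape only)
b_r = V.T @ np.ones(n)
A_r_terms = offline(A_terms, V)             # offline stage, done once
u_r = online(A_r_terms, thetas, 0.3, b_r)   # online stage, repeated per parameter
print(u_r.shape)                            # (r,) reduced coefficients
```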
Mohammad Zakerzadeh: Thursday 30 March at 10 am, A415 Inria Paris. The well-posedness of entropy weak solutions for scalar conservation laws is a classical result. However, for multidimensional hyperbolic systems, some theoretical and numerical evidence casts doubt on whether entropy solutions constitute the appropriate solution paradigm, and it has been conjectured that the more general entropy measure-valued (EMV) solutions ought to be considered the appropriate notion of solution. In the numerical framework, and building on previous results, we prove that bounded solutions of a certain class of space-time discontinuous Galerkin (DG) schemes converge to an EMV solution. The novelty of our work is that no streamline-diffusion (SD) terms are used for stabilization. While SD stabilization is often included in the analysis of DG schemes, it is not commonly found in practical implementations. We show that a properly chosen nonlinear shock-capturing operator suffices to provide the necessary stability and entropy consistency estimates. In the case of scalar equations this result can be strengthened and the reduction to the entropy weak solution is obtained: we prove the boundedness of the solutions as well as the consistency with all entropy inequalities, and consequently the convergence to the entropy weak solution. For viscous conservation laws, we extend our framework to general convection-diffusion systems, with both nonlinear convection and nonlinear diffusion, such that the entropy stability of the scheme is preserved. It is well known that this property is not guaranteed, even if the convective discretization is entropy stable with respect to the purely hyperbolic problem, when the viscous fluxes are formulated naively. We use a mixed formulation and handle the difficulties arising from the nonlinearity of the viscous flux by an additional Galerkin projection operator. We prove the entropy stability of the method for different treatments of the viscous flux, thus unifying and extending…
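For context, the entropy admissibility condition underlying both the scalar and the measure-valued notions of solution is the distributional inequality below, required for every convex entropy pair; this is the standard formulation, not a statement specific to the talk.

```latex
% Entropy admissibility in its usual distributional form: for every convex entropy
% pair (eta, q) with q'(u) = eta'(u) f'(u),
\[
  \partial_t\, \eta(u) + \nabla \cdot q(u) \le 0
  \quad \text{in the sense of distributions.}
\]
```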
Karol Cascavita: Thursday 23 March at 3 pm, A415 Inria Paris. Bingham fluids are a class of non-Newtonian fluids with a wide and diverse range of applications in industry and research. These materials are governed by a yield stress, which determines whether they behave like a solid or like a fluid. This behavior is modeled by a viscoplastic term that introduces a non-linearity in the constitutive equations, hence the great difficulty of solving the problem, due to the lack of regularity along with the a priori unknown solid-fluid boundaries. The yield stress model considered is the Bingham model which, despite being the simplest viscoplastic model, remains a challenging problem theoretically and experimentally. The approaches proposed to handle these difficulties are mainly regularization methods and augmented Lagrangian algorithms. The first technique adds a regularization parameter to smooth the problem, avoiding the singularity in the rigid zones. This procedure permits a straightforward implementation at the expense of a deterioration of the accuracy. The second technique solves the variational problem by decoupling the nonlinearities from the gradients. All of the above methods mainly approximate solutions in a finite element or finite volume framework. In this work, we focus on a different discretization technique, the Discontinuous Skeletal method, introduced recently by Di Pietro et al. The aim of this work is to perform h-adaptation to enhance the prediction of the solid-liquid boundary, exploiting the salient features of the DISK method: it supports general meshes, uses face- and cell-based unknowns, provides a high-order reconstruction operator, and is locally conservative.
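For reference, the Bingham constitutive law mentioned above reads as follows, with tau the deviatoric stress, D(u) the strain-rate tensor, mu the plastic viscosity and tau_y the yield stress (standard notation, not necessarily the talk's).

```latex
% Bingham constitutive law: tau deviatoric stress, D(u) strain-rate tensor,
% mu plastic viscosity, tau_y yield stress.
\[
  \begin{cases}
    \tau = 2\mu\, D(u) + \tau_y\, \dfrac{D(u)}{|D(u)|}, & D(u) \neq 0,\\[6pt]
    |\tau| \le \tau_y, & D(u) = 0.
  \end{cases}
\]
```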