Martin Eigel: Thursday 1 June at 10:45 am, A415 Inria Paris. We consider a class of linear PDEs with stochastic coefficients which depend on a countable (infinite) number of random parameters. As an alternative to classical Monte Carlo sampling techniques, a functional discretisation of the stochastic space in generalised polynomial chaos may lead to significantly improved (optimal) convergence rates. However, when employed in the context of Galerkin methods, the arising algebraic systems are very high-dimensional and quickly become computationally intractable. This is a prime example of the curse of dimensionality, with an exponential growth of complexity that makes model reduction techniques indispensable. In the first part, we discuss two approaches for this: (1) a posteriori adaptivity and exploitation of the sparsity of the solution, and (2) low-rank compression in a hierarchical tensor format. In the second part, the low-rank discretisation is used as an efficient representation of the stochastic model for Bayesian inversion. This is an important application in Uncertainty Quantification where one is interested in determining the (statistics of the) parameters of the model based on a set of noisy measurements. In contrast to popular sampling techniques such as MCMC, we derive an explicit representation of the posterior densities. The examined sampling-free Bayesian inversion is adaptive in all discretisation parameters. Moreover, convergence of the method is shown.
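For orientation (notation ours, not taken from the abstract), a generalised polynomial chaos expansion represents the solution as a series in tensorized orthogonal polynomials of the parameters,

```latex
u(x, y) \;=\; \sum_{\mu \in \Lambda} u_\mu(x)\, P_\mu(y),
\qquad y = (y_1, y_2, \ldots),
\qquad P_\mu(y) = \prod_{m \ge 1} P_{\mu_m}(y_m),
```

where $\Lambda$ is a finite set of multi-indices; the growth of $\Lambda$ with the number of active parameters is precisely the curse of dimensionality alluded to above.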
Joscha Gedicke: Thursday 1 June at 10 am, A415 Inria Paris. We extend the Hodge decomposition approach for the cavity problem of two-dimensional time-harmonic Maxwell’s equations to include the impedance boundary condition, with anisotropic electric permittivity and sign-changing magnetic permeability. We derive error estimates for a P_1 finite element method based on the Hodge decomposition approach and develop a residual-type a posteriori error estimator. We show that adaptive mesh refinement empirically leads to smaller errors than uniform mesh refinement for numerical experiments that involve metamaterials and electromagnetic cloaking. The well-posedness of the cavity problem when both the electric permittivity and the magnetic permeability can change sign is also discussed and verified for the numerical approximation of a flat lens experiment.
Quanling Deng: Tuesday 16 May at 3 pm, A415 Inria Paris. Isogeometric analysis (IgA) is a powerful numerical tool that unifies finite element analysis (FEA) and computer-aided design (CAD). Within the framework of FEA, IgA uses as basis functions those employed in CAD, which are capable of exactly representing various complex geometries. These basis functions are the B-splines or, more generally, the Non-Uniform Rational B-Splines (NURBS), and they lead to an approximation with global continuity of order up to $p-1$, where $p$ is the degree of the underlying polynomial, which in turn delivers more robustness and higher accuracy than finite elements. We apply IgA to wave propagation and structural vibration problems to study their dispersion and spectrum properties. The dispersion and spectrum analyses are unified in the form of a Taylor expansion for the eigenvalue errors. By optimally blending two standard Gaussian quadrature schemes for the integrals corresponding to the stiffness and mass matrices, the dispersion error of IgA is minimized. The blended schemes yield two extra orders of convergence (superconvergence) in the eigenvalue errors, while the eigenfunction errors converge at the optimal order. To analyze the eigenvalue and eigenfunction errors, the Pythagorean eigenvalue theorem (Strang and Fix, 1973) is generalized to establish an equality among the eigenvalue, eigenfunction (in the L2 and energy norms), and quadrature errors.
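For reference (a standard statement, not quoted from the talk), the classical Pythagorean eigenvalue theorem of Strang and Fix relates the eigenvalue error to the eigenfunction errors: for exact and discrete eigenpairs $(\lambda, u)$ and $(\lambda_h, u_h)$ with $\|u\|_{L^2} = \|u_h\|_{L^2} = 1$,

```latex
\lambda_h - \lambda \;=\; \|u - u_h\|_E^2 \;-\; \lambda\, \|u - u_h\|_{L^2}^2,
```

where $\|\cdot\|_E$ denotes the energy norm; the talk generalizes this equality to also account for quadrature errors.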
Sarah Ali Hassan: Thursday 11 May at 3 pm, A415 Inria Paris. In this work we develop a posteriori error estimates and stopping criteria for domain decomposition (DD) methods with optimized Robin transmission conditions on the interface. We analyse the steady diffusion equation discretized by the mixed finite element (MFE) method, as well as the heat equation discretized by the MFE method in space and the discontinuous Galerkin scheme in time. For the heat equation, a global-in-time domain decomposition method is used, allowing for different time steps in different subdomains. We bound the error between the exact solution of the PDE and the approximate numerical solution at each iteration of the domain decomposition algorithm. Different error components (domain decomposition, space discretization, time discretization) are distinguished, which allows us to define efficient stopping criteria for the DD algorithm. The estimates are based on reconstruction techniques for pressures and fluxes. Numerical experiments illustrate the theoretical findings.
Ivan Yotov: Tuesday 6 June at 11 am, A415 Inria Paris. We discuss mixed finite element approximations for the Biot system of poroelasticity. We employ a weak stress symmetry elasticity formulation with three fields – stress, displacement, and rotation, as well as a mixed velocity-pressure Darcy formulation. The method is reduced to a cell-centered scheme for the displacement and the pressure, using the multipoint flux mixed finite element method for flow and the recently developed multipoint stress mixed finite element method for elasticity. The methods utilize the Brezzi-Douglas-Marini spaces for velocity and stress and a trapezoidal-type quadrature rule for integrals involving velocity, stress, and rotation, which allows for local flux, stress, and rotation elimination. We perform stability and error analysis and present numerical experiments illustrating the convergence of the method and its performance for modeling flows in deformable reservoirs. This is joint work with Ilona Ambartsumyan and Eldar Khattatov, University of Pittsburgh.
Thomas Boiveau: Thursday 16 March at 3 pm, A415 Inria Paris. In numerical simulations, the reduction of computational costs is a key challenge for the development of new models and algorithms; tensor methods are widely used for this purpose. In this work, we consider parabolic equations and define a mathematical framework in order to use iterative low-rank greedy algorithms, based on the separation of the space and time variables. The problem is handled using a minimal residual formulation. We perform numerical tests to compare the proposed method with the strategies already suggested in the literature.
Lars Diening: 23 February at 10 am, A415 Inria Paris. This is joint work with Massimo Fornasier and Maximilian Wank. In this talk we propose an iterative method to solve the non-linear p-Poisson equation. The method is derived from a relaxed energy by an alternating direction method. We are able to show algebraic convergence of the iterates to the solution. However, our numerical experiments based on finite elements indicate optimal, exponential convergence.
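As a reminder (standard notation, not taken from the abstract), the p-Poisson equation is the Euler–Lagrange equation of the energy

```latex
J(v) \;=\; \int_\Omega \frac{1}{p}\, |\nabla v|^p \,\mathrm{d}x \;-\; \int_\Omega f\, v \,\mathrm{d}x,
\qquad 1 < p < \infty,
```

whose minimizer solves $-\operatorname{div}\!\left(|\nabla u|^{p-2}\, \nabla u\right) = f$; the relaxed energy mentioned above modifies $J$ so that an alternating minimization becomes tractable.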
The Multiscale Finite Element Method (MsFEM) is a powerful numerical method in the context of multiscale analysis. It uses basis functions which encode details of the fine-scale description, and proceeds in a two-stage procedure: (i) an offline stage in which basis functions are computed by solving local fine-scale problems; (ii) an online stage in which a cheap Galerkin approximation problem is solved on a coarse mesh. However, as in other numerical methods, a crucial issue is to certify that a prescribed accuracy is obtained for the numerical solution. In the present work, we propose an a posteriori error estimate for MsFEM using the concept of Constitutive Relation Error (CRE) based on dual analysis. It enables effective global or goal-oriented error estimation, assessment of the various error sources, and robust adaptive algorithms. We also investigate the additional use of model reduction inside the MsFEM strategy in order to further decrease numerical costs. We particularly focus on the use of the Proper Generalized Decomposition (PGD) for the computation of multiscale basis functions. PGD is a suitable tool that makes it possible to explicitly take into account variations in geometry, material coefficients, or boundary conditions. In many configurations, it can thus be efficiently employed to solve at low computational cost the various local fine-scale problems associated with MsFEM. In addition to demonstrating the performance of the coupling between PGD and MsFEM, we introduce dedicated estimates of the PGD model reduction error, and use these to certify the quality of the overall MsFEM solution.
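The offline/online split above can be illustrated by a minimal one-dimensional sketch (our own toy code, not the implementation from the talk; in 1D the local fine-scale problems $(a\,\phi')' = 0$ reduce to harmonic averaging of the coefficient over each coarse cell):

```python
import numpy as np

def msfem_1d(a, f, n_coarse=8, n_fine=64):
    """Minimal 1D MsFEM sketch for -(a(x) u')' = f on (0,1), u(0)=u(1)=0.

    Offline stage: on each coarse cell, solving (a phi')' = 0 for the
    multiscale basis reduces (in 1D) to the harmonic mean of a, which we
    approximate on a fine sub-grid.
    Online stage: assemble and solve the small coarse Galerkin system.
    """
    x = np.linspace(0.0, 1.0, n_coarse + 1)
    h = x[1] - x[0]
    # --- offline: effective (harmonic-mean) coefficient per coarse cell ---
    a_eff = np.empty(n_coarse)
    for k in range(n_coarse):
        xs = np.linspace(x[k], x[k + 1], n_fine)   # fine sub-grid of cell k
        a_eff[k] = 1.0 / np.mean(1.0 / a(xs))      # harmonic mean of a
    # --- online: coarse tridiagonal stiffness matrix and lumped load ---
    n = n_coarse - 1                               # interior coarse nodes
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n_coarse):                      # loop over coarse cells
        for i, j, s in [(k - 1, k - 1, 1), (k, k, 1),
                        (k - 1, k, -1), (k, k - 1, -1)]:
            if 0 <= i < n and 0 <= j < n:
                A[i, j] += s * a_eff[k] / h
    for i in range(n):
        b[i] = f(x[i + 1]) * h                     # lumped load at node i+1
    u = np.linalg.solve(A, b)
    return x, np.concatenate(([0.0], u, [0.0]))
```

For a constant coefficient the scheme coincides with the standard P1 finite element method; the multiscale content enters only through the harmonic averaging, which is what the local offline problems compute in higher dimensions as well.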
Amina Benaceur: January 26 at 3 pm, A415 Inria Paris. We address the reduced-order modeling of parameterized transient non-linear and non-affine partial differential equations (PDEs). In practice, the treatment of both non-affine terms and non-linearities results in an empirical interpolation method (EIM) that may not be affordable even though it is performed 'offline', since it requires computing various nonlinear trajectories using the full-order model. An alternative to the EIM that lessens its cost for steady non-linear problems has recently been proposed by Daversin and Prud'homme so as to alleviate the global cost of the 'offline' stage in the reduced basis method (RBM) by progressively enriching the EIM using the computed reduced basis functions. In the present work, we adapt these ideas to transient PDEs so as to propose an algorithm that requires only as many full-model computations as the number of functions that span both the reduced basis and the EIM spaces. The computational cost of the procedure can therefore be substantially reduced compared to the standard strategy. Finally, we discuss possible variants of the present approach.
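To fix ideas, the greedy construction underlying the EIM can be sketched as follows (a generic textbook version with names of our choosing, not the adaptive algorithm of the talk): at each step it selects the worst-approximated snapshot and a new interpolation ("magic") point where its residual is largest.

```python
import numpy as np

def eim(snapshots, n_terms, tol=1e-12):
    """Greedy Empirical Interpolation Method (EIM) sketch.

    snapshots: (n_points, n_params) array of evaluations g(x_i; mu_j).
    Returns the interpolation basis Q (n_points, m) and the list of
    'magic point' indices.
    """
    G = np.asarray(snapshots, dtype=float)
    Q = np.zeros((G.shape[0], 0))
    pts = []
    for _ in range(n_terms):
        # interpolation residual of every snapshot on the current space
        if pts:
            coef = np.linalg.solve(Q[pts, :], G[pts, :])
            R = G - Q @ coef
        else:
            R = G.copy()
        j = int(np.argmax(np.max(np.abs(R), axis=0)))  # worst snapshot
        r = R[:, j]
        i = int(np.argmax(np.abs(r)))                  # new magic point
        if abs(r[i]) < tol:                            # already exact
            break
        Q = np.column_stack([Q, r / r[i]])             # normalized basis vector
        pts.append(i)
    return Q, pts

def eim_interpolate(Q, pts, g):
    """Interpolate a new function g from its values at the magic points."""
    return Q @ np.linalg.solve(Q[pts, :], g[pts])
```

Each greedy step costs one interpolation solve per snapshot; the expensive part in practice is generating the snapshot trajectories with the full-order model, which is exactly the cost the progressive enrichment strategy described above aims to reduce.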