Jannelle Hammond: Thursday 13 April at 3 pm, A315 Inria Paris. With increased pollutant emissions and exposure due to mass urbanization worldwide, air quality measurement campaigns and epidemiological studies on air pollution and health effects have become increasingly common to estimate individual exposures and evaluate their association with various illnesses. As air pollution concentrations are known to be highly heterogeneous, sophisticated physically based air quality models (AQMs), in particular CFD-based models, can provide spatially rich approximations and make it possible to better estimate individual exposure. In this work we investigate reduced basis (RB) methods to reduce the computational cost of advanced AQMs developed for concentration evaluation at urban scales. These models depend on varying parameters, including meteorological conditions and pollutant emissions, often unknown at the micro scale. RB methods use approximation spaces made of suitable samples of solutions of AQMs governed by parameterized partial differential equations (PDEs) to rapidly construct accurate and computationally efficient approximations. A key to this technique is the decomposition of the computational work into an offline stage and an online stage. The RB functions used to build the approximation spaces and all expensive parameter-independent terms are computed “offline” once and stored, whereas inexpensive parameter-dependent quantities are evaluated “online” for each new value of the parameters. However, the decomposition of the matrices into offline and online pieces requires modifying the calculation code, an intrusive procedure, which in some situations is impractical. In this work, we extend the Parameterized-Background Data-Weak (PBDW) method introduced in  to physically based AQMs.
We will generate a sample of solutions from physical AQMs with varying meteorological conditions and pollution emissions to build the RB approximation space and combine it with experimental observations, using the method in , to improve pollutant concentration estimates, with the goal of collaboration with an epidemiology exposure assessment team at the University of California, Berkeley. The goal…
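As a toy illustration of the offline/online splitting mentioned above (not the PBDW method of the talk, and with a made-up 1D reaction-diffusion problem standing in for an AQM), the following sketch builds a reduced basis from snapshots and shows that the online solve only involves a tiny parameter-dependent system:

```python
import numpy as np

# Hypothetical illustration of RB offline/online splitting on a toy 1D
# problem -u'' + mu*u = f whose operator depends affinely on mu:
# A(mu) = A0 + mu*A1. All names and sizes are illustrative.
n = 200
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2
off = -1.0 * np.ones(n - 1) / h**2
A0 = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)  # stiffness (Dirichlet BCs)
A1 = np.eye(n)                                           # reaction term
f = np.ones(n)

# --- Offline stage: full-order solves at training parameters, reduced
# basis by SVD of the snapshots, parameter-independent projections stored.
train_mus = [0.1, 1.0, 10.0, 100.0]
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, f) for mu in train_mus])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
A0_r, A1_r, f_r = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ f    # stored reduced operators

# --- Online stage: for a new parameter, assemble and solve a 4x4 system.
def rb_solve(mu):
    u_r = np.linalg.solve(A0_r + mu * A1_r, f_r)  # size independent of n
    return V @ u_r                                # lift back to the full mesh

mu_new = 5.0
u_full = np.linalg.solve(A0 + mu_new * A1, f)     # reference full-order solve
err = np.linalg.norm(rb_solve(mu_new) - u_full) / np.linalg.norm(u_full)
print(f"relative RB error at mu={mu_new}: {err:.2e}")
```

The affine dependence A(mu) = A0 + mu*A1 is what makes the splitting possible without touching the full matrices online; when it is absent, techniques such as the EIM (or the non-intrusive PBDW approach of the talk) are needed.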
Mohammad Zakerzadeh: Thursday 30 March at 10 am, A415 Inria Paris. The well-posedness of entropy weak solutions for scalar conservation laws is a classical result. However, for multidimensional hyperbolic systems, theoretical and numerical evidence casts doubt on whether entropy solutions constitute the appropriate solution paradigm, and it has been conjectured that the more general entropy measure-valued (EMV) solutions ought to be considered the appropriate notion of solution. In the numerical framework, and building on previous results, we prove that bounded solutions of a certain class of space-time discontinuous Galerkin (DG) schemes converge to an EMV solution. The novelty of our work is that no streamline-diffusion (SD) terms are used for stabilization. While SD stabilization is often included in the analysis of DG schemes, it is not commonly found in practical implementations. We show that a properly chosen nonlinear shock-capturing operator suffices to provide the necessary stability and entropy consistency estimates. In the case of scalar equations this result can be strengthened: we prove the boundedness of the solutions as well as consistency with all entropy inequalities, and consequently obtain convergence to the entropy weak solution. For viscous conservation laws, we extend our framework to general convection-diffusion systems, with both nonlinear convection and nonlinear diffusion, so that the entropy stability of the scheme is preserved. It is well known that this property is not guaranteed by a naive formulation of the viscous fluxes, even if the convective discretization is entropy stable for the purely hyperbolic problem. We use a mixed formulation and handle the difficulties arising from the nonlinearity of the viscous flux by an additional Galerkin projection operator.
We prove the entropy stability of the method for different treatments of the viscous flux, thus unifying and extending…
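As a finite-volume caricature (not the DG scheme of the talk) of the role played by a shock-capturing term, the sketch below supplements Tadmor's entropy-conservative two-point flux for Burgers' equation with a jump-proportional dissipation and monitors the total discrete entropy, which then decreases across the shock:

```python
import numpy as np

# Toy check of shock capturing and entropy stability for Burgers' equation
# u_t + (u^2/2)_x = 0 with periodic BCs. The entropy-conservative flux
# (u_l^2 + u_l*u_r + u_r^2)/6 alone produces no entropy; the added
# jump-based dissipation plays the role of the shock-capturing operator.
# Grid sizes and coefficients are illustrative.
n = 200
dx, dt = 1.0 / n, 0.0005
x = np.linspace(0, 1, n, endpoint=False)
u = np.sin(2 * np.pi * x)
entropy = lambda v: 0.5 * (v**2).sum() * dx   # discrete total entropy sum u^2/2

S0 = entropy(u)
for _ in range(400):                          # run past shock formation t ~ 1/(2*pi)
    ur = np.roll(u, -1)
    Fc = (u**2 + u * ur + ur**2) / 6.0        # entropy-conservative two-point flux
    alpha = np.maximum(np.abs(u), np.abs(ur)) # local wave-speed bound
    F = Fc - 0.5 * alpha * (ur - u)           # add shock-capturing dissipation
    u = u - dt / dx * (F - np.roll(F, 1))     # conservative update
print("entropy before/after:", S0, entropy(u))
```

Dropping the dissipative term makes the semi-discrete scheme entropy-conservative, and with explicit time stepping the computation blows up at the shock; this mirrors, in the simplest possible setting, why some stabilization (SD or shock capturing) is needed in the analysis.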
Karol Cascavita: Thursday 23 March at 3 pm, A415 Inria Paris. Bingham fluids are a class of non-Newtonian fluids with a wide and diverse range of applications in industry and research. These materials are governed by a yield stress, which determines solid- or fluid-like behavior. This behavior is modeled by a viscoplastic term that introduces a non-linearity in the constitutive equations; hence the great difficulty in solving the problem, due to the non-regularity along with the a priori unknown solid-fluid boundaries. The yield stress model considered is the Bingham model, which, despite being the simplest viscoplastic model, remains a challenging problem both theoretically and experimentally. The approaches proposed to handle these difficulties are mainly regularization methods and augmented Lagrangian algorithms. The first technique adds a regularization parameter to smooth the problem, avoiding the singularity in the rigid zones. This procedure permits a straightforward implementation at the expense of a deterioration in accuracy. The second technique solves the variational problem by uncoupling the nonlinearities from the gradients. All the above methods mainly approximate solutions in a finite element or a finite volume framework. In this work, we focus on a different discretization technique, the Discontinuous Skeletal method, introduced recently by Di Pietro et al. The aim of this work is to perform an h-adaptation to enhance the prediction of the solid-liquid boundary, exploiting the salient features of the DISK method: support for general meshes, a formulation with face- and cell-based unknowns, a high-order reconstruction operator, and local conservativity.
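To make the augmented Lagrangian idea concrete, here is a minimal 1D sketch for Bingham channel (Poiseuille) flow, with illustrative parameters not taken from the talk; the auxiliary field d stands for the velocity gradient and decouples the non-smooth yield term, which is then handled exactly by a shrinkage step:

```python
import numpy as np

# Augmented-Lagrangian (ALG2/Uzawa-type) sketch for 1D Bingham channel flow:
# minimize  int mu/2 |u'|^2 + tau_y |u'| - G u  with u(0) = u(1) = 0.
# With mu=1, tau_y=0.5, G=2 the exact solution has a central plug and
# maximum velocity 1/16 = 0.0625. Parameters are illustrative.
mu, tau_y, G, r = 1.0, 0.5, 2.0, 10.0
n = 100
h = 1.0 / n
D = np.diff(np.eye(n + 1), axis=0)[:, 1:-1] / h    # nodal values -> edge slopes
A = (mu + r) * (D.T @ D) * h                       # augmented stiffness matrix

d = np.zeros(n)                                    # relaxed gradient on edges
lam = np.zeros(n)                                  # Lagrange multiplier
for _ in range(500):
    rhs = G * h * np.ones(n - 1) - D.T @ (lam - r * d) * h
    u = np.linalg.solve(A, rhs)                    # linear velocity solve
    q = D @ u + lam / r
    d = np.sign(q) * np.maximum(np.abs(q) - tau_y / r, 0.0)  # shrinkage: d = 0 in the plug
    lam += r * (D @ u - d)                         # multiplier ascent
print("computed max velocity:", u.max(), "(exact 0.0625)")
```

The shrinkage step sets d = 0 wherever the stress stays below the yield limit, so the rigid plug is captured exactly rather than smoothed, which is precisely the accuracy advantage over regularization mentioned above.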
Thomas Boiveau: Thursday 16 March at 3 pm, A415 Inria Paris. Abstract: In numerical simulations, the reduction of computational costs is a key challenge for the development of new models and algorithms; tensor methods are widely used for this purpose. In this work, we consider parabolic equations and define a mathematical framework in order to use iterative low-rank greedy algorithms, based on the separation of the space and time variables. The problem is handled using a minimal residual formulation. We perform numerical tests to compare the proposed method with the strategies already suggested in the literature.
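The space-time separation underlying such methods can be sketched with a toy greedy rank-one algorithm (alternating least squares on a synthetic field; this illustrates only the separation idea, not the minimal residual formulation for parabolic problems discussed in the talk):

```python
import numpy as np

# Toy greedy low-rank (space-time separated) approximation: u(x,t) is
# approximated by a sum of products r_k(x) * s_k(t), each new term found
# by alternating least squares on the current residual. The field U below
# is synthetic and illustrative.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 120)
t = np.linspace(0, 1, 80)
U = np.outer(np.sin(np.pi * x), np.exp(-t)) + 0.1 * np.outer(x**2, t)

def greedy_separated(U, n_terms=3, sweeps=20):
    R = U.copy()
    terms = []
    for _ in range(n_terms):
        r = rng.standard_normal(U.shape[0])      # initial space mode
        for _ in range(sweeps):                  # alternating least squares
            s = R.T @ r / (r @ r)                # best time mode given r
            r = R @ s / (s @ s)                  # best space mode given s
        terms.append((r, s))
        R = R - np.outer(r, s)                   # deflate the captured term
    return terms, R

terms, R = greedy_separated(U)
print("relative residual after 3 terms:", np.linalg.norm(R) / np.linalg.norm(U))
```

Each greedy step only requires vectors in space and in time separately, never the full space-time object, which is the source of the computational savings.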
Matteo Cicuttin: Thursday 2 March at 3 pm, A415 Inria Paris. Abstract: Discontinuous Skeletal methods are devised at the mathematical level in a dimension-independent and cell-shape-independent fashion. Their implementation, at least in principle, should preserve this feature: a single piece of code should be able to work in any space dimension and to deal with any cell shape. It is not common, however, to see software packages taking this approach. In the vast majority of cases, the codes are capable of running only on a few very specific kinds of mesh, or only in 1D or 2D or 3D. On the one hand, this can happen simply because a fully general tool is not always needed. On the other hand, the programming languages commonly used by the scientific computing community (in particular Fortran and Matlab) are not easily amenable to an implementation that is generic and efficient at the same time. The usual (and natural) approach, in conventional languages, is to have different versions of the code, for example one specialized for 1D, one for 2D and one for 3D applications, making the overall maintenance of the codes rather cumbersome. The same considerations generally apply to the handling of mesh cells with various shapes, i.e., codes written in conventional languages generally support only a limited (and fixed in advance) number of cell shapes. Generic programming offers a valuable tool to address the above issues: by writing the code generically, it is possible to avoid making any assumption on either the dimension (1D, 2D, 3D) of the problem or the kind of mesh. In some sense, writing generic code resembles writing pseudocode: the compiler will take care of giving the correct meaning to each basic operation. As a result, with generic programming there will still be different versions of the…
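A minimal sketch of the dimension-independent idea (in Python rather than the compiled generic languages the talk has in mind, and with made-up class names, not any real package's API): one routine written against an abstract cell interface runs unchanged on 1D, 2D or 3D simplices.

```python
import math
import numpy as np
from dataclasses import dataclass

# Illustrative stand-in for a generic mesh cell: a d-simplex stored as its
# d+1 vertices in R^d (segment, triangle or tetrahedron).
@dataclass
class Simplex:
    vertices: np.ndarray  # shape (d+1, d)

    def measure(self):
        # |det(v_i - v_0)| / d!  works in any dimension d
        d = self.vertices.shape[1]
        edges = self.vertices[1:] - self.vertices[0]
        return abs(np.linalg.det(edges)) / math.factorial(d)

def total_measure(cells):
    # dimension-independent "assembly" loop: it never asks whether the
    # problem is 1D, 2D or 3D
    return sum(c.measure() for c in cells)

# the very same code handles two different dimensions:
seg = Simplex(np.array([[0.0], [2.0]]))                        # 1D segment, length 2
tri = Simplex(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))  # 2D triangle, area 1/2
print(total_measure([seg]), total_measure([tri]))
```

In C++ the same effect is obtained at compile time with templates, so the genericity costs nothing at run time; that trade-off between genericity and efficiency is exactly the point raised in the abstract.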
Christian Kreuzer: 23 February at 10:45 am, A415 Inria Paris. It is a well-known fact that inf-sup stable Galerkin discretisations of linear continuous problems provide quasi-optimal approximations in the corresponding norms. For elliptic problems, this is known, e.g., as Céa's lemma. A priori error bounds are then typically obtained with the help of some (quasi-)interpolation. We apply this principle to parabolic problems and prove inf-sup stability, and thus quasi-optimality, of space and time adaptive backward Euler-Galerkin discretisations. In a second step, we define a reasonable (quasi-)interpolation operator and conclude a priori error bounds. In 1982, Dupont presented a counterexample showing non-convergence of the backward Euler-Galerkin method in the presence of spatial mesh changes. In this case, our bound contains an additional term, which is consistent with Dupont's observation.
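For readers less familiar with the backward Euler-Galerkin method, here is a minimal fixed-mesh sketch on the 1D heat equation (finite differences standing in for a Galerkin space discretisation; the adaptive, mesh-changing setting of the talk is of course far more general), confirming the expected first-order behavior in the time step:

```python
import numpy as np

# Backward Euler for u_t - u_xx = 0 on (0,1), u(0,t) = u(1,t) = 0,
# u(x,0) = sin(pi*x); exact solution exp(-pi^2 t) sin(pi*x).
# Grid sizes are illustrative.
n = 100
h = 1.0 / n
x = np.linspace(h, 1 - h, n - 1)
L = (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h**2       # discrete -d^2/dx^2

def backward_euler(dt, T=0.1):
    u = np.sin(np.pi * x)
    M = np.eye(n - 1) + dt * L                   # implicit step: (I + dt*L) u_new = u_old
    for _ in range(round(T / dt)):
        u = np.linalg.solve(M, u)
    return np.abs(u - np.exp(-np.pi**2 * T) * np.sin(np.pi * x)).max()

e1, e2 = backward_euler(0.01), backward_euler(0.005)
print("error ratio when halving dt:", e1 / e2)   # close to 2: first order in time
```

Dupont's counterexample shows that this clean convergence picture can fail once the spatial mesh is changed between time steps, which is precisely the phenomenon the additional term in the talk's bound accounts for.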
Lars Diening: 23 February at 10 am, A415 Inria Paris. This is joint work with Massimo Fornasier and Maximilian Wank. In this talk we propose an iterative method to solve the non-linear p-Poisson equation. The method is derived from a relaxed energy by an alternating direction method. We are able to show algebraic convergence of the iterates to the solution. However, our numerical experiments based on finite elements indicate optimal, exponential convergence.
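A much cruder cousin of such iterations is the classical Kačanov fixed point, sketched here in 1D finite differences for -( |u'|^(p-2) u' )' = 1 with a small fixed relaxation parameter eps keeping the weight bounded (the talk's method adapts the relaxation and comes with convergence guarantees; this sketch, with illustrative parameters, only shows the linearize-and-resolve structure):

```python
import numpy as np

# Kacanov-type fixed point for the p-Poisson problem
# -(|u'|^{p-2} u')' = 1 on (0,1), u(0) = u(1) = 0, with p = 1.5.
# The exact maximum of the solution is 1/24. Parameters are illustrative.
p, eps = 1.5, 1e-6
n = 100
h = 1.0 / n
f = np.ones(n - 1)
D = np.diff(np.eye(n + 1), axis=0)[:, 1:-1] / h     # nodal values -> edge slopes

u = np.zeros(n - 1)
for _ in range(200):                                # fixed-point (Kacanov) loop
    w = (eps + np.abs(D @ u) ** 2) ** ((p - 2) / 2) # relaxed weight on edges
    A = D.T @ (w[:, None] * D) * h                  # weighted stiffness matrix
    u_new = np.linalg.solve(A, f * h)               # one linear solve per iteration
    if np.linalg.norm(u_new - u) < 1e-10:
        break
    u = u_new
print("max of discrete solution:", u.max(), "(exact 1/24 =", 1 / 24, ")")
```

Each iteration only requires a linear solve with a frozen weight, which is what makes such relaxed schemes attractive compared to a direct nonlinear solve.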
The Multiscale Finite Element Method (MsFEM) is a powerful numerical method in the context of multiscale analysis. It uses basis functions which encode details of the fine-scale description, and proceeds in two stages: (i) an offline stage in which the basis functions are computed by solving local fine-scale problems; (ii) an online stage in which a cheap Galerkin approximation problem is solved on a coarse mesh. However, as in other numerical methods, a crucial issue is to certify that a prescribed accuracy is obtained for the numerical solution. In the present work, we propose an a posteriori error estimate for MsFEM using the concept of Constitutive Relation Error (CRE) based on dual analysis. It makes it possible to effectively address global or goal-oriented error estimation, to assess the various error sources, and to drive robust adaptive algorithms. We also investigate the additional use of model reduction inside the MsFEM strategy in order to further decrease numerical costs. We particularly focus on the use of the Proper Generalized Decomposition (PGD) for the computation of multiscale basis functions. PGD is a suitable tool that makes it possible to explicitly take into account variations in geometry, material coefficients, or boundary conditions. In many configurations, it can thus be efficiently employed to solve, at low computing cost, the various local fine-scale problems associated with MsFEM. In addition to showing the performance of the coupling between PGD and MsFEM, we introduce dedicated estimates of the PGD model reduction error, and use these to certify the quality of the overall MsFEM solution.
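The two-stage procedure can be made concrete with a 1D sketch (oscillatory coefficient, mesh sizes and all names are illustrative; the CRE error estimation machinery of the talk is not sketched here): local fine-scale solves produce the multiscale basis offline, and the online stage is a tiny coarse Galerkin solve.

```python
import numpy as np

# 1D MsFEM illustration for -(a(x) u')' = 1 on (0,1), u(0) = u(1) = 0,
# with a rapidly oscillating coefficient a(x). The homogenized limit here
# has effective coefficient 1/2, so u(1/2) ~ 0.25.
a = lambda x: 1.0 / (2.0 + np.cos(2 * np.pi * x / 0.01))  # fine-scale coefficient

Nf = 10000                                   # fine grid resolving the oscillations
xf = np.linspace(0, 1, Nf + 1)
hf = 1.0 / Nf
am = a(0.5 * (xf[:-1] + xf[1:]))             # coefficient at fine-cell midpoints

Nc = 10                                      # coarse mesh
cells = Nf // Nc                             # fine cells per coarse cell

# --- Offline: on each coarse cell, -(a phi')' = 0 defines the multiscale
# basis; in 1D its slope is proportional to 1/a.
slopes = 1.0 / am
K = np.zeros((Nc + 1, Nc + 1))               # coarse stiffness
F = np.zeros(Nc + 1)
for k in range(Nc):
    s = slopes[k * cells:(k + 1) * cells]
    s = s / (s.sum() * hf)                   # normalize so phi rises from 0 to 1
    kloc = (am[k * cells:(k + 1) * cells] * s**2).sum() * hf
    K[k:k + 2, k:k + 2] += kloc * np.array([[1, -1], [-1, 1]])
    F[k:k + 2] += 0.5 / Nc                   # lumped load for f = 1

# --- Online: cheap Galerkin solve on the coarse mesh only.
U = np.zeros(Nc + 1)
U[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
print("coarse MsFEM midpoint value:", U[Nc // 2])
```

Each local offline problem is independent of the others, which is also why PGD (or any other reduction of the local solves) slots naturally into this stage.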
Amina Benaceur: January 26 at 3pm, A415 Inria Paris. We address the reduced order modeling of parameterized transient non-linear and non-affine partial differential equations (PDEs). In practice, both the treatment of non-affine terms and of non-linearities result in an empirical interpolation method (EIM) that may not be affordable even though it is performed `offline', since it requires computing various nonlinear trajectories using the full order model. An alternative to the EIM that lessens its cost for steady non-linear problems has recently been proposed by Daversin and Prudhomme, so as to alleviate the global cost of the `offline' stage in the reduced basis method (RBM) by progressively enriching the EIM using the computed reduced basis functions. In the present work, we adapt these ideas to transient PDEs so as to propose an algorithm that requires only as many full-model computations as the number of functions that span both the reduced basis and the EIM spaces. The computational cost of the procedure can therefore be substantially reduced compared to the standard strategy. Finally, we discuss possible variants of the present approach.
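The standard EIM ingredient that the progressive enrichment builds on can be sketched compactly (on a hypothetical parametric function g, not a PDE trajectory; the interleaving with reduced-basis solves described in the talk is not reproduced here):

```python
import numpy as np

# Standard EIM greedy on a parametric nonlinear function g(x; mu):
# at each step, pick the snapshot and "magic point" with the largest
# interpolation residual, and add the normalized residual to the basis.
# The function family below is illustrative.
x = np.linspace(0, 1, 200)
mus = np.linspace(0.1, 2.0, 50)
G = np.array([1.0 / (1.0 + mu * x**2) for mu in mus])   # snapshots (n_mu, n_x)

basis, pts = [], []
Gc = G.copy()                                # current interpolation residual
for _ in range(6):
    i, j = np.unravel_index(np.abs(Gc).argmax(), Gc.shape)
    q = Gc[i] / Gc[i, j]                     # new basis function, = 1 at new point
    basis.append(q); pts.append(j)
    B = np.array(basis).T                    # (n_x, m)
    # interpolate every snapshot at the magic points and subtract
    coeff = np.linalg.solve(B[pts], G[:, pts].T)
    Gc = G - (B @ coeff).T
print("max EIM residual with 6 functions:", np.abs(Gc).max())
```

Note that each greedy step here assumes all snapshots G are available, i.e. the full-order model has already been run over the training set; the point of the progressive approach above is precisely to avoid paying that cost up front.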
Laurent Monasse: January 19 at 3pm, A415 Inria Paris. We will present a conservative method for three-dimensional inviscid fluid-structure interaction problems. Body-fitted methods are not well suited to large displacements or fragmentation of the structure, since they involve possibly costly remeshing of the fluid domain. We use instead an immersed boundary technique through the modification of the finite volume fluxes in the vicinity of the solid. The method is tailored to yield the exact conservation of mass, momentum and energy of the system, and exhibits consistency properties. In the event of fragmentation, void can appear due to the velocity of crack opening. In order to ensure stability in the presence of void, we resort locally to the Lax-Friedrichs flux near cracks. Since both the fluid and solid methods are explicit, the coupling scheme is designed to be explicit too. The computational cost of the fluid and solid methods lies mainly in the evaluation of fluxes on the fluid side and of forces and torques on the solid side. It should be noted that the coupling algorithm evaluates these only once per time step, ensuring the computational efficiency of the coupling. We also analyze a corner instability of the conservative explicit immersed boundary method in the case of a Roe flux, explain its origin, and propose a way to fix the issue.
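As a minimal 1D illustration of the local Lax-Friedrichs flux and of the exact conservation property of finite-volume updates (here on Burgers' equation with periodic boundaries; the talk's 3D fluid-structure setting is of course far richer), note that the total mass is preserved to rounding error:

```python
import numpy as np

# Local Lax-Friedrichs flux for Burgers' equation u_t + (u^2/2)_x = 0 in a
# conservative finite-volume update; grid sizes are illustrative.
n = 200
x = np.linspace(0, 1, n, endpoint=False)
u = np.sin(2 * np.pi * x) + 1.5
dx, dt = 1.0 / n, 0.001
flux = lambda v: 0.5 * v**2

mass0 = u.sum() * dx
for _ in range(200):
    ul, ur = u, np.roll(u, -1)                   # states on each side of an interface
    alpha = np.maximum(np.abs(ul), np.abs(ur))   # local wave-speed bound
    F = 0.5 * (flux(ul) + flux(ur)) - 0.5 * alpha * (ur - ul)  # LF flux
    u = u - dt / dx * (F - np.roll(F, 1))        # conservative update
print("mass drift:", abs(u.sum() * dx - mass0))
```

The extra dissipation of the Lax-Friedrichs flux, compared to sharper fluxes such as Roe's, is what buys robustness near near-vacuum states, consistent with its local use near cracks in the talk.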