#### Assessment of the effect of torso conductivities on the ECGI inverse problem solution

During the first and second years, we have been working on three main topics. First, we studied the effect of torso conductivity inhomogeneity on the ECGI inverse problem solution, following two different strategies. The first strategy is simple: we simulate the forward problem using a heterogeneous 3D real-life human torso that takes into account the conductivities of different organs (lungs, bones and the remaining tissue). Then, we solve the inverse problem using a homogeneous torso. We assume that the mathematical and geometrical models are perfect; only the conductivities are supposed to be unknown. This test is important because it isolates the issue of torso inhomogeneity and its effect on the ECGI inverse solution. The first results show that taking the heterogeneities into account yields a gain of 20% in accuracy when no errors are considered in the electrical potential measurements. They also show that this accuracy is degraded when a high level of noise is introduced in the electrical measurements [CINC1].
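The loss of accuracy caused by a homogeneous inverse model can be illustrated on a much-simplified 1D layered conductor (all thicknesses and conductivities below are illustrative assumptions, not the values used in the study): the forward step propagates the heart potential through heterogeneous layers, while the "inverse" step assumes a single conductivity for the whole path.

```python
# Hypothetical 1D layered torso: heart -> tissue -> lung -> bone -> surface.
# Each entry is (thickness, conductivity); the numbers are illustrative only.
layers = {"tissue": (0.02, 0.22), "lung": (0.05, 0.05), "bone": (0.01, 0.02)}

def surface_potential(u_heart, flux, layers):
    """Forward problem: in 1D the current flux is uniform, and each layer of
    thickness d and conductivity sigma drops the potential by flux * d / sigma."""
    return u_heart - sum(flux * d / sigma for d, sigma in layers.values())

def recovered_heart_potential(u_surface, flux, sigma_homog, thickness):
    """'Inverse' step assuming one homogeneous conductivity for the whole torso."""
    return u_surface + flux * thickness / sigma_homog

u_heart, flux = 1.0, 0.1
u_surface = surface_potential(u_heart, flux, layers)
total_thickness = sum(d for d, _ in layers.values())
u_recovered = recovered_heart_potential(u_surface, flux, 0.22, total_thickness)
rel_error = abs(u_recovered - u_heart) / abs(u_heart)   # about 12% here
```

Even in this noise-free caricature, ignoring the low-conductivity lung and bone layers misplaces the recovered heart potential by roughly ten percent, which is the mechanism the 3D experiment quantifies.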

The synthetic data produced in this work would be used later to test the different methods developed in the EPICARD framework. In the second strategy, we used a stochastic finite element method in order to quantify the effect of torso conductivity uncertainties on the ECGI-inverse solution. The problem is formulated using a cost function that we minimize using a conjugate gradient method. We minimize the mean value of a stochastic energy functional where the stochastic part comes from the conductivity uncertainties. We used a stochastic finite element method in order to discretise the problem both on space and on the stochastic variable [MMNP2016] [FIMH1]. Results show that the error in the lungs conductivities has more effects than the other organs. We also combined the uncertainty of the conductivity with an uncertainty on the measured signals. For the forward problem, we have seen evidence of the effect of conductivity uncertainties for epicardial electrical potential uncertainty less than 10%.
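The uncertainty-propagation idea can be sketched with plain Monte Carlo on a scalar stand-in for the forward problem (the actual work uses a stochastic finite element expansion on the 3D torso; the forward map and all numbers below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(sigma_lung, sigma_bone, u_heart=1.0, flux=0.1):
    """Illustrative scalar forward map standing in for the 3D finite element
    forward problem; the layer thicknesses (0.06, 0.005) are made-up numbers."""
    return u_heart - flux * (0.06 / sigma_lung + 0.005 / sigma_bone)

# +/-50% uniform uncertainty around nominal conductivities, propagated by
# sampling (a simple stand-in for the stochastic finite element expansion).
n = 20000
sigma_lung = rng.uniform(0.025, 0.075, n)    # nominal 0.05
sigma_bone = rng.uniform(0.005, 0.015, n)    # nominal 0.01

std_lung = forward(sigma_lung, 0.01).std()   # lungs uncertain, bone fixed
std_bone = forward(0.05, sigma_bone).std()   # bone uncertain, lungs fixed
```

In this toy, the larger current path through the lungs makes the lung-conductivity uncertainty dominate the output spread, mirroring the qualitative conclusion of the study.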

#### Nash game approach for solving ECGI inverse problem

Second, we have been working during the last year on using the Nash game approach for solving the ECGI inverse problem. This method had previously been used in 3D, but only for spherical geometries where the solution is an analytical function. We adapted the FreeFem++ code to the 3D anatomical model and conducted some numerical simulations. A technical issue required the use of Raviart-Thomas finite elements in order to approximate the current fluxes over the epicardium. This issue was sorted out using the standard finite element method during the visit of Rabeb Chamekh in Bordeaux. The new changes in the code dramatically improve the computational time for the data completion problem. We were then able to combine the conductivity estimation and data completion problems using a Nash game algorithm based on the previous works of M. Kallel and A. Habbal [11]. The novelty here is that we introduce a third player whose role is to optimize the conductivities by minimizing a Kohn-Vogelius-type cost functional [PICOF2016]. We have started writing a paper on this topic with Rabeb Chamekh.
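The game structure can be sketched with a toy two-player example: each player repeatedly plays its best response to the other's current strategy until a Nash equilibrium is reached. The quadratic costs below are illustrative stand-ins, not the actual ECGI functionals, where one player would control the Dirichlet gap and the other the Neumann gap.

```python
# Toy Nash game:
#   J1(d, n) = (d - 1)^2 + 0.5 * (d - n)^2   (player 1 controls d)
#   J2(d, n) = (n + 1)^2 + 0.5 * (n - d)^2   (player 2 controls n)

def best_response_d(n):
    # argmin over d of J1: solve 2*(d - 1) + (d - n) = 0
    return (2.0 + n) / 3.0

def best_response_n(d):
    # argmin over n of J2: solve 2*(n + 1) + (n - d) = 0
    return (d - 2.0) / 3.0

d, n = 0.0, 0.0
for _ in range(100):          # fixed-point (Jacobi) iteration on best responses
    d, n = best_response_d(n), best_response_n(d)
# converges to the Nash equilibrium d = 0.5, n = -0.5
```

The third player of the algorithm would be added as one more best-response step per sweep, minimizing its own (Kohn-Vogelius-type) cost in the conductivity variables.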

#### Extended domain approach for the ECGI inverse problem regularization

F. Ben Belgacem and F. Jelassi introduced this method in the field of data completion for the Laplace equation and proved its robustness in 2D [17]. N. Zemzemi has implemented the extended domain approach as a regularization method for solving the ECGI inverse problem, but the first results did not show any improvement compared to the non-extended Steklov-Poincaré variational approach. These results have been discussed with F. Ben Belgacem, who, together with F. Jelassi, has started looking at the problem using a 2D section of the 3D anatomical geometry.

#### Ischemia detection based on a variational approach combined with a level set method

C. E. Chavez, N. Zemzemi, Y. Coudière, F. Alonzo and A. Alvarez have been working on introducing a level set approach for detecting ischemic regions in the heart. This kind of method had previously been used for localizing cancerous regions. We used the state-of-the-art monodomain model coupled to a two-state-variable ionic model, and characterized the ischemic region using two parameters. The results obtained in [FIMH2] show that this method allows finding the ischemic region with very good accuracy. In [CINC4] we showed that the level set approach combined with the adjoint problem is more accurate than two other state-of-the-art methods. We still need to validate this method against physiological recordings.
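The parametrization idea can be caricatured in 1D (the actual method evolves a level set with adjoint gradients through the monodomain model; here the "ischemic" region is simply an interval in which a tissue parameter drops, identified by a coarse grid search, and all values are illustrative):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)

def param_field(a, b):
    # level-set-style region: the parameter drops where x lies in (a, b)
    return np.where((x > a) & (x < b), 0.2, 1.0)

target = param_field(0.4, 0.6)               # synthetic "measured" field

def misfit(a, b):
    return 0.5 * np.sum((param_field(a, b) - target) ** 2)

# brute-force search over candidate region endpoints (a, b)
best = min(((misfit(a, b), a, b)
            for a in np.arange(0.2, 0.5, 0.01)
            for b in np.arange(0.5, 0.8, 0.01)),
           key=lambda t: t[0])
# best[1], best[2] recover the true endpoints 0.4 and 0.6
```

In the real problem the misfit is measured on body-surface potentials rather than on the parameter field itself, which is why the adjoint problem is needed to obtain gradients with respect to the region shape.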

#### New formulation for the missing-electrode problem in the ECGI inverse problem

M. Addouche, Nadra Belaiib, J. Henry, Fadhel Jday and Nejib Zemzemi have been working on the mathematical formulation of the missing-electrode issue in the ECGI inverse problem using the factorization of boundary value problems method. This method was introduced to the ECGI inverse problem by J. Bouyssier, J. Henry and N. Zemzemi in [ISBI2015]. We started by formulating the problem with Robin boundary conditions, introducing a Robin-to-Dirichlet operator. This approach did not provide good results, due to a singularity at the interface between the known and unknown potential boundaries. We found a better way to solve this problem by introducing the missing torso potential data into the control variables. We use the Dirichlet-to-Neumann (D-N) and Neumann-to-Dirichlet (N-D) operators as introduced previously in [ISBI2015], but with more technical issues related to the missing data. Using this formulation we obtain three equations: two give the potential and its flux on the heart boundary, and the third gives the value of the potential on the part of the torso surface where the data is missing. We numerically tested this method on a 2D rectangular domain. Without noise, results show that when the size of the measured domain (Γ_{m}^{0}) is larger than 40% of the full accessible domain (Γ^{0}), the relative error is less than 1%; when Γ_{m}^{0} is less than 10% of Γ^{0}, the relative error is higher than 10%. When the noise level is 30%, we found that the relative error of the solution is less than 5% when Γ_{m}^{0} covers more than 80% of Γ^{0}, and higher than 10% when Γ_{m}^{0} covers less than 60% of Γ^{0}. In future work, we aim to extend this study to a 3D domain and to theoretically study the effect of the size of the measured domain. Part of this work has been submitted as a conference proceeding [TAMTAM2017] and we are writing a detailed paper that we will submit to Inverse Problems.
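The coverage effect can be reproduced in spirit on a small discrete toy (the grid, the Tikhonov regularization and all sizes below are assumptions; the actual method works through the factorized D-N/N-D operators rather than an assembled transfer matrix): a harmonic potential in a rectangle, insulated on the sides and top, is driven by an unknown Dirichlet value on the bottom ("heart") boundary and observed on part of the top ("torso") boundary.

```python
import numpy as np

nx, ny = 24, 16                      # grid: ny rows (row 0 = bottom), nx columns

def idx(i, j):                       # unknown index for node (row i >= 1, col j)
    return (i - 1) * nx + j

N = (ny - 1) * nx
A = np.zeros((N, N))                 # 5-point discrete Laplacian
B = np.zeros((N, nx))                # bottom Dirichlet data enters the RHS
for i in range(1, ny):
    for j in range(nx):
        k, deg = idx(i, j), 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if jj < 0 or jj >= nx or ii >= ny:
                continue             # zero-flux side/top: drop mirrored neighbor
            deg += 1
            if ii == 0:
                B[k, jj] += 1.0      # coupling to the bottom Dirichlet value
            else:
                A[k, idx(ii, jj)] += 1.0
        A[k, k] -= deg
T = np.linalg.solve(A, -B)[idx(ny - 1, 0):]   # transfer: g -> u on the top row

xs = np.linspace(0.0, 1.0, nx)
g_true = np.sin(np.pi * xs)
d = T @ g_true                       # synthetic top-boundary "measurements"

def recover(coverage, lam=1e-8):
    m = int(coverage * nx)           # only the first m top electrodes available
    Tm, dm = T[:m], d[:m]
    g = np.linalg.solve(Tm.T @ Tm + lam * np.eye(nx), Tm.T @ dm)
    return np.linalg.norm(g - g_true) / np.linalg.norm(g_true)

err_40, err_90 = recover(0.4), recover(0.9)   # error shrinks with coverage
```

As in the 2D experiment of the report, the reconstruction error decreases as the measured portion of the accessible boundary grows.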
Jacques Henry and Nejib Zemzemi supervised a Master's internship of Ronald Reagan-Missoutou, during which we tested the use of suitable integration algorithms for computing the N-D and D-N operators. These algorithms, called anadromic schemes, are efficient multi-order numerical schemes for integrating Riccati equations. Simulations in 2D showed very good stability of these schemes, but the main gain is the conservation of the symmetry and positivity of the N-D and D-N operators. This will later be applied to realistic 3D geometries.
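The symmetry-preservation property can be illustrated on a generic symmetric matrix Riccati equation (the matrices are random stand-ins and the anadromic schemes themselves are not reproduced; plain explicit Euler already keeps symmetry exactly when the data are symmetric, though it does not guarantee positivity in general):

```python
import numpy as np

# Integrate dP/dx = Q - P A P, P(0) = 0, the generic form of the Riccati
# equations propagating D-N/N-D operators in the factorization method.
rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite (illustrative)
Q = np.eye(n)
P = np.zeros((n, n))                 # symmetric initial condition

dx = 1e-3
for _ in range(1000):                # explicit Euler from x = 0 to x = 1
    P = P + dx * (Q - P @ A @ P)     # Q - P A P is symmetric whenever P is

sym_defect = np.linalg.norm(P - P.T)     # stays at round-off level
min_eig = np.linalg.eigvalsh(P).min()    # P remains positive definite here
```

The exact Riccati flow also preserves positivity, which a naive scheme can lose for large steps; preserving both properties robustly is the point of the anadromic schemes.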

#### Optimizing conductivities on experimental data

At the Liryc institute, Laura Bear, Rémi Dubois and Nejib Zemzemi have been working on optimizing the conductivities for the ECGI forward problem. The idea is the following: we first acquire synchronized electrical signals on the heart surface and on the torso surface. We then optimize the conductivities in the chest by minimizing a cost function whose gradient we are able to compute using an adjoint formulation [CINC2016]. In this study, we presented a method for optimizing the conductivity of different organs in a torso model, given simultaneous recordings of epicardial and torso potentials. The conductivities of three organs were successfully computed under typical signal noise and geometric error levels. A sensitivity analysis revealed that the conductivities could be computed to within 10% of their true values when the standard deviation of the signal noise was less than 0.20 mV and the mean electrode localization error less than 2.56 cm. The accuracy of the final conductivity values depended on the signals selected for optimization, as demonstrated by an increase in the standard deviation with errors. Further analysis showed that signals taken near the start and end of depolarization were typically less accurate than those near the middle of depolarization. Thus, care must be taken when selecting the signals used for optimization, as the method depends not only on the noise level but also on the signal-to-noise ratio itself. The optimized conductivity values will later be used for solving the inverse problem; the goal is to see whether using optimized conductivities on real data improves the results.
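The optimization loop can be sketched on a drastically simplified scalar forward model (the model, thicknesses and conductivities are illustrative assumptions; the study uses the 3D torso model with an adjoint-computed gradient, which is what makes the individual organ conductivities identifiable):

```python
import numpy as np

# Thicknesses and conductivities of three "organs" (illustrative values).
d = np.array([0.02, 0.05, 0.01])                 # tissue, lung, bone
sigma_true = np.array([0.22, 0.05, 0.02])

# Synthetic paired heart/torso "recordings" from the scalar forward model
# u_torso = u_heart - J * sum_i d_i / sigma_i.
rng = np.random.default_rng(2)
u_heart = rng.uniform(0.5, 1.5, 50)
J = rng.uniform(0.05, 0.15, 50)
u_torso = u_heart - J * (d / sigma_true).sum()

def cost_and_grad(sigma):
    """Least-squares misfit and its analytic gradient (standing in for the
    adjoint gradient): d(residual)/d(sigma_i) = J * d_i / sigma_i**2."""
    r = u_heart - J * (d / sigma).sum() - u_torso
    cost = 0.5 * (r ** 2).sum()
    grad = (r * J).sum() * d / sigma ** 2
    return cost, grad

sigma = np.array([0.3, 0.08, 0.03])              # initial guess
for _ in range(2000):                            # plain gradient descent
    cost, grad = cost_and_grad(sigma)
    sigma -= 1e-4 * grad

# In this 1D toy only the lumped resistance sum(d_i / sigma_i) is identifiable;
# the spatially distributed 3D problem separates the individual conductivities.
lumped_error = abs((d / sigma).sum() - (d / sigma_true).sum())
```

The identifiability caveat in the last comment is also why the real study needed a sensitivity analysis: which conductivities can be resolved depends on how strongly each organ shapes the measured signals.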