CIDRE weekly seminars
For CIDRE members: please register for presentations here: https://lite.framacalc.org/jykc1mixp1
Thursday 27 June 2019 – INRIA Rennes – 14h
Arnaud Van Straaten
Study of the behavior of an evasive program inside an antivirus
Nowadays, malware no longer merely infects a target machine and performs unwanted actions. Its developers implement evasion routines to counter antivirus detection methods: static analysis, dynamic analysis, and behavioral analysis. Evasion does not only target antivirus analyses, but also virtualized environments such as sandboxes or virtual machines. The general principles are: 1) run the malicious code only if the machine meets all the conditions for infection; 2) remain unknown to analysis methods for as long as possible.
This presentation will highlight an example of malware combining anti-debugging, anti-virtualization, and antivirus-evasion techniques.
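A classic instance of such an evasion routine is an anti-debugging check. The sketch below is illustrative only (not code from the talk): on Linux, a process can detect an attached debugger by reading the TracerPid field of /proc/self/status and stay dormant while under analysis.

```python
# Minimal anti-debugging sketch for Linux: /proc/self/status exposes a
# TracerPid field that is non-zero whenever a tracer (e.g. gdb, strace)
# is attached to the process.

def traced_by_debugger() -> bool:
    """Return True if a debugger/tracer is attached to this process."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("TracerPid:"):
                return int(line.split(":", 1)[1].strip()) != 0
    return False

if __name__ == "__main__":
    if traced_by_debugger():
        print("debugger detected: staying dormant")   # evasive branch
    else:
        print("no debugger: payload would run here")
```

Real malware combines many such probes (timing checks, VM artifacts, sandbox fingerprints) before deciding whether to trigger its payload.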
Arnaud van Straaten is an intern in the CIDRE team, following a Master 1 Sécurité, Systèmes et Réseaux at Université Rennes 1.
Thursday 23 May 2019 – CentraleSupelec Rennes – 10h-16h30
A cross-team workshop on Malware analysis with TAMIS, EMSEC & CIDRE.
Thursday 9 May 2019 – INRIA Rennes – 14h
Overview of Android malware datasets, obfuscation techniques, and taxonomy
Research on Android malware relies on existing malware samples to carry out experiments. These samples can be shared through datasets of known, examined malware, whose characteristics and behavior are documented, helping researchers become familiar with the samples in advance. Unfortunately, most datasets are 1) too old, 2) poorly documented, 3) too small, or 4) no longer available. Moreover, malware nowadays obfuscates its code more frequently and more effectively in order to evade reverse engineering, which makes manual analysis time-consuming. With third-party markets more prone to contain malware, and a growing need for information about malware, these problems should be addressed properly. We discuss research Android malware datasets and their contributions, obfuscation techniques, and a proposed general taxonomy.
Tomas Concepcion is a student in the Master Sécurité des Systèmes Informatiques at the University of Rouen.
Thursday 2 Mai 2019 – INRIA Rennes – 14h
Anomaly-based intrusion detection systems: robust continuous learning of models
Alexandre is a first-year CIFRE PhD student with Airbus CyberSecurity in the CIDRE team. He specializes in applying machine learning to cybersecurity problems.
Thursday 25 April 2019 – CentraleSupelec – 14h
Opacity properties and SMT-solvers
Obfuscation designates a set of transformations intended to protect programs against reverse engineering. Many academic works have been published in recent years on the theory and practice of obfuscation. The main goal of this work is to understand how to slow down the understanding of a program's behavior, which means understanding how opacity properties work. Reverse engineering can be automated with behavioral analysis of programs, which can be used, for example, to detect potential vulnerabilities or malware by applying mathematical logic, namely Boolean satisfiability (SAT) solving, to a computer program.
This abstraction can be realized with an analysis framework that uses an SMT (Satisfiability Modulo Theories) solver. The solver checks the satisfiability of a given logical formula with respect to some background theory. However, if obfuscation techniques are embedded in the program, the analysis framework can make mistakes in its interpretation of the program, and the subsequent analysis is then either wrong or more time-consuming than usual.
This presentation takes an attacker's point of view against opacity properties: more precisely, why opacity properties impact the behavioral analysis performed with an SMT solver. We show that the interpretation made by the analysis framework is not enough on its own. We suggest that if the inputs of the decision procedure are adapted to an opacity property, the number of steps needed to analyze the formula is reduced. A proof of concept shows that when parity (an opacity property) is handled correctly, the time needed to analyze a set of formulas is reduced.
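A parity-based opaque predicate can be illustrated as follows (a toy example, not the talk's tool): x*(x+1) is the product of two consecutive integers and is therefore always even, so the guard below is always true. A solver that reasons about parity discharges it immediately, while a naive analysis must explore both branches.

```python
# Toy opaque predicate based on parity: (x*x + x) is always even, so the
# guard is always true and the else-branch is dead code. An obfuscator can
# use this to hide control flow from analyzers that lack parity reasoning.

def opaque_always_true(x: int) -> bool:
    return (x * x + x) % 2 == 0   # holds for every integer x

def obfuscated(x: int) -> int:
    if opaque_always_true(x):
        return x + 1              # the only reachable branch
    return -x                     # dead code, protected by the predicate

# Empirically confirm the predicate on a sample of inputs; a parity-aware
# SMT encoding could prove it for all integers in one step.
assert all(opaque_always_true(x) for x in range(-1000, 1000))
```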
Alexandre is a 3rd year PhD student in TAMIS team (Inria).
Thursday 28 March 2019 – INRIA Rennes
Automaton Based Modeling and Learning of the Normal Behavior of an Industrial System for Intrusion Detection
An intrusion detection system (IDS) detects abnormal or suspicious activities. Two main approaches coexist in intrusion detection: the signature-based approach and the behavioral approach. Here we focus on behavioral intrusion detection on a log from an electricity distribution system. This approach is divided into two main phases: first, a model of the normal behavior of the system is built; then, this model is used to detect anomalies.
We are interested in detecting attacks against electricity distribution systems using logs. Our objective is to model the behavior of the system using invariants or automata. At runtime, if one of the invariants is violated, we consider that an intrusion is taking place. Likewise, if a sequence of events is not recognized by our automaton, we also consider that an intrusion is taking place.
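The automaton idea can be sketched in a few lines (event names invented for illustration): learn the set of event-to-event transitions observed in normal logs, then flag any trace that uses a transition never seen during learning.

```python
# Minimal sketch of automaton-based anomaly detection: the "automaton" is
# reduced to its learned transition relation over log events.

def learn_transitions(traces):
    """Build the transition relation from traces of normal behavior."""
    transitions = set()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            transitions.add((a, b))
    return transitions

def is_anomalous(trace, transitions):
    """A trace is an anomaly if it takes a transition unseen in training."""
    return any((a, b) not in transitions for a, b in zip(trace, trace[1:]))

normal = [["login", "read", "logout"], ["login", "write", "logout"]]
model = learn_transitions(normal)
print(is_anomalous(["login", "read", "logout"], model))    # False
print(is_anomalous(["login", "delete", "logout"], model))  # True
```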
Stéphane is a Master 2 Research student at Univ. Rennes 1, currently an intern at CIDRE.
Thursday 4 April 2019 – 15h45 @INRIA Rennes
Valérie Viet Triem Tong
Feedback and discussion on the evaluation of the CIDRE team by Inria on 22 March
Valérie is head of CIDRE team and professor at CentraleSupelec
Thursday 28 February 2019
Defusing malicious behaviours in Android applications at runtime
As of today, a lot of effort goes into detecting malware before it executes on the user's phone, in particular using advanced static analysis techniques. However, detecting malicious applications prior to their execution remains challenging: malware uses obfuscation, string encryption, or other anti-analysis techniques to hide itself when under analysis. Moreover, malicious applications can trigger their malicious behavior only under certain circumstances, making them undetectable by current automated analysis tools. One solution to this problem could be to continue these analysis efforts directly on the user's device, at runtime.
In this talk, we will discuss a novel technique that monitors Android applications to detect and disable malicious behaviors in real time. This approach aims to be highly scalable, tracking suspicious events throughout application execution without modifying the Android operating system.
Louison Gitzinger is a second year PhD student at WIDE (World is DEstributed), an INRIA/IRISA research team mainly focused on solving theoretical problems on decentralized systems.
He first followed general engineering studies at Université de Lorraine and then graduated from the ESIR engineering school at Université de Rennes 1 in 2017 (IoT track).
His research mainly focuses on malware and vulnerability detection on mobile devices. Future work may introduce decentralized solutions to share malware detection knowledge between mobile device nodes.
Thursday 21 February 2019
Electromagnetic injection attacks disturb the clock signal of microcontrollers. TRAITOR reproduces a clock signal whose disturbance is controlled and configurable; it can also fault several instructions in each execution. The platform is portable and easy to set up.
I will present the platform together with a demonstration of how it works on two simple programs. I will also report on my progress in understanding the fault mechanism (the program does not run as expected, but what exactly happens?).
I look forward to your questions and your ideas for applications!
Ludovic is a post-doctoral researcher at CIDRE.
Thursday 14 February 2019 – IRISA
Sebanjila Kevin Bukasa
Side-channel attacks on modern SOC
Side-channel attacks (SCA) exploit the reification of a computation through its physical dimensions (current consumption, EM emission, ...). Focusing on electromagnetic analyses (EMA), such analyses have mostly been considered on low-end devices: smartcards and microcontrollers. In the wake of recent works, we propose to analyze the effects of a modern microarchitecture on the efficiency of EMA (here, Correlation Power Analysis and template attacks). We show that despite the difficulty of synchronizing the measurements, the speed of the targeted core and the activity of other cores on the same chip can still be accommodated. Finally, we confirm that enabling the secure mode of TrustZone (a hardware-assisted software countermeasure) has no effect whatsoever on EMA efficiency. Critical applications in TrustZone are therefore no more secure than in the normal world with respect to EMA, in accordance with the fact that TrustZone is not a countermeasure against physical attacks. To the best of our knowledge, this is the first application of EMA against TrustZone.
Kevin is a 3rd year PhD student in CIDRE team.
Thursday 7 February 2019
Visualization for information system security monitoring
The large quantities of alerts generated by intrusion detection systems (IDS) make it very difficult to distinguish real threats from noise on a network. To help solve this problem, we propose VEGAS and TheStrip, two visualization and classification tools that allow security analysts to group alerts and collaborate more effectively. These tools are part of a workflow in which, once a set of similar alerts has been collected, a filter is generated that redirects forthcoming similar alerts to the security analysts specifically in charge of that set, in effect reducing the flow of raw, undiagnosed alerts.
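The filtering step of such a workflow can be sketched as follows (a hedged illustration, not the actual VEGAS/TheStrip code; alert field names are invented): a filter is derived from the fields shared by a group of similar alerts, and matching future alerts are routed to the analyst in charge of that group instead of the raw queue.

```python
# Sketch of alert-group filtering: derive a filter from the common fields
# of a labeled group, then route incoming alerts that match it.

def make_filter(alert_group):
    """Keep only the field values shared by every alert in the group."""
    shared = dict(alert_group[0])
    for alert in alert_group[1:]:
        shared = {k: v for k, v in shared.items() if alert.get(k) == v}
    return shared

def route(alert, filters):
    """Return the analyst owning the first matching filter, else the raw queue."""
    for analyst, flt in filters.items():
        if all(alert.get(k) == v for k, v in flt.items()):
            return analyst
    return "raw-queue"

group = [{"signature": "ssh-bruteforce", "dst_port": 22, "src": "10.0.0.1"},
         {"signature": "ssh-bruteforce", "dst_port": 22, "src": "10.0.0.7"}]
filters = {"analyst-A": make_filter(group)}
print(route({"signature": "ssh-bruteforce", "dst_port": 22, "src": "10.0.0.9"}, filters))
print(route({"signature": "sql-injection", "dst_port": 80, "src": "10.0.0.9"}, filters))
```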
Damien is a 3rd year PhD student in CIDRE team.
Thursday 31 January 2019
Toward secured energy-efficient routing protocol for UAV swarm
The next generation of UAVs is currently emerging. In the near future they will form cohesive swarms of devices, and efficient communication is critical for such systems. We will focus on the routing problem in such decentralized networks, outlining constraints and challenges. To our knowledge, there is currently no "plug and play" secure protocol adapted to such networks; we will present our first solution to address these problems. We will also discuss questions about multi-swarm cohabitation and collaboration.
I'm a first-year PhD student in Computer Science at IRISA/Inria Rennes/CentraleSupélec/Université de Rennes 1.
Research topic: security of pilot-UAV and UAV-UAV communications, with a particular focus on the security of routing algorithms for UAV swarm ad hoc networks.
Thursday 17 January 2019
Privacy protection of predictive machine learning models in exploitation phase with empirical max-entropy constraints
On a regular basis, people consent to the processing of their personal data by entities that use some form of predictive learning (e.g., recommenders, social networks, state administration). Predictors are functions estimated from empirical data and subsequently exploited on unseen inputs. Much attention has been put on limiting the privacy impact of training and releasing predictors as statistics involving sensitive variables. This is typically the scope of differential privacy (DP). Less focus has been devoted to privacy leaks in the exploitation phase, where the learned model is used to make predictions relating to unseen individuals. While one might expect that privacy guarantees about the learning algorithm will automatically extend to the predictions made by the learned model, this is not natively handled by DP. We show that even under a strong DP assumption, a predictor learned naively will leak sensitive information during exploitation. We establish new vulnerability measures for this scenario, and propose an empirical mitigation strategy that leverages "Do Not Predict" examples to maximize the entropy of the predictor w.r.t. sensitive variables. Interestingly, this mechanism circumvents the usual privacy-utility tradeoff and acts as a regularizer, offering better generalization capacity to the predictor.
I’m a postdoc in IRISA CIDRE team in Rennes, working on ANR project PAMELA (Personalized and decentrAlized MachinE Learning under constrAints) on privacy-preserving distributed machine-learning. I received my PhD from University of Cergy-Pontoise on Asynchronous Gossip algorithms for large-scale machine learning and multimedia retrieval. I am also a lonely piano player at CentraleSupelec Rennes during lunch breaks, desperately seeking collaborators in this area to initiate jam sessions.
Thursday 20 December 2018
Retrofitting security in closed-source binary programs
In spite of the presence of increasingly sophisticated compiler-level verifications, testing frameworks and code audit tools, security bugs remain in the code of off-the-shelf software components. Unfortunately, software components presenting security risks may be developed and integrated in opaque, closed-source environments (e.g., as part of an embedded device’s proprietary firmware). The process of automatically evaluating the security of software programs in such environments involves multiple challenges in terms of accuracy and scalability. We investigate solutions to address these challenges at scale based on lightweight template-based static analysis and symbolic execution at the binary level. Our initial focus is on memory corruption vulnerabilities caused by unsafe input parsing implementations.
Christophe Hauser is a research computer scientist at the Information Sciences Institute (ISI) of the University of Southern California (USC). His research interests span multiple areas of security, with a focus on binary program analysis, as well as OS and kernel security. Previously, he was a postdoctoral researcher at the University of California, Santa Barbara, where he worked on building parts of the angr binary analysis framework. He received his Ph.D. in computer science from Supélec/University of Rennes (France) and Queensland University of Technology (Australia).
Thursday 13 December 2018
Protection of systems against fuzzing attacks
A fuzzing attack enables an attacker to gain access to restricted resources by exploiting an incorrect implementation of a specification: it consists of sending commands with parameters outside their specified range. This study aims at protecting Java Card applets against such attacks. To do so, we detect, prior to deployment, unexpected behavior of the application without any knowledge of its specification. Our approach is not based on a fuzzing technique; it relies on static analysis and uses an unsupervised machine-learning algorithm on source code. For this purpose, we have designed a front-end tool, fetchVuln, that helps the developer detect incorrect implementations. It relies on a back-end tool, Chucky-ng, which we have adapted for Java. To validate the approach, we designed a mutant applet generator based on LittleDarwin. The tool chain successfully detected the expected missing checks in the mutant applets. We then evaluated the tool chain by analyzing five applets that implement the OpenPGP specification. Our tool discovered both vulnerabilities and optimization problems, which are then explained and corrected.
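The attack model itself (not the talk's static-analysis defense) can be illustrated with a toy example, where all names and the missing check are invented: a command handler that forgets a bound check accepts out-of-spec parameter values, which a fuzzing loop exposes.

```python
# Toy illustration of a fuzzing attack: send out-of-spec parameter values
# and record which ones the implementation wrongly accepts.

def vulnerable_applet(param: int) -> str:
    # Spec says param must be in [0, 15], but only the upper bound is
    # checked: negative values slip through (the missing check).
    if param >= 16:
        raise ValueError("rejected")
    return f"processed {param}"

def fuzz(command, out_of_spec_values):
    """Return the out-of-spec values the command accepts despite the spec."""
    accepted = []
    for v in out_of_spec_values:
        try:
            command(v)
            accepted.append(v)
        except ValueError:
            pass
    return accepted

print(fuzz(vulnerable_applet, [-1, -100, 16, 255]))  # [-1, -100]
```

The approach described in the talk aims to find such missing checks statically, before deployment, rather than by exercising the applet like this.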
Thursday 6 December 2018 – CentraleSupelec – 14h
We present an anomaly detection mechanism to determine whether a suspected group of threads is valid cryptographic software or malicious code. We evaluate the effectiveness of our solution in correctly distinguishing between valid programs and ransomware. We use the tf-idf metric to choose the most pertinent features, then measure the distance between the candidate software and a collection of centroids to determine its nature. If the distance exceeds a threshold, we are facing an unknown ransomware. We have evaluated our approach with samples provided by open databases, executed on our bare-metal platform.
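The centroid scheme can be sketched as follows (toy feature vectors and threshold, not the talk's dataset or feature extraction): valid software families are summarized by centroids, and a candidate whose distance to every centroid exceeds a threshold is flagged as unknown.

```python
# Minimal centroid-distance classifier: a candidate far from all centroids
# of known-valid software is flagged as a possible unknown ransomware.
import math

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(sample, centroids, threshold):
    d = min(distance(sample, c) for c in centroids)
    return "valid" if d <= threshold else "unknown (possible ransomware)"

crypto_lib = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]   # known valid cryptographic code
centroids = [centroid(crypto_lib)]
print(classify([0.85, 0.15, 0.05], centroids, threshold=0.3))
print(classify([0.1, 0.9, 0.9], centroids, threshold=0.3))
```

In the talk's setting, the feature vectors would be tf-idf-weighted characteristics of the monitored threads rather than these toy values.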
JL Lanet joined INRIA Rennes - Bretagne Atlantique in September 2014 to be involved in the High Security Labs (LHS). He was Professor at the University of Limoges (2007-2014) in the Computer Science department, where he led the SSD (Smart Secure Device) team. He was also an associate professor at the University of Sherbrooke and was in charge of the Security and Cryptology course of the USTH Master (Hanoi). His research interests include the security of small systems such as smart cards, but also software engineering, malware analysis and hardware security.
Prior to that, he was a senior researcher at Gemplus Research Labs (1996-2007), the smart card manufacturer. During this period he spent two years at INRIA (2003-2004) as an engineer at DirDRI (Direction des Relations Industrielles) and as a senior research associate in the Everest team at INRIA Sophia-Antipolis. He obtained his Habilitation à Diriger des Recherches (HdR) during this first INRIA period.
Earlier, he was a researcher at the Advanced Studies Labs of Elecma, the electronics division of Snecma, now part of the Safran group, where he worked on hard real-time techniques for jet engine control (1984-1995).
Thursday 22 November 2018 – Centrale Supelec – 14h
Nicolae Paladi (RISE – Lund University, Sweden)
Isolated Execution Environments have been around for more than a decade. They are deployed in millions of devices and have been used to improve the security of mobile communication devices, cloud computing deployments and (to some extent) the payments industry. On the other hand, they have limited capabilities and are vulnerable to various side-channel attacks. In this talk I will go through Isolated Execution Environments – both deployed and under development – their latest applications, standardisation efforts and future research directions.
Nicolae Paladi is a security researcher at RISE Research Institutes of Sweden and a postdoctoral researcher at Lund University. Following studies at Gothenburg University, Luleå University of Technology and extensive industry experience, he obtained his PhD from Lund University in 2017. His research focus is on the security guarantees and applications of trusted execution environments. This includes applying trusted execution environments to improve security of cloud and mobile communication infrastructure, as well as applications running on such infrastructure.