PIRAT biweekly seminars
Links: Attend remotely | iCalendar | YouTube channel | Contact: Pierre-François Gimenez
Tuesday May 6th – 2PM
Samiha Ayed (UTT): TBA
Type: remote talk
Abstract: TBA
Thursday June 12th – 2PM
Antony Dalmière (LAAS-CNRS): TBA
Type: remote talk
Abstract: TBA
Past seminars
Wednesday April 16th – 2PM
Pierre-François Gimenez (Inria): Synthetic network traffic with Fos-R
Type: in-person talk
Abstract: Network traffic datasets are regularly criticized, notably for the lack of realism and diversity in their attack or benign traffic. Generating synthetic network traffic using generative machine learning techniques is a recent area of research that could complement experimental test beds and help assess the efficiency of network security tools such as network intrusion detection systems. This presentation covers some recent work on this subject.
Thursday March 27th – 2PM
Loïc Miller (IMT Atlantique): Security and Optimization of Distributed Infrastructures: BGP, Hypergraphs, and Blockchain
Type: in-person talk
Abstract: Modern distributed infrastructures face significant challenges in terms of security and scalability. In this presentation, we will explore three research directions aimed at strengthening the resilience and optimization of distributed infrastructures. We will first look at an attack exploiting BGP to disrupt network traffic, illustrating the vulnerabilities of routing policies. We will then discuss the use of metagraphs to model and verify security policies. Finally, we will detail a compression method that reduces the size of blockchains to a logarithmic order without compromising their security.
Thursday March 13th – 2PM
Samuel Pélissier (Inria): From IoT to the Web: The Many Faces of Network Privacy
Type: in-person talk
Abstract: I will present two of my past and current works on privacy attacks & defenses. First, we will explore how IoT devices are subject to fingerprinting via their network protocols, including encrypted DNS (DNS-over-HTTPS, DNS-over-TLS), allowing for device identification and tracking. Then, we will discuss existing countermeasures, which are not always perfect nor widely implemented. Second, I will introduce my current work, stemming from the realization that studies of the performance impact of privacy protections focus on only part of the problem. Data extracted from user-centric IoT devices also induces a performance hit, which is directly paid by users. To demonstrate this, we will broaden the scope from IoT to the Web and study the impact of web tracking on energy consumption.
Thursday February 13th – 2PM
Juan Caballero (IMDEA Software Institute): A Framework for Automating Cyberattack Attribution
Type: remote talk
Abstract: Cyberattacks are ever-increasing, stretching the capabilities of law enforcement agencies to identify the responsible entities and bring them to justice. As such, many cyberattacks currently remain unattributed. This is mainly due to current attribution techniques being largely manual and thus not scaling well. In this talk, I will present our research towards developing a framework for automating cyberattack attribution. I will illustrate it for different attribution scenarios such as malicious mobile application developers, privacy-invading websites, and ransomware.
Thursday January 30th – 2PM
Jaroslav Pesek (Faculty of Information Technology, CTU in Prague): Interpretable Threat Detection – MQTT case with evidential classifier
Type: remote talk
Abstract: We propose a universal method for threat detection with a set-valued evidential classification based on Dempster-Shafer theory and deep learning. Our approach is designed to handle the inherent uncertainty in threat detection by incorporating extended flow features and packet metadata, making it adaptable to various protocols and environments. To demonstrate its benefit, we apply the method to the threats in the MQTT protocol, which is widely used in the IoT environment for its lightweight character. Our method significantly reduces the number of false security alerts, with a zero false positive rate achieved in most cases. Additionally, the evidential framework provides interpretable outputs that allow for thorough post-event analysis, enabling a clear understanding of the evidence supporting each decision, which is crucial in security-sensitive applications.
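For readers unfamiliar with set-valued evidential classification, here is a minimal Dempster-Shafer sketch in Python (the frame, masses, and threshold are invented for illustration, not taken from the paper): two sources of evidence are fused with Dempster's rule, and the classifier outputs a set of labels rather than committing to one when the evidence stays ambiguous.

```python
from itertools import product

# Toy frame of discernment for a threat detector.
FRAME = frozenset({"benign", "attack"})

def combine(m1, m2):
    """Dempster's rule: intersect focal sets, renormalize away conflict."""
    out, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1 - conflict) for s, w in out.items()}

def decide(m, threshold=0.8):
    """Return the smallest focal set whose mass clears the threshold."""
    for s, w in sorted(m.items(), key=lambda kv: len(kv[0])):
        if w >= threshold:
            return set(s)
    return set(FRAME)   # abstain: keep the whole frame

m1 = {frozenset({"attack"}): 0.6, FRAME: 0.4}   # weak attack evidence
m2 = {frozenset({"attack"}): 0.7, FRAME: 0.3}   # corroborating source
fused = combine(m1, m2)
print(decide(fused))    # corroboration singles out {'attack'}
```

With a single weak source, `decide` would return the whole frame, i.e. a set-valued "not sure" answer instead of a false alert, which is the behavior the abstract credits for the reduced false positive rate.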
Thursday December 19th 2024 – 2PM
Guillaume Vachez (CentraleSupélec): Poneypot: Highlighting Botnet Flaws and Erratic Behaviors
Type: in-person talk
Abstract: Botnets represent one of the most frequent forms of cyberattacks. It is therefore essential to maintain surveillance of these automated attackers, which is often achieved through the use of honeypots. These tools allow us to better understand their behavior and monitor their evolution over time. However, a detailed analysis of these attacks reveals flaws and inconsistent behaviors in botnet operations. The aim of this presentation is to highlight the various behaviors observed during a previous Poneypot campaign, as well as different mechanisms added to this project since then.
Thursday November 28th 2024 – 2PM
Clémence Chevignard (Inria): Reducing the Number of Qubits in Quantum Factoring
Type: in-person talk
Abstract: Shor’s algorithm is a quantum algorithm that factors integers efficiently. It is therefore a threat to the RSA cryptosystem, as well as a prospect for improving many number-theoretic algorithms. This talk presents a new way to reduce the memory complexity of Shor’s algorithm. To do so, we take a theoretical idea of May and Schlieper and make it usable in practice. This adaptation is made possible by careful arithmetical modifications to the computations that Shor’s algorithm has to carry out.
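The classical half of Shor's pipeline can be illustrated with a toy sketch: the quantum circuit only supplies the multiplicative order r of a modulo N, which we compute by brute force here, and the factors then fall out of two gcd computations.

```python
from math import gcd

def order(a, N):
    """Multiplicative order of a mod N (brute force; the quantum part of
    Shor's algorithm computes this efficiently for large N)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(a, N):
    """Classical reduction from order-finding to factoring: if r is even
    and a^(r/2) != -1 mod N, gcd(a^(r/2) ± 1, N) yields factors of N."""
    r = order(a, N)
    if r % 2:
        return None                 # odd order: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                 # trivial square root: retry
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical(2, 15))        # order of 2 mod 15 is 4 -> (3, 5)
```

The memory-reduction work in the talk concerns the quantum order-finding step; this sketch only shows why that step suffices for factoring.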
Thursday November 7th 2024 – 2PM
Cédric Herzog (CodeClarity): CodeClarity: Detecting JavaScript Vulnerabilities
Type: in-person talk
Abstract: In this presentation, we will explore CodeClarity, an open-source platform for detecting vulnerabilities in JavaScript dependencies. We will examine how these dependencies are managed (package.json and lockfiles), as well as the creation of a single representation for all package managers via an SBOM (Software Bill of Materials). We will also address the challenges of vulnerability detection, such as handling potential conflicts between sources like NVD and OSV. Finally, we will propose effective ways to keep software up to date while minimizing the impact of vulnerabilities and ensuring that it keeps working correctly.
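As a rough illustration of the normalization step the abstract describes, a hypothetical package.json can be flattened into minimal SBOM-style component records (the field names below are a simplified sketch, not CodeClarity's actual schema):

```python
import json

# A made-up manifest standing in for a real project's package.json.
manifest = json.loads("""
{
  "name": "demo-app",
  "dependencies": {"express": "4.18.2", "lodash": "4.17.21"},
  "devDependencies": {"jest": "29.7.0"}
}
""")

def to_sbom_components(pkg):
    """Flatten npm-style dependency maps into one record shape per
    component, the same shape whatever the package manager was."""
    comps = []
    for scope in ("dependencies", "devDependencies"):
        for name, version in pkg.get(scope, {}).items():
            comps.append({
                "type": "library",
                "name": name,
                "version": version,
                "purl": f"pkg:npm/{name}@{version}",   # package-URL identifier
                "scope": "required" if scope == "dependencies" else "optional",
            })
    return comps

for c in to_sbom_components(manifest):
    print(c["purl"], c["scope"])
```

A vulnerability matcher can then key lookups in NVD or OSV on the `purl` field instead of re-parsing each ecosystem's manifest format.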
Thursday October 24th 2024 – 2PM
Pierre-François Gimenez (Inria): Towards programming languages free of injection-based vulnerabilities by design
Type: in-person talk
Abstract: Many systems are controlled via commands built upon user inputs. For systems that deal with structured commands, such as SQL queries, XML documents, or network messages, such commands are generally constructed in a “fill-in-the-blank” fashion: the user input is concatenated with a fixed part written by the developer (the template). However, the user input can be crafted to modify the command’s semantics intended by the developer and enable malicious uses of the system. Such an attack, called an injection-based attack, is considered one of the most severe threats to web applications. Solutions to prevent such vulnerabilities exist but are generally ad hoc and rely on the developer’s expertise and diligence. Our approach addresses these vulnerabilities from the formal language theory’s point of view. We formally define two new security properties. The first one, “intent-equivalence”, guarantees that a developer’s template cannot lead to malicious injections. The second one, “intent-security”, guarantees that every possible template is intent-equivalent, and therefore that the programming language itself is secure. From these definitions, we propose new techniques to create programming languages that are secure by design, and present two secure, simplified versions of widespread languages.
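The "fill-in-the-blank" construction and its failure mode can be sketched with a classic SQL example (hypothetical table and inputs); the parameterized variant is one of the ad hoc fixes the talk contrasts with language-level guarantees:

```python
import sqlite3

def login_concat(cur, username):
    # The template's intent: select at most the row matching `username`.
    # Concatenation lets the input rewrite the query's semantics.
    query = "SELECT name FROM users WHERE name = '" + username + "'"
    return cur.execute(query).fetchall()

def login_param(cur, username):
    # Same template, but the blank is bound as a parameter: the input
    # can never escape its slot, so the developer's intent is preserved.
    return cur.execute("SELECT name FROM users WHERE name = ?",
                       (username,)).fetchall()

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT)")
cur.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "' OR '1'='1"            # crafted input
print(login_concat(cur, payload))  # injection: returns every row
print(login_param(cur, payload))   # intent preserved: returns no row
```

In the talk's vocabulary, the concatenated template is not intent-equivalent: one input yields a command whose semantics differ from what the template's author intended.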
Wednesday October 16th 2024 – 2PM
Lénaïg Cornanguer (CISPA): Causal discovery from observational data
Type: in-person talk
Abstract: In this presentation, I will introduce the fundamental concepts of causal discovery, including Directed Acyclic Graphs (DAGs), the do-operator, and d-separation. I will begin by discussing classical causal discovery algorithms, such as the PC-algorithm and Greedy Equivalence Search (GES), which are commonly used to identify cause-effect relationships between continuous-valued variables. Next, I will shift the focus to an information-theoretic perspective with the Algorithmic Markov Condition (AMC) and its application in causal discovery. Finally, I will address the less-explored challenge of discovering causal relationships between discrete-valued data.
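As a toy illustration of the conditional-independence tests underlying PC-style algorithms (synthetic data, invented coefficients): in the chain X → Z → Y, X and Y are d-separated by Z, so their partial correlation given Z should vanish even though they are strongly correlated marginally.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=n)
z = 2.0 * x + rng.normal(size=n)    # X -> Z
y = -1.5 * z + rng.normal(size=n)   # Z -> Y

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

print(abs(np.corrcoef(x, y)[0, 1]))   # large: X and Y are dependent
print(abs(partial_corr(x, y, z)))     # near zero: d-separated by Z
```

A PC-style algorithm runs many such tests to delete edges from a fully connected graph, then orients the remaining edges up to the Markov equivalence class.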
Thursday October 10th 2024 – 2PM
Paolo Ballarini (CentraleSupélec): Hybrid-Automata Specification Language: from expressive stochastic model checking to parametric stochastic model checking
Type: in-person talk
Abstract: Probabilistic model checking is concerned with providing modellers with an effective means for automatically assessing with what probability a model exhibits a given behaviour formally expressed by a temporal logic specification. The ability to capture relevant behaviours depends on the expressiveness of the considered property language, and in this respect the Hybrid Automata Specification Language (HASL) has proved a powerful means which exceeds the expressiveness of classical temporal logics. HASL model checking is a simulation-based procedure: given a probabilistic model M and a HASL specification F, the expected value of the quantity captured by F is estimated by sampling trajectories of the product process M x F. In this talk I will first give an overview of the HASL model checking approach and then discuss how, taking advantage of HASL expressiveness, we can characterise a procedure that allows for tuning the parameters of a probabilistic model so that it meets a given behaviour. In other words, we introduce a procedure for reverse engineering a probabilistic model w.r.t. a temporal property, also known as parametric model checking. The parametric model checking problem is as follows: given a model M that depends on a set of parameters P and a behaviour of interest expressed by a specification F, assess how the probability that F is satisfied by M varies over the domain of the parameters P. The formulation of parametric model checking based on HASL relies on the characterisation of a “satisfiability distance” (i.e. how far a model’s trajectory is from satisfying a specification F) and on the corresponding hybrid automaton for measuring it. The search of the parameter space is carried out by adapting the Approximate Bayesian Computation (ABC) method, plugging in the satisfiability-distance automaton.
Thursday October 3rd 2024 – 2PM
Yufei Han (Inria): Attack-free Evaluating and Enhancing Adversarial Robustness on Categorical Data
Type: in-person talk
Abstract: Research on adversarial robustness has predominantly focused on continuous inputs, leaving categorical inputs, especially tabular attributes, less examined. To address this challenge, our work aims to evaluate and enhance the robustness of classification over categorical attributes against adversarial perturbations through efficient attack-free approaches. We propose a robustness evaluation metric named Integrated Gradient-Smoothed Gradient (IGSG). It is designed to evaluate the attributional sensitivity of each feature and the decision boundary of the classifier, two aspects that significantly influence adversarial risk, according to our theoretical analysis. Leveraging this metric, we develop an IGSG-based regularization to reduce adversarial risk by suppressing the sensitivity of categorical attributes. We conduct extensive empirical studies over categorical datasets from various application domains. The results affirm the efficacy of both IGSG and IGSG-based regularization. Notably, IGSG-based regularization surpasses the state-of-the-art robust training methods by a margin of approximately 0.4% to 12.2% on average in terms of adversarial accuracy, especially on high-dimensional datasets.
Tuesday September 24th 2024 – 2PM
Maura Pintor (University of Cagliari): Where ML security is broken and how to fix it
Type: remote talk
Abstract: Rigorous testing of machine learning models against test-time attacks is often impractical for modern deep learning systems. For these reasons, empirical methods, optimizing adversarial perturbations via gradient descent, are often used. To assess and mitigate the impacts of adversarial attacks, machine learning practitioners generate worst-case adversarial perturbations to test against their models. Yet, many proposed evaluations have proven to offer deceptive estimates of robustness, often failing under more thorough analysis. Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations in practice and systematically. To this end, the analysis of failures in the optimization of adversarial attacks is the only valid strategy to avoid repeating mistakes of the past. Additionally, the continuous proposal of novel attacks results in overly optimistic and biased evaluations. To address this, we propose a comparison framework to evaluate and benchmark gradient-based attacks for optimizing adversarial examples, ensuring fair assessment and fostering advancements in ML security evaluations.
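A single gradient-based attack step, the basic ingredient of the evaluations discussed above, can be sketched as follows (a toy FGSM-style step on an invented linear model, not the talk's benchmark framework): perturb the input in the direction of the sign of the loss gradient and check whether the prediction flips.

```python
import numpy as np

# Invented logistic-regression weights standing in for a trained model.
w = np.array([2.0, -1.0])
b = 0.0

def loss_grad_x(x, y):
    """Gradient w.r.t. the input x of the logistic loss for label y."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return (p - y) * w

def predict(x):
    return int(w @ x + b > 0)

x = np.array([1.0, 1.0])                        # clean input, true label 1
eps = 0.5                                       # perturbation budget
x_adv = x + eps * np.sign(loss_grad_x(x, 1))    # one step of loss ascent

print(predict(x), predict(x_adv))               # clean vs adversarial label
```

Failures in robustness evaluations typically arise when this optimization stalls (bad step size, vanishing gradients, broken loss), making the model look more robust than it is, which is exactly what the debugging tools mentioned in the abstract aim to surface.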
Wednesday July 10th 2024 – 11AM
Davide Balzarotti (EURECOM): Malware Research: History, Milestones, and Open Questions
Type: in-person talk
Abstract: Researchers have worked on the analysis, detection, and classification of malicious software since the first early viruses in the 1980s. After more than 40 years of academic research and thousands of papers published on this topic, what have we learned about malware? Which problems and questions have attracted the interest of researchers? And for which of those did we find some answers so far? In this talk, I will go through some of these past achievements (shamelessly using some of my research as an example) and discuss past findings as well as open questions for the future.
Monday June 24th 2024 – 2PM
Natan Talon (Hackuity): BreizhCTF Lessons Learned
Type: in-person talk
Abstract: What you need to know when you want to mix business with pleasure to retrieve data.
Tuesday June 18th 2024 – 2PM
Patrik Goldschmidt (KInIT): Common Pitfalls of (Cybersecurity) Machine Learning Research. Pragmatic Model Evaluation
Type: in-person talk
Abstract: A great deal of research on machine learning and its applications in security is performed nowadays. However, despite being academically interesting, most of it never receives attention from practitioners. This phenomenon happens primarily due to the disconnection of academia from industry, such as research papers neglecting operational details or not sharing enough information. Practitioners are thus unable to gauge the practical usability of the research despite its achieving state-of-the-art results on public datasets. This seminar, led by a visiting Ph.D. student from KInIT, Slovakia, Patrik Goldschmidt, will merge and present the knowledge from three papers from top-tier cybersecurity conferences discussing this issue. This fusion of three related papers will first outline ten common biases and pitfalls present in contemporary ML-based cybersecurity research. Afterward, we will talk about an approach to perform a pragmatic assessment of ML methods in a statistically significant manner without bias. The presented examples and case studies will focus on cybersecurity, but the problems and recommendations are relevant to many more domains. The talk therefore aims to provide valuable insights for researchers and practitioners across a broad spectrum of domains and will shed new light on assessments and evaluations of ML methods.
Thursday June 13th 2024 – 2PM
Julien Piet (University of California, Berkeley): Network Detection of Interactive SSH Impostors Using Deep Learning
Type: in-person talk
Abstract: Impostors who have stolen a user’s SSH login credentials can inflict significant harm to the systems to which the user has remote access. We consider the problem of identifying such imposters when they conduct interactive SSH logins by detecting discrepancies in the timing and sizes of the client-side data packets, which generally reflect the typing dynamics of the person sending keystrokes over the connection.
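The client-side timing and size signals the abstract mentions can be sketched as a simple feature extraction (the packet trace below is synthetic, and real detectors feed richer versions of these features to a deep model):

```python
# (timestamp in seconds, payload bytes) for client -> server packets of an
# interactive session: small constant-size packets track keystrokes, and
# the gaps between them approximate the user's typing rhythm.
packets = [
    (0.00, 36), (0.21, 36), (0.33, 36), (1.75, 36), (1.90, 132),
]

def keystroke_features(pkts):
    times = [t for t, _ in pkts]
    sizes = [s for _, s in pkts]
    gaps = [round(b - a, 3) for a, b in zip(times, times[1:])]
    return {"sizes": sizes, "inter_arrival": gaps}

feats = keystroke_features(packets)
print(feats["inter_arrival"])   # [0.21, 0.12, 1.42, 0.15]
```

Because these features come from packet metadata only, the approach works on the encrypted SSH stream without decrypting anything: an impostor types with a different rhythm than the legitimate account owner.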
Wednesday June 5th 2024 – 11AM
Manuel Poisson (Amossys): CVE representation to build attack positions graphs
Type: in-person talk
Abstract: In cybersecurity, CVEs (Common Vulnerabilities and Exposures) are publicly disclosed hardware or software vulnerabilities. These vulnerabilities are documented and listed in the NVD database maintained by the NIST. Knowledge of the CVEs impacting an information system provides a measure of its level of security. Our work points out that these vulnerabilities should be described in greater detail to understand how they could be chained together in a complete attack scenario. We present the first proposal for the CAPG (CVE representation to build Attack Positions Graphs) format, which is a method for representing a CVE vulnerability, a corresponding exploit, and associated attack positions.
Thursday May 23rd 2024 – 2PM
Yufei Han (Inria): Defending Jailbreak Prompts via In-Context Adversarial Game
Type: in-person talk
Abstract: Large Language Models (LLMs) demonstrate remarkable capabilities across diverse applications. However, concerns regarding their security, particularly the vulnerability to jailbreak attacks, persist. Drawing inspiration from adversarial training in deep learning and LLM agent learning processes, we introduce the In-Context Adversarial Game (ICAG) for defending against jailbreaks without the need for fine-tuning. ICAG leverages agent learning to conduct an adversarial game, aiming to dynamically extend knowledge to defend against jailbreaks. Unlike traditional methods that rely on static datasets, ICAG employs an iterative process to enhance both the defense and attack agents. This continuous improvement process strengthens defenses against newly generated jailbreak prompts. Our empirical studies affirm ICAG’s efficacy, where LLMs safeguarded by ICAG exhibit significantly reduced jailbreak success rates across various attack scenarios. Moreover, ICAG demonstrates remarkable transferability to other LLMs, indicating its potential as a versatile defense mechanism.
Thursday April 25th 2024 – 2PM
Solayman Ayoubi (Télécom SudParis): Data-driven Evaluation of Intrusion Detectors: a Methodological Framework
Type: in-person talk
Abstract: Intrusion detection is an important domain of cybersecurity research. Countless solutions have been proposed, continuously improving upon one another. Yet, despite the introduction of distinct approaches, including machine-learning methods, the evaluation methodology has barely evolved. Prior evaluation approaches lack formalization and disregard ML best practices. We address this challenge by implementing an evaluation framework to ensure completeness, reliability, and reproducibility. This framework emphasizes the relationship between evaluation choices and data selection, requiring the generation of purpose-specific datasets.
Friday April 19th 2024 – 2PM
Lucas Aubard: Some fragmented packet characteristics on MAWI traces
Type: in-person talk
Abstract: The presentation will focus on the work I did during my 3-month internship at the National Institute of Informatics (NII). The first half of the talk will focus on IPv4 fragmentation; the second half will share some feedback on the mobility exchange experience. IPv4 fragmentation is used whenever the original datagram size exceeds the Maximum Transmission Unit between the two communicating hosts. For some years now, this mechanism has been considered “harmful” because of possible attacks (e.g., DoS, NIDS evasion) and resource overhead. We conducted a longitudinal study of MAWI traces, covering 2 days per month from 2006 to 2023, to 1) verify whether IPv4 fragmentation is observed in the wild and 2) try to understand in which circumstances this fragmentation may occur. Note that the presented results are preliminary.
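The fragmentation arithmetic involved can be sketched as follows (standard IPv4 assumptions: a 20-byte header with no options, fragment offsets expressed in 8-byte units, More Fragments flag set on all but the last fragment):

```python
def fragment(payload_len, mtu, header=20):
    """Split an IPv4 payload into fragments that fit the MTU: every
    fragment but the last must carry a multiple of 8 payload bytes so
    that offsets stay 8-byte aligned."""
    max_frag = ((mtu - header) // 8) * 8
    frags, offset = [], 0
    while offset < payload_len:
        size = min(max_frag, payload_len - offset)
        more = offset + size < payload_len       # MF flag
        frags.append({"offset": offset // 8, "len": size, "MF": more})
        offset += size
    return frags

# A 4000-byte datagram payload crossing a 1500-byte MTU link:
for f in fragment(4000, 1500):
    print(f)
```

The 8-byte offset granularity is precisely what attacks like overlapping-fragment NIDS evasion abuse, since reassembly policies for overlaps differ between end hosts and monitors.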
Wednesday April 17th 2024 – 2PM
Daniel De Almeida Braga (IRISA): Microarchitectural side-channels and their impact on cryptographic implementations
Type: in-person talk
Abstract: In the rapidly evolving field of cybersecurity, the robustness of cryptographic implementations against side-channel attacks represents a critical challenge. This talk delves into the research on microarchitectural side-channels, presenting sophisticated attacks that underscore the vulnerability of cryptographic protocols to such threats. First, I will present attacks against WPA3, leaking enough information on the Wi-Fi password to recover it. In particular, one of the attacks exploits a previously undocumented prefetcher behavior, which highlights the complex interplay between hardware design and software security. Next, I explain how we ported a well-known CPU side-channel attack to GPUs, demonstrating the feasibility of executing the Prime+Probe technique via a web browser. This attack enables us to implement a keylogger, an AES key recovery attack and a native-to-browser covert channel, entirely from JavaScript, in a drive-by manner.
Wednesday April 10th 2024 – 2PM
Thomas Rokicki (IRISA): Side Channels in Web Browsers: Applications to Security and Privacy
Type: in-person talk
Abstract: Side channel attacks exploit the side effects of sensitive computation to leak secrets. Their implementation in web browsers represents a considerable increase in threat surface, but comes with challenges due to the restrictive environment and the constant browser updates. This presentation introduces a longitudinal analysis of browser-based side channels, as well as a focus on port-contention side channels and how we can use them in the JavaScript sandbox.
Thursday March 28th 2024 – 2PM
Cristoffer Leite (Eindhoven University of Technology): From Cyber Threat Intelligence to Incident Response and Back
Type: in-person talk
Abstract: The presentation will focus on specific aspects of the research conducted during my PhD. First, I will talk about characterising attackers’ behaviour and how to map this to the information provided by a Network Intrusion Detection System. Then, I will present our approaches for improving the use and creation of Cyber Threat Intelligence for incident response by applying those maps.
Thursday March 21st 2024 – 2PM
Lénaïg Cornanguer (CISPA): Timed automata learning from observational data
Type: remote talk
Abstract: I will present my work on modelling systems from observational data, carried out during my PhD. As a model of the system, we use the formalism of timed automata, a state-based machine whose evolution depends on the occurrence of events over time. A first part will be devoted to learning timed automata from event logs with the TAG algorithm. Then, we will see how timed automata can be used for anomaly detection given streaming discrete or continuous data.
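A timed automaton of the kind described can be sketched as a guarded transition table (the states, events, and guards below are invented for illustration, not output of TAG): a transition fires on an event only if the time elapsed since the previous event satisfies its guard, and a log that cannot be replayed is flagged as anomalous.

```python
TRANSITIONS = {
    # (state, event) -> (guard on elapsed seconds, next state)
    ("idle",    "login"): (lambda dt: dt >= 0.0,  "session"),
    ("session", "cmd"):   (lambda dt: dt <= 5.0,  "session"),
    ("session", "quit"):  (lambda dt: dt <= 60.0, "idle"),
}

def accepts(log, state="idle"):
    """log: list of (timestamp, event); True iff every step fires."""
    prev = log[0][0] if log else 0.0
    for t, ev in log:
        key = (state, ev)
        if key not in TRANSITIONS:
            return False                  # unknown event in this state
        guard, nxt = TRANSITIONS[key]
        if not guard(t - prev):
            return False                  # timing guard violated
        state, prev = nxt, t
    return True

print(accepts([(0.0, "login"), (1.0, "cmd"), (2.0, "quit")]))  # normal run
print(accepts([(0.0, "login"), (9.0, "cmd")]))                 # 9s > 5s guard
```

For anomaly detection, the same replay either rejects structurally impossible event sequences or flags timings that fall outside the guards learned from normal traces.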
Thursday March 14th 2024 – 2PM
Tristan Benoît (Loria): Program Similarity Through the Lens of Spectral Analysis
Type: in-person talk
Abstract: Machine-learning-based approaches to binary function similarity have gained popularity in recent years. In this seminar, I will present our work on program similarity, which can be useful for reverse engineering, program classification, malware genealogy, and plagiarism detection. We evaluate program clone search methods and propose a similarity method based on spectral graph theory. Besides being fast, it is particularly stable across changes of compiler or architecture.
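The spectral-graph-theory idea can be sketched on toy graphs (a generic illustration, not the talk's full method): the sorted Laplacian eigenvalues are invariant under node relabeling, so they make a cheap similarity signature for program graphs whose node names vary across compilers or architectures.

```python
import numpy as np

def laplacian_spectrum(edges, n):
    """Sorted eigenvalues of the graph Laplacian L = D - A."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

path      = [(0, 1), (1, 2), (2, 3)]     # a 4-node path graph
relabeled = [(3, 2), (2, 0), (0, 1)]     # same graph, nodes renamed
star      = [(0, 1), (0, 2), (0, 3)]     # a structurally different graph

d_same = np.linalg.norm(laplacian_spectrum(path, 4)
                        - laplacian_spectrum(relabeled, 4))
d_diff = np.linalg.norm(laplacian_spectrum(path, 4)
                        - laplacian_spectrum(star, 4))
print(d_same, d_diff)   # ~0 for the isomorphic pair, clearly nonzero otherwise
```

Comparing fixed-length spectra with a vector distance is what makes the approach fast: no expensive graph matching is needed at query time.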
Thursday March 7th 2024 – 2PM
Romain Cayre (EURECOM): OASIS: A Framework for Embedded Intrusion Detection in Bluetooth Low Energy Controllers
Type: in-person talk
Abstract: In recent years, Bluetooth Low Energy (BLE) has established itself as one of the central protocols of the Internet of Things. Many of its characteristics (mobility, low energy consumption, wide deployment) make it an attractive protocol for connected devices. Alongside this growth, many critical vulnerabilities affecting BLE have been disclosed in recent years, some of them tied to the design of the protocol itself. Since these vulnerabilities cannot be fixed without changing the specification, dedicated intrusion detection systems (IDS) are needed to detect and prevent these new threats. However, many technical difficulties hinder the development of such systems. Monitoring the protocol through external probes is complex, costly, and limited, because of its peer-to-peer communications and its many complex, dynamic mechanisms such as frequency-hopping algorithms. These constraints significantly impact existing approaches: they lack flexibility, have limited coverage, and incur high deployment costs. In this presentation, we describe an alternative approach to intrusion detection that overcomes these limits by embedding the intrusion detection system directly inside BLE controllers, at the lowest level reachable from software. We show that this embedded approach enables deeper analysis and instrumentation of the protocol and opens the way to new defensive applications. We present OASIS, a generic framework that facilitates the injection of detection heuristics into proprietary BLE controllers without disturbing the normal operation of the protocol stack.
We will describe the choices that guided its design (modularity, genericity, accessibility), as well as its implementation in five controllers from various manufacturers embedding heterogeneous protocol stacks. We will show the relevance of this approach for detecting low-level BLE attacks by describing the design and evaluation of five detection modules covering various complex attacks such as KNOB, GATTacker, and BTLEJack. We will also detail our analysis of the impact of deploying such an IDS on controller performance, in particular energy consumption, execution time, and memory. Finally, we will discuss the new directions this work opens for intrusion prevention and the coordinated detection of complex attacks.
Thursday February 22nd 2024 – 2PM
Pierre-François Gimenez (CentraleSupélec): Security Automation (research project)
Type: in-person talk
Abstract: In this presentation, I will introduce my research project within the PIRAT team, which focuses on security automation. I will present my objective and the three main steps to achieve it.
Thursday February 8th 2024 – 2PM
Julien Piet (University of California, Berkeley): GGFAST: Automating Generation of Flexible Network Traffic Classifiers
Type: remote talk
Abstract: When employing supervised machine learning to analyze network traffic, the heart of the task often lies in developing effective features for the ML to leverage. We develop GGFAST, a unified, automated framework that can build powerful classifiers for specific network traffic analysis tasks, built on interpretable features. The framework uses only packet sizes, directionality, and sequencing, facilitating analysis in a payload-agnostic fashion that remains applicable in the presence of encryption. GGFAST analyzes labeled network data to identify n-grams (“snippets”) in a network flow’s sequence-of-message-lengths that are strongly indicative of given categories of activity. The framework then produces a classifier that, given new (unlabeled) network data, identifies the activity to associate with each flow by assessing the presence (or absence) of snippets relevant to the different categories. We demonstrate the power of our framework by building—without any case-specific tuning—highly accurate analyzers for multiple types of network analysis problems. These span traffic classification (L7 protocol identification), finding DNS-over-HTTPS in TLS flows, and identifying specific RDP and SSH authentication methods. Finally, we demonstrate how, given ciphersuite specifics, we can transform a GGFAST analyzer developed for a given type of traffic to automatically detect instances of that activity when tunneled within SSH or TLS.
Bio: Julien Piet is a 3rd-year Ph.D. student in the EECS department at UC Berkeley, advised by Professors Vern Paxson and David Wagner. He is currently focused on developing new methods to measure network activity and detect specific behaviors.
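The snippet idea at the core of GGFAST can be sketched as follows (toy flows with invented lengths and labels, not real SSH traces): mine n-grams over signed message lengths, with sign encoding direction, that occur in one traffic class but not the others.

```python
def ngrams(seq, n=3):
    """All length-n windows ("snippets") of a message-length sequence."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

# Signed lengths: positive = client -> server, negative = server -> client.
flows = {
    "ssh-password": [[+64, -80, +112, -48], [+64, -80, +112, -64]],
    "ssh-pubkey":   [[+64, -80, +368, -48], [+64, -80, +368, -64]],
}

def discriminative_snippets(flows, n=3):
    """Per class, keep only the snippets absent from every other class."""
    per_class = {lbl: set().union(*(ngrams(f, n) for f in fs))
                 for lbl, fs in flows.items()}
    return {lbl: snips - set().union(*(s for l, s in per_class.items()
                                       if l != lbl))
            for lbl, snips in per_class.items()}

snips = discriminative_snippets(flows)
print(snips["ssh-password"])
```

Classification of an unlabeled flow then reduces to checking which class's snippets it contains, which stays payload-agnostic and therefore survives encryption, as the abstract notes.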
Thursday February 1st 2024 – 2PM
Francesco Marchiori (University of Padova): ACTing DUMB: What Can We Learn From Attackers?
Type: in-person talk
Abstract: In the ever-evolving cybersecurity landscape, adversaries continually adapt and employ deceptive strategies to breach defenses. In particular, thanks to the recent advancements in Artificial Intelligence (AI), the cybersecurity research community has started to investigate its integration into diverse contexts for bolstering defense mechanisms and identifying vulnerabilities that adversaries could exploit. But what can we learn from these attacks, and most importantly, how can we improve our defenses? In this talk, we will approach the problem from both sides. First, we will analyze the role of transferability in Adversarial Machine Learning (AML), discovering how attackers might intentionally use more simple techniques to have greater evasion capabilities. Furthermore, thanks to the “DUMB” framework, we show how to evaluate the transferability of AML attacks in different conditions. Second, we will explore how Cyber Threat Intelligence (CTI) can improve defense mechanisms and how practitioners can benefit from it. To tackle the automatic generation of CTI, we present our Natural Language Generation system “AGIR” (“to act” in Italian) and show how it can improve defense mechanisms by providing timely and contextually relevant intelligence reports.
Bio: Francesco Marchiori is a PhD student in Brain, Mind and Computer Science (BMCS) at the University of Padova, with a Master’s degree in Cybersecurity. There, he is part of the Security and Privacy (SPRITZ) research group, under the supervision of Prof. Mauro Conti.
Thursday January 17th 2024 – 2PM
Eleonora Losiouk (University of Padova): The Android Virtualization Technique: a Double-Edged Sword for Developing Attacks and Defences
Type: remote talk
Abstract: Android virtualization enables an app to create a virtual environment, in which other apps can run. Originally designed to overcome the limitations of mobile apps dimensions, nowadays this technique is becoming more and more attractive for developing novel Android malwares and defence mechanisms. During this talk, I will illustrate different use cases that refer to malicious and legitimate usages of the Android virtualization technique.
Bio: Eleonora Losiouk is an Assistant Professor from the University of Padua, Italy. She obtained a PhD in Bioengineering and Bioinformatics in 2018 from the University of Pavia, Italy. At the end of the PhD, she moved to Padua and started working on Android security. She visited EPFL in 2017 and Berkeley in 2021/2022. Besides publishing papers in top venues, Eleonora is the recipient of several awards among which: the 2020 CONCORDIA Award for Early Career Women Researcher in 2020; a Fulbright Fellowship for visiting Berkeley in 2020; a Seal of Excellence for her EU Marie Curie Global Fellowship project proposal in 2021; a Google Research Scholar Program in 2022.