Apple devices are leaking sensitive data over BLE
https://team.inria.fr/privatics/apple-devices-are-leaking-sensitive-data-over-ble/
Thu, 05 Dec 2019

By Guillaume Celosia and Mathieu Cunche

Discontinued Privacy: Personal Data Leaks in Apple Bluetooth-Low-Energy Continuity Protocols

Summary

We found that Apple devices leak sensitive information in the BLE wireless signals they emit. These issues are associated with the Apple Continuity services and affect all Apple devices as well as devices compatible with the Continuity framework. Based on a reverse engineering of Continuity, we identified that the Bluetooth Low Energy (BLE) messages emitted by Apple devices include unencrypted data that can expose sensitive information. We discovered that these data can be easily collected by an eavesdropper and processed in order to: track users, monitor activities in a smart home, obtain phone numbers, email addresses and commands issued to Apple's voice assistant Siri, and more.

Background

BLE advertising

In BLE, devices broadcast short messages, called Advertising Packets, to announce their presence and features to nearby devices (these messages can be observed from an Android device using an application like Ramble). Advertising Packets can include the name of the device and its type, but can also include custom data in a field called Manufacturer Specific Data. This field is typically used by vendors to transmit data for their applications. Apple makes use of this field to carry the data of its Continuity protocols.
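For illustration, here is a minimal Python sketch of how this field can be decoded. It follows the structure documented in our paper: a two-byte little-endian company identifier (0x004C for Apple) followed by type-length-value (TLV) encoded Continuity messages. The example bytes and the message type they contain are synthetic.

APPLE_COMPANY_ID = 0x004C  # Bluetooth SIG company identifier assigned to Apple

def parse_continuity(manufacturer_data: bytes):
    """Split Apple manufacturer-specific data into Continuity TLV messages."""
    # The first two bytes are the little-endian company identifier.
    company_id = int.from_bytes(manufacturer_data[:2], "little")
    if company_id != APPLE_COMPANY_ID:
        return []
    messages, i = [], 2
    while i + 2 <= len(manufacturer_data):
        msg_type = manufacturer_data[i]
        length = manufacturer_data[i + 1]
        value = manufacturer_data[i + 2 : i + 2 + length]
        messages.append((msg_type, value))
        i += 2 + length
    return messages

# Synthetic example: company ID 0x004C followed by a single TLV message.
sample = bytes([0x4C, 0x00, 0x10, 0x05, 0x01, 0x18, 0x86, 0x2B, 0x54])
for t, v in parse_continuity(sample):
    print(f"type=0x{t:02x} value={v.hex()}")

Each recovered (type, value) pair corresponds to one Continuity message; the sections below describe what such messages can reveal.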

Apple Continuity Protocols

Apple has developed a number of features, collectively called Continuity, that are designed to increase the usability of its products. These features include activity transfer, file transfer (AirDrop), Wi-Fi password sharing, etc. The communication between nearby devices required by Continuity services is done using BLE: Continuity data are embedded in BLE advertising packets and broadcast to be picked up by nearby devices.

Identified privacy leaks

Data exposed in cleartext

We found that, even though some elements are encrypted, most of the data included in Continuity messages is sent in plain text. The exposed data can thus be passively collected by an eavesdropper and exploited to mount one of the attacks presented below.

Tracking users (iPhones, iPads, AirPods…)

We found that the content of Apple Continuity BLE messages can be used to track a device despite the use of address randomization, the anti-tracking feature of BLE. We identified several elements that remain constant over time or that can otherwise undermine this mechanism. For instance, we found that messages emitted by AirPods include information (battery levels and a lid-open counter) that can be exploited to track the AirPods set. We also discovered a novel attack that allows tracking by actively replaying BLE messages. A passive attacker could exploit this information to track the location of individuals in spite of address randomization.
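To make the linking idea concrete, here is a minimal Python sketch with synthetic observations and hypothetical field names: even when the MAC address is randomized, quasi-static payload fields can tie successive addresses to the same device.

from collections import defaultdict

# Synthetic sightings; battery levels and lid-open counter are hypothetical
# stand-ins for the quasi-static fields carried in the payload.
observations = [
    {"mac": "6a:11:22:33:44:55", "battery": (80, 78), "lid_count": 41},
    {"mac": "7b:66:77:88:99:aa", "battery": (80, 78), "lid_count": 41},  # same device, new MAC
    {"mac": "5c:de:ad:be:ef:01", "battery": (35, 33), "lid_count": 12},
]

# Group sightings by the payload fingerprint rather than by MAC address.
fingerprints = defaultdict(set)
for obs in observations:
    fingerprints[(obs["battery"], obs["lid_count"])].add(obs["mac"])

for fp, macs in fingerprints.items():
    if len(macs) > 1:
        print(f"fingerprint {fp} links randomized addresses: {sorted(macs)}")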

Linking devices belonging to the same iCloud account

We discovered that it is possible to link together devices associated with the same iCloud account. This attack relies on the replay of messages that trigger a response only from devices associated with the same iCloud account. An attacker could exploit this to identify all the devices belonging to a person, and could locate that person's home if some devices are left there.

Monitoring activities in a smart home (HomeKit)

We found that messages emitted by HomeKit-compatible devices can betray the activity in a smart home. HomeKit is a smart-home framework developed by Apple and found in devices from Apple and other vendors (…). HomeKit devices using BLE continuously emit messages that include an indicator reflecting the device state. For instance, in the case of a light bulb, this indicator changes only when the bulb is turned on or off. Similarly, in an infrared movement detector, the indicator changes only when a person crosses the detection field. In-lab experiments showed that a passive attacker can leverage HomeKit BLE messages to track the state evolution of devices in a household and thus monitor the activities of the occupants.
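The following minimal Python sketch (with hypothetical device identifiers and indicator values) shows how little work a passive observer needs: it is enough to watch for changes in the broadcast state indicator.

import time

last_state = {}

def on_advertisement(device_id: str, state_indicator: int):
    """Log a state transition whenever a device's indicator changes."""
    previous = last_state.get(device_id)
    if previous is not None and previous != state_indicator:
        print(f"{time.strftime('%H:%M:%S')} {device_id}: "
              f"state changed {previous} -> {state_indicator}")
    last_state[device_id] = state_indicator

# Synthetic trace: a light bulb turning on, then a motion sensor firing.
on_advertisement("lightbulb-1", 7)
on_advertisement("lightbulb-1", 7)   # no change: bulb still off
on_advertisement("lightbulb-1", 8)   # change: bulb switched on
on_advertisement("motion-1", 3)
on_advertisement("motion-1", 4)      # change: movement detected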

Device model, software version and more

We found that a number of messages expose a wide variety of information on the emitting device characteristics and state: device model, OS version, device color, cellular connectivity, battery level, current activity etc.

Email addresses and phone numbers (AirDrop & Nearby)

We found that when using features such as AirDrop and Nearby, devices emit messages from which email addresses and phone numbers can be extracted. Continuity services allow users to seamlessly share resources with nearby devices: AirDrop to share files, Nearby to share Wi-Fi network credentials. Prior to exchanging information, the devices establish their identity by exchanging identifiers over BLE: email addresses and/or phone numbers. These identifiers are not sent in the clear but are instead hashed with a cryptographic hash function. This obfuscation can be bypassed in most cases and the identifiers recovered.
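Here is a minimal Python sketch of the recovery step. The truncated SHA-256 digest and the number format are assumptions made for illustration; the point is that the space of valid phone numbers is small enough to enumerate exhaustively.

import hashlib

def short_hash(phone: str) -> bytes:
    """Hypothetical obfuscation: a truncated SHA-256 digest of the number."""
    return hashlib.sha256(phone.encode()).digest()[:3]

target = short_hash("+33612345678")  # what the eavesdropper observed over BLE

# Enumerate candidate numbers. A real attack would cover the full national
# numbering plan; we scan a small synthetic range for illustration.
for suffix in range(12340000, 12350000):
    candidate = f"+336{suffix:08d}"
    if short_hash(candidate) == target:
        print("recovered:", candidate)
        break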

Voice assistant commands (Siri)

We found that when activated via voice, the Siri voice assistant will generate a message including a digital fingerprint of the command. Although the raw audio signal cannot be reconstructed from it, the fingerprint could be leveraged to infer the command.

Responsible disclosure

The vulnerabilities identified were reported to Apple, Osram and Eve on May 29, 2019.

Acknowledgments

This work was supported by the INSA Lyon – SPIE ICS IoT chair and the H2020 SPARTA Cybersecurity Competence Network project.

Paper

The corresponding research paper, Discontinued Privacy: Personal Data Leaks in Apple Bluetooth-Low-Energy Continuity Protocols, will be presented at the 20th Privacy Enhancing Technologies Symposium (PETS 2020) on 14-18 July 2020 in Montreal, Canada.

APA style citation and bibtex entry

You can use the following APA style citation or bibtex entry to reference our paper:

Celosia, G., & Cunche, M. (2020). Discontinued Privacy: Personal Data Leaks in Apple Bluetooth-Low-Energy Continuity Protocols. Proceedings on Privacy Enhancing Technologies, 2020(1), 26-46. De Gruyter Open.

@article{celosia2020close,
    title={Discontinued Privacy: Personal Data Leaks in Apple Bluetooth-Low-Energy Continuity Protocols},
    author={Celosia, Guillaume and Cunche, Mathieu},
    journal={Proceedings on Privacy Enhancing Technologies},
    volume={2020},
    number={1},
    pages={26--46},
    year={2020},
    publisher={De Gruyter Open}
}


Interview with Daniel Le Métayer: Ces algorithmes dont on ne sait rien alors qu'ils régissent nos vies
https://team.inria.fr/privatics/interview-daniel-le-metayer-ces-algorithmes-dont-on-ne-sait-rien-alors-quils-regissent-nos-vies/
Mon, 30 Sep 2019

Interview with Daniel Le Métayer on the transparency of algorithms:

Ces algorithmes dont on ne sait rien alors qu'ils régissent nos vies ("These algorithms we know nothing about even though they govern our lives")

Our report for the European Parliament on the "benefits and risks of Algorithmic Decision Systems (ADS)" is available online
https://team.inria.fr/privatics/our-report-for-the-european-parliement-on-the-benefits-and-risks-of-algorithmic-decision-systems-ads-is-available-online/
Mon, 01 Apr 2019

Our report for the European Parliament on the "benefits and risks of Algorithmic Decision Systems (ADS)" is finally available online here: Understanding algorithmic decision-making: Opportunities and challenges


Manipulation informationnelle et psychologique (Informational and psychological manipulation)
https://team.inria.fr/privatics/manipulation-informationnelle-et-psychologique/
Fri, 11 May 2018

Article by Claude Castelluccia on manipulation, published on Le Monde's Binaire blog: here

New associated team with UQAM (Sébastien Gambs)
https://team.inria.fr/privatics/new-associated-team-with-uqam-sebastien-gambs/
Fri, 30 Mar 2018

The accelerated growth of the Internet has outpaced our ability as individuals to maintain control over our personal data. The recent advent of personalized services has led to the massive collection of personal data and the construction of detailed profiles of users. However, users have no information about the data that constitute their profiles or how those data are exploited by the different entities (Internet companies, telecom operators, …). This lack of transparency gives rise to ethical issues such as discrimination or unfair processing. In this associate team, we propose to strengthen the complementarity and the current collaborations between the Inria Privatics group and UQAM to advance research and understanding of data and algorithmic transparency and accountability.

Opening of the MOOC "Protection de la vie privée dans le monde numérique"
https://team.inria.fr/privatics/ouverture-du-mooc-protection-de-la-vie-privee-numerique/
Fri, 26 Jan 2018

January 29, 2018: opening of the MOOC "Protection de la vie privée dans le monde numérique" (Protecting privacy in the digital world), by Cédric Lauradoux and Vincent Roca.

Freely accessible on the FUN platform from January 29 to March 18, this MOOC will cover the notion of personal data and the associated legislation, the protection of one's data and digital identity, the privacy risks associated with smartphone use, and finally the protection of one's email.

https://www.fun-mooc.fr/courses/course-v1:inria+41015+session01/

The Pitfalls of Hashing for Privacy
https://team.inria.fr/privatics/the-pitfalls-of-hashing-for-privacy/
Tue, 16 Jan 2018

Boosted by recent legislation, data anonymization is fast becoming a norm. However, no generic solution has yet been found to safely release data. As a consequence, data custodians often resort to ad hoc means to anonymize datasets. Both past and current practices indicate that hashing is often believed to be an effective way to anonymize data. Unfortunately, in practice it is only rarely effective. In [2], we expose the limits of cryptographic hash functions as an anonymization technique. The anonymity set is the best privacy model that can be achieved by hash functions; however, this model has several shortcomings. We provide three case studies to illustrate how hashing only yields weakly anonymized data. The case studies include MAC and email address anonymization as well as the analysis of Google Safe Browsing.
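As an illustration of the MAC address case, the following Python sketch inverts a hashed address by brute force. Since the vendor prefix (OUI) is usually known or guessable, only the 2^24 device suffixes of that vendor remain to enumerate; the OUI and target value below are synthetic, and only a tiny slice of the space is scanned for brevity.

import hashlib

def hash_mac(mac: str) -> str:
    """The naive anonymization under attack: a plain SHA-256 of the address."""
    return hashlib.sha256(mac.encode()).hexdigest()

OUI = "00:1a:2b"                         # assumed known vendor prefix
target = hash_mac("00:1a:2b:00:00:2a")   # the "anonymized" value in the dataset

# A real attack enumerates all 2**24 suffixes; we scan 256 for brevity.
for suffix in range(256):
    candidate = f"{OUI}:00:00:{suffix:02x}"
    if hash_mac(candidate) == target:
        print("de-anonymized MAC:", candidate)
        break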

Differentially Private Mixture of Generative Neural Networks
https://team.inria.fr/privatics/differentially-private-mixture-of-generative-neural-networks/
Tue, 16 Jan 2018

Generative models are used in a wide range of applications building on large amounts of contextually rich information. However, due to possible privacy violations of the individuals whose data is used to train these models, publishing or sharing generative models is not always viable. In [4], we develop a novel technique for privately releasing generative models and entire high-dimensional datasets produced by these models. We model the generator distribution of the training data with a mixture of k generative neural networks. These are trained together and collectively learn the generator distribution of the dataset. The data is divided into k clusters using a novel differentially private kernel k-means; each cluster is then given to a separate generative neural network, such as a Restricted Boltzmann Machine or a Variational Autoencoder, which is trained only on its own cluster using differentially private gradient descent. We evaluate our approach on the MNIST dataset, as well as on call detail records and transit datasets, showing that it produces realistic synthetic samples that can also be used to accurately answer an arbitrary number of counting queries.
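The sketch below illustrates where differential privacy enters the clustering step. It is a simplified stand-in: plain k-means with Laplace noise added to the per-cluster counts and sums, whereas the paper uses a kernel k-means variant; the privacy budget split and the assumption that the data lies in [0, 1]^d are choices made for this illustration only.

import numpy as np

rng = np.random.default_rng(0)

def dp_kmeans(X, k, epsilon, iters=10):
    """Plain k-means with Laplace noise on per-cluster counts and sums."""
    n, d = X.shape
    eps_step = epsilon / (2 * iters)  # budget per noisy quantity per iteration
    centroids = X[rng.choice(n, size=k, replace=False)].copy()
    labels = np.zeros(n, dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            # Noisy count (L1 sensitivity 1) and noisy sum (sensitivity d,
            # since the data is assumed to be scaled into [0, 1]^d).
            count = members.shape[0] + rng.laplace(0, 1 / eps_step)
            total = members.sum(axis=0) + rng.laplace(0, d / eps_step, size=d)
            if count > 1:
                centroids[j] = np.clip(total / count, 0.0, 1.0)
    return centroids, labels

# Two synthetic clusters in [0, 1]^2; in the full pipeline each cluster would
# then train its own generative model with differentially private gradients.
X = np.clip(np.vstack([rng.normal(0.25, 0.05, (100, 2)),
                       rng.normal(0.75, 0.05, (100, 2))]), 0.0, 1.0)
centroids, labels = dp_kmeans(X, k=2, epsilon=1.0)
print("noisy centroids:\n", centroids)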

A refinement approach for the reuse of privacy risk analysis results
https://team.inria.fr/privatics/a-refinement-approach-for-the-reuse-of-privacy-risk-analysis-results/
Tue, 16 Jan 2018

With the adoption of the EU General Data Protection Regulation (GDPR), conducting a data protection impact assessment will become mandatory for certain categories of personal data processing. A large body of literature has been devoted to data protection impact assessment and privacy impact assessment. However, most of these papers focus on legal and organizational aspects and do not provide many details on the technical aspects of the impact assessment, which may be challenging and time consuming in practice. The general objective of [10] was to fill this gap and to propose a methodology that can be applied to conduct a privacy risk analysis in a systematic way, to use its results in the architecture selection process (following the privacy-by-design approach), and to reuse its generic part for different products or deployment contexts. The proposed analysis proceeds in three broad phases: (1) a generic privacy risk analysis phase, which depends only on the specifications of the system and yields generic harm trees; (2) an architecture-based privacy risk analysis, which takes into account the definitions of the possible architectures of the system and refines the generic harm trees into architecture-specific harm trees; and (3) a context-based privacy risk analysis, which takes into account the context of deployment of the system (e.g., a casino, an office cafeteria, a school) and further refines the architecture-specific harm trees into context-specific harm trees. Context-specific harm trees can then be used to make decisions about the most suitable architectures.
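A minimal Python sketch of the refinement idea, with harm trees modeled as nested dictionaries. The harm, components, architecture and context below are hypothetical; the point is that each phase merely specializes the tree produced by the previous one instead of restarting the analysis.

# Phase 1 output: a generic harm tree derived from the system specification.
generic_tree = {
    "harm": "unauthorized disclosure of biometric data",
    "causes": [
        {"event": "exploitation of stored templates", "component": "storage"},
        {"event": "interception in transit", "component": "link"},
    ],
}

def refine(tree, applicable):
    """Keep only the branches relevant to the chosen architecture or context."""
    return {
        "harm": tree["harm"],
        "causes": [c for c in tree["causes"] if applicable(c)],
    }

# Phase 2: an architecture with local-only storage has no network link.
arch_tree = refine(generic_tree, lambda c: c["component"] != "link")

# Phase 3: in a given deployment context (e.g., an office cafeteria), assume
# the storage component is physically guarded, pruning that branch too.
context_tree = refine(arch_tree, lambda c: c["component"] != "storage")

print(arch_tree)
print(context_tree)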

Workshop on data transparency, April 23 2018, Lyon
https://team.inria.fr/privatics/workshop-on-data-transparency-april-23-2018-lyon/
Thu, 11 Jan 2018

We are organizing a workshop on data transparency in Lyon on April 23. More information here.
