Overall objectives
Kairos's ambition is to address the Design of Cyber-Physical Systems (CPS), at various stages, using Model-Based techniques and Formal Methods. Design here stands for co-modeling, co-simulation, formal verification, and analysis activities, with connections both ways between models and code (synthesis, and instrumentation for optimization). Formal analysis, in turn, concerns both functional and extra-functional correctness properties. Our goal is to link these design stages together, both vertically along the development cycle, and horizontally by considering the interactions between cyber/digital and physical models. The physical aspects comprise both physical environments and physical execution platform representations, which may become rather heterogeneous, as in the case of the Internet of Things (IoT) and of edge computing on gateways. The resulting global methodology can be tagged as Model-Based, Platform-Based CPS Design, see Figure 1.
CPS design must take into account all three aspects (application requirements, execution platform guarantees, and the contextual physical environment) to establish both functional and temporal correctness. The general objective of Kairos is thus to contribute to the definition of a corresponding design methodology, based on formal Models of Computation for the joint modeling of cyber and physical aspects, and using the central concept of Logical Time to express the requirements and guarantees that define CPS constraints.
Logical Multiform Time. It is useful to introduce and motivate the notion of Logical Multiform Time (and of Logical Clocks), as they play a central role in our approach to Design. We call a Logical Clock any repetitive sequence of occurrences of an event (disregarding possible values carried by the event). It can be regularly linked to physical time (periodic), but not necessarily so: fancy processors may change speeds, simulation engines may change their time-integration steps, or, much more generally, one may react to event-driven triggers of complex logical nature (do this after 3 times that unless this…). It is our belief that user specifications are generally expressed using such notions, with only partial timing correlations between distinct logical clocks, so that the realization process (or “model-based compilation”) consists in part in establishing (by analysis or abstract simulation) tighter relations between those clocks (unifying them from a partial order of local total orders into a global total order).
In the past, Kairos defined a small language of primitives expressing recognized constraints that structure the relations between distinct logical clocks 1, 7. This language (named CCSL, for Clock Constraint Specification Language) borrows altogether from notions of Synchronous Reactive Languages 9, Real-Time Scheduling Theory, and concurrent Models of Computation and Communication (MoCCs) in Concurrency Theory 8. Corresponding extensions of Timed Models originally based on a single (discrete or continuous) time can also be considered. Logical Time is used in our approach to express relation constraints between heterogeneous models, of cyber or physical origin, and to support analysis and co-simulation. Addressing cyber-physical systems demands revisiting logical time to deal with multi-physical and sometimes uncertain environments.
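To make the flavour of such clock constraints concrete, here is a minimal Python sketch (not CCSL's concrete syntax or tooling, and with illustrative clock names): logical clocks are represented as finite tick sequences over a common step index, and two typical relations, precedence and alternation, are checked against observed traces.

```python
# Minimal illustration (not CCSL itself): logical clocks as finite tick
# sequences over a common step index, plus two typical constraint checks.

def precedes(a, b):
    """Every tick of b must be matched by an earlier-or-equal distinct tick of a
    (a counting interpretation of 'a precedes b')."""
    count_a = count_b = 0
    for step_a, step_b in zip(a, b):
        count_a += step_a
        count_b += step_b
        if count_b > count_a:
            return False
    return True

def alternates(a, b):
    """Ticks of a and b must strictly alternate, starting with a."""
    count_a = count_b = 0
    for step_a, step_b in zip(a, b):
        count_a += step_a
        count_b += step_b
        if not (count_b <= count_a <= count_b + 1):
            return False
    return True

# Two logical clocks observed over 8 reaction steps (1 = tick, 0 = silent).
sensor   = [1, 0, 1, 0, 1, 0, 1, 0]
actuator = [0, 1, 0, 0, 1, 0, 0, 1]

print(precedes(sensor, actuator))    # True: actuator never overtakes sensor
print(alternates(sensor, actuator))  # False: sensor ticks twice in a row without actuator
```

In CCSL proper, such relations are stated declaratively over possibly unbounded clocks and analyzed or simulated by dedicated tools, rather than checked on finite traces as above.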
Kairos is also active in standardisation activities for the above-mentioned Cyber-Physical Systems, for the Internet of Things, and for Smart Contract languages for electronic ledgers.
Last activity report: 2024
Results
New results
Logical Time for safety requirements in safety-critical systems
One of the main goals of Kairos is to show how logical time gives a flexible model of time that can be progressively refined from requirements down to the embedded code.
In 13, we show that we can detect inconsistencies in formal requirements by detecting bad patterns in the causality clock graphs. Contrary to our previous works, efficiency does not come from a detailed analysis of the semantic models; instead, we use the topology of the requirement models to filter out bad requirements. This was done in collaboration with our Chinese partners, mostly in the context of our associated team Plot4IoT (see the collaboration section).
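As an illustration of the kind of topological pattern involved (a simplified sketch, not the actual analysis of 13, with hypothetical clock names): requirements induce a directed causality graph between clocks, and any strict precedence edge lying on a cycle is unsatisfiable, so it can be flagged from the graph alone, without exploring the semantic state space.

```python
# Illustrative sketch: a strict precedence edge that lies on a cycle of the
# causality clock graph can never be satisfied, so its topology alone flags
# the offending requirements.

def reachable(graph, src, dst):
    """Depth-first reachability in the causality graph."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(succ for succ, _ in graph.get(node, []))
    return False

def inconsistent_edges(edges):
    """edges: (src, dst, strict) triples meaning 'src precedes dst' (strictly or not).
    Returns the strict edges that lie on a cycle, i.e. unsatisfiable requirements."""
    graph = {}
    for src, dst, strict in edges:
        graph.setdefault(src, []).append((dst, strict))
    return [(src, dst) for src, dst, strict in edges
            if strict and reachable(graph, dst, src)]

# "request strictly precedes grant", "grant strictly precedes release",
# "release causes (non-strictly precedes) request": the cycle is inconsistent.
requirements = [("request", "grant", True),
                ("grant", "release", True),
                ("release", "request", False)]
print(inconsistent_edges(requirements))  # [('request', 'grant'), ('grant', 'release')]
```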
In 15, we pursued our previous work on dynamic logic. The idea is to deal jointly with the properties and the reactive program, rather than only considering the properties to be verified. We use CCSL to capture the properties and a reactive language inspired by Esterel for the program. In this context, because of decidability issues, the proofs are not fully automatic; we are looking for ways to reduce the burden of the manual proofs.
Finally, in order to get closer to requirements expressed in natural language, we have, in the context of the PhD of Pavlo Tokariev, proposed syntactic extensions to CCSL. This new language, called MRT-CCSL 21, provides modular extensions as well as native constructs to deal with real-time requirements. Pavlo defended his PhD on December 13th, showing how these syntactic extensions open the path to new semantic models based on abstract interpretation and polyhedral libraries, making some of the analyses tractable, even though not very efficient 26.
Safety rules for autonomous driving
We addressed the formal modeling of automotive driving Safety Rules in a prior PhD thesis 35. In the current PhD work of Maksym Labzhaniia, we are revisiting this language in light of logical multi-dimensionality, particularly with respect to time and space (interconnected through speed). While we have published initial results on the formal spatio-temporal framework in 16, we are now exploring how this framework can be integrated with a Domain-Specific Language (DSL) on the one hand, and with execution/simulation traces on the other. The ultimate goal is to develop a DSL through which scenarios can be validated, either offline or online, serving as an AI safety check.
Cyber Physical Systems co-Modeling and co-Simulation
This research study illustrates how we leveraged lessons learned from previous work to make explicit the semantic relations between models of different natures. This year, the consideration of both simulation and operational model behaviors led us into the Digital Twin landscape, where three scientific questions have been addressed.
This activity may eventually merge with the one from Section 8.2, since the goal is to consider heterogeneous models from different system life-cycle activities.
Multi-core parallelization of safety-critical real-time applications
This work involved a collaboration with Airbus and the Inria Parkas team (Tim Bourke).
We have continued our collaboration with Airbus and Safran aiming at safe and efficient multi-core implementation of avionics applications. We have designed and implemented optimal synchronization protocols for the generated code, further advanced in modeling and reducing inter-core interferences, further optimized the scheduling heuristics, and integrated our Lopht multi-core parallelizing back-end with the pressail front-end provided by the Inria Parkas team.
Results: the Lopht software; direct financing by Airbus.
Bidirectional Reactive Programming for Machine Learning
This is a collaboration with Google DeepMind (Albert Cohen, Gordon Plotkin).
While Machine Learning (ML) algorithms are typically specified in Python, using libraries with rapidly-evolving syntax and semantics, their informal description follows a dataflow reactive form. Our contribution here is threefold:
- We determined that the four primitive statements of conventional dataflow reactive languages allow the representation of the control aspects of all layers and models (in both inference and training form), as long as they do not involve bidirectional propagation or batch normalization. The same primitives also allow representing the top-level drivers of supervised training and of reinforcement learning (RL) agents, and allow seamless integration with data pre- and post-processing. This grand unification paves the way for simpler, clearer specification and more efficient compilation of ML-based applications.
- We extended dataflow reactive programming with recurrences backward in time, which are needed to represent bidirectional layers (such as transformers), batch normalization, or the training of stateful networks by means of back-propagation through time. The resulting language allows the representation of all ML models working on data organized along a time dimension. Our extension preserves the bounded time and memory guarantees of reactive languages whenever recurrences backwards in time have a bounded horizon.
- We provided the formal semantics of this language. As recurrences backwards in time allow for causality cycles involving multiple time indices, we determined that, to ensure productivity (i.e., the absence of deadlocks), chains of backward dependences must always eventually terminate, so that execution can be implemented as globally forwards and locally backwards with finite buffering (see the sketch after this list).
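To illustrate the "globally forwards, locally backwards" execution scheme (a minimal sketch under our own simplifying assumptions, not the language or compiler of 22): a backward recurrence whose horizon is bounded by H can be evaluated over an unbounded input stream with a buffer of H + 1 values and an output delay of H steps, preserving bounded memory.

```python
# Minimal sketch: a backward recurrence with a bounded horizon H is executed
# "globally forwards, locally backwards" by buffering at most H + 1 stream
# values, so memory stays bounded even on an unbounded input stream.

from collections import deque

def backward_sum(xs, horizon):
    """y[n] = x[n] + x[n+1] + ... + x[n+horizon], emitted with a delay of
    `horizon` steps, using only a finite buffer."""
    buffer = deque(maxlen=horizon + 1)
    for x in xs:                        # global forward traversal of the stream
        buffer.append(x)
        if len(buffer) == horizon + 1:  # enough of the "future" is known:
            yield sum(buffer)           # resolve the recurrence locally backwards
        # the oldest element is dropped automatically by maxlen on the next append

print(list(backward_sum([1, 2, 3, 4, 5], horizon=2)))  # [6, 9, 12]
```

With an unbounded horizon, by contrast, no finite buffer suffices, which is why productivity requires chains of backward dependences to eventually terminate.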
The current results of this collaboration are the language compiler (not yet distributed) and a publication 22.
Beyond the scientific interest of the formal unification of ML programming around a simple and formally defined language, this project may have significant real-life applications in the field of embedded AI, to allow the automatic implementation of advanced AI algorithms (e.g. advanced RL-based motion planning algorithms for autonomous driving). We are exploring this potential through interactions with the ASTRA joint Inria/Valeo team.
Compositional Analysis of Resource Usage in Software Defined Vehicle
In a previous project with the Institut de Recherche Technologique (IRT) Saint-Exupery, and in connection with the local R&D labs of Renault Software Factory and Thales Alenia Space, we considered model-based formal engineering of Interface Description Languages (IDL) to support logical time annotations and abstract temporized functional behavior representations. In this previous work, a very pragmatic simulator was developed. In this simulator, applications are characterized by a temporal profile of their resource usage, resources are characterized by the number of simultaneous accesses they support, and the result is an evaluation, for each application, of the time spent waiting for a resource.
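The following Python sketch, with hypothetical application and resource names, illustrates the kind of computation such a simulator performs; it is not the IRT tool itself. Each application follows a temporal profile of resource requests, each resource grants a bounded number of simultaneous accesses per step, and the output is the waiting time accumulated by each application.

```python
# Illustrative sketch of the wait-time evaluation: at each step, an application
# either progresses (its requested resource has a free slot, or it requests
# nothing) or waits (the resource is saturated).

def simulate(profiles, capacities):
    """profiles: {app: [resource-or-None per step]};
    capacities: {resource: max simultaneous accesses}.
    Returns {app: total number of steps spent waiting}."""
    position = {app: 0 for app in profiles}
    waiting = {app: 0 for app in profiles}
    while any(position[app] < len(prof) for app, prof in profiles.items()):
        granted = {res: 0 for res in capacities}
        for app, prof in profiles.items():          # fixed priority = dict order
            if position[app] >= len(prof):
                continue                             # this application is done
            res = prof[position[app]]
            if res is None or granted[res] < capacities[res]:
                if res is not None:
                    granted[res] += 1
                position[app] += 1                   # progresses during this step
            else:
                waiting[app] += 1                    # blocked on a saturated resource
    return waiting

profiles = {"app1": ["bus", "bus", None],
            "app2": ["bus", None, "bus"],
            "app3": ["bus", "bus", "bus"]}
print(simulate(profiles, {"bus": 2}))  # {'app1': 0, 'app2': 0, 'app3': 1}
```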
This year, we started generalizing the previous concepts to estimate, at the system engineering level, the resource usage and potential interferences implied by the deployment of new services in a Software Defined Vehicle. This work is developed in the context of the HAL4SDV European project (see Section 10.3.1).
This activity may eventually merge with the one from Section 8.1 since the goal is to provide compositional analysis means for temporal contracts and their extensions.
Trustworthy Fleet deployment and management
This activity is a follow-up of Nicolas Ferry's previous activities in the ENACT H2020 project, now renewed since his arrival in Kairos within the DYNABIC HE project (see Section 10.3.2). Continuous and automatic software deployment is still an open question for IoT systems, especially at the Edge and IoT ends. State-of-the-art Infrastructure as Code (IaC) solutions rely on a clear specification of which part of the software goes to which type of resource. This is based on the assumption that, in the Cloud, one can always obtain exactly the computing resources required. However, this assumption does not hold at the Edge and IoT levels. In production, IoT systems typically contain hundreds or thousands of heterogeneous and distributed devices (also known as a fleet of IoT/Edge devices), each of which has a unique context, and whose connectivity and quality are not always guaranteed. In ENACT, we investigated both the challenge of automating the deployment of software on heterogeneous devices and that of managing variants of the software that fit the different types or contexts of Edge and IoT devices in the fleet. In 2022, GeneSIS (a part of the aforementioned results) was recognized by the EC innovation radar as highly innovative with a high maturity level. The natural next step is to investigate how to guarantee the trustworthiness of the deployment when (i) the quality of the devices is not guaranteed, (ii) the context of each device is continuously changing in an unanticipated manner, and (iii) software components evolve frequently across the whole software stack of each device. In such a context, ensuring the proper ordering and synchronization of the deployment actions is critical to improving quality and trustworthiness and to minimizing downtime.
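A minimal sketch of the ordering concern mentioned above (hypothetical action names; this is not the GeneSIS implementation): deployment actions and their dependencies form a directed acyclic graph, and any valid deployment plan must follow one of its topological orders, so that a component is never started before the components and device setup it relies on.

```python
# Illustrative sketch: order fleet deployment actions by a topological sort of
# their dependency graph (Python 3.9+ standard library).

from graphlib import TopologicalSorter

def deployment_plan(actions):
    """actions: {action: set of actions it depends on}. Returns one valid order."""
    return list(TopologicalSorter(actions).static_order())

actions = {
    "start_sensor_driver":  {"flash_firmware"},
    "flash_firmware":       set(),
    "start_edge_gateway":   {"install_runtime"},
    "install_runtime":      set(),
    "connect_sensor_to_gw": {"start_sensor_driver", "start_edge_gateway"},
}
print(deployment_plan(actions))
# e.g. ['flash_firmware', 'install_runtime', 'start_sensor_driver',
#       'start_edge_gateway', 'connect_sensor_to_gw']
```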
Security and Resilience of Cyber Physical Systems
This activity is conducted in the context of the DYNABIC HE project (see Section 10.3.2), in connection with SINTEF and Montimage. The objective of the project is to increase the resilience and business continuity capabilities of European critical services in the face of advanced cyber-physical threats. In this context, we investigated, in particular, a behavioral drift analysis solution.
In the next stage, we plan to explore how the behavioral drift analysis solution can be integrated with state-of-the-art security monitoring systems. This integration aims to enhance the understanding and detection of security attacks, as well as their root causes.
Performance evaluation in the ETSI oneM2M standard
The ETSI Testing Task Force (TTF 019) project aims to evaluate a oneM2M-based IoT solution with respect to different relevant Key Performance Indicators. Building on the previous work done in 2023 47, 38, we completed the project in 2024 with the following outputs:
- We experimented with a profiler tool developed for that purpose. The profiler is a standalone piece of software to be run together with a real open-source oneM2M implementation, covering everything from scripts creating oneM2M “bursts” up to a real case study, as in 34.
- We experimented with a simulation tool developed for that purpose. The simulation tool is an OMNeT++ library implementing both the deployment model and the case studies, as in 33; a paper was also published on this topic 17.
- We wrote deployment guidelines and good practices in order to help the open-source communities around oneM2M take full advantage of our performance evaluation study, as in 31.
Raising a contact tracing standard to a forecast standard
In the recent past, we standardized 44 a novel contact tracing protocol, called Asynchronous Contact Tracing (ACT), also building on our previous experience with structured overlay networks 39, 41, 46. ACT traces the presence of the Covid-19 virus via IoT-connected sensors and makes this information available anonymously. This year, we extended ACT in order to add a global detection and inter-communication IoT network focused on rapid response to bio-emergencies. This would potentially change the current services already deployed by the National Public Health Institutes to monitor, alert and advise political decision makers, public and military services, environmental institutes and citizens.
This extension could promote an ETSI European standard capable of real-time “forecasting” of any kind of pathogen or pollution for people in the EU space, in an anonymous, resilient, and secure way; the standard should be sufficiently flexible and customizable nation by nation, according to each nation's specific laws.
We also made significant advances on a proof of concept: the sources of the web application (front-end and back-end) and of the running Android app are fully available on the Inria GitLab, while the web front-end can now be run at act.inria.fr. These results will be pursued in 2025 and 2026, in the setting of a new Specialist Task Force (STF 697) funded by ETSI.
Smart Contract Standards
In the context of the ETSI Specialist Task Force (STF 655) project, we started in 2024 a standardisation activity on smart contracts, aiming specifically to support the EU Data Act and the eIDAS2 proposal directives. This year, the output is a scoping study 32 containing the main principle of an original “chain of trust” of electronic signatures guaranteeing that every smart contract deployed on an electronic ledger can be legally trusted.
As a student research project, we are currently designing and developing an experimental blockchain allowing dynamic method override in smart contract languages, together with the above-mentioned chain of trust, following the research line suggested in 37.
In 2025, we will continue this project, working on the use of EU Digital Identity Wallets and electronic signatures for identification with smart contracts, and on policy and security requirements for ledgers with smart contracts as a trust service.
Strong priority and determinacy in timed CCS
Building on the classical theory of process algebra with priorities, we identify a new scheduling mechanism, called sequentially constructive reduction, which is designed to capture the essence of synchronous programming. The distinctive property of this evaluation strategy is to achieve determinism-by-construction for multi-cast concurrent communication. In particular, it permits us to model shared-memory multi-threading with reaction to absence, as it lies at the core of the programming language Esterel.
In the technical setting of CCS extended with clocks and priorities, we prove, for a large class of processes which we call structurally coherent, the Church-Rosser confluence property for constructive reductions. We further show that, under some syntactic restrictions, called pivotable, the operators of prefix, summation, parallel composition, restriction and hiding preserve structural coherence. This covers a strictly larger class of processes than those that are confluent in Milner’s classical theory of CCS without priorities 30.
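For orientation, the confluence property has the standard Church-Rosser shape below, stated here for a generic reduction relation; the precise formulation in 30 is over CCS extended with clocks and priorities, restricted to structurally coherent processes and constructive reductions.

```latex
% Standard shape of the Church-Rosser (confluence) property, where \rightarrow
% stands for the (constructive) reduction relation:
\[
  P \rightarrow^{*} Q_1 \;\wedge\; P \rightarrow^{*} Q_2
  \;\Longrightarrow\;
  \exists R.\; Q_1 \rightarrow^{*} R \;\wedge\; Q_2 \rightarrow^{*} R
\]
```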
A middleware for the experimentation on IoT mobile networks
Idawi is a middleware for distributed computing on large, mobile, heterogeneous, and dynamic networks 28, developed within the Kairos and COATI teams since 2021. It aims at providing research communities with an experimentation tool for distributed computing at the Edge, in the IoT, in mobile ad hoc networks, etc.
In 2024, we pursued our work on the use of the concept of a “digital twin” for the decentralized management of large mobile heterogeneous networks, at the middleware level 29. In practice, we have implemented in the Idawi middleware a specific (node-local) service hosting and exposing a digital twin of the surrounding network. The nodes within this digital twin (representing the network) are themselves digital twins of their physical counterpart nodes.
This composite digital twin is meant to be used by other services/applications to 1) discover the routes and resources available in the physical network, and 2) simulate a distributed process. Simulation can be performed by any node, on the basis of its local knowledge of the environment (the network). It makes it possible to predict the behavior of complex distributed processes before they are launched on the real system. Simulation results can then be used to drive the node's behavior on the fly.
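The sketch below (in Python, whereas Idawi itself is a Java middleware; class and method names are hypothetical) illustrates the idea: a node feeds a local digital twin of the network it observes, then runs a simulated flooding broadcast on that twin to predict coverage and hop count before touching the real system.

```python
# Language-agnostic sketch of the node-local digital twin idea (not Idawi's API).

class NetworkTwin:
    """Node-local model of the surrounding network: twin nodes and their links."""
    def __init__(self):
        self.links = {}                        # node -> set of neighbour nodes

    def observe_link(self, a, b):              # fed by local discovery/monitoring
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def simulate_broadcast(self, source):
        """Predict which nodes a flooding broadcast reaches and in how many hops,
        using only the local knowledge stored in the twin."""
        reached, frontier, hops = {source}, {source}, 0
        while frontier:
            frontier = {n for f in frontier for n in self.links.get(f, set())} - reached
            reached |= frontier
            hops += 1 if frontier else 0
        return reached, hops

# A node builds its twin from what it has observed locally...
twin = NetworkTwin()
for a, b in [("gw", "sensor1"), ("gw", "sensor2"), ("sensor2", "sensor3")]:
    twin.observe_link(a, b)

# ...then uses it to evaluate a broadcast from "gw" before launching it for real.
coverage, hops = twin.simulate_broadcast("gw")
print(len(coverage), "nodes reachable in", hops, "hops")  # 4 nodes reachable in 2 hops
```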
……….