Research


Overall objectives

Presentation

The technologies necessary for developing pervasive applications are now widely available and accessible for many uses: short/long-range and low-energy communications, a broad variety of visible (smart objects) or invisible (sensors and actuators) devices, and the democratization of the Internet of Things (IoT). Large areas of our living spaces are now instrumented. The concept of Smart Spaces is about to emerge, based on massive and apposite interactions between individuals and their everyday working and living environments: residential housing, public buildings, transportation, etc. The possibilities for new applications are boundless. Many scenarios have been studied in laboratories for years and, today, a real ability to adapt the environment to the behaviors and needs of users can be demonstrated. However, mainstream pervasive applications are barely existent, with the notable exception of ubiquitous GPS-based navigators. The opportunity to use the vast amounts of data collected from physical environments in many application domains remains largely untapped. Applications that interact with users and act on their environment with large autonomy are still very specialized: they can only be used in the environment they were specifically developed for (for example, “classical” home-automation tasks: comfort, entertainment, surveillance). They are difficult to adapt to increasingly complex situations, even as the environments in which they evolve become more open or change over time (new sensors added, failures, mobility, etc.).

Developing applications and services that are ready to deploy and evolve in different environments should bring significant cost reductions. Unfortunately, designing, testing, maintaining and evolving a pervasive application remain very complex tasks. In our view, a major concern is the lack of means by which properties of the real environment are made available to application developers. Building a pervasive application involves implementing one or more logical control loops comprising four stages (see figure 1-a): (1) data collection in the real environment, (2) the (re)construction of information that is meaningful for the application and (3) for decision making, and finally (4) action within the environment. While many decision algorithms have been proposed, collecting and constructing a reliable, relevant perception of the environment and, in return, acting within the environment still pose major challenges that the Tacoma/Ease project is prepared to tackle.

Most current solutions are based on a massive collection of raw data from the environment, stored on remote servers. Figure 1-a illustrates this type of approach. Exposing raw sensor values to the decision-making process is not sufficient to build the relevant contexts that a pervasive application actually needs in order to act and react shrewdly to changes in the environment. The following is therefore left up to the developer:

  • To characterize raw data more finely, beyond its mere value: for example, the acquisition date, the nature of the network links crossed to reach the sensor, and the durability and accuracy of the reading.
  • To exploit this raw data to compute an abstraction that is relevant for the application, such as whether a room is occupied, or whether two objects are in the same physical vicinity.
  • To modify the environment when possible.
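As a rough illustration of the first two points, raw values can be enriched with acquisition metadata before an application-level abstraction is computed from them. The sketch below is a minimal example with hypothetical names (`Reading`, `room_occupied`), not an actual Ease API:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """A raw sensor value enriched with acquisition metadata."""
    value: float
    age_s: float     # seconds elapsed since acquisition
    accuracy: float  # estimated error margin on the value
    hops: int        # network links crossed to reach the sensor

def room_occupied(motion_readings, max_age_s=60.0):
    """Abstraction built from raw data: the room counts as occupied
    if at least one sufficiently fresh motion reading is positive."""
    fresh = [r for r in motion_readings if r.age_s <= max_age_s]
    return any(r.value > 0 for r in fresh)

readings = [
    Reading(value=1.0, age_s=300.0, accuracy=0.1, hops=2),  # stale detection
    Reading(value=0.0, age_s=10.0, accuracy=0.1, hops=1),   # fresh, no motion
]
occupied = room_occupied(readings)  # stale positive is ignored -> False
```

The metadata carried by each reading is what lets the abstraction discard stale or low-quality values instead of trusting every raw number equally.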

Traditional software architectures isolate developers from the real environment, which they must model through complex, heavyweight and expensive processes. However, the objects and infrastructure integrated into user environments could provide much better support for pervasive applications: the description of the system’s actual state can be richer, more accurate and, at the same time, easier to handle; the applications’ structure can be distributed by being built directly into the environment, with processing autonomy favoring scalability and resilience; finally, moving processing closer to the edge of the network avoids the major data-sovereignty and privacy problems encountered in cloud-dependent infrastructures. We strongly believe in the advantages of approaches specific to edge and fog computing, which will reveal themselves with the development of Smart Spaces and the rapid growth of the number of connected objects. Indeed, ensuring the availability and reliability of systems that remain frugal in terms of resources will ultimately become a major challenge in keeping processing close to end users. Figure 1-b illustrates the principle of “using data at the best place for processing”: fine-grained decisions can be made closer to the objects that produce and act on the data, while local data characterization and local processing relieve the computing and storage resources of the cloud (which can be used, for example, to store selected/transformed data for global historical analysis or optimization).

Figure 1: Adaptation processes in pervasive environments

Ease aims to develop a comprehensive set of new interaction models and system architectures that considerably help pervasive application designers during the development phase, with the side benefit of easing life-cycle management. We follow two main principles:

  • By leveraging local properties and direct interactions between objects, we can enrich and locally manage the data produced in the environment. Applications can then build knowledge about their environment (perception) in order to adjust their behavior (e.g., the level of automation) to the actual situation.
  • Pervasive applications should be able to describe the requirements they place on the quality of their environmental perception. The required quality level can then be achieved by adapting the diversity of the sources (data fusion/aggregation), the network mechanisms used to collect the data (network/link level) and the production of the raw data (sensors).
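The second principle can be illustrated by a minimal sketch in which an application declares quality requirements on its perception and a collector selects the cheapest data sources that satisfy them. All names (`PerceptionRequirement`, `Source`, `select_sources`) are illustrative assumptions, not part of the project's software:

```python
from dataclasses import dataclass

@dataclass
class PerceptionRequirement:
    """Quality an application requires on its environmental perception
    (illustrative names, not an Ease API)."""
    max_age_s: float   # maximum acceptable data staleness
    max_error: float   # maximum acceptable measurement error

@dataclass
class Source:
    name: str
    age_s: float   # staleness of the data this source can provide
    error: float   # its measurement error
    cost: float    # e.g. energy cost of querying it

def select_sources(sources, req):
    """Cheapest-first selection of the sources meeting the requirement."""
    ok = [s for s in sources
          if s.age_s <= req.max_age_s and s.error <= req.max_error]
    return sorted(ok, key=lambda s: s.cost)

req = PerceptionRequirement(max_age_s=60.0, max_error=0.5)
srcs = [Source("a", 10.0, 0.2, 5.0),
        Source("b", 120.0, 0.1, 1.0),   # too stale, rejected
        Source("c", 30.0, 0.4, 2.0)]
chosen = select_sources(srcs, req)
```

In a real system the same requirement could also be pushed further down, to tune link-level collection or the sensors' own sampling rates.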

Last activity report: 2022

New results

Cooperation between automated vehicles

Industry 4.0 brings a strong digitalization of industrial processes, along with a significant increase in communication and cooperation between the machines involved. As centralization shows its limits, the Factory 4.0 context increasingly calls for decentralized solutions. One research area of Industry 4.0 is the use of automated guided vehicles (AGVs), autonomous industrial vehicles (AIVs) and other cooperative mobile robots, which are multiplying in factories, often as fleets of vehicles whose intelligence and autonomy keep increasing. While the autonomy of autonomous vehicles has been well characterized in the field of road transport, this is not the case for the autonomous vehicles used in industry. The establishment and deployment of AIV fleets raises several challenges, all of which depend on the actual level of autonomy of the AIVs: acceptance by employees, vehicle localization, traffic fluidity, collision detection, and vehicle perception of changing environments.

Simulation accounts for the constraints and requirements formulated by the manufacturers and future users of AIVs, and offers a good framework for studying solutions to these different challenges. We thus proposed an extension of a collision detection algorithm to deal with obstacle avoidance [1]; these conclusive simulations will allow us to begin experiments in emulation and under real conditions. Moreover, in [2] we proposed an agent model to test scenarios in Industry 4.0 environments with a fleet of AIVs. We simulated our proposal as a first step towards global obstacle avoidance by AIVs with a collective strategy. The results showed the benefit of collaboration for increasing the collective and individual efficiency of the vehicles in a fleet, and open the door to more advanced global collective strategies, with the possibility of allocating, scheduling and distributing tasks between vehicles in real time once an obstacle is perceived. Finally, paper [3] presents a method to estimate the positions of AIVs moving in a closed industrial environment, the extension of the collision detection algorithm towards obstacle avoidance [1], and the development of agent-based simulation platforms for evaluating these methods and algorithms.
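As a rough illustration of the kind of collision detection that precedes obstacle avoidance (a textbook sketch, not the published algorithm of [1]), one can sample the predicted constant-velocity trajectories of two vehicles and flag any approach below a safety radius:

```python
def predict_collision(p1, v1, p2, v2, radius=1.0, horizon=10.0, dt=0.1):
    """Flag a future collision between two vehicles assumed to move at
    constant velocity: sample both predicted trajectories over a short
    horizon and report the first time they come within 2 * radius."""
    t = 0.0
    while t <= horizon:
        x1, y1 = p1[0] + v1[0] * t, p1[1] + v1[1] * t
        x2, y2 = p2[0] + v2[0] * t, p2[1] + v2[1] * t
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (2 * radius) ** 2:
            return True, t
        t += dt
    return False, None

# Two AIVs on crossing paths approach the origin from different sides.
hit, when = predict_collision((-5, 0), (1, 0), (0, -5), (0, 1))
```

A detected conflict is the trigger for the avoidance step, whether handled locally by one vehicle or negotiated collectively by the fleet.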

Risk evaluation for Smart Agriculture

Very small connected objects are now able to execute software, to drive many sensors, to send data, to pair with other devices in their neighborhood, and to pre-process data for cleaning/aggregation. Our ambition is to optimize crop monitoring systems from both data and energy perspectives, using generic software mechanisms as close as possible to the sensor node and introducing “intelligence” into the data collection mechanisms.

To integrate software services in these nodes, we rely on open-source operating systems providing consistent APIs and SDKs. A traditional OS such as Linux cannot run on the limited resources of low-end IoT devices. Over the last year, we have focused our efforts on the deployment of an open, stable, sustainable platform exploiting low-power nodes running RIOT. RIOT is an OS developed by a growing open-source community. It offers a developer-friendly programming model and APIs similar to what is found in Linux, and is based on a micro-kernel architecture. Its network stack implements the main IoT standards (6LoWPAN, IPv6, RPL, CoAP, etc.). It therefore has all the characteristics of a modern OS targeting low-end IoT nodes.

We used an open hardware and software platform:

  • Offering very fine-grained mechanisms for energy management: the developer is able to put the nodes into the deepest sleep mode and to wake them up in a synchronized way whenever necessary. Even in the presence of a very energy-intensive sensor, the consumption of a sleeping node does not exceed a few microamps.
  • Accelerating the integration of complex sensors: a set of libraries facilitates interactions between the hardware and the OS, so that a sensor is seen as a simple source of data.
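The order of magnitude of such an energy budget can be estimated with a simple duty-cycle calculation; the currents and periods below are illustrative figures, not measurements from our platform:

```python
def avg_current_ua(active_ua, sleep_ua, active_s, period_s):
    """Average current draw (in microamps) of a duty-cycled node that is
    active for active_s seconds out of every period_s seconds."""
    duty = active_s / period_s
    return active_ua * duty + sleep_ua * (1.0 - duty)

# e.g. 10 mA while sampling for 2 s every 10 min, 3 uA in deep sleep:
avg = avg_current_ua(10_000.0, 3.0, 2.0, 600.0)  # roughly 36 uA on average
```

Such a back-of-the-envelope figure is what makes multi-month battery lifetimes plausible despite energy-hungry sensors, since the node spends almost all of its time asleep.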

This year, we worked on data quality with a team from INRAE. More specifically, we were interested in approaches to assess the risk of disease on plants, using our platform.

We have developed a simple generic infection model to predict pathogen infection periods. The model is designed to be used in forecasting pathogens that do not have extensive epidemiological data.

Most existing models require an epidemiological data set based on a large amount of data measured over a long period. Our approach relies on locally measured cardinal temperatures and humidity duration: the model uses a temperature response function that is scaled to the minimum and optimum values of the required surface wetness duration.
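A minimal sketch of such a model, in the spirit of generic infection models from the plant-pathology literature, could look as follows; the cardinal temperatures and wetness bounds below are illustrative defaults, not the values used in our work:

```python
def temp_response(t, t_min=7.0, t_opt=20.0, t_max=30.0):
    """Temperature response in [0, 1], maximal at the optimum cardinal
    temperature t_opt and zero outside the [t_min, t_max] range."""
    if t <= t_min or t >= t_max:
        return 0.0
    return (((t_max - t) / (t_max - t_opt))
            * ((t - t_min) / (t_opt - t_min))
              ** ((t_opt - t_min) / (t_max - t_opt)))

def required_wetness(t, w_min=6.0, w_max=48.0):
    """Surface wetness duration (hours) needed for infection at
    temperature t: minimal (w_min) at the optimum, growing as the
    response drops, and capped at w_max."""
    f = temp_response(t)
    if f <= 0.0:
        return float("inf")  # infection impossible at this temperature
    return min(w_min / f, w_max)

def infection_risk(t, wetness_hours):
    """True if the observed wetness period is long enough at t."""
    return wetness_hours >= required_wetness(t)
```

Because the only inputs are local temperature and wetness duration, such a model fits pathogens without extensive epidemiological data and can run directly on the sensor node.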

The algorithm is capable of running for several weeks on a very small communicating object (an MCU). All the nodes are synchronized and wake up periodically to perform local processing.

Our software environment is based on a limited number of nodes that measure local data (humidity and temperature) and evaluate the pathogen risk. If the risk increases locally, the nodes are able to dynamically extend the measurement area and improve the global quality of the evaluation. This principle is illustrated in figure 4.
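The dynamic extension of the measurement area can be sketched as follows, with illustrative thresholds: an active node whose local risk exceeds a threshold triggers the wake-up of its inactive neighbors within a given range:

```python
def nodes_to_activate(nodes, risk_threshold=0.7, radius=50.0):
    """Given nodes as (id, x, y, risk, active) tuples, return the ids of
    inactive nodes to wake up around any active node whose locally
    evaluated risk is high, thereby densifying the measurement area."""
    to_wake = set()
    for nid, x, y, risk, active in nodes:
        if not (active and risk >= risk_threshold):
            continue
        for mid, mx, my, _, m_active in nodes:
            if m_active or mid == nid:
                continue
            if (x - mx) ** 2 + (y - my) ** 2 <= radius ** 2:
                to_wake.add(mid)
    return to_wake

field = [("a", 0.0, 0.0, 0.9, True),     # high local risk
         ("b", 30.0, 0.0, 0.0, False),   # nearby, dormant -> wake up
         ("c", 200.0, 0.0, 0.0, False)]  # too far away, stays asleep
wake = nodes_to_activate(field)
```

The same decision can run node-locally, so that the measurement area grows around a detected risk without any round-trip to the cloud.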

Figure 4: Detection of plant diseases
