Research

Presentation

The technologies necessary for the development of pervasive applications are now widely available and accessible for many uses: short/long-range and low-energy communications, a broad variety of visible (smart objects) or invisible (sensors and actuators) devices, and the democratization of the Internet of Things (IoT). Large areas of our living spaces are now instrumented. The concept of Smart Spaces is about to emerge, based on massive and apposite interactions between individuals and their everyday working and living environments: residential housing, public buildings, transportation, etc. The possibilities for new applications are boundless. Many scenarios have been studied in laboratories for years and, today, a real ability to adapt the environment to the behaviors and needs of users can be demonstrated. However, mainstream pervasive applications are barely existent, with the notable exception of ubiquitous GPS-based navigators. The opportunity of using the vast amounts of data collected from physical environments in several application domains is still largely untapped. Applications that interact with users and act on their environment with a large degree of autonomy remain very specialized: they can only be used in the environment they were specifically developed for (for example, "classical" home automation tasks: comfort, entertainment, surveillance), and they are difficult to adapt to increasingly complex situations, even as the environments in which they evolve become more open or change over time (new sensors added, failures, mobility, etc.).

Developing applications and services that are ready to deploy and evolve in different environments should bring significant cost reductions. Unfortunately, designing, testing and ensuring the maintenance and evolution of a pervasive application remains very complex. In our view, the lack of resources by which properties of the real environment are made available to application developers is a major concern. Building a pervasive application involves implementing one or more logical control loops comprising four stages (see figure -a): (1) data collection in the real environment, (2) the (re)construction of information that is meaningful for the application, (3) decision making, and finally (4) action on the environment. While many decision algorithms have been proposed, the collection and construction of a reliable and relevant perception of the environment and, in return, the mechanisms for acting on the environment still pose major challenges that the Tacoma project intends to address.
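A minimal sketch of such a control loop is given below, assuming a hypothetical environment object exposing sensors and actuators; all names (read_sensors, build_context, decide, act) are illustrative, not part of any Tacoma API:

```python
# Sketch of the four-stage control loop: collect, reconstruct, decide, act.
# `env` is a hypothetical dict {"sensors": {...}, "actuators": {...}}.

import time

def read_sensors(env):
    """Stage 1: collect raw data from the real environment."""
    return {name: sensor() for name, sensor in env["sensors"].items()}

def build_context(raw):
    """Stage 2: reconstruct information meaningful for the application,
    e.g. derive room occupancy from motion and CO2 readings."""
    return {"occupied": raw["motion"] > 0 or raw["co2"] > 800}

def decide(context):
    """Stage 3: decision making."""
    return [("light", "on" if context["occupied"] else "off")]

def act(env, commands):
    """Stage 4: act on the environment through actuators."""
    for target, command in commands:
        env["actuators"][target](command)

def control_loop(env, period_s=1.0):
    while True:
        act(env, decide(build_context(read_sensors(env))))
        time.sleep(period_s)
```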

Most current solutions are based on a massive collection of raw data from the environment, stored on remote servers. Figure -a illustrates this type of approach. Exposing raw sensor values to the decision-making process does not make it possible to build the relevant contexts that a pervasive application actually needs in order to act and react shrewdly to changes in the environment. The following tasks are therefore left up to the developer (a sketch follows the list):

  • To characterize raw data more finely, beyond its mere value: for example, the acquisition date, the nature of the network links crossed to reach the sensor, the durability and accuracy of the reading, etc.

  • To exploit this raw data to compute an abstraction that is relevant to the application, such as whether a room is occupied or whether two objects are in physical proximity.

  • To modify the environment when possible.
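The sketch below illustrates the first two of these tasks under simple assumptions: a raw reading is wrapped with the metadata listed above, and an application-level abstraction is computed only from readings that are still valid. All names and thresholds are illustrative:

```python
from dataclasses import dataclass, field
import time

@dataclass
class EnrichedReading:
    """A raw value characterized beyond its mere value."""
    value: float
    acquired_at: float          # acquisition date (epoch seconds)
    accuracy: float             # accuracy of the value reading
    ttl_s: float                # durability: how long the value stays valid
    route: list = field(default_factory=list)  # network links crossed

    def is_fresh(self) -> bool:
        return time.time() - self.acquired_at < self.ttl_s

def room_occupied(motion: EnrichedReading, co2: EnrichedReading) -> bool:
    """Abstraction built from raw data, trusted only if readings are fresh."""
    if not (motion.is_fresh() and co2.is_fresh()):
        raise ValueError("stale readings: abstraction cannot be trusted")
    return motion.value > 0 or co2.value > 800.0
```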

Traditional software architectures isolate developers from the real environment, which they often have to model through complex, heavyweight and costly processes. However, the objects and infrastructure integrated into user environments could provide much better support for pervasive applications: the description of the actual state of the system can be richer, more accurate and, at the same time, easier to handle; the structure of applications can be distributed by building them directly into the environment, where processing autonomy facilitates scalability and resilience; finally, moving processing closer to the edge of the network avoids the major problems of data sovereignty and privacy encountered in cloud-dependent infrastructures. We strongly believe in the advantages of approaches specific to edge and fog computing, whose benefits will become apparent with the development of Smart Spaces and the rapid growth in the number of connected objects. Indeed, ensuring the availability and reliability of systems that remain frugal in terms of resources will ultimately become a major challenge in keeping processing close to end users. Figure -b illustrates the principle of "using data at the best place for processing": fine-grained decisions can be made close to the objects that produce and act on the data, while local data characterization and local processing relieve the computing and storage resources of the cloud (which can be used, for example, to store selected/transformed data for global historical analysis or optimization).
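A minimal sketch of this placement principle, assuming an edge node that decides locally and ships only a compact summary to the cloud; names and thresholds are illustrative:

```python
from statistics import mean

CO2_THRESHOLD_PPM = 800.0  # illustrative threshold

def edge_cycle(read_local_sensors, open_vent, push_to_cloud):
    readings = read_local_sensors()          # raw values stay at the edge
    if mean(readings) > CO2_THRESHOLD_PPM:   # fine decision made locally
        open_vent()
    # Only selected/transformed data reaches the cloud, for global
    # historical analysis or optimization.
    push_to_cloud({"avg_ppm": mean(readings), "samples": len(readings)})
```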

Adaptation processes in pervasive environments

Tacoma aims at developing a comprehensive set of new interaction models and system architectures to considerably help pervasive application designers during the development phase, with the side effect of easing life-cycle management. We follow two main principles:

  • Leveraging local properties and direct interactions between objects, we want to enrich and manage locally the data produced in the environment. Applications would then be able to build knowledge about their environment (perception) in order to adjust their behavior (e.g., their level of automation) to the actual situation.

  • Pervasive applications should be able to describe the requirements they place on the quality of their perception of the environment. The required quality level would then be achieved by adapting the diversity of the sources (data fusion/aggregation), the network mechanisms used to collect the data (network/link level) and the production of the raw data (sensors).
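The sketch below shows what the second principle could look like: an application declares quality requirements on its perception, and a (hypothetical) middleware picks data sources accordingly. The API and attribute names are assumptions made for illustration:

```python
# Declared perception-quality requirements for one derived datum.
requirements = {
    "occupancy": {
        "max_staleness_s": 5.0,    # constrains raw-data production (sensors)
        "min_sources": 2,          # diversity for data fusion/aggregation
        "max_latency_ms": 200,     # network-level collection constraint
    }
}

def satisfy(requirement, available_sources):
    """Pick a minimal set of sources meeting the declared quality.
    Each source is assumed to expose `staleness_s` and `latency_ms`."""
    chosen = [s for s in available_sources
              if s.staleness_s <= requirement["max_staleness_s"]
              and s.latency_ms <= requirement["max_latency_ms"]]
    if len(chosen) < requirement["min_sources"]:
        raise RuntimeError("perception quality cannot be met")
    return chosen[: requirement["min_sources"]]
```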

Last activity report: 2017

New Results

Smart City and ITS

The domain of Smart Cities is still young, but it is already a huge market that attracts many companies and researchers. It is also multi-faceted, as the words "smart city" carry multiple meanings. Among them, one of the main responsibilities of a city is to organize the transportation of goods and people. In intelligent transportation systems (ITS), ICT technologies have been used to improve the planning and, more generally, the efficiency of journeys within the city. We are interested in the next step, where efficiency would be improved locally, relying on local interactions between vehicles, infrastructure and people (through their smartphones).

Autonomous vehicles are now in the spotlight, as a lot of work has been done in recent years in the automotive industry as well as in academic research centers. Such unmanned vehicles could strongly impact the organization of transportation in our cities. However, due to the lack of a definition of what an "autonomous" vehicle is, it remains difficult to see how these vehicles will interact with their environment (e.g., roads, the smart city, houses, the grid, etc.). From augmented perception to fully cooperative automated vehicles, autonomy covers various realities in terms of the interactions the vehicle relies on. Extended perception relies on communication between the vehicle and surrounding roadside equipment, which helps the driving system build and maintain an accurate view of the environment; at this first stage, however, the vehicle only uses its own perception to make its decisions. At a second stage, it will take advantage of local interactions with other vehicles through car-to-car communications to elaborate a better view of its environment. Such "cooperative autonomy" no longer tries to reproduce human behavior: it strongly relies on communication between vehicles and/or with the infrastructure to acquire information on the environment and make decisions. Part of the decision could be centralized (almost everything for an automatic metro) or coordinated by a roadside component. Decision making could even be fully distributed, but this puts high constraints on the communications. Automated vehicles are just one example of smart-city automated processes that will have to share information with their surroundings to make their decisions.

We participated in the definition of the distributed architecture that has been adopted by all partners of the SEAS project. The main principles of this architecture have been published, and we developed several proofs of concept that were demonstrated within the project consortium. Our partner developed the components of the architecture that were demonstrated at the final review of the project (in January). The principles of the architecture and data representation have been used to design an open, reusable Data Manager in the context of the EkoHub project. This modular software will be extended to fit the needs of Indra Ngurah's and Rodrigo Silva's work.

Convergence middleware for pervasive data

We are currently working on data-driven middleware approaches dedicated to physical objects and smart spaces. We have previous contributions on the topic, in which opportunistic collaborations between mobile devices were supported by a Linda-like tuple space and IEEE 802.11 radios. However, these were suited to relatively complex devices, and the technological limitations of the time prevented the model from reaching its full potential. More recently, we investigated distributed storage spread over physical objects or fragments using RFID, enabling complex data to be directly carried by passive objects (without energy). Yet other radio technologies, such as BLE, are emerging to support close-range interactions with very low (or even zero) energy requirements.

Applications such as pervasive games (e.g., Pokemon Go), on-the-go data sharing and collaborative mobile apps are often good candidates for opportunistic or dynamic interaction models. But they are not well supported by existing communication stacks, especially in contexts involving multiple technologies. Technological heterogeneity is not hidden, and high-level properties associated with the interactions, such as proximity/range or mobility-related parameters (speed, discovery latency), have to be addressed in an ad hoc manner. We think that a good way to solve these issues is to offer an abstract interaction model that could be mapped onto the common proximity communication technologies, in a similar way as message-oriented middleware (MOM) such as MQTT abstracts communications in many IoT and pervasive computing scenarios. However, MOM typically requires IP-level communication, which is far beyond the capabilities of ultra-low-energy proximity technologies such as RFID and BLE. Moreover, it often relies on a coordinator node, which is not suitable in highly dynamic contexts involving ephemeral communications and mobile nodes.
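One possible shape for such an abstract interaction model is sketched below: a MOM-like publish/observe API whose operations are mapped onto whatever proximity technology is available, with range and discovery latency exposed as first-class parameters. The backends and their attributes are assumptions made for illustration:

```python
class ProximityBus:
    """MOM-like facade over proximity technologies (BLE, RFID, NFC, ...).
    Each backend is assumed to expose `range_m`, `advertise` and `scan`."""

    def __init__(self, backends):
        self.backends = backends

    def publish(self, key, value, max_range_m=10.0):
        """Expose a datum to peers within the requested physical range."""
        for b in self.backends:
            if b.range_m <= max_range_m:
                b.advertise(key, value)

    def observe(self, key, callback, discovery_latency_s=1.0):
        """Be notified when a nearby peer exposes `key`, with an explicit
        bound on discovery latency instead of ad hoc per-radio tuning."""
        for b in self.backends:
            b.scan(key, callback, timeout=discovery_latency_s)
```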

We started the implementation of an associative memory mechanism over BLE, as it is a common ground that can be shared with passive or semi-passive technologies (RFID, NFC). Such a mechanism, although relatively low level, is a very useful building block for opportunistic applications: it enables opportunistic data storage/sharing and signaling/synchronization (in space in particular). This approach is fully in line with a more general trend in the team to build "smart" systems leveraging local resources and data-oriented mediation. We have started validation work with a few applications, in particular regarding energy aspects and scalability with respect to the communication load. This should lead to publications on both the infrastructure- and application-level aspects of the approach.
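A minimal sketch of what such an associative memory could offer, with Linda-like out/read primitives over a small slot table broadcast in advertising packets. This is not the actual implementation: the radio interface is hypothetical, and a real design must fit BLE payload limits (around 31 bytes for legacy advertising):

```python
class BleAssociativeMemory:
    """Associative memory over BLE advertisements (illustrative sketch)."""

    def __init__(self, radio):
        self.radio = radio          # hypothetical BLE radio abstraction
        self.slots = {}             # key -> value, kept deliberately small

    def out(self, key: bytes, value: bytes):
        """Store/share a tuple; peers in radio range can read it."""
        self.slots[key] = value
        self.radio.set_advertisement(self._encode(self.slots))

    def rd(self, key: bytes, timeout_s: float = 1.0):
        """Non-destructive read: first local, then from nearby peers."""
        if key in self.slots:
            return self.slots[key]
        return self.radio.scan_for(key, timeout=timeout_s)

    @staticmethod
    def _encode(slots) -> bytes:
        """Pack the slot table into an advertising payload (sketch only)."""
        return b"".join(bytes([len(k)]) + k + bytes([len(v)]) + v
                        for k, v in slots.items())
```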

Modeling activities and forecasting energy consumption and production to promote the use of self-produced electricity from renewable sources

This work began in 2017 and is carried out as part of a broader collaboration between Tacoma, the Diverse team and OKWind, a company specialized in the production of energy from renewable sources. OKWind deploys self-production units directly where consumption takes place, and has developed expertise in vertical-axis wind turbines, photovoltaic trackers and heat pumps. This project aims at building a system that optimizes the use of different sources of renewable energy, choosing the most suitable source for the current demand and anticipating future needs. The goal is to favor the consumption of locally produced electricity and to maximize the autonomy of equipped sites, so as to reduce the infrastructure needed to distribute electricity, keep energy costs under control, and reduce the ecological impact of energy consumption.

Modeling and forecasting the production and consumption of a site is hard and raises several issues: how can the consumption and production of energy on a given site be precisely assessed under changing conditions? How should energy sources and energy storage (wind turbines, solar panels and batteries) be sized? And what methods should be used to optimize consumption and, whenever possible, act on installations and activities to reduce energy costs? We aim to propose tools that predict the consumption of a site based on estimations and previous observations, monitor the site in real time and forecast its evolution. We propose to build a DSL describing consumption and production processes, and a system providing recommendations based on the derived model at runtime.
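As a point of reference for the forecasting questions above, the sketch below shows the simplest baseline one could build from previous observations: predict a site's consumption for a given hour of the week as the mean of past samples for that same slot. This is an illustrative baseline, not the method developed in the project:

```python
from collections import defaultdict

class SeasonalBaseline:
    """Hour-of-week mean as a naive consumption forecast (sketch)."""

    def __init__(self):
        self.history = defaultdict(list)    # (weekday, hour) -> kWh samples

    def observe(self, weekday: int, hour: int, kwh: float):
        """Record one past consumption observation."""
        self.history[(weekday, hour)].append(kwh)

    def forecast(self, weekday: int, hour: int) -> float:
        """Predict consumption for a slot from past observations."""
        samples = self.history[(weekday, hour)]
        if not samples:
            raise LookupError("no past observation for this slot")
        return sum(samples) / len(samples)
```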

The forecasting problem is known from both the production and the consumption points of view. OKWind has developed tools to predict the production of its renewable sources (the same goes for batteries), and a lot of theoretical work on consumption exists in the literature. In our view, however, little has been done to precisely model activities, their energy consumption and the associated variability. Indeed, most current approaches either address large-scale forecasting for the Smart Grid, rely on coarse-grained data (the total energy consumption of a site), or focus on modeling specific appliances without describing how and when they are used.

This is paradoxical considering that companies have spent a lot of time modeling their activities from a logistics point of view. Intuitively, the predictable and seasonal nature of a company's activities would allow building activity schedulers that favor the consumption of certain energy sources (the cleanest or least expensive one, for instance). A DSL describing the relationships between activities, their planning, production and environmental factors would make it possible to simulate a given site at a given location, to test sizing assumptions, and would be a basis for forecasting energy consumption so as to provide recommendations for the organization of activities.
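The sketch below suggests, under stated assumptions, what such a DSL could look like when embedded in a general-purpose language: activities with power profiles and planning relationships, plus a trivial recommendation that shifts movable activities toward the production peak. The actual DSL developed with OKWind may differ substantially:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    power_kw: float
    duration_h: float
    movable: bool = False       # can this activity be rescheduled?

@dataclass
class Site:
    name: str
    production_kw: dict = field(default_factory=dict)   # hour -> forecast kW
    plan: list = field(default_factory=list)            # (hour, Activity)

    def schedule(self, activity, at_hour):
        self.plan.append((at_hour, activity))

    def recommend(self):
        """Suggest moving movable activities to the best production hour."""
        if not self.production_kw:
            return []
        peak = max(self.production_kw, key=self.production_kw.get)
        return [(a.name, peak) for h, a in self.plan if a.movable and h != peak]

site = Site("farm-01", production_kw={6: 2.0, 12: 9.5, 18: 3.0})
site.schedule(Activity("milking", power_kw=8, duration_h=2), at_hour=6)
site.schedule(Activity("milk-cooling", power_kw=3, duration_h=4, movable=True),
              at_hour=18)
print(site.recommend())    # -> [('milk-cooling', 12)]
```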

We have already developed part of this DSL, which simulates activities and production. In particular, it is capable of simulating consumption and production over a given period based on available environmental data. This tool is in the experimentation phase; in particular, we are collecting information on several sites to measure the consumption of various activities.

Sharing knowledge and access control

Smart spaces (smart cities, homes, buildings, etc.) are complex environments made up of resources (cars, smartphones, electronic equipment, applications, servers, flows, etc.) that cooperate to provide a wide range of services to a wide range of users. They are by nature extremely fluctuating, heterogeneous and unpredictable. In addition, applications are often mobile, having to migrate or being hosted on mobile platforms such as smartphones or vehicles.

To be relevant, applications must be able to adapt to users by understanding their environment and anticipating its evolution. They therefore rely, explicitly or implicitly, on a representation of their surrounding environment built from the data provided by sensors, humans, objects and applications, when available. The accuracy of the adaptations made by the applications depends on the precision of this representation. Building and maintaining such knowledge is resource-intensive in terms of network exchanges, computing time and, incidentally, energy consumption. It is therefore crucial to find ways to improve this process. In practice, many applications build their own models without sharing them, or delegate computations to remote services, which is not optimal because many processes are redundant. A huge improvement would be to find mechanisms that allow this information to be shared, so as to reduce as much as possible the processing necessary to obtain it.

However, it seems extremely complex to provide a global, complete and unified view of the environment that reflects the applications' concerns; even if it were possible, such a single representation would by nature be incomplete or subjective. Our solution should be applicable to today's devices and applications with few adjustments to the underlying architectures. It should therefore be flexible enough to deal with the lack of standards in the domain without imposing architectural choices. Such a lack of standards is very common in IT, mainly due to well-known factors: (1) for technical reasons, developers tend to think that their own "standard" is better suited to their current use case; and/or (2) for commercial reasons, companies want to keep a closed, siloed system to capture their users; and/or (3) the domain is still new and evolving, and no standard has emerged yet; and/or finally (4) the problem is too complex to be standardized, and most proposed standards tend to be bloated and hard to use. The IoT domain suffers from all of these impediments, and solutions targeting mid-term applications have to take these factors into account. Many IoT applications are still organized in information silos. This leads to the deployment of sensors with similar functions and redundant pieces of software providing exactly the same service. Many frameworks and ontologies have been developed in the field to address this problem, but their adoption depends on the goodwill of companies, which do not always see their interest in losing part of the control over their applications and data. To be widely accepted, solutions should let companies decide what information to share and when, with little impact on their current infrastructure.

We want to develop collaborative mechanisms that allow applications to share some of their information about the immediate surrounding environment with their counterparts. The idea is to allow groups of applications that manipulate the same concepts to build shared representations, so that each group can construct a subjective yet complete representation of the environment that corresponds to its concerns. In this context, we want to offer applications mechanisms allowing them to attach information about their environment directly to the flows, data, services and objects they handle. This information will be stored by the environment so that the application can retrieve it and its peers can access it. From a logical point of view, applications will have the illusion of annotating objects directly; we make no assumption about where this information will be stored, which will depend on the characteristics of the environment or the sharing solution chosen. Data should be stored as close as possible to the environments it qualifies, for reasons of performance, confidentiality and autonomy. To experiment with this idea, we have developed the two systems below (a sketch of the annotation model follows the list):

  • Matriona, a globally distributed framework developed on top of OSGi. This project was described in more detail in the previous activity report. It is meant to be a global framework for exposing devices as REST-like resources, whose functionalities can be extended by means of decorators. The system also provides access mechanisms. The main interest of Matriona with regard to information enrichment is its ability to support the dynamic extension of resource meta-information by applications, and to provide means to share this meta-information with others. It implements the concept of groups of interest with access control on meta-information. The concepts introduced in Matriona are in the process of being published.

  • Little Thumb Base (LithBase) is an independent knowledge base that provides the same enrichment capabilities as Matriona but imposes fewer constraints on the architecture of applications. It is a shared database implemented on simple low-power nodes (ESP32) that are cheap to deploy, flash and use. The idea behind LithBase is to decouple storage from the framework and to provide a standard mechanism to share information. Ultimately we want to use its capabilities to implement a registry in the manner of Consul, with meta-information enrichment and sharing mechanisms. By focusing only on discovery and information sharing, LithBase imposes fewer constraints on applications and better fits the goal of being ready to use in existing applications. This is still a work in progress. This solution also raises the issue of trust and of control over access to the information: applications must be able to determine the source of the additional information and to decide who will have access to the information they add. We have therefore been experimenting with an access-control mechanism implemented in LithBase, currently using elliptic-curve cryptography to allow private information sharing between groups. Ultimately, the goal of this project is to produce a coordinating object that implements generic mechanisms favoring opportunistic behaviors in surrounding applications.
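The sketch below illustrates the annotation model shared by both systems: applications attach meta-information to a resource identifier, each annotation carrying its source and an authenticity tag readable by a group of interest. Note the substitution: LithBase uses elliptic-curve cryptography, but this sketch uses a symmetric group key with Python's standard library to keep the example short, and all names and storage choices are illustrative:

```python
import hashlib, hmac, json, time

# Where annotations live is transparent to applications; a plain dict
# stands in for the environment-chosen storage.
store = {}   # resource_id -> list of annotations

def annotate(resource_id: str, key: str, value, source: str, group_key: bytes):
    """Attach a piece of meta-information to a resource, tagged so that
    members of the group of interest can verify its authenticity."""
    note = {"key": key, "value": value, "source": source, "at": time.time()}
    payload = json.dumps(note, sort_keys=True).encode()
    note["tag"] = hmac.new(group_key, payload, hashlib.sha256).hexdigest()
    store.setdefault(resource_id, []).append(note)

def read_annotations(resource_id: str, group_key: bytes):
    """Return only annotations whose tag verifies under the group key."""
    out = []
    for note in store.get(resource_id, []):
        body = {k: v for k, v in note.items() if k != "tag"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(group_key, payload, hashlib.sha256).hexdigest()
        if hmac.compare_digest(note["tag"], expected):
            out.append(body)
    return out

group_key = b"shared-by-group-of-interest"
annotate("sensor-42", "calibrated", True, source="app-A", group_key=group_key)
print(read_annotations("sensor-42", group_key))
```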
