Tag: bboulay

Mar 21

Multi-sensor fusion for daily living activity recognition in the older people domain

I am currently researching the use of Information and Communication Technologies as tools for preventive care and diagnosis support in the elderly population. Our current approach uses accelerometers and video sensors for the recognition of instrumental activities of daily living (IADL, e.g., preparing coffee, making a phone call). Clinical studies have pointed to the decline of elderly performance in IADL as a potential indicator of early symptoms of dementia (e.g., in Alzheimer's patients). IADL are modeled and detected using a constraint-based generic ontology (called ScReK). This ontology allows us to describe events based on spatial, temporal, and sensor data (e.g., MotionPOD) values.

Feb 03

ScReK tool

Feb 02

ViSEvAl software

ViSEvAl graphical user interface

ViSEvAl is released under the GNU Affero General Public License (AGPL)

At INRIA, an evaluation framework has been developed to assess the performance of gerontechnologies and video surveillance systems. This framework aims at better understanding the added value of new technologies for home-care monitoring and other services. The platform is available to the scientific community and contains a set of metrics to automatically evaluate the performance of software against ground truth.

Description

The ViSEvAl (ViSualisation and EvAluation) software provides a graphical user interface to visualise the results of video processing algorithms (such as detection of objects of interest, tracking or event recognition). Moreover, the software can compute metrics to evaluate specific tasks (such as detection, classification, tracking or event recognition). It is composed of two binaries (ViSEvAlGUI and ViSEvAlEvaluation) and several plugins. Users can add their own plugins, for instance to define a new metric.
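To give a concrete idea of what such an extension could look like, here is a minimal Qt4-style plugin sketch for a precision metric. The interface name, its methods and the FrameResult structure are hypothetical illustrations for this post, not ViSEvAl's actual plugin API.

    // metricinterface.h -- hypothetical metric interface (NOT ViSEvAl's actual API)
    #include <QtPlugin>
    #include <QString>

    // Assumed per-frame counts obtained by comparing detections with the ground truth.
    struct FrameResult {
        int truePositives;
        int falsePositives;
        int falseNegatives;
    };

    class MetricInterface {
    public:
        virtual ~MetricInterface() {}
        virtual QString name() const = 0;                     // metric name shown to the user
        virtual double score(const FrameResult &r) const = 0; // per-frame score in [0,1]
    };

    Q_DECLARE_INTERFACE(MetricInterface, "example.MetricInterface/1.0")

    // precisionmetric.h -- an example metric plugin built on that interface (Qt4 style)
    #include <QObject>

    class PrecisionMetric : public QObject, public MetricInterface {
        Q_OBJECT
        Q_INTERFACES(MetricInterface)
    public:
        QString name() const { return "Precision"; }
        double score(const FrameResult &r) const {
            const int detected = r.truePositives + r.falsePositives;
            return detected == 0 ? 0.0 : double(r.truePositives) / detected;
        }
    };

    // precisionmetric.cpp would end with the Qt4 plugin export macro:
    // Q_EXPORT_PLUGIN2(precisionmetric, PrecisionMetric)

The real interface to implement should of course be taken from the ViSEvAl headers; the sketch only shows the general Qt plugin pattern.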

General schema of an evaluation platform

Installation

  • OS: Linux (tested on Fedora 12) with gcc 4.4.4
  • The following libraries are mandatory: Qt4 (for GUI and plugin facilities) and libxerces-c (for the automatic XML parser)
  • xsdcxx must also be installed on your computer (for the automatic XML parser)
  • FFmpeg is optional (only used by the plugin that loads .ASF videos)
  1. Go into the ViSEvAl directory (called SoftwareDirectory in the following)
  2. Launch the script ./install.sh. The script creates all the makefiles needed by the application and the plugins, and compiles all the code. If everything is OK, you will find the executables in the SoftwareDirectory/bin/appli directory
  3. Type the bash command:
    export LD_LIBRARY_PATH=$SoftwareDirectory/lib:/usr/local/lib:$LD_LIBRARY_PATH (to tell the application where the ViSEvAl library and the optional FFmpeg libraries are)
  4. Go into the bin/appli directory
  5. Run ViSEvAlGUI for the GUI tool or ViSEvAlEvaluation for the command-line tool
  • ViSEvAlGUI: in the menu, choose File -> Open… and select the desired .conf file
  • ViSEvAlEvaluation file.conf result.res [0-1] [0-1]
  1. file.conf: the desired configuration file
  2. result.res: the file where the results will be written
  3. [0-1]: optional; 0 prints the results for each frame, 1 prints only the global results
  4. [0-1]: the evaluation of the detection (XML1) and of the fusion (XML2) is done only on the frames common to both
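For example (file names here are just placeholders), ViSEvAlEvaluation myExperiment.conf myResults.res 1 would evaluate the configuration described in myExperiment.conf and write only the global results into myResults.res.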

More details

ViSEvAl overview

XSD files

The XSD files describe the XML format of the different input files of the ViSEvAl software:

  • Description of the data provided by video sensors (camera detection, fusion detection and event detection): data.xsd
  • Description of the data provided by non-video sensors (contact sensors, wearable sensors, …): sensor.xsd
  • Description of the camera parameters (calibration, position, …): camera.xsd

Download

This platform is available on demand to the scientific community (contact Annie.Ressouche @ inria.fr).

Dec 21

Evaluation description

Involved people:

  • Bernard Boulay
  • Julien Badie
  • Swaminathan Sankaranarayanan

Topics:

Evaluation is an important task for better understanding the added value of new algorithms or technologies for intelligent video platforms.

Main issues:

  • Criteria: which criterion should be used according to the task to evaluate? (a small example is given after this list)
  • Multi-criteria: how to combine several criteria to qualify a system?
  • Data set: how to select or create a meaningful data set according to the task to evaluate?
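As an illustration of the first point, a classical criterion for the detection task is the overlap (Jaccard index) between a detected bounding box and its ground-truth box; a detection is then counted as correct when the overlap exceeds a threshold. The sketch below is a self-contained C++ example of that criterion, not code taken from the evaluation platform, and the 0.5 threshold is an arbitrary choice.

    #include <algorithm>

    // Axis-aligned bounding box in image coordinates.
    struct Box {
        double xMin, yMin, xMax, yMax;
    };

    static double area(const Box &b) {
        return std::max(0.0, b.xMax - b.xMin) * std::max(0.0, b.yMax - b.yMin);
    }

    // Jaccard index (intersection over union) between a detection and a ground-truth box.
    double overlap(const Box &detection, const Box &groundTruth) {
        Box inter;
        inter.xMin = std::max(detection.xMin, groundTruth.xMin);
        inter.yMin = std::max(detection.yMin, groundTruth.yMin);
        inter.xMax = std::min(detection.xMax, groundTruth.xMax);
        inter.yMax = std::min(detection.yMax, groundTruth.yMax);
        const double interArea = area(inter);
        const double unionArea = area(detection) + area(groundTruth) - interArea;
        return unionArea <= 0.0 ? 0.0 : interArea / unionArea;
    }

    // A detection matches the ground truth when the overlap exceeds the chosen threshold.
    bool isCorrectDetection(const Box &detection, const Box &groundTruth) {
        return overlap(detection, groundTruth) > 0.5;
    }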

 

Dec 21

Event recognition description

Event recognition is based on the human pre-definition of scenarios of interest corresponding to the events to be recognized. Scenarios are written in a formal language easily readable by humans. An event (or scenario) may involve detected objects (people, vehicles, groups, …) and contextual objects (walls, equipment, …) or zones. Detected objects arrive as a metadata stream from other algorithms (people tracking, group tracking, …).

The main issues of event recognition concern the uncertainty of the input object detections.
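As a rough illustration of both points, the sketch below checks a two-component scenario (one sub-event followed by another) by combining a temporal constraint with the confidence of the underlying detections. The SubEvent structure, the min combination rule and the 0.5 threshold are assumptions made for this example; they are not the formal scenario language or engine actually used here.

    #include <algorithm>
    #include <string>

    // A recognized sub-event, carrying the detection confidence of the objects involved.
    struct SubEvent {
        std::string name;
        double startTime;   // seconds
        double endTime;     // seconds
        double confidence;  // in [0,1], coming from the object detection stage
    };

    // "before" temporal constraint between two sub-events (Allen-style).
    bool before(const SubEvent &a, const SubEvent &b) {
        return a.endTime <= b.startTime;
    }

    // A two-component scenario is recognized when the temporal constraint holds and the
    // combined confidence of its sub-events is high enough.
    bool recognizeScenario(const SubEvent &first, const SubEvent &second) {
        const double confidence = std::min(first.confidence, second.confidence);
        return before(first, second) && confidence > 0.5;
    }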

Involved people: Bernard BOULAY, Carolina GARATE, Sofia ZAIDENBERG, Veronique JOUMIER, Carlos CRISPIM, Rim ROMDHANE

Dec 14

Staying flexible enough with our datatypes

Hi all,

Here is our current subject of discussion: how should we code our datatypes so that we can easily change them while still being able to apply functions to them? ViSEvAl already has this goal in mind. The DTK may have some generic way to handle it, but how? We have been discussing the ViSEvAl implementation, and we need to better understand the spirit of the DTK and its current capabilities. Let's add comments to this post to work out the details.
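To make the question more concrete, here is one generic C++ pattern (an abstract interface hiding the concrete representation) that keeps a datatype easy to change while still offering functions on it. It is only a discussion starter, not the way the DTK or ViSEvAl actually handle it; ObjectData, InMemoryObjectData and averageConfidence are made up for the example.

    #include <cstddef>
    #include <vector>

    // Abstract view of a detected-object datatype: algorithms are written against this
    // interface, so the concrete storage can change without touching them.
    class ObjectData {
    public:
        virtual ~ObjectData() {}
        virtual std::size_t frameCount() const = 0;
        virtual double confidence(std::size_t frame) const = 0;
    };

    // One possible concrete representation; it could be replaced by an XML-backed or
    // database-backed implementation without changing the code below.
    class InMemoryObjectData : public ObjectData {
    public:
        explicit InMemoryObjectData(const std::vector<double> &confidences)
            : m_confidences(confidences) {}
        std::size_t frameCount() const { return m_confidences.size(); }
        double confidence(std::size_t frame) const { return m_confidences[frame]; }
    private:
        std::vector<double> m_confidences;
    };

    // A function written once against the interface, usable with any representation.
    double averageConfidence(const ObjectData &data) {
        if (data.frameCount() == 0) return 0.0;
        double sum = 0.0;
        for (std::size_t i = 0; i < data.frameCount(); ++i)
            sum += data.confidence(i);
        return sum / data.frameCount();
    }

Other options (templates, visitors, whatever the DTK provides) make different trade-offs between flexibility and performance; comments welcome.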

Best Regards and keep thinking!