ViSEvAl software
ViSEvAl is under the GNU Affero General Public License (AGPL)
At INRIA, an evaluation framework has been developed to assess the performance of gerontechnology and video-surveillance systems. This framework aims at better understanding the added value of new technologies for home-care monitoring and other services. The platform is available to the scientific community and contains a set of metrics to automatically evaluate the performance of software against ground truth.
Description
The software ViSEvAl (ViSualisation and EvAluation) provides a GUI to visualise the results of video processing algorithms (such as detection of objects of interest, tracking or event recognition). Moreover, the software can compute metrics to evaluate specific tasks (such as detection, classification, tracking or event recognition). The software is composed of two binaries (ViSEvAlGUI and ViSEvAlEvaluation) and several plugins. Users can add their own plugins, for instance to define a new metric (see the sketch below).
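The plugins rely on Qt's plugin mechanism. Purely as an illustration of what a new metric plugin could look like, here is a Qt4-style skeleton; the IMetricPlugin interface, its methods and the identifier string are hypothetical placeholders, since the real interfaces are defined by the ViSEvAl headers:

    #include <QObject>
    #include <QtPlugin>
    #include <QString>

    // Hypothetical interface; the real one is declared in the ViSEvAl headers.
    class IMetricPlugin
    {
    public:
        virtual ~IMetricPlugin() {}
        virtual QString name() const = 0;
        // Schematic signature; a real metric would receive detections and ground truth.
        virtual double score() = 0;
    };

    Q_DECLARE_INTERFACE(IMetricPlugin, "fr.inria.viseval.IMetricPlugin/1.0")

    class MyMetric : public QObject, public IMetricPlugin
    {
        Q_OBJECT
        Q_INTERFACES(IMetricPlugin)
    public:
        QString name() const { return "MyMetric"; }
        double score() { return 0.0; /* metric computation goes here */ }
    };

    // Qt4-style plugin export.
    Q_EXPORT_PLUGIN2(mymetric, MyMetric)

Built as a Qt plugin project (qmake with CONFIG += plugin, so that moc processes the Q_OBJECT class), the resulting library could then be loaded by the application at runtime.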
Installation
- OS: Linux (tested on Fedora 12) with gcc 4.4.4
- Three packages are mandatory: Qt4 (for GUI and plugin facilities), libxerces-c (for XML parsing), and xsdcxx (for the automatically generated XML parser)
- FFmpeg is optional (only used by the plugin that loads .ASF videos)
- Go into the ViSEvAl directory (called SoftwareDirectory in the following)
- Launch the script ./install.sh. The script creates all the makefiles needed by the application and the plugins, and compiles all the code. If everything succeeds, you will find the executables in the SoftwareDirectory/bin/appli directory
- Set the library path, so that the application finds ViSEvAlLib and the optional FFmpeg libraries, with the bash command:
export LD_LIBRARY_PATH=$SoftwareDirectory/lib:/usr/local/lib:$LD_LIBRARY_PATH
- Go into the bin/appli directory
- Run ViSEvAlGUI for the GUI tool or run ViSEvAlEvaluation for the command line tool
- ViSEvAlGUI: in the menu, select File -> Open…, then choose the desired .conf file
- ViSEvAlEvaluation file.conf result.res [0-1] [0-1]
- file.conf: the desired configuration file
- result.res: the file where the results will be written
- first [0-1]: optional; 0 prints the results for each frame, 1 prints only the global results
- second [0-1]: optional; controls whether the evaluation of the detection (XML1) and of the fusion (XML2) is done only on the common frames (an example invocation follows)
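For instance, with hypothetical file names, the following call evaluates the configuration sequence1.conf and writes only the global results into sequence1.res:

    ViSEvAlEvaluation sequence1.conf sequence1.res 1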
More details
- ViSEvAl description v1.0
- Metrics description v1.0
- Xmls output description v1.0
- Formats description v1.0
- ViSEvAl presentation 2012-07-09
ViSEvAl overview
XSD files
The XSD files describe the XML format of the different input files for the ViSEvAl software (a validation sketch follows the list):
- Description of the data provided by video sensors: camera detection, fusion detection and event detection (data.xsd)
- Description of the data provided by non-video sensors: contact sensors, wearable sensors, … (sensor.xsd)
- Description of the camera parameters: calibration, position, … (camera.xsd)
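Since libxerces-c is the XML parser used by the software (see Installation), an input file can be checked against its schema before use. A minimal stand-alone sketch; detection.xml is a placeholder file name, and binding data.xsd as a no-namespace schema is an assumption (the actual binding depends on how the XSD declares its namespace):

    #include <iostream>
    #include <xercesc/parsers/XercesDOMParser.hpp>
    #include <xercesc/sax/HandlerBase.hpp>
    #include <xercesc/util/PlatformUtils.hpp>

    using namespace xercesc;

    int main()
    {
        XMLPlatformUtils::Initialize();
        {
            XercesDOMParser parser;
            HandlerBase errorHandler; // throws on fatal parsing errors
            parser.setValidationScheme(XercesDOMParser::Val_Always);
            parser.setDoNamespaces(true);
            parser.setDoSchema(true);
            parser.setExternalNoNamespaceSchemaLocation("data.xsd");
            parser.setErrorHandler(&errorHandler);
            try {
                parser.parse("detection.xml"); // placeholder input file
                std::cout << "Document is valid against the schema." << std::endl;
            } catch (...) {
                std::cerr << "Parsing or validation failed." << std::endl;
            }
        }
        XMLPlatformUtils::Terminate();
        return 0;
    }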
Download
This platform is available on demand to the scientific community (contact Annie.Ressouche@inria.fr).
Segmentation description
Involved people: Vasanth BATHRINARAYANAN, Ratnesh KUMAR
Evaluation description
Involved people:
- Bernard Boulay
- Julien Badie
- Swaminathan Sankaranarayanan
Topics:
Evaluation is an important task to better understand the added value of new algorithms or technologies for intelligent video platforms.
Main issues:
- Criteria: which criterion should be used according to the task to evaluate?
- Multi-criteria: how to combine several criteria to qualify a system?
- Data set: how to select/create a meaningful data set according to the task to evaluate?
Object detection – Introduction
People involved:
By definition, object detection is a computer technology related to computer vision and image processing that deals with detecting, in digital images and videos, instances of semantic objects of a certain class, such as:
- humans
- buildings
- cars, etc.
Well-researched domains of object detection include face detection and pedestrian detection.
Object detection has applications in many areas of computer vision, including image retrieval and video surveillance.
Event recognition description
Event recognition is based on the human pre-definition of scenarios of interest corresponding to the events to be recognized. Scenarios are written in a formal language that is easily readable by humans. An event (or scenario) may involve detected objects (people, vehicles, groups…) and contextual objects (walls, equipment…) or zones. Detected objects arrive as a metadata stream from other algorithms (people tracking, group tracking…).
The issues of event recognition mainly concern the uncertainty of the input object detections.
Involved people: Bernard BOULAY, Carolina GARATE, Sofia ZAIDENBERG, Veronique JOUMIER, Carlos CRISPIM, Rim ROMDHANE
Action recognition description
Human action recognition is a very important and challenging problem. Our goal is to learn and recognize short human actions in videos taken by various types of cameras.
Involved people: Piotr BILINSKI
Welcome to STARS Website!
Spatio-Temporal Activity Recognition Systems
Team Leader: François Brémond
Place: Sophia-Antipolis
Subject: Perception, cognition, interaction
Theme: Vision, perception and multimedia interpretation
May the Research’Power be with you :-)
An Object Tracking in Particle Filtering and Data Association Framework, Using SIFT Features

An article published in ICDP 2011. Authors: M. Souded, L. Giulieri and F. Bremond. The authors address the problem of multi-object tracking in a video-surveillance context with single static cameras. They propose a novel approach for multi-object tracking in a particle filtering and data association framework allowing real-time tracking …
[Serialization] QSettings
For every class that can be treated like a QVariant (see http://qt-project.org/doc/qt-4.8/QVariant.html), it is easy to build a QMapStream from it, and it is easy to serialize that QMapStream to a file using QSettings. Therefore it is easy to serialize a class in C++ using Qt if our data respects the QVariant form, and …
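A minimal sketch of that idea, using a plain QVariantMap in place of the QMapStream type from the post (the Person type and the file name are made-up examples):

    #include <QSettings>
    #include <QString>
    #include <QVariantMap>
    #include <QtDebug>

    // Made-up example type; any data expressible as QVariants works.
    struct Person {
        QString name;
        int age;
    };

    // Flatten the object into a QVariant-compatible map.
    QVariantMap toMap(const Person &p)
    {
        QVariantMap m;
        m["name"] = p.name;
        m["age"] = p.age;
        return m;
    }

    Person fromMap(const QVariantMap &m)
    {
        Person p;
        p.name = m["name"].toString();
        p.age = m["age"].toInt();
        return p;
    }

    int main()
    {
        Person alice = { "Alice", 30 };

        // QSettings stores any QVariant, including a whole QVariantMap.
        QSettings out("person.ini", QSettings::IniFormat);
        out.setValue("person", toMap(alice));
        out.sync();

        QSettings in("person.ini", QSettings::IniFormat);
        Person copy = fromMap(in.value("person").toMap());
        qDebug() << copy.name << copy.age;
        return 0;
    }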
Object tracking in SUP

Involved people: Duc Phu CHAU, Francois BREMOND and Monique THONNAT. SUP (Scene Understanding Platform, developed by the Stars team) provides an object appearance-based tracking algorithm. This tracker includes two main plugins: ParametrableF2Ftracking and LTT. The objective of the ParametrableF2Ftracking plugin is to establish object links within a sliding time window. For each …
Object tracking description
Involved people: Duc Phu CHAU, Julien BADIE and Malik SOUDED. The aim of an object tracking algorithm is to generate the trajectories of objects over time by locating their positions in every frame of a video. An object tracker may also provide the complete region in the image that is occupied …
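As a schematic illustration of the kind of output such a tracker produces (the type names are made up for this sketch, not taken from SUP):

    #include <map>
    #include <vector>

    // Image region occupied by the object in one frame.
    struct BoundingBox { int x, y, width, height; };

    // One trajectory: the tracked object's region in each frame where it is seen.
    struct Trajectory {
        int objectId;
        std::map<int, BoundingBox> positions; // key: frame number
    };

    // A tracker's output for a whole sequence.
    typedef std::vector<Trajectory> TrackingResult;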
ViSEvAl result comparison

The goal of this script is to compare different result files of the same sequence. To run the script, use the following command: python resultComparison.py <resultFile1.txt> … <resultFileN.txt> The result files must be ViSEvAl output files generated by the ViSEvAlEvaluation binary. The script displays useful information about the result files …
XML1 Viewer

XML1 Viewer is a Python script showing statistics and information about the XML1 output of SUP. To run the script, use the following command: python XML1Viewer.py <XML1file> With only an XML1 file as input, the script displays all the detected objects with the following statistics: number of frames, total …
Communication for project integration in collaborations
For the VICOMO project, I have the task of integrating our SUP platform with other partners' software; the constraint is that every partner has its own setup. For this task I am creating some code excerpts that will communicate XML code to a raw TCP server and to …
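A minimal sketch of that kind of exchange, assuming Qt's QTcpSocket and placeholder host, port and XML payload (illustrative only, not the actual VICOMO integration code):

    #include <QByteArray>
    #include <QTcpSocket>
    #include <QtDebug>

    int main()
    {
        // Placeholder endpoint; the real partner server is project-specific.
        QTcpSocket socket;
        socket.connectToHost("127.0.0.1", 5000);
        if (!socket.waitForConnected(3000)) {
            qWarning() << "Connection failed:" << socket.errorString();
            return 1;
        }

        // A made-up XML message, e.g. one detected object.
        QByteArray xml =
            "<object id=\"1\"><bbox x=\"10\" y=\"20\" w=\"50\" h=\"80\"/></object>\n";

        socket.write(xml);
        socket.waitForBytesWritten(3000);
        socket.disconnectFromHost();
        return 0;
    }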