
Recognition of Group Activities in Videos Based on Single- and Two-Person Descriptors

Stéphane Lathuilière, Georgios Evangelidis, Radu Horaud
IEEE Winter Conference on Applications of Computer Vision (WACV'17)


Abstract

Group activity recognition from videos is a very challenging problem that has barely been addressed. We propose an activity recognition method using group context. In order to encode both single-person descriptions and two-person interactions, we learn mappings from high-dimensional feature spaces to low-dimensional dictionaries. In particular, the proposed two-person descriptor takes into account geometric characteristics of the relative pose and motion between the two persons. Both single-person and two-person representations are then used to define unary and pairwise potentials of an energy function, whose optimization leads to a structured labeling of the persons involved in the same activity. An interesting feature of the proposed method is that, unlike the vast majority of existing methods, it is able to recognize multiple distinct group activities occurring simultaneously in a video. The proposed method is evaluated on datasets widely used for group activity recognition and is compared with several baseline methods.
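To make the energy-based labeling idea concrete, here is a minimal, self-contained sketch. All label names and potential values below are illustrative toys, not the paper's learned model: in the actual method the unary terms come from single-person descriptors and the pairwise terms from the two-person descriptors.

```python
from itertools import product

# Toy label set (illustrative only; not the paper's activity classes).
ACTIVITIES = ["walk", "talk", "queue"]

# unary[i][a]: cost of assigning activity a to person i (lower = better).
# In the paper these would be derived from single-person descriptors.
unary = [
    {"walk": 0.2, "talk": 0.9, "queue": 0.8},
    {"walk": 0.8, "talk": 0.3, "queue": 0.7},
    {"walk": 0.9, "talk": 0.2, "queue": 0.6},
]

def pairwise(label_i, label_j):
    # Toy interaction cost: persons that interact are encouraged to share
    # a label. The paper instead uses two-person geometric descriptors.
    return 0.0 if label_i == label_j else 0.5

def energy(labels):
    # E(y) = sum of unary potentials + sum of pairwise potentials.
    u = sum(unary[i][a] for i, a in enumerate(labels))
    p = sum(pairwise(labels[i], labels[j])
            for i in range(len(labels))
            for j in range(i + 1, len(labels)))
    return u + p

# Exhaustive minimization is tractable for this tiny instance; real
# structured-labeling problems use dedicated inference algorithms.
best = min(product(ACTIVITIES, repeat=len(unary)), key=energy)
print(best)
```

Note how the pairwise term changes the outcome: person 0's cheapest unary label is "walk", but the interaction cost pulls the joint minimum toward a single shared activity for the whole group.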

Publication



Download PDF
Recognition of Group Activities in Videos Based on Single- and Two-Person Descriptors.

Stéphane Lathuilière, Georgios Evangelidis, Radu Horaud.

IEEE WACV, 2017.

BibTeX:

@inproceedings{lathuiliere:hal-01430732,
  TITLE = {{Recognition of Group Activities in Videos Based on Single- and Two-Person Descriptors}},
  AUTHOR = {Lathuili{\`e}re, St{\'e}phane and Evangelidis, Georgios and Horaud, Radu},
  URL = {https://hal.inria.fr/hal-01430732},
  BOOKTITLE = {{IEEE Winter Conference on Applications of Computer Vision}},
  YEAR = {2017}
}

Results


A video is coming soon.

Acknowledgement


Funding from the European Union FP7 ERC Advanced Grant VHIA (#340113) is gratefully acknowledged.