Collaborations

CoBTeK

Cognition Behavior Technology

CoBTeK is our strategic partnership with all Université Côte d’Azur psychiatry departments to study the impact of video understanding approaches on cognitive disorders. Since its creation in January 2012, CoBTeK’s mission has been to develop clinical research on assessment and care based on new technologies, with a central theme of applying novel computer science technologies to the prevention, diagnosis, and treatment of neuropsychiatric pathologies. It is a multi-disciplinary unit in which computer scientists work alongside clinicians, structured as a single university team (Unité Propre de Recherche) of teaching clinician-researchers whose research covers children, adolescents, adults and seniors.

More information


MePheSTO

Digital Phenotyping for Psychiatric Disorders from Social Interaction

The vision of MePheSTO is to help lay the scientific groundwork for the next generation of precision psychiatry through AI-based social interaction analysis. By moving beyond isolated readouts from artificial laboratory settings, subjective patient self-reports, or even clinician-based assessments, the project aims to give clinicians quantitative measures to better understand and address psychiatric needs. A multi-site, multi-national, cross-sectional and longitudinal study is being conducted, collecting audiovisual and physiological data from psychiatric interviews.

More information


GAIN

Georgian Artificial Intelligence Networking and Twinning Initiative

GAIN takes a strategic step towards integrating Georgia, one of the Widening countries, into European efforts aimed at ensuring Europe’s leadership in artificial intelligence. It links the central Georgian ICT research institute, the Muskhelishvili Institute of Computational Mathematics (MICM), to the European AI research and innovation community, including DFKI and INRIA, two leading European research organizations, and the high-tech company EXOLAUNCH. Georgian colleagues become involved in the European partners’ research projects, which address a clearly delineated set of AI topics.

More information


StressID

Multimodal Dataset for Stress Identification

StressID is a dataset specifically designed for stress identification from multimodal data. It contains RGB facial video, audio and physiological signals (ECG, EDA, respiration). Different stress-inducing stimuli are used: emotional video clips, cognitive tasks and public speaking. The dataset consists of recordings from 65 participants, each of whom performed 11 tasks, and every task is labeled by the subjects in terms of stress, relaxation, valence and arousal. The experimental setup ensures synchronised, high-quality, low-noise data.
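
For illustration, the sketch below shows one hypothetical way a single StressID recording could be represented in code; the class and field names are assumptions made for clarity, not the dataset’s official loading API.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

# Hypothetical sketch of one StressID recording. Field names are illustrative
# assumptions; they do not reflect an official StressID data structure.
@dataclass
class StressIDRecording:
    participant_id: str           # one of the 65 participants
    task_id: str                  # one of the 11 stress-inducing tasks
    video: Optional[np.ndarray]   # RGB facial video frames, shape (T, H, W, 3)
    audio: Optional[np.ndarray]   # audio waveform samples
    ecg: np.ndarray               # electrocardiogram signal
    eda: np.ndarray               # electrodermal activity signal
    respiration: np.ndarray       # respiration signal
    # self-reported labels given by the subject for this task
    stress: int
    relaxation: int
    valence: int
    arousal: int
```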

More information


Toyota Smarthome

Real-World Activities of Daily Living

The Toyota Smarthome dataset was recorded in an apartment equipped with 7 Kinect v1 cameras at a resolution of 640×480, providing 3 modalities: RGB, depth and 3D skeleton. It contains common activities of daily living performed by 18 senior subjects aged 60 to 80. The 3D skeleton joints were extracted from the RGB frames, and the subjects’ faces are blurred to preserve privacy. Two versions of the dataset are currently provided: trimmed and untrimmed.
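
As a simple illustration of the two versions and three modalities, the sketch below shows one hypothetical way such a dataset could be organised and enumerated locally; the root path, directory layout and helper function are assumptions for demonstration, not the dataset’s official tooling.

```python
from pathlib import Path

# Illustrative sketch only: the layout below is an assumption, not the
# official Toyota Smarthome distribution format.
DATASET_ROOT = Path("toyota_smarthome")        # hypothetical local root
VERSIONS = ("trimmed", "untrimmed")            # the two provided versions
MODALITIES = ("rgb", "depth", "skeleton_3d")   # the three recorded modalities

def list_clips(version: str = "trimmed", modality: str = "rgb"):
    """Yield clip paths for one modality of one dataset version."""
    assert version in VERSIONS and modality in MODALITIES
    clip_dir = DATASET_ROOT / version / modality
    if clip_dir.exists():
        yield from sorted(clip_dir.iterdir())

if __name__ == "__main__":
    for clip in list_clips("trimmed", "rgb"):
        print(clip.name)
```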

More information


MultiMediate

Multimodal Analysis of Social Interaction for Artificial Mediation

Artificial mediators are a promising approach to support conversations, but at present their abilities are limited by insufficient progress in behavior sensing and analysis. The MultiMediate Grand Challenge is designed to work towards the vision of effective artificial mediators by facilitating and measuring progress on key social behavior sensing and analysis tasks. The challenge focuses on estimating behavior and engagement across different domains of social interaction; its tasks include eye contact detection, next speaker prediction, backchannel recognition, multi-domain engagement estimation and the newly proposed multi-domain behavior recognition. Organised in collaboration with DFKI, Augsburg University and the University of Stuttgart, this recurring challenge has taken place at the ACM International Conference on Multimedia (ACMMM) every year since 2021, each edition bringing improved evaluation protocols, novel tasks and new datasets.

More information


GDD

Generalizable Deepfake Detection

Manipulated images and videos, i.e., deepfakes, have become increasingly realistic due to the tremendous progress of deep learning methods. Such manipulations have triggered social concerns, necessitating robust and reliable methods for deepfake detection. Since convincing fake videos can now be created automatically with commercially available hardware, the need for automated detection techniques has become critical. Early detection techniques are based on semi-supervised learning and anomaly detection. In GDD, we aim to investigate the following questions. Why are detection methods not reliable for unknown manipulations? How can we improve feature learning techniques for generalized detection abilities? How can we detect unknown manipulations and perform incremental learning? Can we build deepfake detection methods without involving manipulated data? GDD is funded by Inria and CEFIPRA with €61,000 over 3 years.

More information


Verbalia

Transforming Digital Learning into an Experience

Verbalia is an AI-powered platform that enables users to create multilingual, customizable video instructors with lifelike or cartoon-style avatars. From a single image, users can generate avatars, choose voices, set backgrounds and produce content in over 100 languages. The platform supports real-time interaction and offers a no-code interface along with a powerful API for integration and bulk video generation. It is ideal for corporate training, education, and customer support, streamlining video production and reducing costs. Verbalia helps deliver engaging, accessible learning experiences quickly and efficiently. Verbalia is funded by Inria Startup Studio.

More information


Facila

Facial Analysis for Communication, Intonation, Language and Articulation

Facila revolutionizes voice learning with its innovative web app, making vocal training easy and accessible to everyone. The platform offers interactive exercises and personalized feedback, supporting users at all skill levels. The project is currently focusing on refining its core technology and user experience. Facila is funded by Inria Startup Studio with a grant of €150,000 for 1 year.

More information


ThinkSync

AI-Powered Guide for Personalized Decisions, Meaning and Growth

What if people could easily draw on the vast, time-tested wisdom of philosophy and sociology? ThinkSync makes this possible by using AI to analyze individual thought patterns and deliver personalized mottos, actionable insights, and a clear visualization of their unique belief system. By helping users access and apply these insights effortlessly, the app simplifies decision-making while reducing overthinking. Features like gamified engagement, guided reflection, and personalized goal-setting further empower users to track their growth and enhance well-being. The app also offers the option to access wellness counselling through text or video. ThinkSync is funded by Inria Startup Studio with a grant of €150,000 for 1 year.

More information