Hui-yin Wu’s PhD Defense

Hui-yin Wu defended her PhD thesis, “Cinematic Discourse for Interactive 3D Storytelling”, at IRISA in room Métivier (C024) on Friday, October 7th, at 10AM.

PhD committee:

Ulrike Spierling, Professor, Hochschule RheinMain / Reviewer
Michael Young, Professor, University of Utah / Reviewer
Tim Smith, Senior Lecturer, Birkbeck, University of London / Examiner
Tsai-Yen Li, Professor, National Chengchi University / Examiner
Bruno Arnaldi, Professor, INSA / Examiner
Stéphane Donikian, Research Director, INRIA / Thesis advisor
Marc Christie, Associate Professor, Université de Rennes 1 / Thesis co-advisor

Abstract:

With the advancement of 3D virtual environments, cinematographic storytelling is no longer confined to the rails and cranes surrounding actual people and props. In games or interactive installations, the virtual camera can cut, manoeuvre, change perspective, or slow down along with the choreographed characters and 3D environments while user interactions take place. Yet there is currently a need for (1) mechanisms of authorial control that guide the audience’s perception towards the author’s intentions, (2) methods to extract and analyse cinematographic storytelling, which can in turn (3) be incorporated into dynamic environments for both interactive storytelling scenarios and smart pre-visualisation tools.

In this thesis, we present three contributions:

First, we address authorial control and cognition in interactive storytelling and dynamically generated non-linear narratives. We provide an authorial control mechanism over the logic of dynamic, non-chronological stories, and we then predict and evaluate viewer perceptions of these stories.

The remaining two contributions then move towards visual storytelling, focusing on virtual cinematography. The first of these is motivated by the need to extract knowledge from film practice. We design a query language coupled with a visualisation tool for the analysis of camera storytelling. The query language allows users to search and retrieve film sequences that fulfil a number of stylistic and visual constraints. Alongside this contribution is a film annotation tool, extended from the existing Insight annotation tool to incorporate the vocabulary of our query language.

Our final contribution addresses the problem of dynamicity and creativity in interactive 3D environments. We transform the query language from the previous contribution into a generative one for stories. Two sub-contributions are presented in this part. The first specifically addresses the problem of smart camera placement for stories where the plot changes dynamically according to user decisions. The second is a prototype interface that acts both as a virtual cinematographer (helping the user select shot compositions, apply existing camera styles to a sequence, and place virtual cameras) and as an editor (making cuts and rearranging shots in the sequence).
