Presentation

Overview

In August 2017, the motto of the ACM SIGGRAPH conference became “Enabling Everyone to Tell Their Stories”. Indeed, narrative content such as interactive games and animated movies is a major application domain for computer graphics, with implications in entertainment, education, cultural heritage, scientific communication and professional training. In those applications, the creation of 3D content cannot be limited to the production of shapes and motions; it must also include the steps needed to organize shapes and motions into compelling stories, using adequate staging, directing and editing. As a result, it becomes essential to conduct research in authoring and directing animated story worlds.

Theatre has been described as the art of creating a virtual reality on the stage, and compared to video games, which similarly create a virtual reality in real time on the computer screen. A new research topic for ANIMA will be to provide the necessary tools to author and direct computational story worlds that can serve as stage and screen for virtual reality storytelling. In previous work, we have already tackled the problem of generating responsive shapes such as virtual paper, i.e. surfaces that “act like paper”: they can be folded, crumpled and torn apart, even producing the same sounds as real paper. This work was not a faithful physical simulation of paper, but a computational mimicry of some of its important semantic and aesthetic properties. In future work, we would like to further develop the metaphor of a computer theatre of virtual props, actors and cameras that can be authored and directed by artists and domain experts to tell stories with computers.

In an animated story world, virtual props need affordances, i.e. actions that can be performed with them. Similarly, virtual actors need skills, i.e. actions that they can perform within the given virtual world. This opens new scientific problems: how do we represent the story computationally? How do we assemble virtual sets for a story? How do we assign characters and objects in the story to actors and props in the virtual world? How do we design virtual actors with the necessary skills, and virtual props with the necessary affordances? How do we sketch and sculpt the choreography of their movements and actions? To address these scientific problems, two central themes of the ANIMA team will be (i) to use the metaphor of movie storyboards as a multimodal depiction of a scene and to sketch entire stories from storyboards; and (ii) to use the metaphor of theatre rehearsals for iteratively improving the quality of virtual performances authored and directed with our systems.
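
To make the notions of skills and affordances more concrete, the following minimal sketch (in Python, with hypothetical class and function names that do not correspond to any existing ANIMA system) represents actors by the actions they can perform, props by the actions they afford, and greedily casts a list of story actions onto them. In practice, casting would also have to account for spatial constraints, timing and narrative roles; the sketch only illustrates the representation.

    from dataclasses import dataclass, field

    @dataclass
    class Prop:
        """A virtual prop and the actions it affords (hypothetical representation)."""
        name: str
        affordances: set = field(default_factory=set)   # e.g. {"pour", "grasp"}

    @dataclass
    class Actor:
        """A virtual actor and the actions it can perform."""
        name: str
        skills: set = field(default_factory=set)        # e.g. {"walk", "pour"}

    def cast(story_actions, actors, props):
        """Assign each story action to the first actor with the required skill and,
        when the action involves an object, the first prop that affords it."""
        assignments = []
        for action, needs_prop in story_actions:
            actor = next((a for a in actors if action in a.skills), None)
            prop = next((p for p in props if action in p.affordances), None) if needs_prop else None
            assignments.append((action, actor, prop))
        return assignments

    # Example: a tiny tea-ceremony story world
    actors = [Actor("host", {"walk", "pour", "bow"})]
    props = [Prop("teapot", {"pour", "grasp"}), Prop("cup", {"grasp"})]
    story = [("walk", False), ("pour", True), ("bow", False)]
    print(cast(story, actors, props))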

In this context, the research program for ANIMA focuses on developing computer tools for authoring and directing animated movies, interactive games and mixed-reality applications, using virtual sets, actors, cameras and lights.

This raises several scientific challenges. Firstly, we need to build a representation of the story that the user/director has in mind, and this requires dedicated user interfaces for communicating the story. Secondly, we need to offer tools for authoring the necessary shapes and motions for communicating the story visually, and this requires a combination of high-level geometric, physical and semantic models that can be manipulated in real-time under the user’s artistic control. Thirdly, we need to offer tools for directing the story, and this requires new interaction models for controlling the virtual actors and cameras to communicate the desired story while maintaining the coherence of the story world.

Understanding stories

Stories can come in many forms. An anatomy lesson is a story. A cooking recipe is a story. A geological sketch is a story. Many paintings and sculptures are stories. Stories can be told with words, but also with drawings and gestures. For the purpose of creating animated story worlds, we are particularly interested in communicating the story with words in the form of a screenplay or with pictures in the form of a storyboard. We also foresee the possibility of communicating the story in space using spatial gestures.

The first scientific challenge for the ANIMA team will be to propose new computational models and representations for screenplays and storyboards, together with practical methods for parsing and interpreting them from multimodal user input. To do this, we will reverse-engineer existing screenplays and storyboards, which are well suited to generating animation in traditional formats. We will also explore new representations for communicating stories with a combination of speech commands, 3D sketches and 3D gestures, which promise to be better suited to communicating stories in new media, including virtual, augmented and mixed reality.
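
As a purely illustrative example of what a computational storyboard representation could look like (the class names below are assumptions for this sketch, not the team’s actual model), a scene can be structured as a heading plus an ordered list of panels, each recording a framing, the subjects in frame and the depicted action. Parsing screenplays, sketches or gestures into such a structure is precisely the open problem stated above.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Shot:
        """One storyboard panel: framing plus the action it depicts (hypothetical)."""
        framing: str          # e.g. "close-up", "wide"
        subjects: List[str]   # characters or props visible in the frame
        action: str           # short description of what happens

    @dataclass
    class Scene:
        heading: str          # screenplay-style slug line, e.g. "INT. KITCHEN - NIGHT"
        shots: List[Shot]

    # A two-panel storyboard for a single scene
    scene = Scene(
        heading="INT. KITCHEN - NIGHT",
        shots=[
            Shot("wide", ["host", "guest"], "the host greets the guest"),
            Shot("close-up", ["teapot"], "tea is poured into the cup"),
        ],
    )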

Authoring story worlds

Telling stories visually creates additional challenges not found in traditional, text-based storytelling. Even the simplest story requires a large vocabulary of shapes and animations to be told visually. This is a major bottleneck for all narrative animation synthesis systems.

The second scientific challenge for the ANIMA team will be to propose methods for quickly authoring shapes and animations that can be used to tell stories visually. We therefore need to devise methods for generating shapes and shape families and understanding their functions, styles, material properties and affordances; for authoring animations for a large repertoire of actions that can be easily edited and transferred between shapes; and for printing and fabricating articulated and deformable shapes suitable for creating physical story worlds with tangible interaction.

Directing story worlds

Lastly, we aim to develop methods for controlling virtual actors and cameras in virtual worlds and for editing the resulting shots into movies, in a variety of situations ranging from 2D and 3D professional animation to virtual reality movies and real-time video games. Starting from the well-established tradition of the storyboard, we would like to offer new tools for directing movies in 3D animation, where the user is truly the director and the computer is in charge of the technical execution, using a library of film idioms. We would also like to explore new areas, including the automatic generation of storyboards from movie scripts for use by domain experts rather than graphic artists.
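
To illustrate what a library of film idioms might look like computationally, here is one minimal, hypothetical encoding of a classic dialogue idiom (shot/reverse-shot) as a reusable sequence of shot specifications whose abstract roles are bound to concrete actors at direction time; the names and fields are assumptions made for this sketch only.

    # A film idiom as an ordered list of shot specifications (hypothetical encoding).
    SHOT_REVERSE_SHOT = [
        {"framing": "over-the-shoulder", "subject": "A", "behind": "B"},
        {"framing": "over-the-shoulder", "subject": "B", "behind": "A"},
    ]

    def instantiate(idiom, casting):
        """Bind the idiom's abstract roles (here "A" and "B") to concrete actors."""
        return [{key: casting.get(value, value) for key, value in shot.items()}
                for shot in idiom]

    # Direct a dialogue between the host and the guest with the idiom
    shots = instantiate(SHOT_REVERSE_SHOT, {"A": "host", "B": "guest"})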

Our long-term goal will be to allow artists to simultaneously create geometry, animation and cinematography from a single sequence of storyboard sketches. This is a difficult problem in general, and it is important to choose application domains where we can demonstrate progress along the way. In unrestricted domains, we will focus on the more tractable problem of quickly creating complex camera movements from sketches, given a 3D scene and its animation. We will use the traditional conventions of storyboards to represent camera movements and compute suitable key pose interpolations.
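
As a deliberately simplified illustration of key pose interpolation, a camera pose can be blended between two storyboard key poses by linearly interpolating positions and spherically interpolating orientations (slerp); the function names and the quaternion convention below are assumptions for this sketch rather than a description of the team’s actual pipeline.

    import numpy as np

    def slerp(q0, q1, t):
        """Spherical linear interpolation between two unit quaternions."""
        q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
        dot = np.dot(q0, q1)
        if dot < 0.0:                      # take the shorter path on the quaternion sphere
            q1, dot = -q1, -dot
        if dot > 0.9995:                   # nearly parallel: fall back to normalized lerp
            q = q0 + t * (q1 - q0)
            return q / np.linalg.norm(q)
        theta = np.arccos(dot)
        return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

    def interpolate_camera(key_positions, key_orientations, t):
        """Return the camera pose at parameter t in [0, 1] along a list of key poses:
        positions are linearly blended, orientations are slerped between neighbours."""
        n = len(key_positions) - 1
        i = min(int(t * n), n - 1)         # index of the current segment
        u = t * n - i                      # local parameter within that segment
        pos = (1 - u) * key_positions[i] + u * key_positions[i + 1]
        rot = slerp(key_orientations[i], key_orientations[i + 1], u)
        return pos, rot

    # Example: dolly in from a wide establishing pose to a closer pose
    positions = [np.array([0.0, 1.6, 5.0]), np.array([0.0, 1.6, 1.5])]
    orientations = [np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.96, 0.0, 0.28, 0.0])]
    print(interpolate_camera(positions, orientations, 0.5))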
