Past ERC EXPRESSIVE Seminars
|DATE||SPEAKER||TITLE|
|2016/10/25||Bernhard Thomaszewski||Computational Design Tools for the Age of Digital Fabrication
One of the most exciting aspects of digital fabrication is the promise of instant creativity: the immediate and effortless transition from an idea to its tangible realization. But in order for this technology to be useful for a broad range of creatives, we need software tools that can automate the technically difficult parts of the design process.
In this talk, I will argue for a new generation of design tools that rely on three key components: simulation models that are able to predict the functional aspects of a given design; optimization algorithms that can invert the simulation model in order to determine design parameters that lead to desired functional aspects; and graphical interfaces that integrate these two components to help users navigate the design space in an intuitive and efficient way. I will highlight a number of challenges that arise in this context and illustrate some key concepts on design tools for mechanical characters and physical surfaces.
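As a toy illustration of this simulate-and-invert loop (not taken from the talk: the cantilever model and every name and parameter below are illustrative assumptions), a forward "simulation" maps a design parameter to a functional property, and an optimizer inverts it to meet a user-specified target:

```python
# Hypothetical sketch of the simulate-and-invert design loop: a forward
# model predicts function from a design parameter, and optimization
# (here, simple bisection) inverts it to hit a target value.

def simulate_deflection(thickness, load=10.0, length=1.0, E=2.0e9):
    """Toy forward model: Euler-Bernoulli tip deflection of a cantilever
    beam with a square cross-section of the given thickness (meters)."""
    I = thickness ** 4 / 12.0                    # second moment of area
    return load * length ** 3 / (3.0 * E * I)    # tip deflection (meters)

def invert_design(target, lo=1e-3, hi=1e-1, tol=1e-9):
    """Find the thickness whose simulated deflection matches the target.
    Deflection decreases monotonically with thickness, so bisection works."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulate_deflection(mid) > target:
            lo = mid        # still too flexible: search thicker designs
        else:
            hi = mid        # stiff enough: search thinner designs
    return 0.5 * (lo + hi)

t = invert_design(0.001)    # design a beam with 1 mm tip deflection
```

Real design tools replace this scalar forward model with a full physical simulation and the bisection with gradient-based or derivative-free optimization over many coupled parameters, but the structure of the loop is the same.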
Bio: Bernhard Thomaszewski is a Research Scientist at Disney Research Zurich, where he leads the group on Computational Design and Digital Fabrication. He obtained his Master’s degree (Diplom) and PhD (Dr. rer. nat.) in Computer Science from the University of Tübingen. Prior to joining Disney Research, he was a postdoctoral researcher at ETH Zürich, where he is an adjunct lecturer today. Bernhard’s research interests are in the areas of computational design, digital fabrication, and computer animation. His current focus is on design tools that allow non-expert users to create functional content for digital fabrication devices.
|2016/07/08||Nils Thuerey||Data-driven Fluid Simulation
Physics simulations for virtual smoke, explosions or water are by now crucial tools for special effects in feature films. Despite their widespread use, central challenges remain in making these simulations controllable, fast enough for practical use, and believable.
In this talk I will explain the challenges of using simulations for special effects in games and movies, and then discuss recent research projects that aim to solve these challenges. In particular, a central part of this talk will be devoted to methods for interpolating fluid simulations. I will describe a method that uses 5D optical flow to register two space-time volumes of simulations. Once the registration is computed, in-between versions can be generated very efficiently. In contrast to previous work, this approach uses a volumetric representation, which is beneficial for smooth and robust registrations without user intervention.
I will show several examples of smoke and liquid animations generated with this interpolation method, and discuss limitations of the approach. The talk will be concluded by giving an outlook of open questions in the area.
Bio: Nils Thuerey is an Assistant Professor at the Technical University of Munich. He works in the field of computer graphics, with a particular emphasis on physically-based animation. One focus area of his research targets the simulation of fluid phenomena, such as water and smoke. These simulations find applications as visual effects in computer-generated movies and digital games. Examples of his work are novel algorithms to make simulations easier to control, to handle detailed surface tension effects, and to increase the amount of turbulent detail.
After studying computer science, Professor Thuerey acquired a PhD for his work on liquid simulations in 2006. He received both degrees from the University of Erlangen-Nuremberg. Until 2010 he held a position as a post-doctoral researcher at ETH Zurich, in collaboration with Ageia/Nvidia. Subsequently, he worked for three years as Research & Development Lead at ScanlineVFX, developing large-scale physics simulators for visual effects. He started at TUM in October 2013.
|2016/06/23||James Gain||Parallel, Realistic and Controllable Terrain Synthesis
The challenge in terrain synthesis for virtual environments is to provide a combination of precise user control over landscape form, with interactive response and visually realistic results. This talk covers a system that builds on parallel pixel-based texture synthesis to enable interactive creation of an output terrain from a database of heightfield exemplars. The system also provides modellers with control over height and surrounding slope by means of constraint points and curves; a paint-by-numbers interface for specifying the local character of terrain; and coherence controls that allow localization of changes to the synthesized terrain.
Together these contributions provide a level of realism that, based on user experiments, is indistinguishable from real source terrains; user control sufficient for precise placement of a variety of landforms, such as cliffs, ravines and mesas; and synthesis times of 165ms for a 1024^2 terrain grid.
Bio: James Gain is an Associate Professor in the Department of Computer Science at the University of Cape Town and a member of its High-Performance Computing Laboratory. He obtained his PhD entitled “Enhancing Spatial Deformation for Virtual Sculpting” in 2000 from the University of Cambridge. His research interests primarily involve geometric and procedural modelling. He has also worked on the application of high performance computing and visualization to computational chemistry, geomatics and radio astronomy.
|2016/05/26||Christian Jacquemin||Arts and science: examples in computer graphics and image processing, and critical analysis
I will first illustrate some of the issues in the art-science encounter by presenting several projects involving research collaboration between artists and scientists. To give a personal insight, some of them will be taken from my own experience combining computer graphics research with artists’ work. The others will be drawn from collaborations that I have studied, to outline the topic of each encounter and how it materialized into a production in each domain. In such a diverse field, critical approaches are needed to help consolidate it and to help newcomers step into the domain. And, because of the lack of strong institutional support, it is also necessary to highlight the potential of these collaborations, in order to promote and consolidate the domain through better support. The analysis will rely on a recent white paper about the future of this domain and the recommendations of some of its stakeholders. I will also say a few words about some of the art-science initiatives in Paris-Saclay: the VIDA theme at LIMSI-CNRS, the CURIOSITas festival, and the Diagonale Paris-Saclay calls and support for art-science projects.
Bio: Christian Jacquemin has been a Professor in Computer Science at the University of Paris-Sud since 2000. His current research interests include interactive 3D graphics and audio, advanced graphics rendering, image analysis for Augmented Reality, and applications to visual arts. He is involved in several collaborations on artistic applications of interactive graphics (theater, art installations, sound and graphic design…). He has worked with several artists and designers on the realization of augmented reality environments for art installations, theater plays, and multimedia performances. He has published in major conferences in Computational Linguistics, Information Retrieval, Information Visualization, Multimedia, and Digital and Performing Arts. Since 2012, he has been adviser for arts and culture at University Paris-Sud and coordinated the arts-science CURIOSITas festival in 2013 & 2014. This festival brings together artists, researchers and students in joint research and creation projects.
|2016/04/14||Rahul Narain||Adaptivity and Optimization for Physics-Based Animation
Many challenging problems in animation, from the wrinkling and creasing of thin sheets to multi-agent collision avoidance in large crowds, involve nonlinear interactions between a large number of degrees of freedom. Two key difficulties in such problems are the emergence of fine-scale features, which require a high-resolution discretization, and the presence of stiff nonlinear dynamics, which can be difficult to integrate robustly.
In my talk, I will describe techniques to enable efficient solutions of such simulation problems through the use of adaptive discretization techniques and novel optimization-based mathematical models. First, I will describe our work on adaptive remeshing for simulating thin materials such as cloth. Through remeshing, we can dynamically adapt the resolution of the simulation mesh as the simulation proceeds, focusing computational effort on regions containing emerging detail such as wrinkles and creases. Second, I will discuss optimization-based simulation techniques, in which the dynamics is reformulated in terms of potential energy minimization. This approach enables robust time integration of complex nonlinear systems such as pedestrian crowds and nonlinear elastic materials, while providing guarantees on stability and allowing recent advances in parallel numerical optimization to be leveraged. Both adaptive methods and optimization-based techniques yield substantial improvements in computational efficiency and robustness, making it practical to simulate complex systems to a high degree of fidelity.
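The optimization-based reformulation mentioned above can be sketched in miniature. The following is the standard incremental-potential form of implicit Euler, not the speaker's specific systems; the spring constants and the solver loop are illustrative assumptions:

```python
# Hedged sketch: implicit Euler recast as energy minimization. Each step solves
#   x_{n+1} = argmin_y  m/(2 h^2) * (y - (x_n + h v_n))^2 + E(y)
# for a single 1D spring with elastic energy E(y) = k y^2 / 2.

def step(x, v, h=0.01, m=1.0, k=100.0):
    x_hat = x + h * v                      # inertial prediction
    # Minimize the incremental potential with preconditioned gradient
    # descent (a production solver would use Newton's method on the
    # full system of all degrees of freedom).
    y = x_hat
    H = m / (h * h) + k                    # exact Hessian (1D, constant)
    for _ in range(20):
        grad = m * (y - x_hat) / (h * h) + k * y
        y -= grad / H
    v_new = (y - x) / h                    # recover the implicit-Euler velocity
    return y, v_new
```

Because each time step is the minimization of a well-defined incremental potential, line search and modern parallel optimizers apply directly, which is the source of the stability guarantees for stiff nonlinear systems.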
Bio: Rahul Narain is an assistant professor at the University of Minnesota. He received his Ph.D. in Computer Science from the University of North Carolina at Chapel Hill and was a postdoctoral scholar at the University of California, Berkeley. His research interests lie in physics-based animation and computer graphics, particularly focusing on numerical methods and mathematical models for animating complex multiscale phenomena.
|2016/03/31||Eugene Fiume||Procedural Speech Synchronization for Facial Animation
The expressive animation of faces is among the most difficult problems in computer animation, and the mapping of speech to an animated face is the most difficult sub-problem of facial animation. The prevailing approaches to speech synchronization are either based on performance capture, which is fundamentally difficult to modify, or blend-shape animation, which requires deep animation expertise.
I will argue that procedural facial animation techniques have a significant place in facial animation workflow, and I will demonstrate some new ideas on the automatic generation of facial animation given only a soundtrack and a textual transcription that requires very little expertise from the animator. The results of our process are convincing animations in themselves that can be further edited by more expert animators.
Bio: Eugene Fiume is Professor and past Chair of the Department of Computer Science at the University of Toronto, where he also co-directs the Dynamic Graphics Project. He is Director of the Masters of Science in Applied Computing programme, Principal Investigator of a $6M project on the construction of a digital media and systems lab, and is a Fellow of the Royal Society of Canada.
|2016/02/04||Ariel Shamir||Creating visual stories
Similar to text, the amount of visual data in the form of videos and images is growing enormously. One of the key challenges is to understand this data, arrange it, and create content which is semantically meaningful. In this talk I will present several such efforts to “bridge the semantic gap” using humans as “agents”: capturing and utilizing eye movements, body movement or gaze direction, or following a storyline. This enables re-editing of existing videos, tracking of sports highlights, creating a coherent video from multiple sources, or creating visual stories.
Bio: Ariel Shamir is a Professor at the School of Computer Science at the Interdisciplinary Center in Israel, where he currently also serves as the vice-dean. Prof. Shamir received his Ph.D. in computer science in 2000 from the Hebrew University in Jerusalem. He spent two years at the center for computational visualization at the University of Texas in Austin. Prof. Shamir has numerous publications in journals and international refereed conferences, and a number of patents. He has broad commercial experience working with and consulting for various companies including Mitsubishi Electric, Disney, PrimeSense (now Apple) and more. He is an associate editor for the ‘Transactions on Visualization and Computer Graphics’ and ‘Graphical Models’ journals. Prof. Shamir specializes in geometric modeling, computer graphics, image processing and machine learning.
|2015/12/17||Lynda Hardman||Creating video stories
People have been communicating through stories and narratives since before the emergence of written historical records. Narrative continues to form the basis of modern forms of video-based communication, such as feature films, documentaries and news items. In the digital age, video search engines retrieve and present video clips as a ranked list. The viewer is forced to deduce the relations among the clips to infer a coherent story from them. This leads to unnecessary cognitive effort that does not directly satisfy the viewer’s information need.
We describe an example system, Vox Populi, which uses a specific discourse structure, in this case a discourse model of argumentation, to create coherent video sequences from richly annotated video clips. Our longer-term goal is to be able to assemble relevant video clips from a wider range of topics in a wider range of discourse structures into a coherent sequence.
We give example discourse models that have been incorporated into (video sequence) generation systems and explain our research vision. What can we achieve in the short term and what are our next steps for getting there?
Lynda is president elect of Informatics Europe, and was designated as an ACM Distinguished scientist in 2014. She obtained her PhD from the University of Amsterdam in 1998 on modelling and authoring hypermedia documents, having graduated in Mathematics and Physics from Glasgow University in 1982. During her time in the software industry she was the development manager for Guide, the first hypertext authoring system for personal computers (1986).
|Expressive Cinematography (see seminar website)|
|2015/11/12||Laurent Grisoni||An HCI view of sketch-based interaction
Prior to writing, in human history as in human life, drawing is one of the most common ways to express information. It is one of the very few that can be largely understood across languages and cultures. We live in a society that is more and more structured around pictures, and drawing remains an important means of expression, from sketches to technical drawings, from simple diagrams to artworks. The ICT community has already built tools that allow drawing in digital environments, yet we still derive very little benefit from the rich semantics a drawing can contain, even a simple one. Can we allow users to control an application through drawing? Toward which usages? This presentation gathers a few elements about this problem, relating them to recent and ongoing work of the team.
|Expressive Cinematography (see seminar website)|
|2015/10/01||Adrien Bousseau||Computer Drawing Tools for Assisting Learners, Hobbyists, and Professionals
Drawing is a powerful support for creation and communication. Drawing provides a direct connection between an idea and its visual representation, which is why it is a popular tool among professional designers. Drawing is also an enjoyable hobby among amateurs, but the significant learning curve prevents many of us from feeling confident in our ability to draw. In this talk I will present three projects that aim at:
– Helping learners practice traditional drawing techniques and improve their drawing skills
– Helping professional industrial designers recover 3D information from their drawings, in particular surface normals for automatic shading
– Helping hobbyist jewelry makers create and fabricate new designs from drawings
I will use these three projects to illustrate a common methodology to observe how artists work, deduce artistic principles from these observations, and implement these principles as algorithms.
Adrien Bousseau does research on image creation and manipulation, with a focus on drawings and photographs. Most notably, he has worked on image stylization, image editing and relighting, vector graphics, and sketch-based modeling.
|2015/09/10||Ludovic Hoyet||Perception of Biological Human Motion: Towards New Perception-Driven Virtual Character Simulations
While virtual characters are becoming more and more popular, especially in the entertainment industry (e.g., in video games and movies), the complexity of human motion still raises many challenges, mainly related to computation time, naturalness and controllability. As we have yet to find solutions providing a good balance between all these simulation parameters, current approaches usually either reuse motions captured on real actors or simulate movement using physical equations. Approaches based on motion capture retain the naturalness of the performance but require numerous motions for controllability, while physics-based simulations are computationally expensive and do not necessarily produce perceptually correct motions. These approaches have one thing in common, however: they seldom take the viewer’s perception into account when animating virtual characters. In this talk, I will present some of our work exploring the factors that lead human motion to be perceived as natural. Amongst others, we have been exploring the factors that make human motion recognizable and appealing (of great value when realistic motion is required) and how time and physical manipulations can affect the perceived naturalness of causal interactions between virtual characters. We are now working on incorporating further such perceptual insights into the animation of virtual characters, in order to provide a good balance between computation time, naturalness and controllability of the simulations.
|2015/07/03||Michiel van de Panne||Animation Potpourri: New Models for Animated Vector Graphics, Motion Optimization, and Data-driven Animation
Animation research draws on a rich variety of mathematical, computational, and data-driven methods,
as well as artist-driven creativity. We showcase this diversity by presenting a variety of recent
and ongoing work, including: topological modeling for vector graphics that cleanly models time-varying topology;
explorations of feasible human athletic motions using optimization; reinforcement learning for developing
highly dynamic terrain traversal skills for running dogs and bipeds; and motion-capture-based models for realizing
natural task-specific stepping behaviors.
|2015/06/05||Henri Gouraud||Histoire de l’ombrage de Gouraud
This talk presents Gouraud shading from a historical perspective, from the genesis of this now well-known model to its practical implementation.
|2015/05/28||Jean-Michel Dischler||Procedural texturing from Example
Providing efficient solutions for rendering detailed realistic
environments in real-time applications, like games or flight/driving
simulators, has always been a major focus in computer graphics. Details
can be efficiently rendered using textures. But despite improvements of
acquisition technology, graphics hardware, memory capacity and data
streaming techniques, which allowed over the recent years for increased
scene complexity, creating and rendering efficiently textures remain
challenging issues. In this talk I’ll present our recent solutions for
creating (semi) procedural textures from single images. Two techniques
are presented. The first method reconstructs “noise” textures from
exemplars. It uses a novel noise representation called, local random
phase (LRP) noise. We show that this representation allows us to account
for some types of random structures. The second method manages
multi-scale detail transitions by using one input image per represented
scale. Undersampling artifacts are avoided by accounting for fine-scale
features while colors are transferred at runtime between scales. Details
are enlarged using fixed shaped patches with exchangeable contents.
|2015/04/23||Paul Kry||Balancing Speed and Fidelity in Physics Based Animation and Control
In this talk I will give an overview of work I have done over the years
exploring physically based simulation of contact, deformation, and
articulated structures where there are trade-offs between computational
speed and physical fidelity that can be made. I will also discuss
examples that mix data-driven and physically based approaches in
animation and control.
|2014/12/04||Pascal Bouchez||De la trace vidéographique du spectacle vivant à celle du “spectacle des répétitions” : retour sur expérience et perspectives
Is “theatrical creation” just the theatrical performance itself, or even the full range of public performances of a show? Doesn’t a more accurate, deeper and more contextualized view lead us instead to consider a succession of closed and open “rehearsals”, a rhythmic process that itself encompasses the construction of a show, from the first rehearsal to the last performance? If so, what might the audiovisual memory of this “extended”, singular and, in some respects, “outsized” process be? What type of archiving can account for it? With what kind of apparatus?
|2014/12/01||Karan Singh||Perception, Drawing and 3D
I will present recent research and open challenges in the perception of shape from sketch input. Specifically, I will address the complete pipeline from 2D strokes to a 3D model using two systems presented at SIGGRAPH 2014: true2form: 3D curve networks from 2D sketches and Flow Complex based surfacing of 3D curves.
|2014/11/13||Bob Sumner||Character depiction, posing and synthesis
Since the days of Walt himself when Disney animation was first invented, the technical limitations of the animation pipeline have had a strong influence on the style of animation that can be achieved in a professional production environment. Disney Research Zurich’s Animation Group strives to bypass technical barriers in the production pipeline with new algorithms that expand the designer’s creative toolbox in terms of depiction, movement, deformation, stylization, control, and efficiency. In addition to empowering our expert designers, we also consider the other end of the spectrum. Disney’s youngest customers have a huge creative spark of their own, and new technology can unlock new forms of creative play. In this talk, I will highlight research in the area of character depiction, animation, posing, and synthesis that is aimed at amplifying the creativity of our artists and customers. In particular, I will show OverCoat, our research platform for new looks, our rig-space physics system for secondary motion, and our work on sketch-based abstraction for character posing and synthesis.
|2014/10/23||Tamy Boubekeur||Spatial, Statistical and Morphological Analysis for 3D Shape Modeling
In this talk, I will present our recent advances in the field of 3D shape modeling, with a specific focus on geometric analysis. In particular, I will describe several complementary approaches to analyze captured 3D data, to process them automatically or to interact with them efficiently. I will start by presenting a new approach to shape approximation based on a new representation called sphere-meshes. Then I will discuss how auto-similarity can play a major role in both signal-inspired processing and interactive modeling. Last, I will present a complete morphological analysis framework designed to act directly on raw captured point clouds. I will conclude with elements of comparison between these different analysis modalities and discuss possible research directions.
|2014/10/09||Wenzel Jakob||Capturing and simulating the interaction of light with the world around us
Driven by the increasing demand for photorealistic computer-generated images, graphics is currently undergoing a substantial transformation to physics-based approaches which accurately reproduce the interaction of light and matter. Progress on both sides of this transformation — physical models and simulation techniques — has been steady but largely independent. When combined, the resulting methods are in many cases impracticably slow and require unrealistic workarounds to process even simple everyday scenes. My research lies at the interface of these two research fields; my goal is to break down the barriers between simulation techniques and the underlying physical models, and to use the resulting insights to develop realistic methods that remain efficient over a wide range of inputs.
I will cover three areas of recent work: the first involves volumetric modeling approaches to create realistic images of woven and knitted cloth. Next, I will discuss reflectance models for glitter/sparkle effects and arbitrarily layered materials that are specially designed to allow for efficient simulations. In the last part of the talk, I will give an overview of Manifold Exploration, a Markov Chain Monte Carlo technique that is able to reason about the geometric structure of light paths in high dimensional configuration spaces defined by the underlying physical models, and which uses this information to compute images more efficiently.
Wenzel Jakob is a Marie Curie Postdoctoral Fellow at ETH Zurich in the Institute for Visual Computing. He obtained his Ph.D. in 2013 under the supervision of Dr. Steve Marschner at Cornell University and conducted his undergraduate studies at the Karlsruhe Institute of Technology. Wenzel’s experience includes research and development work at Disney Research Zurich and Weta Digital, and he is the lead developer of Mitsuba, a research-oriented open source rendering system that has become a popular research platform in rendering and appearance modeling.
|2014/07/21||Mariët Theune, Nicolas Szilas, Ulrike Spierling, Paolo Petta, Remi Ronfard||Expressive storytelling seminar (one day)
Speakers and talk titles:
Mariët Theune → “Generating Narratives in Natural Language”.
Nicolas Szilas → “Structures, paradoxes, graphs and simulations (in, for, and of narrative)”. Optional subtitle: A formal exploration of paradoxical deep narrative structures.
Ulrike Spierling → Exploring the Characteristics of a “Rashomon Effect”.
Paolo Petta → Title and information to be announced.
Remi Ronfard → “Where story and media meet: computer generation of narrative discourse”.
|2014/07/03||Mark Finlayson||Learning Narrative Structure from Annotated Stories
Narrative structure is a ubiquitous and intriguing phenomenon. By
virtue of this structure humans are able to perform high-level
processing of stories necessary for deep understanding; for example, we
recognize the presence of ‘villainy’ or ‘revenge’, even if those words
are not actually present in the text. Narrative structure is an anvil
for forging new artificial intelligence and machine learning techniques,
and is a window into abstraction and conceptual learning as well as into
culture and its influence on cognition. I will discuss various
components of my approach to learning narrative structure automatically.
First, I will present Analogical Story Merging (ASM), a new machine
learning algorithm for extracting plot patterns from sets of stories. I
demonstrate, for the first time, the learning of a theory of narrative
structure from text: ASM can learn a substantive portion of Vladimir
Propp’s influential theory of the structure of folktale plots. Second,
I will discuss the Story Workbench, a general-purpose linguistic text
annotation tool that supports the semi-automatic markup of over twenty
different syntactic and semantic representations that I have used to
assemble the largest and most deeply-annotated narrative corpus to date.
Third, I will briefly outline some hot-off-the-press work, a new
computational linguistic representation, StateML, that comprises the
missing piece necessary for the textually-grounded representation and
reliable extraction of “who does what to whom”, a necessary component of
a complete semantics of texts and narratives.
Dr. Mark Finlayson is a Research Scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.
|2014/06/17||Matthias Teschner||Particle-based Fluid Simulation
Particle-based representations have been established as one of the major
concepts for fluid animation in computer graphics. While particles
initially gained popularity for interactive free-surface scenarios, they
have since matured into a fully fledged technique for state-of-the-art fluid
animation with versatile effects. Nowadays, complex scenes with millions
of particles, one- and two-way coupled rigid and elastic solids,
multiple phases and additional features such as foam or air bubbles can
be computed at reasonable expense. This presentation summarizes the
state of the art in particle-based fluids and discusses potential future research directions.
Matthias Teschner is professor of Computer Science and head of the Computer Graphics lab at the University of Freiburg.
|2014/06/05||Melina Skouras||Design and Fabrication of Deformable Objects
Deformable objects have a plethora of applications: they can be used for entertainment, advertisement, engineering or even medical purposes. However, designing custom deformable objects remains a difficult task: their creator must foresee and invert the effects of external forces on the behavior of the figure in order to make the proper design decisions. In this talk, I will present novel approaches based on physics-based simulation and inverse optimization techniques which alleviate these difficulties, and propose a complete framework to design custom deformable objects by automating some of the most tedious aspects of the design process. This framework is tailored to the design of various objects such as rubber balloons, skins for animatronic figures and custom actuated characters, for which optimization of diverse variables including rest shape, materials and actuation system is considered in turn. We validate our method by fabricating representative sets of physical prototypes designed with our method and comparing them to the results predicted by simulation.
Melina Skouras is currently finishing her PhD at the Computer Graphics Laboratory of ETH Zurich under the supervision of Prof. Dr. Markus Gross, and in collaboration with Disney Research Zurich. Her thesis focuses on the development of novel algorithms for the design of custom deformable objects. She received her Master’s degree in Computer Science and Applied Mathematics from ENSIMAG, INP Grenoble, in 2004. Before coming to Switzerland, she worked for five years at Dassault Systemes, in the CATIA Geometric Modeler team.
|2014/05/15||Boris Thibert||Flat torus and smooth fractals
In the mid-1950s, J. Nash and N. Kuiper showed that it is possible to isometrically embed a flat torus in three-dimensional Euclidean space. This result was counter-intuitive, since the Gauss curvature prevents such embeddings from being of class C^2. The maps of Nash and Kuiper are of class C^1, which implies in particular the existence of a tangent plane at each point of the embedding. Based on a technique invented by M. Gromov, convex integration theory, we have been able to build an isometric embedding of the square flat torus in the ambient space and to partly understand its paradoxical geometry. This work was done within the Hévéa team.
Boris Thibert is an assistant professor in applied mathematics at the Université de Grenoble, in the Laboratoire Jean Kuntzmann. He received his PhD from the Université Lyon 1 in 2003 and was a postdoctoral researcher at the Buck Institute for Age Research in California in 2003-2004.
|2014/03/28||Olga Sorkine|| Reality-inspired constraints for shape modeling and editing
Digital shapes can be turned into physical objects using modern manufacturing processes, today easier than ever thanks to the advancements in 3D printing. Current digital modeling tools, however, often do not produce reality-ready 3D models: the shapes might look great as virtual objects, but be riddled with problems that prevent their direct manufacturing in practice, such as self-intersections, structural instability, imbalance and more. These problems are usually removed through a tedious, iterative post-process involving repeated simulations and manual corrections. In this talk, I will show that incorporating some physics laws directly into the interactive modeling framework can be done inexpensively and is beneficial for geometric modeling: while not being as restrictive and parameter-heavy as a full-blown physical simulation, this allows users to creatively model shapes with improved realism and to use them directly in fabrication.
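As a toy illustration of the kind of reality constraint the talk advocates (an invented example, not the speaker's actual framework), consider static balance: an object resting on the ground is stable only if its center of mass projects onto its support region. A minimal 1D sketch, with made-up numbers:

```python
def stands_upright(com_x, support_min_x, support_max_x):
    """Toy 1D balance test: a rigid object resting on the ground is
    statically stable only if its center of mass lies horizontally
    within its support interval. Illustrative only; a real modeling
    tool would check the full 3D support polygon and much more."""
    return support_min_x <= com_x <= support_max_x

print(stands_upright(0.4, 0.0, 1.0))   # → True  (balanced)
print(stands_upright(1.3, 0.0, 1.0))   # → False (tips over)
```

A modeling tool can evaluate such a cheap check interactively, after every edit, instead of deferring it to a full physical simulation.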
Olga Sorkine-Hornung is an Assistant Professor of Computer Science at ETH Zurich, where she leads the Interactive Geometry Lab at the Institute of Visual Computing. Prior to joining ETH she was an Assistant Professor at the Courant Institute of Mathematical Sciences, New York University (2008-2011). She earned her BSc in Mathematics and Computer Science and PhD in Computer Science from Tel Aviv University (2000, 2006). Following her studies, she received the Alexander von Humboldt Foundation Fellowship and spent two years as a postdoc at the Technical University of Berlin. Olga is interested in theoretical foundations and practical algorithms for digital content creation tasks, such as shape representation and editing, artistic modeling techniques, computer animation and digital image manipulation. She also works on fundamental problems in digital geometry processing, including reconstruction, parameterization, filtering and compression of geometric data. Olga received the EUROGRAPHICS Young Researcher Award (2008), the ACM SIGGRAPH Significant New Researcher Award (2011), the ERC Starting Grant (2012), the ETH Latsis Prize (2012) and the Intel Early Career Faculty Award (2013).
|2014/02/27||Jernej Barbic||Model reduction for elasticity problems in computer graphics and animation
In nature, there are many dynamical systems that are reasonably well-understood from the viewpoint of physics, but are computationally slow to simulate with detailed models. The observed transients in many simulations are, however, often not very rich, but exhibit primarily simple, low-dimensional dynamics. Such observations form the cornerstone of model reduction, an interdisciplinary technique used in electrical and aerospace engineering, as well as solid and fluid mechanics, and computer graphics. In my career, I have studied how model reduction can be applied to elasticity problems in computer graphics and animation. I will present my work on model reduction applied to fast simulation, optimal control, optimization, collision detection, sound simulation and interactive design of geometrically and materially complex models undergoing large deformations.
Jernej Barbic is an assistant professor of computer science at USC, working in the field of computer graphics and animation. In 2014, he was named a Sloan Fellow. In 2011, MIT Technology Review named him one of the Top 35 Innovators under the age of 35 in the world (TR35). Jernej is the author of Vega FEM, a free C/C++ software physics library for deformable object simulation. He received his Ph.D. from CMU, and did postdoctoral research at MIT CSAIL. His research interests include computer graphics, animation, fast physics, special effects for film, medical simulation, FEM deformable objects, haptics, sound simulation, and model reduction and control of nonlinear systems. Jernej is an NSF CAREER Award winner and holds a Viterbi Early Career Chair position at USC.
|2014/02/20||Chris Wojtan||Compensating for Defects in Geometric Models and Liquid Surfaces
Modern algorithms for geometry processing and physics simulation can achieve amazing results, but they often impose so many constraints on their inputs that they become impractical. We consider two problems where modern algorithms are particularly intolerant of imperfect surface geometry: geometric modeling with topology changes, and liquid surface tracking. In the first part of this talk, I will discuss the challenges in geometric modeling with triangle meshes that exhibit defects like holes and self-intersections. Traditional constructive solid geometry techniques for merging together or splitting apart these shapes will fail, because the meshes do not represent a solid object. We present a simple method for interactively changing the topology of such meshes in a modeling context, and we discuss success and failure cases of the strategy.
The second part of this talk will focus on a recent development in liquid animation. Previous methods for simulating liquid surfaces result in alarming visual artifacts when the surface is more detailed than the simulator's velocity field. We provide theoretical insight for the source of these artifacts, and we derive a novel error metric for measuring the extent of the nonphysical errors. We can then use this error metric as an objective function to facilitate physically-inspired surface fairing, or we can use it as a potential energy to produce error-reducing wave dynamics.
The subjects in this talk represent recently-published work from SIGGRAPH 2013.
Chris Wojtan received his B.S. in Computer Science in 2004 from the University of Illinois at Urbana-Champaign and his Ph.D. in Computer Graphics from the Georgia Institute of Technology in 2010. He was a visiting researcher at Lawrence Livermore National Laboratory, Carnegie Mellon University, and ETH Zurich. He was awarded a National Science Foundation Graduate Research Fellowship, the Georgia Tech Sigma Xi Best Ph.D. Thesis Award, and the Microsoft Visual Computing Award. Chris is currently an Assistant Professor at the Institute of Science and Technology Austria (IST Austria), and his research interests are physically-based animation and geometry processing.
|2013/12/12||Bedrich Benes||Inverse procedural modeling
Procedural modeling has proven to be a powerful set of algorithms and techniques, and it has been used to generate a wide variety of objects and effects. However, defining procedural models is a tedious and non-intuitive task that is usually done either by experts or by trial and error. In this presentation, we will show some of our results in the field of inverse procedural modeling, where we attempt to find a procedural representation of an existing object or scene. Various examples of learning models from biology, urban models, and other procedural representations will be presented.
Dr. Bedrich Benes completed his Master's and Doctoral degrees in Computer Science at the Czech Technical University in Prague. He has been at Purdue University since 2005, where he directs the High Performance Computer Graphics Laboratory. His research focuses on 3D modeling, (inverse) procedural modeling, 3D printing, simulation of natural phenomena, and erosion simulation. He serves as an associate editor of Computers & Graphics and Computer Animation and Virtual Worlds. He has regularly served on the IPCs of SIGGRAPH and Eurographics.
|2013/11/14||Paul Kry||Grasping Motion
Realistic animation of human hands involves a collection of problems, including the simulation of contact and control, as well as the rendering of deformation and appearance changes. In this talk, I will present a number of techniques that address these challenges. I will first describe a method for one-handed, task-based manipulation of objects that uses a mid-level multi-phase approach to break the problem into three parts. Exact trajectories are not specified as part of the task; instead, the final orientation is set as the goal. We look at the cases of a dial and of a ball held in the hand, and learn policies that can be evaluated in real time, providing appropriate controllers to perform the desired task. I will also present a new method for simulating compliant articulated structures using an approximate model that focuses on endpoint interaction. The reduced model is fast, and computation of the full structure's state may be parallelized. Our technique can deal with complex chains, including loops, and is suitable for grippers in robotic simulation, or for physics-based characters under static proportional-derivative control. With respect to appearance, I will present a data-driven model of fingertip appearance that can be integrated into interactive simulations. Contact on a finger pad results in deformation that redistributes blood within the fingertip tissue in a manner correlated with the pressure. We model this with changing hemoglobin concentrations, which permits the model to be easily transferred across different fingers and different people. Finally, wrinkles are also an important feature on hands, and for character animation in general. I will present a method that combines high-resolution thin shells with coarse finite element lattices and defines frequency-based constraints, which allow the formation of wrinkles with properties matching those predicted by physical parameters.
Paul Kry is an assistant professor in the School of Computer Science at McGill University.
|2013/10/31||James Gain||Procedural Modelling
Procedural methods provide a powerful means of creating realistic models of natural phenomena, requiring little or no user intervention. In practice, however, users often do need to intervene so as to guide the overall process and even provide specific constraints, such as the angle of a particular tree branch or the silhouette line in a landscape. Painting and sketching interfaces are increasingly used for this purpose, but they need to be adapted to the challenges of procedural modelling, particularly in capturing variation among instances and features across multiple scales. In this talk we will present a set of guidelines for developing such interfaces, illustrated with examples from procedural terrain, city, tree and cloud generation.
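As a minimal example of the kind of procedural method discussed (purely illustrative; the rewriting rule below is invented and is not a system from the talk), an L-system grows plant-like structures by rewriting every symbol of a string in parallel:

```python
def l_system(axiom, rules, iterations):
    """Expand an L-system: on each iteration, every symbol with a rule
    is replaced by its expansion, all symbols rewritten in parallel.
    Symbols without a rule (here '+' and '-') are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = ''.join(rules.get(c, c) for c in s)
    return s

# Hypothetical Koch-curve-like rule: each forward segment F sprouts a bump.
print(l_system('F', {'F': 'F+F-F-F+F'}, 1))  # → 'F+F-F-F+F'
```

The resulting string is then typically interpreted by a turtle renderer; guiding such rules via sketching and painting is exactly the interaction challenge the talk addresses.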
James Gain is an Associate Professor in the Department of Computer Science at the University of Cape Town and a member of its Collaborative Visual Computing Laboratory. He obtained his PhD entitled “Enhancing Spatial Deformation for Virtual Sculpting” in 2000 from the University of Cambridge. His research interests cover primarily geometric and procedural modelling. He has also worked in the application of high performance computing and visualization to computational chemistry, geomatics and astronomy. James Gain has served two terms as President of Afrigraph, the African Computer Graphics Association.
|2013/09/23||Joaquim Jorge||Adding more than two dimensions to tabletop interfaces. Is Tony Stark home ?
Work on interactive tabletops and surfaces has focused mostly on two-dimensional issues, such as multi-finger gestures and tangible interaction. Interesting as it is, however, this picture is missing several dimensions. I will describe work on 2D and 3D semi-immersive environments and present novel on-and-above-the-surface techniques, based on bi-manual models, that take advantage of the continuous interaction space for creating and editing 3D models in stereoscopic environments. I will also discuss means to allow for more expressive interactions, including novel uses of sound and the continuous combination of hand and finger tracking in the space above the table with multitouch gestures on its surface. These combinations can provide alternative design environments and allow novel interaction modalities.
Joaquim Jorge is a Professor at Instituto Superior Técnico (IST/UTL), the School of Engineering of the University of Lisboa, Portugal, where he teaches User Interfaces and Computer Graphics. He received PhD and MSc degrees in Computer Science from Rensselaer Polytechnic Institute, Troy, NY, in 1995. He is Editor in Chief of Computers & Graphics Journal and a member of the ERCIM Editorial Board. He is a senior member of ACM/SIGGRAPH and IEEE Computer Society as well as Portuguese national representative to IFIP's TC13 (Human Computer Interaction). He also served on the EG Education Board from its inception in 2001 until 2011. Joaquim Jorge's interests are in Calligraphic and Multimodal User Interfaces, Visual Languages and Pattern Recognition techniques applied to Human-Computer Interaction. He was elected Fellow of the Eurographics Association in 2010.
|2013/07/11||Frédéric Cordier||Inferring 3D curves from sketches
We will describe two methods for reconstructing 3D curves from their 2D sketch. Both methods take as input a hand-drawn sketch and generate a set of 3D curves whose orthogonal projection matches the input sketch. In the first method, we assume that the reconstructed curves are mirror-symmetric; in the second method, the reconstructed curves are piecewise helices.
Frederic Cordier is an associate professor at the Université de Haute-Alsace.
|2013/07/04||Karan Singh||Pose centric animation: support for a primitive artform
Character pose is central to how we perceive and communicate human animation. While pose sequences have conveyed human action since early cave drawings, support for pose-centric motion editing in current practice is still quite primitive. The root of the problem is that while pose sequences in isolation are a worthy motion abstraction, in detailed motion they are intricately coupled with timing and interaction with the environment. Professional animation creation often starts out as pose-centric, but the lack of explicit support for poses in keyframe animation systems causes moving characters to quickly degenerate into a spaghetti of motion curves and environmental constraints. This makes further use of poses for motion refinement, editing, or reuse nearly impossible. This talk will illustrate the problem and present approaches to interactive pose control, as well as solutions that enable the preservation of pose under variations in timing and environment.
Karan Singh is a Professor in Computer Science at the Univ. of Toronto.
|2013/06/20||Julien Pettre||Velocity-based Models for Microscopic Crowd Simulation
A microscopic crowd simulation is based on a model of interaction between agents, which describes how they influence each other's motion. The simulation then combines these interactions, and the result is the motion of the whole crowd. At the macroscopic scale, specific traffic conditions and patterns are expected to emerge from the simulation, as they would under similar real conditions. Interaction models have long formulated atomic interactions between agents as functions of the distance between them: the closer the agents, the stronger their influence and the resulting maneuvers. In reality, however, humans react more to the *future* distances to others than to the current ones: they predict these distances from current motion (velocities) and anticipate their reactions. This is what several *velocity-based models* attempt to reproduce in simulation. In this talk, we will present the foundations of velocity-based models, along with several solutions and models that our team has developed in the context of crowd simulation.
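The core idea of velocity-based interaction, reacting to the predicted future distance rather than the current one, can be sketched in a few lines. This is a generic constant-velocity extrapolation, an illustration only, not one of the team's published models; the agent states and the avoidance criterion are made up.

```python
import numpy as np

def time_to_closest_approach(p_rel, v_rel):
    """Time at which two constant-velocity agents are closest.
    p_rel, v_rel: relative position and velocity (2D numpy arrays)."""
    vv = np.dot(v_rel, v_rel)
    if vv < 1e-9:                       # same velocity: distance is constant
        return 0.0
    return max(0.0, -np.dot(p_rel, v_rel) / vv)

def predicted_min_distance(p_a, v_a, p_b, v_b):
    """Smallest distance the two agents will reach if neither adapts."""
    p_rel, v_rel = p_b - p_a, v_b - v_a
    t = time_to_closest_approach(p_rel, v_rel)
    return float(np.linalg.norm(p_rel + t * v_rel))

# Two agents walking toward each other: currently 10 m apart, but the
# predicted minimum distance is 0, so a velocity-based model triggers
# an avoidance maneuver long before a distance-based one would.
p_a, v_a = np.array([0.0, 0.0]), np.array([1.0, 0.0])
p_b, v_b = np.array([10.0, 0.0]), np.array([-1.0, 0.0])
print(predicted_min_distance(p_a, v_a, p_b, v_b))  # → 0.0
```

A velocity-based agent would then steer to keep this predicted distance above a comfort threshold, rather than reacting only once the current distance shrinks.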
Julien Pettre has been a research scientist at INRIA, the French National Institute for Research in Computer Science and Control (www.inria.fr), since 2006. He received his M.S. degree in 2000, prepared his thesis under the supervision of Jean-Paul Laumond, and obtained his Ph.D. in 2003 from the University of Toulouse III in France. He then spent 18 months as a postdoc at VRlab, EPFL, Switzerland, headed by Daniel Thalmann. At INRIA in Rennes, he joined the Mimetic team headed by F. Multon. His research interests are: crowd simulation, motion planning, autonomous virtual humans, computer animation and virtual reality.
|2013/06/06||Ladislav Kavan||Elasticity-Inspired Deformers for Character Articulation
Current approaches to skeletally-controlled character articulation range from real-time, closed-form skinning methods to offline, physically-based simulation. In this talk, we seek a closed-form skinning method that approximates nonlinear elastic deformations well while remaining very fast. Our contribution is two-fold: (1) we optimize skinning weights for the standard linear and dual quaternion skinning techniques so that the resulting deformations minimize an elastic energy function. We observe that this is not sufficient to match the visual quality of the original elastic deformations, and therefore we develop (2) a new skinning method based on the concept of joint-based deformers. We propose a specific deformer which is visually similar to nonlinear variational deformation methods. Our final algorithm is fully automatic and requires little or no input from the user other than a rest-pose mesh and a skeleton. It incurs minimal memory and computational overhead compared to linear blend skinning, while producing higher quality deformations than both linear and dual quaternion skinning.
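For context, the linear blend skinning baseline mentioned above can be written in a few lines: each vertex is transformed by the weighted sum of its bone matrices. The toy mesh, weights, and bone transforms below are invented for illustration; this is the classical baseline, not the deformer proposed in the talk.

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_transforms):
    """rest_verts: (n,3); weights: (n,b), each row summing to 1;
    bone_transforms: (b,4,4) homogeneous bone matrices.
    Blends the bone matrices per vertex, then applies the result."""
    n = rest_verts.shape[0]
    homog = np.hstack([rest_verts, np.ones((n, 1))])          # (n,4)
    # per-vertex blended matrix: sum_j w_ij * T_j  -> shape (n,4,4)
    blended = np.einsum('nb,bij->nij', weights, bone_transforms)
    deformed = np.einsum('nij,nj->ni', blended, homog)
    return deformed[:, :3]

# Toy rig: two bones; bone 1 translates by (0, 1, 0).
T0 = np.eye(4)
T1 = np.eye(4); T1[1, 3] = 1.0
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0],    # fully bound to bone 0: stays put
              [0.5, 0.5]])   # split evenly: moves up by 0.5
print(linear_blend_skinning(verts, w, np.stack([T0, T1])))
```

The weight optimization in contribution (1) chooses the rows of `w` so that deformations produced by this blend (or its dual quaternion counterpart) minimize an elastic energy.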
Ladislav Kavan is an Assistant Professor at the CIS Department, University of Pennsylvania. Prior to joining Penn, he was a Senior Researcher at ETH Zurich and Research Scientist for Disney Interactive Studios. Ladislav’s research focuses on real-time graphics and animation. His core area of expertise is deformation modeling, in particular skin deformation for virtual characters (skinning). His other areas of expertise include geometry processing and physically-based simulation, e.g., how to combine data-driven and physics-based techniques. Ladislav has also experience with quaternions and related algebras (dual quaternions), subspace methods, discrete differential geometry, collision detection, and to some extent real-time rendering in the context of computer games. Most recently he is studying applications of real-time graphics and geometry processing in medicine, anatomically-based modeling and simulation of the human body.
|2013/05/30||Yotam Gingold||Rescuing Computers from Hard Problems
Computers struggle with perceptual tasks that are easy for most humans, from high-level tasks like creativity and sketch recognition to low-level tasks like relative distance estimation. At the same time, computers excel at meticulous and repetitive tasks that humans struggle with. In this talk, humans will rescue computers from challenging perceptual problems, and computers will save humans from precise, tedious tasks. I will show an interactive modeling tool to rapidly create simple 3D models from sketches, where a non-convex optimization problem is solved interactively. I will show three algorithms for adding depth maps, normal maps, and bilateral symmetry maps to photographs. Finally, I will preview several types of computer-mediated crowd creativity (drawing, painting, and singing).
Yotam Gingold is an Assistant Professor in the computer science department at George Mason University. His research interests include interactive geometric modeling, creativity support, crowdsourcing, and game design. Previously he was a post-doctoral researcher in the computer science departments of Columbia University, Rutgers University, Tel-Aviv University, and Herzliya IDC. Yotam earned his Ph.D. in Computer Science from New York University in 2009, where he was awarded the Janet Fabri Prize for most outstanding dissertation.
|2013/05/23||Eftychios Sifakis||Detailed Functional Simulation of Human Anatomy: Design Challenges, Performance Considerations and Emerging Applications
Modeling and simulating not only the appearance, but also the function of human anatomical structures on the computer is a practice that has received significant attention in the entertainment and visual effects industries. In addition, advances in computer-aided simulation tools suggest the possibility of ambitious, high-reward applications in medical diagnostics, surgical planning and training. A trend visible across both current and developing applications is the demand for improved photorealism, enhanced biomechanical accuracy, better subject-specificity and faster simulation algorithms. As these demands often outgrow the evolution of computer hardware, new algorithms for biomechanical modeling and simulation are necessary to ensure that upcoming computational platforms are utilized to the best of their capacity. This talk will outline a number of techniques that were designed to facilitate modeling and simulation of digital doubles with high fidelity and efficiency.
Particular emphasis will be placed on algorithmic design decisions and uncommon implementation practices aimed at maximizing the benefit of parallel computing platforms, while preserving the feature set necessary for simulating heterogeneous anatomical models. Finally, I will discuss the cross-cutting impact of algorithmic advances in this area on the broader fields of graphics, animation and computational physics.
Eftychios Sifakis is an Assistant Professor of Computer Sciences and (by courtesy) Mechanical Engineering at the University of Wisconsin-Madison. Dr. Sifakis obtained his Ph.D. degree in Computer Science (2007) from Stanford University. From 2007 to 2010 he was a postdoctoral researcher at the University of California, Los Angeles, with a joint appointment in Computer Science and Mathematics. His research focuses on scientific computing, physics based modeling and computer graphics. He is particularly interested in biomechanical modeling for applications such as character animation, medical simulations and virtual surgical environments. Dr. Sifakis has served as a research consultant with Intel Corporation, Walt Disney Animation Studios and SimQuest LLC, and is a recipient of the NSF CAREER award (2013-2018).
|2013/04/05||Marc Christie||Director's Lens: an intelligent assistant for virtual cinematography
This talk will focus on Director’s Lens, an intelligent interactive assistant for crafting virtual cinematography using a motion-tracked hand-held device that can be aimed like a real camera.
The system employs an intelligent cinematography engine that can compute, at the request of the filmmaker, a set of suitable camera placements for starting a shot. These suggestions represent semantically and cinematically distinct choices for visualizing the current narrative. In computing suggestions, the system considers established cinema conventions of continuity and composition along with the filmmaker's previously selected suggestions, and formalizes them in a way to rank the quality of shots. As a result, the tool enables a rapid and efficient exploration of a wide range of cinematographic possibilities (hundreds of shots are generated per second), and fast production of computer-generated animated movies. The talk will then focus on key challenges and issues related to Director's Lens, such as composition, balance and timing in edits.
Marc Christie is an Associate Professor at Rennes 1 University and a member of the INRIA Mimetic research team. His main center of interest is Interactive Virtual Cinematography. He defended his Ph.D in 2003 (Nantes University) in which he proposed a declarative approach to camera control using interval-based constraint solving techniques and explored aspects related to camera path planning. He contributed to publications in both computer graphics (Eurographics, SCA) and AI constraint solving (CP conference, Constraints Journal). With Patrick Olivier (Newcastle University), he authored the first state of the art report on the topic of camera control in computer graphics (Eurographics 2006) and presented a course on the topic at Siggraph Asia 2009. Marc Christie is currently working on visibility computation, screen composition, balance, editing, and interactive tools for virtual cinematography. He leads an ANR young researcher project on interactive cinematography, participates in a number of national and international projects, and leads the INRIA Associate Team FORMOSA with Taiwan.
|2013/03/07||Loic Barthe||Models for Intuitive Modeling
Nowadays, images are becoming more and more important in our everyday life. Display devices such as screens, mobile phones, tablet PCs, notebooks, etc., are ubiquitous. Requests for the production of visual content are thus increasing, and an economically effective solution is the generation of virtual content. This requires the use of very efficient but intricate and complicated modeling software, accessible only to professionals or to very enthusiastic users. One of the main reasons is that no shape representation simultaneously supports fast and immediate rendering, robust editing, and simple implementation. Some methods suffer from numerical instability or do not currently support intuitive user interaction. In fact, there is no stable modeling methodology that enables the rapid creation of complex 3D models by non-expert users. This becomes even more problematic if we realize that 3D modeling software is an interface between users and 3D printers.
During this presentation, we will try to analyze the situation from the model point of view. Starting from subdivision surfaces, one of the most widely used surface representations, we will come back to meshes, their advantages, and some of their remaining limitations. This will lead us to sketch-based modeling systems, for which field-function-based techniques are more popular. After pointing out the main reasons, we will take a closer look at these representations. We will end with a set of remarks and open questions.
Loïc Barthe has been an assistant professor at the computer graphics department (VORTEX group) of the computer science research institute of Toulouse (IRIT) since September 2003. He received his PhD in September 2000 at the University of Toulouse and spent two years working as a research associate in the Rainbow group of the University of Cambridge (UK) and in the computer graphics and multimedia group of the RWTH Aachen (Germany). He received his "Habilitation à Diriger les Recherches" in July 2011 at the University of Toulouse.
|2012/12/20||Mathieu Desbrun||The Power of Duals: from Poisson to Blue Noise
Computing on simplicial meshes is a workhorse in a variety of computer graphics applications. Mesh duals, extending the well known Delaunay/Voronoi duality, have been found increasingly useful in this context: they allow for better conditioning and better accuracy than traditional finite elements. In this talk, we discuss recent results on such primal-dual pairs of complexes, using results from algebraic topology and computational geometry. In particular, we will show that orthogonal primal-dual pairs of complexes (encoded via power diagrams) offer a perfect setup for routine computations such as Poisson equations, as well as for generating high-quality blue noise point distributions.
Joint work with Fernando De Goes, Pooran Memari, Katherine Breeden, Victor Ostromoukhov, and Patrick Mullen.
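To make "blue noise" concrete, the classical baseline that the talk's power-diagram optimization improves upon, Poisson-disk "dart throwing", fits in a few lines. This generic textbook sketch (with arbitrary parameters) is not the authors' method:

```python
import random, math

def dart_throwing_blue_noise(n_target, r, max_tries=20000, seed=1):
    """Classic dart throwing in the unit square: accept a random sample
    only if it keeps distance >= r to every accepted sample. The result
    is evenly spaced yet irregular -- the hallmark of blue noise."""
    random.seed(seed)
    pts, tries = [], 0
    while len(pts) < n_target and tries < max_tries:
        tries += 1
        p = (random.random(), random.random())
        if all(math.dist(p, q) >= r for q in pts):
            pts.append(p)
    return pts

pts = dart_throwing_blue_noise(50, 0.08)
# every accepted pair respects the minimum distance
assert all(math.dist(p, q) >= 0.08
           for i, p in enumerate(pts) for q in pts[i + 1:])
```

Dart throwing only enforces a minimum spacing by rejection; variational approaches like the one in the talk instead optimize point positions directly, yielding higher-quality spectra at comparable cost.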
Mathieu Desbrun is a Professor at the California Institute of Technology (Caltech), where he is the head of the Computing & Mathematical Sciences department ( www.cms.caltech.edu ) and the director of the Information Science and Technology initiative ( www.ist.caltech.edu ). He leads the Applied Geometry lab, focusing on discrete differential modeling (the development of differential, yet readily discretizable foundations for computational modeling) and a wide spectrum of applications, ranging from discrete geometry processing to solid and fluid mechanics and field theory. He is the recipient of an ACM SIGGRAPH New Significant Researcher award, and of a NSF CAREER award.
|2012/12/04||Miguel Otaduy||Animation of objects that collide and deform: balancing the trade-off between speed and realism
In the interaction with virtual worlds, we expect objects to move, deform, and collide with each other in realistic ways. Simulating these effects efficiently requires the solution to problems spanning material modeling, mechanical simulation, computational geometry, and constrained optimization. Our research follows three major directions: methods for contact handling of rigid, deformable, and even granular materials; modeling of mechanical phenomena from examples; and methods for rich visuo-haptic interaction with virtual worlds.
This talk will summarize and discuss our progress in these three directions, with the overall goal of designing more effective and efficient algorithms for the simulation of diverse mechanical effects.
Miguel Otaduy is an associate professor in computer science at Universidad Rey Juan Carlos (URJC Madrid). He received his BS (2000) in Electrical Engineering from Mondragon Unibertsitatea (Spain), and his MS (2003) and PhD (2004) in Computer Science from the University of North Carolina at Chapel Hill. From 2005 to 2008, he worked as a research associate at the Computer Graphics Laboratory of ETH Zurich, and he has been at URJC Madrid since February 2008.
|2012/10/25||Niloy J. Mitra||Smart Geometry: In Search of Geometric Simplicity
With sensing devices becoming portable, simple to use, and affordable, sensing our surroundings is becoming simple. This, however, comes at the cost of increased data volumes, without necessarily providing any real understanding of the incoming information.
In this talk, I will provide an overview of my research on shape analysis over the last few years.
Further details at: http://www.cs.ucl.ac.uk/staff/n.mitra/research/index.html
Dr. Mitra is a Reader in Geometric Modeling and Computer Graphics in the Department of Computer Science, University College London (UCL). He was a Senior Lecturer at UCL from 2011-2012. Earlier, Dr. Mitra cofounded the Geometric Modeling and Scientific Visualization (GMSV) center at KAUST (2009-2011) and was an Assistant Professor at Indian Institute of Technology (IIT) Delhi (2007-2009). In 2006-2007, Dr. Mitra was a postdoctoral scholar with Prof. Helmut Pottmann at Technical University Vienna. He received his MS (2002) and PhD (Aug. 2006) in Electrical Engineering from Stanford University under the guidance of Prof. Leonidas Guibas and Prof. Marc Levoy (associate advisor). He received his BS (advisor Prof. Prabir Biswas) from Indian Institute of Technology (IIT) Kharagpur.
Dr. Mitra’s research primarily centers around algorithmic issues in shape understanding and geometry processing. He is also interested in applying analysis findings (e.g., relations, constraints) to enable simple, smart, and captivating interaction possibilities, shape design, and design space exploration in general.
Dr. Mitra serves on the editorial board of Transactions on Graphics (TOG), Visual Computer, and Computer & Graphics. He was the program cochair for Symposium on Geometry Processing (SGP) 2012 and Shape Modeling International (SMI) 2011.
|2012/07/12||Ladislav Kavan||3D Virtual Characters: Skinning, Clothing, and Weird Math
This talk presents an overview of my research on real-time 3D graphics, focusing on technology related to virtual characters. First, I will talk about skinning, i.e., the problem of how to translate skeletal animation to full body deformations. I will explain the advantages of using dual quaternions as opposed to the more traditional matrix representation. Next, I will follow with my contribution to real-time cloth animation, discussing how to create upsampling operators that add fine-scale details to coarse mesh simulations. The techniques required to design efficient and robust upsampling operators include harmonic regularization (an extension of the classical Tikhonov approach) and “tracking,” i.e., constraining fine-scale physics to follow a given coarse-scale animation. I will conclude with some ideas for future work, for example, how to make digital content creation more intuitive.
Ladislav Kavan is currently a Senior Researcher at ETH Zurich, working in the Interactive Geometry Lab with Prof. Olga Sorkine. Prior to joining ETH, he was a Research Scientist for Disney Interactive Studios, exploring next generation technology for computer games with Peter-Pike Sloan. Ladislav’s recent work has focused on combining data-driven techniques with physically-based simulation, subspace methods, and real-time character animation. His results have been used in production in the game and film industries. Ladislav received his M.S. in computer science from Charles University in Prague, and Ph.D. from Czech Technical University.
|2012/07/06||Michael Wand||Shape Analysis with Correspondences
The talk will discuss recent work that has been done within the “Statistical Geometry Processing” group at MPI Informatics and Saarland University. We are working on “shape understanding” algorithms, i.e., algorithms that aim at discovering structure in geometric data sets and utilizing it for analysis and modeling. Humans already understand shapes at an intuitive level. However, finding a formal model that explains “structure” in shapes to a certain extent is a major scientific challenge. In addition to capturing meaningful aspects, such models also need to be simple enough to permit efficient and robust algorithms for discovering structure in data.
The talk will focus on correspondence analysis as one approach to this problem: First, we establish correspondences between shapes, i.e., detect pieces of geometry that are essentially similar and relate these to each other. I will discuss various techniques to efficiently and robustly compute correspondences between shapes, allowing for different types of invariance. Second, we can go up one level of abstraction and look at the structure of the obtained correspondences: Assuming we have discovered multiple, potentially overlapping pairs of regions of equivalence within a piece of geometry, what does this tell us about the shape? This question is addressed by “inverse procedural modeling” techniques that characterize families of shapes that are similar to an example piece. We use correspondence information to derive shape docking rules and, alternatively, algebraic invariants to describe such shape spaces in a constructive manner.
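As a toy illustration of correspondence with invariance, each point can be described by its sorted distances to the other points of the shape, a quantity unchanged by rigid motion, and points can then be matched across two shapes by descriptor similarity. This is only a minimal sketch under that assumption, not the actual methods from the talk:

```python
import math

def descriptor(points, i):
    # Sorted distances from point i to all others: invariant to rigid motion.
    return sorted(math.dist(points[i], q) for j, q in enumerate(points) if j != i)

def match(P, Q):
    # For each point of P, pick the point of Q with the closest descriptor.
    dP = [descriptor(P, i) for i in range(len(P))]
    dQ = [descriptor(Q, j) for j in range(len(Q))]
    def ddist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(Q)), key=lambda j: ddist(dP[i], dQ[j]))
            for i in range(len(P))]
```

For example, matching a point set against a rotated, translated, and reordered copy of itself recovers the original correspondence, because the descriptors are identical up to floating-point error.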
Michael Wand is currently a senior scientist and junior research group leader at the Max-Planck Institut Informatik and Saarland University. He received his Ph.D. degree from Tübingen University in 2004 and his Diploma in Computer Science from Paderborn University in 2000. From 2005 to 2007, he was a postdoc at Stanford University. His research interests include statistical techniques for geometry processing and geometric correspondence problems.
|2012/06/12||Michael Gleicher||From Art and Perception to Visualization, Video, and Virtual Reality
My research revolves around the question “How can we use our understanding of human perception and artistic traditions to improve our tools for communicating and comprehending?” Art and perception often point us in similar directions, and provide ideas for a wide range of applications. In this talk, I will survey some of our recent work where we apply insights from art and perception to practical problems in visualization and graphics. I will discuss animating conversational characters, viewing molecular shape and motion, comparing gene sequences, stabilizing video, and replaying virtual reality experiences.
Michael Gleicher is a Professor in the Department of Computer Sciences at the University of Wisconsin, Madison. Prof. Gleicher is founder of
|2012/06/07||Jarek Rossignac||The Beauty of Motion
We propose to measure the beauty of an affine motion by its steadiness, which we define as the inverse of the Average Relative Acceleration (ARA).
Steady affine motions, for which ARA = 0, include translations, rotations, rigid body screws, and the golden spiral. To facilitate the design of beautiful in-betweening motions that interpolate between an initial and a final pose (affine transformation), B and C, we propose the Steady Affine Morph (SAM), defined as A^t B with A = C B^(-1). A SAM is affine-invariant and reversible. It preserves isometries (i.e., rigidity), similarities, and volume. Its velocity field is stationary both in the global and the local (moving) frames. Given a copy count, n, the series of uniformly sampled poses, A^(i/n) B, of a SAM forms a regular pattern, which may be easily controlled by changing B, C, or n, and where consecutive poses are related by the same affinity A^(1/n). Although a real matrix A^t does not always exist, we show that it does for a convex and large subset of orientation-preserving affinities A. Our fast and accurate Extraction of Affinity Roots (EAR) algorithm computes A^t, when it exists, using closed-form expressions in two or in three dimensions. We discuss SAM applications to pattern design and animation and to key-frame interpolation. In particular, we present a new curve subdivision scheme in the space of affine motions, which produces an apparently smooth, piecewise-steady motion that interpolates a given series of poses.
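For 2D similarity transforms (uniform scale plus rotation), the fractional power A^t has a simple closed form, which makes the construction easy to sketch. The following pure-Python illustration is not the EAR algorithm from the talk; it assumes the linear part of A is a similarity s·R(θ) with no eigenvalue 1, so A has a fixed point about which the motion spirals:

```python
import math

def mul(X, Y):
    """Product of two 3x3 homogeneous matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv_affine(X):
    """Inverse of an affine matrix [[M, v], [0, 1]] with invertible 2x2 block M."""
    a, b, c, d = X[0][0], X[0][1], X[1][0], X[1][1]
    det = a * d - b * c
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    tx, ty = X[0][2], X[1][2]
    return [[ia, ib, -(ia * tx + ib * ty)],
            [ic, id_, -(ic * tx + id_ * ty)],
            [0.0, 0.0, 1.0]]

def sam(B, C, t):
    """Steady Affine Morph pose A^t B with A = C B^(-1), for similarity A."""
    A = mul(C, inv_affine(B))
    s = math.hypot(A[0][0], A[1][0])      # uniform scale of the linear part
    th = math.atan2(A[1][0], A[0][0])     # rotation angle of the linear part
    # Fixed point p of A solves (I - M) p = v (assumes M has no eigenvalue 1).
    m00, m01, m10, m11 = A[0][0], A[0][1], A[1][0], A[1][1]
    vx, vy = A[0][2], A[1][2]
    k00, k01, k10, k11 = 1.0 - m00, -m01, -m10, 1.0 - m11
    det = k00 * k11 - k01 * k10
    px = (vx * k11 - k01 * vy) / det
    py = (k00 * vy - vx * k10) / det
    # M^t = s^t R(t*theta); A^t = translate(p) . M^t . translate(-p).
    st, ang = s ** t, t * th
    ca, sa = st * math.cos(ang), st * math.sin(ang)
    At = [[ca, -sa, px - (ca * px - sa * py)],
          [sa, ca, py - (sa * px + ca * py)],
          [0.0, 0.0, 1.0]]
    return mul(At, B)
```

By construction the morph hits the endpoints exactly (t = 0 gives B, t = 1 gives C), and sampling t = i/n yields the regular spiral pattern of poses the abstract describes.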
Jarek Rossignac is Professor in the College of Computing at Georgia Tech. His research focuses on the design, representation, simplification, compression, analysis and visualization of complex 3D shapes, structures, and animations. Before joining Georgia Tech in 1996 as the Director of the GVU Center, Jarek was Senior Manager and Visualization Strategist at the IBM T.J. Watson Research Center. He holds a Ph.D. in E.E. from the University of Rochester, a Diplôme d’Ingénieur from the Ecole Nationale Supérieure en Électricité et Mécanique (ENSEM), and a Maîtrise in M.E. from the University of Nancy, France. He authored 26 patents and 138 peer-reviewed articles for which he received 23 Awards. He created the ACM Symposia on Solid Modeling and expanded them into the annual Solid and Physical Modeling (SPM) conferences; chaired 20 conferences and 6 international program committees; delivered over 30 Distinguished or Invited Lectures and Keynotes; and served on the Editorial Boards of 7 professional journals and on 74 Technical Program committees. Currently he heads the NSF Aquatic Propulsion Lab (APL) and the Modeling, Animation, Graphic, Interaction, and Compression (MAGIC) Lab at Georgia Tech, which hosts the Disney-sponsored Feature Animation Production Automation (FAPA) project. Rossignac is a Fellow of the Eurographics association and the Editor-in-Chief of the GMOD (Graphical Models) journal.
|2012/05/24||Alla Sheffer||Geometry in action
I will describe a variety of geometry modeling and editing projects addressed by my recent work, including research on resizing of man-made shapes, sketch-based modeling of garments, and a technique for generating text-art images (micrograms) that uses readable text as a brush.
Alla Sheffer is an associate professor in the Computer Science department at the University of British Columbia. She investigates algorithms for
|2012/05/10||Karan Singh||Artist and Perception driven Interactive Graphics
Sketch, sculpt, and touch interfaces have been touted as “natural” approaches to interactive design and animation. While these metaphors are indeed a promising medium of visual communication, there are a number of inherent limitations in the motor control of the human hand, in drawing or gesturing skill, in perception, and in the ambiguities of inference that make the leap from input to 3D modeling and animation a challenging task. In this talk, I will present recent research and open challenges in the perception of shape and animation from sketch/sculpt-style input, and approaches that facilitate the leap from input to 3D models and animation despite these limitations.
Karan Singh is a Professor in Computer Science at the Univ. of Toronto. His research interests lie in artist-driven interactive graphics, spanning geometric and anatomic modeling, character animation, and sketch-based interfaces. He has been a technical lead on two commercial projects that won technical Oscars (Maya, Paraform). These software systems are the current industry standards for animation and reverse engineering, respectively. He is a co-founder of Arcestra, a sketch-based software solution for architecture and industrial design. He led the design of two research systems based on sketch and sculpt metaphors (www.ilovesketch.com, www.meshmixer.com) that have been featured on leading design forums, and co-directs a reputed graphics and HCI lab, DGP. He was the R&D Director for the 2005 Oscar-winning animated short Ryan and had his first exhibition of electronic art, titled Labyrinths, in 2010 (www.karanshersingh.com). His current research focus is on 3D shape perception and understanding and sketch/touch-based interfaces.
|2012/04/26||Nicolas Szilas||Interactive Storytelling: Models, Architecture
Interactive storytelling is a narrative environment in which the narrative experience is no longer built by interpreting and distributing a story written by an author, but by interacting with narrative elements (characters, …) and by dynamically generating narrative events. Our general approach is to first build models of stories and interactive narrative, which are then simulated in order to generate the story according to the user’s actions and intentions.
In this talk, we will first present our computer model of storytelling and its narrative foundations. We will then demonstrate an application based on the “TBIsim” project, a 3D simulation developed to provide psychological support to teenagers one of whose parents suffers from a traumatic brain injury.
From this example, we will discuss:
– Architecture issues: how the “narrative engine” can exchange data with the other processes needed for visualization and interaction (behavior, animation, rendering, music, HMI, …)
– Authoring issues: how an author can enter all the data the system needs for a specific scenario
– Methodology issues (linked to the two issues above): why and how we need to focus on new approaches to design and research in this field.
Nicolas Szilas has been working in the field of Cognitive Science for fifteen years. Moving from research to industry and back, he has aimed to be at the heart of innovation in the various domains in which he works. After completing his Ph.D. in 1995 and two postdoctoral positions in Montreal, he joined a video game studio in 1997 to manage its newly created R&D program on AI for video games. From 1999, he conducted his own research program on Interactive Drama, named IDtension. Since 2003, he has been working on this project in French, Australian, and Swiss universities. He is now an associate professor at TECFA, University of Geneva, working at the intersection of games, narrative, learning, and technology, and is involved in Swiss and European projects related to Interactive Narrative in which the IDtension narrative engine is employed for both entertainment and educational applications.