Publications

Although I currently work primarily as a C++ engineer and developer, I hold a PhD in Computer Science from Trinity College Dublin. This page provides a comprehensive list of my research publications.

PhD-related Publications


Locomotion for Crowd Animation

Martin Pražák. PhD Dissertation, Trinity College Dublin, May 2012

Real-time computer animation is an essential part of modern computer games and virtual reality applications. While rendering provides most of what can be described as the “visual experience”, it is the movement of the characters that gives the final impression of realism. Unfortunately, realistic human animation has proven to be a very hard problem.

Some fields of computer graphics have a compact and precise mathematical description of their underlying principles. Rendering, for example, has the rendering equation, and each realistic rendering technique provides an approximate solution to it. Due to its highly complex nature, character animation is not one of these fields. That is one of the reasons why even single-character animation still poses significant research challenges. The challenges posed by a crowd simulator, required to populate a virtual world, are greater still. This is not only because of the large number of simultaneously displayed characters, which necessitates the use of level-of-detail approaches, but also because of the requirement for reactive behaviour, which can be provided only by a complex multi-level planning module.
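
For reference (an editorial addition, not part of the dissertation abstract), the rendering equation mentioned above is usually written, following Kajiya, as:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i

where L_o is the outgoing radiance at point x, L_e the emitted radiance, f_r the BRDF, and the integral gathers the incoming radiance L_i over the hemisphere Ω around the surface normal n.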

In this thesis, we address the problem of human animation for crowds as a component of a crowd simulator.

Perceptual Evaluation of Footskate Cleanup

Martin Pražák, Ludovic Hoyet and Carol O’Sullivan. Proceedings of the 2011 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 287-294, 2011

When animating virtual humans for real-time applications such as games and virtual reality, animation systems often have to edit motions in order to be responsive. In many cases, contacts between the feet and the ground are not (or cannot be) properly enforced, resulting in a disturbing artifact known as footsliding or footskate. In this paper, we explore the perceptibility of this error and show that participants can perceive even very low levels of footsliding (<21mm in most conditions). We then explore the visual fidelity of animations where footskate has been cleaned up using two different methods. We found that corrected animations were always preferred to those with footsliding, irrespective of the extent of the correction required. We also determined that a simple approach of lengthening limbs was preferred to a more complex approach using IK fixes and trajectory smoothing.
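
As a rough illustration of how footsliding might be quantified, the sketch below measures the worst horizontal drift of a foot during a detected ground contact. This is an editorial sketch, not the paper's method; the Vec3 type, the contact-interval inputs and the metre units are all assumptions.

    // Measures footsliding as horizontal foot drift during one ground contact.
    // All names and conventions here are illustrative assumptions.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Ground-plane distance between two foot positions (y is assumed up).
    static float horizontalDist(const Vec3& a, const Vec3& b) {
        const float dx = a.x - b.x;
        const float dz = a.z - b.z;
        return std::sqrt(dx * dx + dz * dz);
    }

    // Worst drift over a contact interval [begin, end) of per-frame foot
    // positions. With positions in metres, values above roughly 0.021
    // would exceed the perceptual threshold reported for most conditions.
    float maxFootskate(const std::vector<Vec3>& footPositions,
                       std::size_t begin, std::size_t end) {
        float worst = 0.0f;
        for (std::size_t f = begin + 1; f < end; ++f)
            worst = std::max(worst, horizontalDist(footPositions[f],
                                                   footPositions[begin]));
        return worst;
    }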


Perceiving Human Motion Variety

Martin Pražák and Carol O’Sullivan. Applied Perception in Graphics and Visualisation, 2011

In order to simulate plausible groups or crowds of virtual characters, it is important to ensure that the individuals in a crowd do not look, move, behave or sound identical to each other. Such obvious ‘cloning’ can be disconcerting and reduce the engagement of the viewer with an animated movie, virtual environment or game. In this paper, we focus in particular on the problem of motion cloning, i.e., where the motion from one person is used to animate more than one virtual character model. Using our database of motions captured from 83 actors (45 male and 38 female), we present an experimental framework for evaluating human motion, which allows both the static (e.g., skeletal structure) and dynamic aspects (e.g., walking style) of an animation to be controlled. This framework enables the creation of crowd scenarios using captured human motions, thereby generating simulations similar to those found in commercial games and movies, while allowing full control over the parameters that affect the perceived variety of the individual motions in a crowd. We use the framework to perform an experiment on the perception of characteristic walking motions in a crowd, and conclude that the minimum number of individual motions needed for a crowd to look varied could be as low as three. While the focus of this paper was on the dynamic aspects of animation, our framework is general enough to be used to explore a much wider range of factors that affect the perception of characteristic human motion.


Moving Crowds: A Linear Animation System for Crowd Simulation

Martin Pražák, Ladislav Kavan, Rachel McDonnell, Simon Dobbyn and Carol O’Sullivan. Poster Proceedings, ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2010

The animation of hundreds or even thousands of simultaneously displayed individuals is challenging because of the need for both motion variety and efficient runtime processing. We present a middle level-of-detail animation system optimised for handling large crowds, which takes motion-capture data as input and automatically processes it to create a parametric model of human locomotion. The model is then used in a runtime system, driven by a linearised motion-blending technique, which synthesises motions based on information from a motion-planning module. Compared to other animation methods, our technique provides significantly better runtime performance without compromising the visual quality of the result.
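
To give a flavour of what a linearised blending step might look like, here is a minimal editorial sketch; it is not the paper's actual parametric model. The Pose structure, the exponential-map joint parameterisation and the speed-based weight are assumptions chosen so that blending stays linear.

    // Linear blending between two locomotion poses; illustrative only.
    #include <cstddef>
    #include <vector>

    // Joint rotations flattened to linearly blendable coordinates
    // (e.g. exponential maps); an assumption, not the paper's format.
    struct Pose { std::vector<float> jointParams; };

    // Blend two poses with weight w in [0, 1]; both poses are assumed
    // to share the same skeleton and parameterisation.
    Pose blendPoses(const Pose& a, const Pose& b, float w) {
        Pose out;
        out.jointParams.resize(a.jointParams.size());
        for (std::size_t i = 0; i < a.jointParams.size(); ++i)
            out.jointParams[i] = (1.0f - w) * a.jointParams[i]
                               + w * b.jointParams[i];
        return out;
    }

    // Derive the weight from a desired speed lying between the speeds
    // of two example clips, assuming speedA < desired < speedB.
    float blendWeight(float desired, float speedA, float speedB) {
        return (desired - speedA) / (speedB - speedA);
    }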


Perceptual Evaluation of Human Animation Timewarping

Martin Pražák, Rachel McDonnell and Carol O’Sullivan. ACM SIGGRAPH Asia 2010 Sketches, pages 30:1-30:2, 2010

Understanding the perception of humanoid character motion can provide insights that will enable realism, accuracy, computational cost and data storage space to be optimally balanced. In this sketch we describe a preliminary perceptual evaluation of human motion timewarping, a common editing method for motion capture data. During the experiment, participants were shown pairs of walking motion clips, both timewarped and at their original speed, and asked to identify the real animation. We found a statistically significant difference between speeding up and slowing down, which shows that displaying clips at higher speeds produced obvious artifacts, whereas even significant reductions in speed were perceptually acceptable.
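
Purely as an illustration of the editing operation being evaluated, a naive uniform timewarp could resample a clip as follows. The Pose type and linear interpolation are simplifying assumptions; production code would interpolate joint rotations properly (e.g. quaternion slerp).

    // Uniform timewarping by resampling a clip at a new playback speed.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Pose { std::vector<float> jointParams; };

    // Linear pose interpolation; a simplification for this sketch.
    Pose lerpPose(const Pose& a, const Pose& b, float t) {
        Pose out;
        out.jointParams.resize(a.jointParams.size());
        for (std::size_t i = 0; i < a.jointParams.size(); ++i)
            out.jointParams[i] = (1.0f - t) * a.jointParams[i]
                               + t * b.jointParams[i];
        return out;
    }

    // speed > 1 plays the clip faster (fewer output frames);
    // speed < 1 slows it down (more output frames).
    std::vector<Pose> timewarp(const std::vector<Pose>& clip, float speed) {
        if (clip.size() < 2 || speed <= 0.0f)
            return clip;
        std::vector<Pose> out;
        for (float t = 0.0f; t * speed <= float(clip.size() - 1); t += 1.0f) {
            const float src = t * speed;
            const std::size_t f0 = std::size_t(std::floor(src));
            const std::size_t f1 = std::min(f0 + 1, clip.size() - 1);
            out.push_back(lerpPose(clip[f0], clip[f1], src - float(f0)));
        }
        return out;
    }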


Synchronized Real-time Multi-sensor Motion Capture System

Jonathan Ruttle, Michael Manzke, Martin Pražák and Rozenn Dahyot. SIGGRAPH Asia 2009 Sketches & Posters, pages 16-19, 2009

This work addresses the challenge of synchronizing multiple sources of visible and audible information from a variety of devices, while capturing human motion in real-time. Video and audio data will be used to augment and enrich a motion capture database that will be released to the research community. While other such augmented motion capture databases exist [Black and Sigal 2006], the goal of this work is to build on these previous efforts. Critical areas of improvement are the synchronization between cameras and between devices. Adding an array of audio recording devices to the setup will greatly expand the research potential of the database, and the positioning of the cameras will be varied to give greater flexibility. The augmented database will facilitate the testing and validation of human pose estimation and motion tracking techniques, among other applications. This sketch briefly describes some of the interesting challenges faced in setting up the pipeline for capturing the synchronized data and the novel approaches proposed to solve them.
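
As a purely illustrative sketch of one post-hoc alignment strategy, the function below matches a query time against the nearest timestamp in another device's stream. The real system very likely relied on hardware triggering and more careful clock models; the data layout here is an assumption.

    // Nearest-timestamp lookup for aligning samples across device streams.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Timestamps are in seconds, assumed sorted and non-empty.
    std::size_t nearestSample(const std::vector<double>& timestamps, double t) {
        std::size_t best = 0;
        for (std::size_t i = 1; i < timestamps.size(); ++i)
            if (std::abs(timestamps[i] - t) < std::abs(timestamps[best] - t))
                best = i;
        return best;
    }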


A Perception Based Metric for Comparing Human Locomotion

Martin Pražák, Rachel McDonnell, Ladislav Kavan and Carol O’Sullivan. Proceedings of the 9th Irish Workshop on Computer Graphics, pages 75-80, 2009

Metrics measuring differences between skeletal animation frames (poses) form the core of a large number of modern computer animation methods. A metric that accurately characterizes human motion perception could provide great advantages for these methods by allowing them to focus exclusively on perceptually important aspects of the motion. In this paper we present a metric for human locomotion comparison, derived directly from the results of a perceptual experiment.


Towards a Perceptual Metric for Comparing Human Motion

Martin Pražák, Rachel McDonnell, Ladislav Kavan and Carol O’Sullivan. Poster Proceedings, ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 13-14, 2008

Most of the commonly used approaches for editing human motion, such as motion graphs and motion blending, use some form of distance metric in order to compare character poses in keyframes. These metrics utilize a combination of three traditional methods: joint angular differences, distances between points on an object, and velocities of specified body parts. The presented method attempts to find a metric and its parameters (not limited to the usual Euclidean metric) that match, as closely as possible, a dataset obtained from a direct perceptual experiment. Previous methods used perception for evaluation alone, but we use perception as the basis of our metric.
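
Purely as an illustration of combining the three traditional terms named above, a weighted pose distance might look like the sketch below. Every structure and name here is an assumption; the point of the work described above is precisely that the weights (and the form of the metric) should be fitted to perceptual data rather than chosen by hand.

    // Weighted combination of angular, positional and velocity differences.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Frame {
        std::vector<float> jointAngles;  // per-joint angles (radians)
        std::vector<float> pointCoords;  // sampled body points, xyz-flattened
        std::vector<float> velocities;   // velocities of selected body parts
    };

    // Sum of squared componentwise differences; vectors must match in size.
    static float sqDiff(const std::vector<float>& a,
                        const std::vector<float>& b) {
        float s = 0.0f;
        for (std::size_t i = 0; i < a.size(); ++i) {
            const float d = a[i] - b[i];
            s += d * d;
        }
        return s;
    }

    // Distance between two poses under hand-picked (illustrative) weights.
    float poseDistance(const Frame& a, const Frame& b,
                       float wAngle, float wPoint, float wVel) {
        return std::sqrt(wAngle * sqDiff(a.jointAngles, b.jointAngles)
                       + wPoint * sqDiff(a.pointCoords, b.pointCoords)
                       + wVel   * sqDiff(a.velocities,  b.velocities));
    }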

Masters-related Publications


Rendering Fur Directly into Images

Tania Pouli, Martin Pražák, Pavel Zemcik, Diego Gutierrez and Erik Reinhard. Computers and Graphics, 34(5):612-620, 2010

We demonstrate the feasibility of rendering fur directly into existing images, without the need either to painstakingly paint over all pixels or to supply 3D geometry and lighting. We add fur to objects depicted in images by first estimating depth and lighting information and then re-rendering the resulting 2.5D geometry with fur. A brush-based interface allows the user to control the positioning and appearance of the fur, while all interaction takes place in a 2D pipeline. The novelty of this approach lies in the fact that a complex, high-level image edit such as the addition of fur can yield perceptually plausible results, even in the presence of imperfect depth or lighting information.
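
To make the notion of “2.5D geometry” concrete, the editorial sketch below triangulates an estimated per-pixel depth map into a simple heightfield mesh that a renderer could then grow fur from. The data layout and names are assumptions, not taken from the paper.

    // Builds a heightfield mesh (two triangles per pixel quad) from a
    // row-major depth map; illustrative only.
    #include <cstddef>
    #include <vector>

    struct Vertex   { float x, y, z; };
    struct Triangle { std::size_t a, b, c; };

    void depthMapToMesh(const std::vector<float>& depth,
                        std::size_t width, std::size_t height,
                        std::vector<Vertex>& verts,
                        std::vector<Triangle>& tris) {
        verts.clear();
        tris.clear();
        for (std::size_t y = 0; y < height; ++y)
            for (std::size_t x = 0; x < width; ++x)
                verts.push_back({float(x), float(y), depth[y * width + x]});
        for (std::size_t y = 0; y + 1 < height; ++y)
            for (std::size_t x = 0; x + 1 < width; ++x) {
                const std::size_t i = y * width + x;
                tris.push_back({i, i + 1, i + width});
                tris.push_back({i + 1, i + width + 1, i + width});
            }
    }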


Changing Object Appearance by Adding Fur

Martin Pražák (supervisors: Erik Reinhard and Pavel Zemcik). Master’s Thesis, 2008

The aim of this thesis is to demonstrate the feasibility of rendering fur directly into existing images without the need either to painstakingly paint over all pixels or to supply 3D geometry and lighting. Fur is added to objects depicted in images by first recovering depth and lighting information, and then re-rendering the resulting 2.5D geometry with fur. The novelty of this approach lies in the fact that complex, high-level image edits, such as the addition of fur, can yield perceptually plausible results even when constrained by imperfect depth and lighting information. This work draws on a relatively large set of techniques, including HDR imaging, shape from shading, research on shape and lighting perception in images, and photorealistic rendering. The main purpose of this thesis is to prove the concept of the described approach. The implementation language was C++, using the wxWidgets, OpenGL and libTIFF libraries; rendering was realised in 3Delight, a RenderMan-compatible renderer, with the help of a set of custom shaders written in the RenderMan shading language.