Full Program
Preliminary Schedule
Daily Schedule:
Monday – May 8, 2023
08:30-17:00 | Registration
09:00-10:30 | Tutorial 1 | Tutorial 2 | Tutorial 3
09:30-10:00 | Coffee Break
10:00-17:00 | EG Executive Committee
10:30-11:00 | Coffee Break
11:00-12:30 | Tutorial 1 | Tutorial 2 | Tutorial 3
12:30-13:30 | Lunch (MPI INF) | Lunch (Mensa)
13:30-15:00 | Tutorial 1 | Tutorial 4
15:00-15:30 | Coffee Break
15:30-17:00 | Tutorial 1 | Tutorial 4
17:00-19:30 | Opening Ceremony, Awards Ceremony, Fast Forwards
19:30-20:30 | Welcome Reception
Tuesday – May 9, 2023
08:30-17:00 | Registration
09:00-10:30 | Full Paper 1 | Full Paper 2
09:00-10:00 | Short Paper 1
10:30-11:00 | Coffee Break
11:00-12:30 | Full Paper 3 | Full Paper 4
11:00-12:00 | Short Paper 2
12:30-14:00 | Lunch (Mensa)
14:00-15:00 | Keynote: Elmar Eisemann
15:00-15:30 | Poster Session and Coffee Break
15:30-17:00 | Full Paper 5 | STAR 1
15:30-16:30 | Short Paper 3
19:00-21:00 | EG Fellow Dinner
Wednesday – May 10, 2023
08:30-17:00 | Registration
09:00-10:30 | Full Paper 6 | Full Paper 7
09:00-10:00 | Short Paper 4
10:30-11:00 | Coffee Break
11:00-12:00 | Full Paper 8
11:00-12:30 | Full Paper 9 | STAR 2
12:30-14:00 | Lunch (Mensa) | She Lunch
14:00-15:00 | Keynote: Gordon Wetzstein
15:00-15:30 | Poster Session and Coffee Break
15:30-17:00 | Full Paper 10 | Diversity Session | Education 1
15:30-16:30 | Short Paper 5
17:00-18:30 | EG General Assembly
19:30-23:30 | Conference Dinner
Thursday – May 11, 2023
08:30-17:00 | Registration
09:00-10:30 | Full Paper 11 | Full Paper 12 | STAR 3
10:30-11:00 | Coffee Break
11:00-12:30 | Full Paper 13 | STAR 4
11:00-12:00 | Full Paper 14
12:30-14:00 | Lunch (Mensa)
14:00-15:00 | Keynote: Ben Mildenhall
15:00-15:30 | Poster Session and Coffee Break
15:30-17:00 | Full Paper 15 | Education 2 | STAR 5
19:30-21:30 | IPC Dinner
Friday – May 12, 2023
09:00-10:30 | Full Paper 16 | Education 3 | STAR 6
10:30-11:00 | Coffee Break
11:00-12:00 | Keynote: Mirela Ben-Chen
12:00-13:30 | Closing Ceremony and Awards
13:30-14:30 | Lunch (Mensa)
Track Schedule:
Tutorials Program
Monday, 8
T01 Effective User Studies in Computer Graphics
09.00 – 17.00 | Günter Hotz Lecture Hall |
- Sandra Malpica (U. Zaragoza), Qi Sun (NYU), Petr Kellnhofer (TU Delft), Alejandro Beacco (Universitat de Barcelona),
Rachel McDonnell (Trinity College Dublin) and Mauricio Flores Vargas (Trinity College Dublin)
Abstract: User studies are a useful tool for researchers, allowing them to collect data on how users perceive, interact
with and process different types of sensory information. If planned in advance, user experiments can be leveraged
in every stage of a research project, from early design, prototyping and feature exploration to applied proofs of
concept, passing through validation and data collection for model training. User studies can provide the researcher
with different types of information depending on the chosen methodology: user performance metrics, surveys and interviews,
field studies, physiological data, etc. Considering human perception and other cognitive processes is particularly important
in computer graphics, where most research produces outputs whose ultimate purpose is to be seen or perceived by a human. Being able
to measure in an objective and systematic way how the information we generate is integrated into the representational space humans
create to situate themselves in the world means that researchers will have more information to implement optimal algorithms, tools and
techniques. In this tutorial we will give an overview of good practices for user studies in computer graphics with a particular
focus on virtual reality use cases. We will cover the basics of how to design, carry out, and analyze good user studies, as well as
the particularities to be taken into account in immersive environments.
Monday, 8
T02 Using Vulkan for graphics research
09.00 – 12.00 | CS Lecture Hall |
- M. Castorina (Senior Software Engineer at AMD) and G. Sassone (Principal Rendering Engineer at Multiplayer Group)
Abstract: The Vulkan API was released in 2016 and has continued to evolve to include the latest hardware capabilities.
Compared to OpenGL, it is a more verbose API that requires a deeper knowledge of the underlying hardware architecture.
While this can make the API more difficult to get started with, it also rewards developers with finer grained control over
resource management, multi-threading and work submission. This flexibility allows developers to achieve better performance
over older APIs and opens the door to novel techniques that would have been harder, if not impossible, to implement before.
In this tutorial we are going to provide an introduction to the core Vulkan API concepts and how they map to the underlying hardware.
We are going to demonstrate how to leverage async compute to overlap graphics and compute work for better performance. We will
provide detailed examples that make use of cutting-edge features like Mesh Shaders and Ray Tracing to achieve state-of-the-art results
in real-time rendering.
Monday, 8
T03 Learning with Music Signals: Technology Meets Education
09.00 – 12.00 | MPI SWS Lecture Hall |
- Meinard Mueller (International Audio Laboratories Erlangen)
Abstract: Music information retrieval (MIR) is an exciting and challenging research area that aims to develop techniques
and tools for organizing, analyzing, retrieving, and presenting music-related data. Being at the intersection
of engineering and humanities, MIR relates to different research disciplines, including signal processing, machine
learning, information retrieval, musicology, and the digital humanities. In this tutorial, using music as a tangible
and concrete application domain, we will approach the concept of learning from different angles, addressing technological
and educational aspects. When talking about learning in an engineering context, one immediately thinks of data-driven
techniques such as deep learning (DL), where computer-based systems are trained to extract complex features and hidden
relationships from given examples. In this tutorial, we will introduce various music analysis and retrieval tasks, where
we start with classical engineering approaches. We then show how such approaches may be rephrased or simulated by DL-based
systems, thus indicating new avenues toward building more explainable and hybrid machine-learning systems by learning from
the experience of traditional engineering approaches and integrating knowledge from the music domain. Beyond this technical
perspective, another aim of this tutorial is to approach the concept of learning from an educational perspective. We argue that
music, being an essential part of our lives that everyone feels connected to, yields an intuitive entry point to support
education in technical disciplines. In this tutorial, we will show how music may serve as a vehicle to make learning in
signal processing and machine learning an interactive pursuit. In this context, we will also introduce a novel collection
of educational material for teaching and learning fundamentals of music processing (FMP). This collection, referred to as
FMP notebooks (https://www.audiolabs-erlangen.de/FMP),
can be used to study both theory and practice, generate educational material for lectures, and provide baseline implementations
for many MIR tasks. The tutorial’s novelty lies in how it presents a holistic approach to learning using music as a challenging
and tangible application domain. In this way, the tutorial serves several purposes: it gives a gentle introduction to MIR while
introducing a new software package for teaching and learning music processing, it highlights avenues for developing explainable
machine-learning models, and it discusses how recent technology can be applied and communicated in interdisciplinary research and education.
Monday, 8
T04 Modern High Dynamic Range Imaging at the Time of Deep Learning
13.30 – 17.00 | CS Lecture Hall |
- Francesco Banterle (ISTI-CNR) and Alessandro Artusi (DeepCamera, CYENS)
Abstract: In this tutorial, we introduce how the High Dynamic Range (HDR) imaging field has evolved in this new era where machine
learning approaches have become dominant. The main reason for this success is that machine learning and deep
learning have automated many tedious tasks, achieving high-quality results that outperform classic methods. After an
introduction to classic HDR imaging and its open problems, we will summarize the main approaches for: merging of multiple
exposures, single-image reconstruction or inverse tone mapping, tone mapping, and display visualization. Finally, we will
highlight the problems that remain open in this machine learning era, and possible directions for solving them.
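As a concrete illustration of the classic baseline the tutorial starts from, here is a minimal sketch of merging multiple exposures into an HDR radiance map (our example, not the presenters' code), assuming linearized images and a simple triangle weighting:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge linearized LDR exposures (float arrays in [0, 1]) into an HDR radiance map."""
    numer = np.zeros_like(images[0])
    denom = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # "hat" weight: trust mid-gray pixels most
        numer += w * img / t               # radiance estimate = pixel value / exposure time
        denom += w
    return numer / np.maximum(denom, 1e-8)
```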
Short Papers Program
Tuesday, 9 | Wednesday, 10 |
---|---|
09.00 – 10.00 | 09.00 – 10.00 |
11.00 – 12.00 | |
15.30 – 16.30 | 15.30 – 16.30 |
Tuesday, 9
SP01 Procedural Modeling & Reconstruction
09.00 – 10.00 | MPI SWS Lecture Hall | Guillaume Cordonnier |
- Photogrammetric Reconstruction of a Stolen Statue
Z. Liu, E.L. Doubrovski, J.M.P. Geraedts, W. Wang, Y. Yam and C.C.L. Wang
In this paper, we propose a method to reconstruct a digital 3D model of a stolen/damaged statue using photogrammetric methods.
This task is challenging because the number of available photos for a stolen statue is in general very limited – especially
the side/back view photos. Besides using standard structure-from-motion and multi-view stereo methods, we match image pairs
with low overlap using sliding windows and maximize the normalized cross-correlation (NCC) based patch-consistency so that
the image pairs can be well aligned into a complete model to build the 3D mesh surface. Our method is based on the prior of
the planar side on the statue’s pedestal, which can cover a large range of statues. We hope this work will motivate more
research efforts for the reconstruction of those stolen/damaged statues and heritage preservation.
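For reference, here is a minimal sketch of the normalized cross-correlation score that such patch-consistency matching maximizes (our illustration, not the authors' code):

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross-correlation of two equally sized image patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

- Quick-Pro-Build: A Web-based Approach for Quick Procedural 3D Reconstructions of Buildings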
B. Bohlender, M. Mühlhäuser and A. Sanchez Guinea
We present Quick-Pro-Build, a web-based approach for quick procedural 3D reconstruction of buildings. Our approach allows
users to quickly and easily create realistic 3D models using two integrated reference views: street view and satellite view.
We introduce a novel conditional and stochastic shape grammar to represent the procedural models based on the well-established
CGA shape grammar. Based on our grammar and user interface, we propose 3 modalities for procedural modeling: 1) model from
scratch, 2) copy, paste, and adapt, and 3) summarize, select and adapt. The third modality enables users to model a building
by summarizing similar models into an architectural style description, selecting a model from the style description, and adapting
it to the target building. Summarizing and selecting allows the third modality to be the most efficient option when modeling a
building with a style similar to existing buildings. The third modality is enabled by a novel algorithm that can find and combine
similarities from procedural models into a style description and allows learning the preference of the users for one model inside
the style description.
- Towards L-System Captioning for Tree Reconstruction
J. S. Magnusson, A. Hilsmann and P. Eisert
This work proposes a novel concept for tree and plant reconstruction by directly inferring a Lindenmayer-System (L-System) word
representation from image data in an image captioning approach. We train a model end-to-end which is able to translate given images
into L-System words as a description of the displayed tree. To prove this concept, we demonstrate the applicability on 2D tree
topologies. Transferred to real image data, this novel idea could lead to more efficient, accurate and semantically meaningful tree
and plant reconstruction without using error-prone point cloud extraction, and other processes usually utilized in tree reconstruction.
Furthermore, this approach bypasses the need for a predefined L-System grammar and enables species-specific L-System inference without
biological knowledge.
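To make the representation concrete: an L-System word is produced by iteratively rewriting symbols with production rules, as in this small generic sketch (the rules here are illustrative turtle-graphics symbols, not the ones inferred by the paper):

```python
def expand_lsystem(axiom, rules, iterations):
    """Iteratively rewrite an L-System word using the given production rules."""
    word = axiom
    for _ in range(iterations):
        word = "".join(rules.get(symbol, symbol) for symbol in word)
    return word

# A classic branching-tree example: F = step forward, [ ] = push/pop turtle state.
print(expand_lsystem("F", {"F": "F[+F]F[-F]F"}, 2))
```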
SP02 Rendering & Simulation
11.00 – 12.00 | MPI SWS Lecture Hall | Saghi Hajisharif |
- Efficient Needle Insertion Simulation using Hybrid Constraint Solver and Isolated DOFs
C. Martin, Z. Zeng and H. Courtecuisse
This paper introduces a real-time compatible method to improve the location of constraints between a needle and tissues in the context of
needle insertion simulation. This method is based on intersections between the Finite Element (FE) meshes of the needle and the tissues.
It is coupled with the method of isolating mechanical DOFs and a hybrid solver (implying both direct and iterative resolutions) to
respectively generate and solve the constraint problem while reducing the computation time.
- Guiding Light Trees for Many-Light Direct Illumination
E. Hamann, A. Jung and C. Dachsbacher
Path guiding techniques reduce the variance in path tracing by reusing knowledge from previous samples to build adaptive sampling distributions.
The Practical Path Guiding (PPG) approach stores and iteratively refines an approximation of the incident radiance field in a spatio-directional
data structure that allows sampling the incident radiance. However, due to the limited resolution in both spatial and directional dimensions,
this discrete approximation is not able to accurately capture a large number of very small lights. We present an emitter sampling technique to
guide next event estimation (NEE) with a global light tree and adaptive tree cuts that integrates into the PPG framework. In scenes with many
lights our technique significantly reduces the RMSE compared to PPG with uniform NEE, while adding close to no overhead in scenes with few light
sources. The results show that our technique can also aid the incident radiance learning of PPG in scenes with difficult visibility.
- Out-of-the-loop Autotuning of Metropolis Light Transport with Reciprocal Probability Binning
K. Herveau, H. Otsu and C. Dachsbacher
The performance of Markov Chain Monte Carlo (MCMC) rendering methods depends heavily on the mutation strategies and their parameters.
We treat the underlying mutation strategies as black-boxes and focus on their parameters. This avoids the need for tedious manual parameter
tuning and enables automatic adaptation to the actual scene. We propose a framework for out-of-the-loop autotuning of these parameters.
As a pilot example, we demonstrate our tuning strategy for small-step mutations in Primary Sample Space Metropolis Light Transport.
Our σ-binning strategy introduces a set of mutation parameters chosen by a heuristic: the inverse probability of the local direction
sampling, which captures some characteristics of the local sampling. We show that our approach can successfully control the parameters
and achieve better performance compared to non-adaptive mutation strategies.
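For context, here is a bare-bones sketch of the primary-sample-space small-step mutation whose scale parameter σ such autotuning controls (a generic Metropolis step for a non-negative target f on the unit hypercube; our illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def small_step(u, f, sigma):
    """One small-step mutation in primary sample space: perturb, wrap, accept/reject."""
    u_new = (u + sigma * rng.normal(size=u.shape)) % 1.0
    f_old, f_new = f(u), f(u_new)
    if f_old <= 0.0 or rng.random() < f_new / f_old:  # Metropolis acceptance
        return u_new
    return u
```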
SP03 Stylization & Point Clouds
15.30 – 16.30 | MPI SWS Lecture Hall | Thomas Leimkühler |
- CLIP-based Neural Neighbor Style Transfer for 3D Assets
S. Mishra and J. Granskog
We present a method for transferring the style from a set of images to the texture of a 3D object. The texture of an asset
is optimized with a differentiable renderer and losses using pretrained deep neural networks. More specifically, we utilize
a nearest-neighbor feature matching (NNFM) loss with CLIP-ResNet50 that we extend to support multiple style images. We improve
color accuracy and artistic control with an extra loss on user-provided or automatically extracted color palettes. Finally,
we show that a CLIP-based NNFM loss provides a different appearance over a VGG-based one by focusing more on textural details
over geometric shapes. However, we note that user preference is still subjective.
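As an illustration of the core loss, here is a generic nearest-neighbor feature matching sketch under cosine distance (our assumption of the exact formulation; the paper computes features with CLIP-ResNet50):

```python
import numpy as np

def nnfm_loss(content_feats, style_feats, eps=1e-8):
    """Mean cosine distance from each content feature to its nearest style feature.

    content_feats: (N, C) features of the rendered/stylized image.
    style_feats:   (M, C) features gathered from one or more style images.
    """
    c = content_feats / (np.linalg.norm(content_feats, axis=1, keepdims=True) + eps)
    s = style_feats / (np.linalg.norm(style_feats, axis=1, keepdims=True) + eps)
    cos_dist = 1.0 - c @ s.T          # (N, M) pairwise cosine distances
    return float(cos_dist.min(axis=1).mean())
```

- Text2PointCloud: Text-Driven Stylization for Sparse PointCloud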
Inwoo Hwang, Hyeonwoo Kim, Donggeun Lim, Inbum Park and Young Min Kim
We present Text2PointCloud, a method to process sparse, noisy point cloud input and generate high-quality stylized output.
Given point cloud data, our iterative pipeline stylizes and deforms points guided by a text description and gradually densifies
the point cloud. As our framework utilizes the existing resources of image and text embedding, it does not require dedicated
3D datasets with high-quality textures, which are produced by skillful artists or high-resolution colored 3D models. Also,
since we represent 3D shapes as a point cloud, we can visualize fine-grained geometric variations with a complex topology
such as flowers or fire. To the best of our knowledge, it is the first approach for directly stylizing the uncolored, sparse
point cloud input without converting it into a mesh or implicit representation, which might fail to express the original
information in the measurements, especially when the object exhibits complex topology.
- PointCloudSlicer: Gesture-based segmentation of point clouds
Hari Hara Gowtham, Amal Dev Parakkat and Marie-Paule Cani
Segmentation is a fundamental problem in point-cloud processing, addressing the classification of points into consistent regions,
the criteria for consistency depending on the application. In this paper, we introduce a simple, interactive framework
enabling the user to quickly segment a point cloud in a few cutting gestures in a perceptually consistent way. As the user
perceives the limit of a shape part, they draw a simple separation stroke over the current 2D view. The point cloud is then
segmented without needing any intermediate meshing step. Technically, we find an optimal, perceptually consistent cutting
plane constrained by user stroke and use it for segmentation while automatically restricting the extent of the cut to the
closest shape part from the current viewpoint. This enables users to effortlessly segment complex point clouds from an
arbitrary viewpoint with the possibility of handling self-occlusions.
Wednesday, 10
SP04 Perception for Sketches, VR, and Vision
09.00 – 10.00 | MPI SWS Lecture Hall | Zahra Montazeri |
- Is Drawing Order Important?
Sherry Qiu, Zeyu Wang, Leonard McMillan, Holly Rushmeier and Julie Dorsey
The drawing process is crucial to understanding the final result of a drawing. There has been a long history of understanding
human drawing; what kinds of strokes people use and where they are placed. An area of interest in Artificial Intelligence is
developing systems that simulate human behavior in drawing. However, there has been little work done to understand the order
of strokes in the drawing process. Without sufficient understanding of natural drawing order, it is difficult to build models
that can generate natural drawing processes. In this paper, we present a study comparing multiple types of stroke orders to
confirm findings from previous work and demonstrate that multiple orderings of the same set of strokes can be perceived as
human-drawn and different stroke order types achieve different perceived naturalness depending on the type of image prompt.
- Velocity-Based LOD Reduction in Virtual Reality: A Psychophysical Approach
David Petrescu, Paul A. Warren, Zahra Montazeri and Steve Pettifer
Virtual Reality headsets enable users to explore the environment by performing self-induced movements. The retinal velocity
produced by such motion reduces the visual system’s ability to resolve fine detail. We measured the impact of self-induced
head rotations on the ability to detect quality changes of a realistic 3D model in an immersive virtual reality environment.
We varied the Level of Detail (LOD) as a function of rotational head velocity with different degrees of severity. Using a
psychophysical method, we asked 17 participants to identify which of the two presented intervals contained the higher quality
model under two different maximum velocity conditions. After fitting psychometric functions to data relating the percentage
of correct responses to the aggressiveness of LOD manipulations, we identified the threshold severity for which participants
could reliably (75%) detect the lower LOD model. Participants accepted an approximately four-fold LOD reduction even in the
low maximum velocity condition without a significant impact on perceived quality, suggesting that there is considerable
potential for optimisation when users are moving (increased range of perceptual uncertainty). Moreover, LOD could be degraded
significantly more (around 84%) in the maximum head velocity condition, suggesting these effects are indeed speed-dependent.
- Luminance-Preserving and Temporally Stable Daltonization
P. Ebelin, C. Crassin, G. Denes, M. Oskarsson, K. Åström and T. Akenine-Möller
We propose a novel, real-time algorithm for recoloring images to improve the experience for a color vision deficient observer.
The output is temporally stable and preserves luminance, the most important visual cue. It runs in 0.2 ms per frame on a GPU.
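A minimal sketch of the luminance-preserving idea, recoloring freely and then restoring the original luminance (a generic illustration using Rec. 709 luminance weights on linear RGB, not the paper's algorithm):

```python
import numpy as np

LUMA = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance weights (linear RGB)

def preserve_luminance(original_rgb, recolored_rgb, eps=1e-8):
    """Rescale a recolored image so its per-pixel luminance matches the original."""
    y_orig = original_rgb @ LUMA
    y_new = recolored_rgb @ LUMA
    return recolored_rgb * (y_orig / (y_new + eps))[..., None]
```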
SP05 Subdivision & SDFs
15.30 – 16.30 | CS Lecture Hall | Tobias Günther |
- Parallel Loop Subdivision with Sparse Adjacency Matrix
Kechun Wang and Renjie Chen
Subdivision surfaces are a popular technique for geometric modeling. Recently, several parallel implementations have been developed
for Loop subdivision on the GPU. However, these methods are built on complex data structures which complicate the implementation
and affect the performance, especially on the GPU. In this work, we propose to simply use the sparse adjacency matrix which enables
us to implement the Loop subdivision scheme in the most straightforward manner. Our implementation runs entirely on the GPU and
achieves high performance in runtime with significantly lower memory consumption than the state-of-the-art. Through extensive
experiments and comparisons, we demonstrate the efficacy and efficiency of our method.
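To illustrate why a sparse adjacency matrix makes such schemes straightforward, here is a sketch of a neighbor-averaging step written that way (illustrative only; Loop's actual stencils also weight the original vertex and insert new edge vertices):

```python
import numpy as np
from scipy.sparse import coo_matrix

def adjacency_matrix(faces, num_vertices):
    """Symmetric 0/1 vertex adjacency matrix from a (F, 3) triangle index array."""
    i = faces.ravel()
    j = faces[:, [1, 2, 0]].ravel()
    a = coo_matrix((np.ones(len(i)), (i, j)), shape=(num_vertices, num_vertices))
    return ((a + a.T) > 0).astype(np.float64).tocsr()  # symmetrize, drop duplicates

def neighbor_average(vertices, adjacency):
    """Replace each vertex by the average of its neighbors: one sparse mat-vec."""
    degree = np.maximum(np.asarray(adjacency.sum(axis=1)), 1.0)
    return (adjacency @ vertices) / degree
```

- Tight Bounding Boxes for Voxels and Bricks in a Signed Distance Field Ray Tracer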
H. Hansson-Söderlund and T. Akenine-Möller
We present simple methods to compute tight axis-aligned bounding boxes for voxels and for bricks of voxels in a signed distance
function renderer based on ray tracing. Our results show total frame time reductions of 20–31% in a real-time path tracer.
- Automatic Step Size Relaxation in Sphere Tracing
Róbert Bán and Gábor Valasek
We propose a robust auto-relaxed sphere tracing method that automatically scales its step sizes based on data from previous
iterations. It possesses a scalar hyperparameter that is used similarly to the learning rate of gradient descent methods.
We show empirically that this scalar degree of freedom has a smaller effect on performance than the step-scale hyperparameters
of concurrent sphere tracing variants. Additionally, we compare the performance of our algorithm to these both on procedural
and discrete signed distance input and show that it outperforms or performs on par with the most efficient method, depending
on the limit on iteration counts. We also verify that our method takes significantly fewer robustness-preserving sphere trace
fallback steps, as it generates fewer invalid, over-relaxed step sizes.
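For orientation, a compact sketch of relaxed sphere tracing with the robustness-preserving fallback the abstract refers to (a generic algorithm with a fixed relaxation factor omega, rather than the paper's automatic step-size scaling; assumes marching outside the surface):

```python
import numpy as np

def sphere_trace(sdf, origin, direction, omega=1.6, t_max=100.0, eps=1e-4, max_iters=256):
    """March along the ray until sdf < eps, taking over-relaxed steps with a safe fallback."""
    t = 0.0
    radius = sdf(origin)
    for _ in range(max_iters):
        step = omega * radius                           # over-relaxed step
        next_radius = sdf(origin + (t + step) * direction)
        if step - radius > abs(next_radius):            # bounding spheres no longer overlap
            step = radius                               # fall back to the plain, safe step
            next_radius = sdf(origin + (t + step) * direction)
        t += step
        radius = next_radius
        if radius < eps:
            return t                                    # hit
        if t > t_max:
            break
    return None                                         # miss
```

Usage can be as simple as `sphere_trace(lambda p: np.linalg.norm(p) - 1.0, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))` for a unit sphere.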
Full Papers Program
Tuesday, 9 | Wednesday, 10 | Thursday, 11 | Friday, 12 |
---|---|---|---|
09.00 – 10.30 | 09.00 – 10.30 | 09.00 – 10.30 | 09.00 – 10.30 |
11.00 – 12.30 | 11.00 – 12.30 | 11.00 – 12.30 | |
15.30 – 17.00 | 16.00 – 17.30 | 15.30 – 17.00 | |
Tuesday, 9
FP01 Human Object Interaction
09.00 – 10.30 | Günter Hotz Lecture Hall | Rene Weller |
- IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions, Anindita Ghosh, Rishabh Dabral, Vladislav Golyanik, Christian Theobalt and Philipp Slusallek
Can we make virtual characters in a scene interact with their surrounding objects through simple instructions?
Is it possible to synthesize such motion plausibly with a diverse set of objects and instructions?
Inspired by these questions, we present the first framework to synthesize the full-body motion of virtual
human characters performing specified actions with 3D objects placed within their reach.
Our system takes as input textual instructions specifying the objects and the associated 'intentions' of
the virtual characters and outputs diverse sequences of full-body motions. This contrasts existing works,
where full-body action synthesis methods generally do not consider object interactions and human-object
interaction methods focus mainly on synthesizing hand or finger movements for grasping objects. We accomplish
our objective by designing an intent-driven full-body motion generator, which uses a pair of decoupled conditional
variational auto-regressors to learn the motion of the body parts in an autoregressive manner. We also optimize
the 6DoF pose of the objects such that they plausibly fit within the hands of the synthesized characters. We compare
our proposed method with the existing methods of motion synthesis and establish a new and stronger state-of-the-art for the task of intent-driven motion synthesis.
- Online Avatar Motion Adaptation to Morphologically-similar Spaces, Soojin Choi, Seokpyo Hong, Kyungmin Cho, Chaelin Kim and Junyong Noh
In avatar-mediated telepresence systems, a similar environment is assumed for involved spaces,
so that the avatar in a remote space can imitate the user’s motion with proper semantic intention performed in a
local space. For example, touching on the desk by the user should be reproduced by the avatar in the remote space
to correctly convey the intended meaning. It is unlikely, however, that the two involved physical spaces are exactly
the same in terms of the size of the room or the locations of the placed objects. Therefore, a naive mapping of the
user’s joint motion to the avatar will not create the semantically correct motion of the avatar in relation to the remote environment.
Existing studies have addressed the problem of retargeting human motions to an avatar for telepresence applications. Few studies, however,
have focused on retargeting continuous full-body motions such as locomotion and object interaction motions in a unified manner.
In this paper, we propose a novel motion adaptation method that generates the full-body motions of a human-like avatar
on-the-fly in the remote space. The proposed method handles locomotion and object interaction motions as well as smooth transitions
between them according to given user actions under the condition of a bijective environment mapping between morphologically-similar
spaces. Our experiments show the effectiveness of the proposed method in generating plausible and semantically correct full-body motions of an avatar in room-scale space.
- Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum, Yunbo Zhang, Alexander Clegg, Sehoon Ha, Greg Turk and Yuting Ye
In-hand object manipulation is challenging to simulate due to complex contact dynamics, non-repetitive finger gaits, and the need to
indirectly control unactuated objects. Further adapting a successful manipulation skill to new objects with different shapes and
physical properties is a similarly challenging problem. In this work, we show that natural and robust in-hand manipulation of simple
objects in a dynamic simulation can be learned from a high quality motion capture example via deep reinforcement learning with careful
designs of the imitation learning problem. We apply our approach on both single-handed and two-handed dexterous manipulations of diverse
object shapes and motions. We then demonstrate further adaptation of the example motion to a more complex shape through curriculum learning
on intermediate shapes morphed between the source and target object. While a naive curriculum of progressive morphs often falls short,
we propose a simple greedy curriculum search algorithm that can successfully apply to a range of objects such as a teapot, bunny, bottle, train, and elephant.
FP02 Logos and Clip-Art
09.00 – 10.30 | CS Lecture Hall | Justus Thies |
- Img2Logo: Generating Golden Ratio Logos from Images, Kai-Wen Hsiao, Yong-Liang Yang, Yung-Chih Chiu, Min-Chun Hu, Chih-Yuan Yao and Hung-Kuo Chu
Logos are one of the most important graphic design forms that use an abstracted shape to clearly represent the spirit of a community.
Among various styles of abstraction, a particular golden-ratio design is frequently employed by designers to create a concise and
regular logo. In this context, designers utilize a set of circular arcs with golden ratios (i.e., all arcs are taken from circles
whose radii form a geometric series based on the golden ratio) as the design elements to manually approximate a target shape. This
error-prone process requires a large amount of time and effort, posing a significant challenge for design space exploration. In this work,
we present a novel computational framework that can automatically generate golden ratio logo abstractions from an input image. Our framework
is based on a set of carefully identified design principles and a constrained optimization formulation respecting these principles. We also
propose a progressive approach that can efficiently solve the optimization problem, resulting in a sequence of abstractions that approximate
the input at decreasing levels of detail. We evaluate our work by testing on images with different formats including real photos, clip arts,
and line drawings. We also extensively validate the key components and compare our results with manual results by designers to demonstrate
the effectiveness of our framework. Moreover, our framework can largely benefit design space exploration via easy specification of design
parameters such as abstraction levels, golden circle sizes, etc.
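To make the golden-ratio constraint concrete: the permissible design circles have radii forming a geometric series with ratio φ ≈ 1.618, e.g.:

```python
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, about 1.618

def golden_radii(r0, count):
    """Radii of the design circles: a geometric series with ratio PHI."""
    return [r0 * PHI ** k for k in range(count)]

print(golden_radii(1.0, 5))  # [1.0, 1.618..., 2.618..., 4.236..., 6.854...]
```

- Interactive Depixelization of Pixel Art through Spring Simulation, Marko Matusovic, Amal Dev Parakkat and Elmar Eisemann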
We introduce an approach for converting pixel art into high-quality vector images. While much progress has been made on
automatic conversion, there is an inherent ambiguity in pixel art, which can lead to a mismatch with the artist’s
original intent. Further, there is room for incorporating aesthetic preferences during the conversion. In consequence,
this work introduces an interactive framework to enable users to guide the conversion process towards high-quality vector
illustrations. A key idea of the method is to cast the conversion process into a spring-system optimization that can be
influenced by the user. Hereby, it is possible to resolve various ambiguities that cannot be handled by an automatic algorithm.
- Subpixel Deblurring of Anti-Aliased Raster Clip-Art, Jinfan Yang, Nicholas Vining, Shakiba Kheradmand, Nathan Carr, Leonid Sigal and Alla Sheffer
Artist generated clip-art images typically consist of a small number of distinct, uniformly colored regions with clear boundaries.
Legacy artist created images are often stored in low-resolution (100x100px or less) anti-aliased raster form. Compared to anti-aliasing
free rasterization, anti-aliasing blurs inter-region boundaries and obscures the artist’s intended region topology and color palette; at
the same time, it better preserves subpixel details. Recovering the underlying artist-intended images from their low-resolution anti-aliased
rasterizations can facilitate resolution independent rendering, lossless vectorization, and other image processing applications. Unfortunately,
while human observers can mentally deblur these low-resolution images and reconstruct region topology, color and subpixel details, existing
algorithms applicable to this task fail to produce outputs consistent with human expectations when presented with such images. We recover these
viewer perceived blur-free images at subpixel resolution, producing outputs where each input pixel is replaced by four corresponding (sub)pixels.
Performing this task requires computing the size of the output image color palette, generating the palette itself, and associating each pixel in
the output with one of the colors in the palette. We obtain these desired output components by leveraging a combination of perceptual and domain
priors, and real world data. We use readily available data to train a network that predicts, for each antialiased image, a low-blur approximation
of the blur-free double-resolution outputs we seek. The images obtained at this stage are perceptually closer to the desired outputs but typically
still have hundreds of redundant differently colored regions with fuzzy boundaries. We convert these low-blur intermediate images into blur-free outputs
consistent with viewer expectations using a discrete partitioning procedure guided by the characteristic properties of clip-art images, observations about
the antialiasing process, and human perception of anti-aliased clip-art. This step dramatically reduces the size of the output color palettes, and the
region counts bringing them in line with viewer expectations and enabling the image processing applications we target. We demonstrate the utility of our
method by using our outputs for a number of image processing tasks, and validate it via extensive comparisons to prior art. In our comparative study,
participants preferred our deblurred outputs over those produced by the best-performing alternative by a ratio of 75 to 8.5.
FP03 Shape Correspondence
11.00 – 12.30 | Günter Hotz Lecture Hall | Rhaleb Zayer |
- Unsupervised Template Warp Consistency for Implicit Surface Correspondences, Mengya Liu, Ajad Chhatkuli, Janis Postels, Luc Van Gool and Federico Tombari
Unsupervised template discovery via implicit representation in a category of shapes has recently shown strong performance.
At the core, such methods deform input shapes to a common template space which allows establishing correspondences as well as
implicit representation of the shapes. In this work we investigate the inherent assumption that the implicit neural field
optimization naturally leads to consistently warped shapes, thus providing both good shape reconstruction and correspondences. Contrary to this
convenient assumption, in practice we observe that this is not the case, resulting in sub-optimal point correspondences.
In order to solve the problem, we re-visit the warp design and more importantly introduce explicit constraints using unsupervised sparse
point predictions, directly encouraging consistency of the warped shapes. We use the unsupervised sparse keypoints in order to further
condition the deformation warp and enforce the consistency of the deformation warp. Experiments in dynamic non-rigid DFaust and ShapeNet
categories show that our problem identification and solution provide the new state-of-the-art in unsupervised dense correspondences.
- Scalable and Efficient Functional Map Computations on Dense Meshes, Robin Magnet and Maks Ovsjanikov
We propose a new scalable version of the functional map pipeline that efficiently computes correspondences between
potentially very dense meshes. Unlike existing approaches that process dense meshes by relying on ad-hoc mesh simplification,
we establish an integrated end-to-end pipeline with theoretical approximation analysis. In particular, our method overcomes
the computational burden of both computing the basis, as well the functional and pointwise correspondence computation by
approximating the functional spaces and the functional map itself. Errors in the approximations are controlled by theoretical
upper bounds assessing the range of applicability of our pipeline. With this construction in hand, we propose a scalable
practical algorithm and demonstrate results on dense meshes, which approximate those obtained by standard functional map
algorithms at the fraction of the computation time. Moreover, our approach outperforms the standard acceleration procedures
by a large margin, leading to accurate results even in challenging cases.
- Surface Maps via Adaptive Triangulations, Patrick Schmidt, Dörte Pieper and Leif Kobbelt
We present a new method to compute continuous and bijective maps (surface homeomorphisms) between two or more genus-0 triangle meshes.
In contrast to previous approaches, we decouple the resolution at which a map is represented from the resolution of the input meshes.
We discretize maps via common triangulations that approximate the input meshes while remaining in bijective correspondence to them.
Both the geometry and the connectivity of these triangulations are optimized with respect to a single objective function that
simultaneously controls mapping distortion, triangulation quality, and approximation error. A discrete-continuous optimization
algorithm performs both energy-based remeshing as well as global second-order optimization of vertex positions, parametrized
via the sphere. With this, we combine the disciplines of compatible remeshing and surface map optimization in a unified formulation
and make a contribution in both fields. While existing compatible remeshing algorithms often operate on a fixed pre-computed surface
map, we can now globally update this correspondence during remeshing. On the other hand, bijective surface-to-surface map optimization
previously required computing costly overlay meshes that are inherently tied to the input mesh resolution. We achieve significant
complexity reduction by instead assessing distortion between the approximating triangulations. This new map representation is
inherently more robust than previous overlay-based approaches, is less intricate to implement, and naturally supports mapping
between more than two surfaces. Moreover, it enables adaptive multi-resolution schemes that, e.g., first align corresponding
surface regions at coarse resolutions before refining the map where needed. We demonstrate significant speedups and increased
flexibility over state-of-the-art mapping algorithms at similar map quality, and also provide a reference implementation of the method.
FP04 Image and Video Processing
11.00 – 12.30 | CS Lecture Hall | Petr Kellnhofer |
- Test-Time Optimization for Video Depth Estimation Using Pseudo Reference Depth, Libing Zeng and Nima Khademi Kalantari (CGF)
In this paper, we propose a learning-based test-time optimization approach for reconstructing geometrically consistent depth
maps from a monocular video. Specifically, we optimize an existing single image depth estimation network on the test example
at hand. We do so by introducing pseudo reference depth maps which are computed based on the observation that the optical
flow displacement for an image pair should be consistent with the displacement obtained by depth-reprojection. Additionally,
we discard inaccurate pseudo reference depth maps using a simple median strategy and propose a way to compute a confidence
map for the reference depth. We use our pseudo reference depth and the confidence map to formulate a loss function for performing
the test-time optimization in an efficient and effective manner. We compare our approach against the state-of-the-art methods on
various scenes both visually and numerically. Our approach is on average 2.5× faster than the state of the art and produces depth maps with higher quality.
- Video frame interpolation for high dynamic range sequences captured with dual-exposure sensors, Uğur Çoğalan, Mojtaba Bemana, Hans-Peter Seidel and Karol Myszkowski
Video frame interpolation (VFI) enables many important applications such as slow motion playback and frame rate conversion. However,
one major challenge in using VFI is accurately handling high dynamic range (HDR) scenes with complex motion. To this end, we explore
the possible advantages of dual-exposure sensors that readily provide sharp short and blurry long exposures that are spatially registered and
whose ends are temporally aligned. This way, motion blur registers temporally continuous information on the scene motion that, combined with the sharp reference, enables more precise motion sampling within a single camera shot. We demonstrate that this facilitates a more complex motion reconstruction in the VFI task, as well as HDR frame reconstruction that so far has been considered only for the originally captured frames, not in-between interpolated frames. We design a neural network trained in these tasks that clearly outperforms existing solutions. We also propose a
metric for scene motion complexity that provides important insights into the performance of VFI methods at test time.
- Simulating analogue film damage to analyse and improve artefact restoration on high-resolution scans, Daniela Ivanova, John Williamson and Paul Henderson
Digital scans of analogue photographic film typically contain artefacts such as dust and scratches. Automated removal of
these is an important part of preservation and dissemination of photographs of historical and cultural importance.
While state-of-the-art deep learning models have shown impressive results in general image inpainting and denoising,
film artefact removal is an understudied problem. It has particularly challenging requirements, due to the complex
nature of analogue damage, the high resolution of film scans, and potential ambiguities in the restoration. There are no
publicly available high quality datasets of real-world analogue film damage for training and evaluation, making quantitative
studies impossible. We address the lack of ground-truth data for evaluation by collecting a dataset of 4K damaged analogue
film scans paired with manually-restored versions produced by a human expert, allowing quantitative evaluation of restoration
performance. We have made the dataset available at https://doi.org/10.6084/m9.figshare.21803304. We construct a larger synthetic
dataset of damaged images with paired clean versions using a statistical model of artefact shape and occurrence learnt from real,
heavily-damaged images. We carefully validate the realism of the simulated damage via a human perceptual study, showing that even
expert users find our synthetic damage indistinguishable from real damage. In addition, we demonstrate that training with our
synthetically damaged dataset leads to improved artefact segmentation performance when compared to previously proposed
synthetic analogue damage overlays. The synthetically damaged dataset can be found at https://doi.org/10.6084/m9.figshare.21815844,
and the annotated authentic artefacts along with the resulting statistical damage model at https://github.com/daniela997/FilmDamageSimulator.
Finally, we use these datasets to train and analyse the performance of eight state-of-the-art image restoration methods on high-resolution scans.
We compare both methods which directly perform the restoration task on scans with artefacts, and methods which require a damage mask
to be provided for the inpainting of artefacts. We modify the methods to process the inputs in a patch-wise fashion to operate on original high resolution film scans.
FP05 Learning Deformations and Fluids
15.30 – 17.00 | Günter Hotz Lecture Hall | Mario Botsch |
- Differentiable Depth for Real2Sim Calibration of Soft Body Simulations, Kasra Arnavaz, Max Kragballe Nielsen, Paul G. Kry, Miles Macklin and Kenny Erleben (CGF)
In this work, we present a novel approach for calibrating material model parameters for soft body simulations using real data.
We use a fully differentiable pipeline, combining a differentiable soft body simulator and differentiable depth rendering,
which permits fast gradient-based optimizations. Our method requires no data pre-processing, and minimal experimental set-up,
as we directly minimize the L2-norm between raw LIDAR scans and rendered simulation states. In essence, we provide the first
marker-free approach for calibrating a soft-body simulator to match observed real-world deformations. Our approach is inexpensive
as it solely requires a consumer-level LIDAR sensor compared to acquiring a professional marker-based motion capture system.
We investigate the effects of different material parameterizations and evaluate convergence for parameter optimization in both
single and multi-material scenarios of varying complexity. Finally, we show that our set-up can be extended to optimize for dynamic behaviour as well.
- How Will It Drape Like? Capturing Fabric Mechanics from Depth Images, Carlos Rodriguez-Pardo, Melania Prieto-Martin, Dan Casas and Elena Garces
We propose a method to estimate the mechanical parameters of fabrics using a casual capture setup with a depth camera.
Our approach enables to create mechanically-correct digital representations of real-world textile materials, which is
a fundamental step for many interactive design and engineering applications. As opposed to existing capture methods,
which typically require expensive setups, video sequences, or manual intervention, our solution can capture at scale,
is agnostic to the optical appearance of the textile, and facilitates fabric arrangement by non-expert operators. To this end,
we propose a sim-to-real strategy to train a learning-based framework that can take as input one or multiple images and
outputs a full set of mechanical parameters. Thanks to carefully designed data augmentation and transfer learning protocols,
our solution generalizes to real images despite being trained only on synthetic data, hence successfully closing the sim-to-real loop.
Key in our work is to demonstrate that evaluating the regression accuracy based on the similarity in parameter space leads to
inaccurate distances that do not match human perception. To overcome this, we propose a novel metric for fabric drape similarity
that operates in the image domain instead of the parameter space, allowing us to evaluate our estimation within the context of a
similarity rank. We show that our metric correlates with human judgments about the perception of drape similarity, and that our
model predictions produce perceptually accurate results compared to the ground truth parameters.
- Physics-Informed Neural Corrector for Deformation-based Fluid Control, Jingwei Tang, Byungsoo Kim, Vinicius Azevedo and Barbara Solenthaler
Controlling fluid simulations is notoriously difficult due to their high computational cost and the fact that user control inputs can cause unphysical motion.
We present an interactive method for deformation-based fluid control. Our method aims at balancing the direct deformations of fluid fields and the
preservation of physical characteristics. We train convolutional neural networks with physics-inspired loss functions together with a differentiable
fluid simulator, and provide an efficient workflow for flow manipulations at test time. We demonstrate diverse test cases to analyze our carefully
designed objectives and show that they lead to physical and eventually visually appealing modifications on edited fluid data.
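As a flavor of what a physics-inspired loss can mean for incompressible flow, here is a generic sketch penalizing the divergence of an edited 2D velocity field alongside fidelity to the user's target (our illustration; the paper's objectives are more involved):

```python
import numpy as np

def divergence_2d(u, v, h=1.0):
    """Central-difference divergence of a 2D velocity field sampled on a periodic grid."""
    du_dx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * h)
    dv_dy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * h)
    return du_dx + dv_dy

def control_loss(u, v, u_target, v_target, lam=0.1):
    """Fidelity to the deformation target plus a penalty on compressibility."""
    fidelity = np.mean((u - u_target) ** 2 + (v - v_target) ** 2)
    physics = np.mean(divergence_2d(u, v) ** 2)
    return fidelity + lam * physics
```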
Wednesday, 10
FP06 Reconstruction and Remeshing
09.00 – 10.30 | Günter Hotz Lecture Hall | Jean-Marc Thiery |
- Robust Pointset Denoising of Piecewise-Smooth Surfaces through Line Processes, Jiayi Wei, Jiong Chen, Damien Rohmer, Pooran Memari and Mathieu Desbrun
Denoising is a common, yet critical operation in geometry processing aiming at recovering high-fidelity models
of piecewise-smooth objects from noise-corrupted pointsets. Despite a sizable literature on the topic, there
is a dearth of approaches capable of processing very noisy and outlier-ridden input pointsets for which no
normal estimates and no assumptions on the underlying geometric features or noise type are provided. In this paper,
we propose a new robust-statistics approach to denoising pointsets based on line processes to offer robustness to
noise and outliers while preserving sharp features possibly present in the data. While the use of robust statistics
in denoising is hardly new, most approaches rely on prescribed filtering using data-independent blending expressions
based on the spatial and normal closeness of samples. Instead, our approach deduces a geometric denoising strategy
through robust and regularized tangent plane fitting of the initial pointset, obtained numerically via alternating
minimizations for efficiency and reliability. Key to our variational approach is the use of line processes to identify
inliers vs. outliers, as well as the presence of sharp features. We demonstrate that our method can denoise sampled
piecewise-smooth surfaces for levels of noise and outliers at which previous works fall short.
- Evocube: a Genetic Labeling Framework for Polycube-Maps, Corentin Dumery, François Protais, Sébastien Mestrallet, Christophe Bourcier and Franck Ledoux (CGF)
Polycube-maps are used as base-complexes in various fields of computational geometry, including the generation
of regular all-hexahedral meshes free of internal singularities. However, the strict alignment constraints
behind polycube-based methods make their computation challenging for CAD models used in numerical simulation
via finite element method (FEM). We propose a novel approach based on an evolutionary algorithm to robustly compute
polycube-maps in this context. We address the labelling problem, which aims to precompute polycube alignment by assigning
one of the base axes to each boundary face on the input. Previous research has described ways to initialize and improve
a labelling via greedy local fixes. However, such algorithms lack robustness and often converge to inaccurate solutions
for complex geometries. Our proposed framework alleviates this issue by embedding labelling operations in an evolutionary
heuristic, defining fitness, crossover, and mutations in the context of labelling optimization. We evaluate our method on a
thousand smooth and CAD meshes, showing Evocube converges to accurate labellings on a wide range of shapes. The limitations
of our method are also discussed thoroughly.
- One Step Further Beyond Trilinear Interpolation and Central Differences: Triquadratic Reconstruction and its Analytic Derivatives at the Cost of One Additional Texture Fetch, Balazs Csebfalvi
Recently, it has been shown that the quality of GPU-based trilinear volume resampling can be significantly improved if
the six additional trilinear samples evaluated for the gradient estimation also contribute to the reconstruction of
the underlying function [Cse19]. Although this improvement increases the approximation order from two to three without
any extra cost, the continuity order remains C0. In this paper, we go one step further showing that a C1 continuous
triquadratic B-spline reconstruction and its analytic partial derivatives can be evaluated by taking only one more
trilinear sample into account. Thus, our method is the first volume-resampling technique that is nearly as fast as
trilinear interpolation combined with on-the-fly central differencing, but provides a higher-quality reconstruction
together with a consistent analytic gradient calculation. Furthermore, we show that our fast evaluation scheme can
also be adapted to the Mitchell-Netravali [MN88] notch filter, for which a fast GPU implementation has not been known so far.
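For reference, the standard baseline this paper improves on, trilinear interpolation plus on-the-fly central differences, looks like this in generic CPU-side code (our sketch, not the paper's GPU scheme):

```python
import numpy as np

def trilinear(volume, p):
    """Trilinearly interpolate a 3D array at continuous position p (inside the volume)."""
    i0 = np.floor(p).astype(int)
    f = p - i0
    x0, y0, z0 = i0
    x1, y1, z1 = i0 + 1
    value = 0.0
    for xi, wx in ((x0, 1 - f[0]), (x1, f[0])):
        for yi, wy in ((y0, 1 - f[1]), (y1, f[1])):
            for zi, wz in ((z0, 1 - f[2]), (z1, f[2])):
                value += wx * wy * wz * volume[xi, yi, zi]
    return value

def central_difference_gradient(volume, p, h=1.0):
    """Gradient estimate from six extra trilinear samples around p."""
    grad = np.zeros(3)
    for axis in range(3):
        e = np.zeros(3)
        e[axis] = h
        grad[axis] = (trilinear(volume, p + e) - trilinear(volume, p - e)) / (2 * h)
    return grad
```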
FP07 BRDFs and Environment Maps
09.00 – 10.30 | CS Lecture Hall | Valentin Deschaintre |
- Learning to Learn and Sample BRDFs, Chen Liu, Michael Fischer and Tobias Ritschel
We propose a method to accelerate the joint process of physically acquiring and learning neural Bi-directional Reflectance Distribution
Function (BRDF) models. While BRDF learning alone can be accelerated by meta-learning, acquisition remains slow as it relies on a
mechanical process. We show that meta-learning can be extended to optimize the physical sampling pattern, too. After our method has
been meta-trained for a set of fully-sampled BRDFs, it is able to quickly train on new BRDFs with up to five orders of magnitude
fewer physical acquisition samples at similar quality. Our approach also extends to other linear and non-linear BRDF models, which we show in an extensive evaluation.
- Investigation and Simulation of Diffraction on Rough Surfaces, Olaf Clausen, Yang Chen, Arnulph Fuhrmann and Ricardo Marroquim (CGF)
Simulating light–matter interaction is a fundamental problem in computer graphics. A particular challenge is the simulation
of light interaction with rough surfaces due to diffraction and multiple scattering phenomena. To properly model these phenomena,
wave-optics have to be considered. Nevertheless, the most accurate BRDF models, including wave-optics, are computationally expensive,
and the resulting renderings have not been systematically compared to real-world measurements. This work sheds more light on reflectance
variations due to surface roughness. More specifically, we look at wavelength shifts that lead to reddish and blueish appearances.
These wavelength shifts have been scarcely reported in the literature, and, in this paper, we provide the first thorough analysis from
precise measured data. We measured the spectral in-plane BRDF of aluminium samples with varying roughness and further acquired the surface
topography with a confocal microscope. The measurements show that the rough samples have, on average, a reddish and blueish appearance
in the forward and back-scattering, respectively. Our investigations conclude that this is a diffraction-based effect that dominates the
overall appearance of the samples. Simulations using a virtual gonioreflectometer further confirm our claims. We propose a linear model
that can closely fit such phenomena, where the slope of the wavelength shifts depends on the incident and reflection direction. Based
on these insights, we developed a simple BRDF model based on the Cook–Torrance model that considers such wavelength shifts.
- CubeGAN: Omnidirectional Image Synthesis Using Generative Adversarial Networks, Christopher May and Daniel Aliaga
We propose a framework to create projectively-correct and seam-free cube-map images using generative adversarial learning.
Deep generation of cube-maps that contain the correct projection of the environment onto its faces is not
straightforward as has been recognized in prior work. Our approach extends an existing framework, StyleGAN3,
to produce cube-maps instead of planar images. In addition to reshaping the output, we include a cube-specific
volumetric initialization component, a projective resampling component, and a modification of augmentation operations
to the spherical domain. Our results demonstrate the network’s generation capabilities trained on imagery from various
3D environments. Additionally, we show the power and quality of our GAN design in an inversion task, combined with navigation capabilities, to perform novel view synthesis.
FP08 Simulation: Material Interactions
11.00 – 12.00 | Günter Hotz Lecture Hall | Jan Bender |
- An Optimization-based SPH Solver for Simulation of Hyperelastic Solids, Min Hyung Kee, Kiwon Um, Hyunmo Kang and JungHyun Han
This paper proposes a novel method for simulating hyperelastic solids with Smoothed Particle Hydrodynamics (SPH).
The proposed method extends the coverage of the state-of-the-art elastic SPH solid method to include different types
of hyperelastic materials, such as the Neo-Hookean and the St. Venant–Kirchhoff models. To this end, we reformulate an
implicit integration scheme for SPH elastic solids into an optimization problem and solve the problem using a general-purpose
quasi-Newton method. Our experiments show that the Limited-memory BFGS (L-BFGS) algorithm can be employed to efficiently
solve our optimization problem in the SPH framework and demonstrate its stable and efficient simulations for complex materials
in the SPH framework. Thanks to the nature of our unified representation for both solids and fluids, the SPH formulation simplifies coupling between different materials and handling collisions. (A minimal sketch of this optimization view of implicit integration follows this session's listings.)
- Monolithic Friction and Contact Handling for Rigid Bodies and Fluids using SPH, Timo Probst and Matthias Teschner (CGF)
We propose a novel monolithic pure SPH formulation to simulate fluids strongly coupled with rigid bodies. This includes fluid incompressibility,
fluid–rigid interface handling and rigid–rigid contact handling with a viable implicit particle-based dry friction formulation. The resulting
global system is solved using a new accelerated solver implementation that outperforms existing fluid and coupled rigid–fluid simulation approaches.
We compare results of our simulation method to analytical solutions, show performance evaluations of our solver and present a variety of new and challenging simulation scenarios.
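The first paper in this session (Kee et al.) recasts implicit integration as a minimization solved with a quasi-Newton method. A minimal sketch of that general idea on a toy mass–spring chain standing in for their SPH hyperelastic energy (all parameters and the energy are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

n, dt, m, k, rest = 10, 1e-2, 1.0, 1e3, 0.1       # toy chain parameters
gravity = np.array([0.0, -9.81])

def elastic_energy(x_flat):
    # Springs between neighbours stand in for a hyperelastic energy.
    x = x_flat.reshape(n, 2)
    lengths = np.linalg.norm(x[1:] - x[:-1], axis=1)
    return 0.5 * k * np.sum((lengths - rest) ** 2)

def step(x_n, v_n):
    # Implicit Euler as a minimization: inertia term + elastic energy.
    x_star = x_n + dt * v_n + dt * dt * gravity   # inertial prediction
    def objective(x_flat):
        x = x_flat.reshape(n, 2)
        inertia = 0.5 * m / dt**2 * np.sum((x - x_star) ** 2)
        return inertia + elastic_energy(x_flat)
    res = minimize(objective, x_n.ravel(), method="L-BFGS-B")
    x_next = res.x.reshape(n, 2)
    return x_next, (x_next - x_n) / dt            # implicit velocity update

x = np.stack([np.linspace(0.0, 0.9, n), np.zeros(n)], axis=1)
v = np.zeros_like(x)
x, v = step(x, v)
```

Minimizing the inertia-plus-energy objective reproduces the implicit Euler update, which is what makes a general-purpose L-BFGS solver applicable.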
FP09 3D Representation and Acceleration Structures
11.00 – 12.30 | CS Lecture Hall | Vlastimil Havran |
- Editing Compressed High-resolution Voxel Scenes with Attributes, Mathijs Molenaar and Elmar Eisemann
Sparse Voxel Directed Acyclic Graphs (SVDAGs) are an efficient solution for storing high-resolution voxel geometry. Recently,
algorithms for the interactive modification of SVDAGs have been proposed that maintain the compressed geometric representation.
Nevertheless, voxel attributes, such as colours, require uncompressed storage, which can result in high memory usage over
the course of the application. The reason is the high cost of existing attribute-compression schemes which remain unfit for
interactive applications. In this paper, we introduce two attribute compression methods (lossless and lossy), which enable the
interactive editing of compressed high-resolution voxel scenes including attributes.
- Parallel Transformation of Bounding Volume Hierarchies into Oriented Bounding Box Trees, Nick Vitsas, Iordanis Evangelou, Georgios Papaioannou and Anastasios Gkaravelis
Oriented bounding box (OBB) hierarchies can be used instead of hierarchies based on axis-aligned bounding boxes (AABB), providing tighter fitting to the
underlying geometric structures and resulting in improved interference tests, such as ray-geometry intersections. In this paper, we present a method for
the fast, parallel transformation of an existing bounding volume hierarchy (BVH), based on AABBs, into a hierarchy based on oriented bounding boxes.
To this end, we parallelise a high-quality OBB extraction algorithm from the literature to operate as a standalone OBB estimator and further extend
it to efficiently build an OBB hierarchy in a bottom up manner. This agglomerative approach allows for fast parallel execution and the formation of
arbitrary, high-quality OBBs in bounding volume hierarchies. The method is fully implemented on the GPU and extensively evaluated with ray intersections.
- Stochastic Subsets for BVH Construction, Lorenzo Tessari, Addis Dittebrandt, Michael Doyle and Carsten Benthin
BVH construction is a critical component of real-time and interactive ray-tracing systems. However, BVH construction
can be both compute and bandwidth intensive, especially when a large degree of dynamic geometry is present. Different
build algorithms vary substantially in the traversal performance that they produce, making high quality construction
algorithms desirable. However, high quality algorithms, such as top-down construction, are typically more expensive,
limiting their benefit in real-time and interactive contexts. One particular challenge of high quality top-down construction
algorithms is that the large working set at the top of the tree can make constructing these levels bandwidth-intensive,
due to O(n log n) complexity, limited cache locality, and less dense compute at these levels. To address this limitation,
we propose a novel stochastic approach to GPU BVH construction that selects a representative subset to build the upper
levels of the tree. As a second pass, the remaining primitives are clustered around the BVH leaves and further processed
into a complete BVH. We show that our novel approach significantly reduces the construction time of top-down GPU BVH
builders by a factor of up to 1.8x, while achieving competitive rendering performance in most cases and exceeding it in others.
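The two-pass structure — build the upper levels from a stochastic representative subset, then cluster the remaining primitives around the resulting leaves — can be sketched in a few lines of NumPy (a CPU toy; the paper targets massively parallel GPU builders and runs a proper top-down build over the subset):

```python
import numpy as np

rng = np.random.default_rng(0)
centroids = rng.random((2_000, 3))                  # toy primitive centroids

# Pass 1: stochastic representative subset for the upper tree levels.
subset = rng.choice(len(centroids), size=len(centroids) // 16, replace=False)
reps = centroids[subset]                            # would seed a top-down build

# Pass 2: cluster every remaining primitive around its nearest representative
# (i.e. around a leaf of the upper tree) before completing the BVH below.
rest = np.setdiff1d(np.arange(len(centroids)), subset)
d = np.linalg.norm(centroids[rest, None, :] - reps[None, :, :], axis=2)
owner = d.argmin(axis=1)                            # leaf index per primitive
```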
FP10 Faces
15.30 – 17.00 | Günter Hotz Lecture Hall | Marc Habermann |
- Face Editing using Part-Based Optimization of the Latent Space, Mohammad Amin Aliari, Andre Beauchamp, Tiberiu Popa and Eric Paquette
We propose an approach for interactive 3D face editing based on deep generative models. Most of the current face modeling
methods rely on linear methods and cannot express complex and non-linear deformations. In contrast to 3D morphable face
models based on Principal Component Analysis (PCA), we introduce a novel architecture based on variational autoencoders.
Our architecture has multiple encoders (one for each part of the face, such as the nose and mouth) which feed a single
decoder. As a result, each sub-vector of the latent vector represents one part. We train our model with a novel loss function
that further disentangles the space based on different parts of the face. The output of the network is a whole 3D face. Hence,
unlike part-based PCA methods, our model learns to merge the parts intrinsically and does not require an additional merging process.
To achieve interactive face modeling, we optimize for the latent variables given vertex positional constraints provided by a user.
To avoid unwanted global changes elsewhere on the face, we only optimize the subset of the latent vector that corresponds to the
part of the face being modified. Our editing optimization converges in less than a second. Our results show that the proposed
approach supports a broader range of editing constraints and generates more realistic 3D faces.
- What’s in a Decade? Transforming Faces Through Time, Eric Ming Chen, Jin Sun, Apoorv Khandelwal, Dani Lischinski, Noah Snavely and Hadar Averbuch-Elor
How can one visually characterize photographs of people over time? In this work, we describe the Faces
Through Time dataset, which contains over a thousand portrait images per decade from the 1880s to the
present day. Using our new dataset, we devise a framework for resynthesizing portrait images across time,
imagining how a portrait taken during a particular decade might have looked had it been taken in other
decades. Our framework optimizes a family of per-decade generators that reveal subtle changes that differentiate
decades—such as different hairstyles or makeup—while maintaining the identity of the input portrait. Experiments
show that our method can more effectively resynthesize portraits across time compared to state-of-the-art
image-to-image translation methods, as well as attribute-based and language-guided portrait editing models.
Our code and data will be available at facesthroughtime.github.io.
- Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition, Xingchao Yang, Takafumi Taketomi and Yoshihiro Kanamori
Facial makeup enriches the beauty of not only real humans but also virtual characters; therefore,
makeup for 3D facial models is highly in demand in productions. However, painting directly on 3D
faces and capturing real-world makeup are costly, and extracting makeup from 2D images often struggles
with shading effects and occlusions. This paper presents the first method for extracting makeup for 3D
facial models from a single makeup portrait. Our method consists of the following three steps. First, we
exploit the strong prior of 3D morphable models via regression-based inverse rendering to extract coarse
materials such as geometry and diffuse/specular albedos that are represented in the UV space. Second, we
refine the coarse materials, which may have missing pixels due to occlusions. We apply inpainting and optimization.
Finally, we extract the bare skin, makeup, and an alpha matte from the diffuse albedo. Our method offers various
applications for not only 3D facial models but also 2D portrait images. The extracted makeup is well-aligned in the UV space,
from which we build a large-scale makeup dataset and a parametric makeup model for 3D faces. Our disentangled materials also
yield robust makeup transfer and illumination-aware makeup interpolation/removal without a reference image.
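The final step above separates the diffuse albedo into bare skin, a makeup layer, and an alpha matte. Under a standard alpha-compositing assumption (not necessarily the authors' exact formulation), A = (1 − α)·skin + α·makeup, so the makeup layer can be recovered once skin and α are estimated:

```python
import numpy as np

def extract_makeup(albedo, bare_skin, alpha, eps=1e-4):
    """Invert A = (1 - alpha) * skin + alpha * makeup per texel.

    albedo and bare_skin are HxWx3, alpha is HxWx1, all in [0, 1];
    eps guards against division by a near-zero matte.
    """
    makeup = (albedo - (1.0 - alpha) * bare_skin) / np.maximum(alpha, eps)
    return np.clip(makeup, 0.0, 1.0)

# Toy usage with random textures standing in for estimated UV-space maps.
h, w = 64, 64
makeup = extract_makeup(np.random.rand(h, w, 3),
                        np.random.rand(h, w, 3),
                        np.random.rand(h, w, 1))
```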
Thursday, 11
FP11 Topological and Geometric Shape Understanding
09.00 – 10.30 | Günter Hotz Lecture Hall | Pooran Memari |
- A Variational Loop Shrinking Analogy for Handle and Tunnel Detection and Reeb Graph Construction on Surfaces, Alexander Weinrauch, Daniel Mlaka, Hans-Peter Seidel, Markus Steinberger and Rhaleb Zayer
The humble loop shrinking property played a central role in the inception of modern topology but it has been eclipsed
by more abstract algebraic formalisms. This is particularly true in the context of detecting relevant non-contractible
loops on surfaces where elaborate homological and/or graph theoretical constructs are favored in algorithmic solutions.
In this work, we devise a variational analogy to the loop shrinking property and show that it yields a simple, intuitive,
yet powerful solution allowing a streamlined treatment of the problem of handle and tunnel loop detection. Our formalization
tracks the evolution of a diffusion front randomly initiated on a single location on the surface. Capitalizing on a diffuse
interface representation combined with a set of rules for concurrent front interactions, we develop a dynamic data structure
for tracking the evolution on the surface encoded as a sparse matrix which serves for performing both diffusion numerics and
loop detection and acts as the workhorse of our fully parallel implementation. The substantiated results suggest our approach
outperforms the state of the art and robustly copes with highly detailed geometric models. As a byproduct, our approach can be used
to construct Reeb graphs by diffusion, thus avoiding commonly encountered issues when using Morse functions.
- Priority-based encoding of triangle mesh connectivity for a known geometry, Jan Dvořák, Zuzana Káčereková, Petr Vaněček and Libor Váša (CGF)
In certain practical situations, the connectivity of a triangle mesh needs to be transmitted or stored given a fixed set of 3D
vertices that is known at both ends of the transaction (encoder/decoder). This task is different from a typical mesh compression
scenario, in which the connectivity and geometry (vertex positions) are encoded either simultaneously or in reversed order (connectivity first),
usually exploiting the freedom in vertex/triangle re-indexation. Previously proposed algorithms for encoding the connectivity for a
known geometry were based on a canonical mesh traversal and predicting which vertex is to be connected to the part of the mesh that
is already processed. In this paper, we take this scheme a step further by replacing the fixed traversal with a priority queue of open
expansion gates, out of which in each step the gate with the most certain prediction is selected, that is, the one whose
candidate vertex exhibits the largest advantage over the other possible candidates according to a carefully designed
quality metric. Numerical experiments demonstrate that this improvement leads to a substantial reduction in the required data rate
in comparison with the state of the art. (A sketch of such a priority-driven traversal follows this session's listings.)
- Evolving Guide Subdivision, Kestutis Karciauskas and Jorg Peters
To overcome the well-known shape deficiencies of bi-cubic subdivision surfaces, Evolving Guide subdivision (EG subdivision) generalizes
C2 bi-quartic (bi-4) splines that approximate a sequence of piecewise polynomial surface pieces near extraordinary points. Unlike guided
subdivision, which achieves good shape by following a guide surface in a two-stage, geometry-dependent process, EG subdivision is defined
by five new explicit subdivision rules. While formally only C1 at extraordinary points, EG subdivision applied to an obstacle course of
inputs generates surfaces without the oscillations and pinched highlight lines typical for Catmull-Clark subdivision. EG subdivision surfaces
join C2 with bi-3 surface pieces obtained by interpreting regular sub-nets as bi-cubic tensor-product splines and C2 with adjacent EG surfaces.
The EG subdivision control net surrounding an extraordinary node can have the same structure as Catmull-Clark subdivision: two rings of 4-sided
facets around each extraordinary node, so that extraordinary nodes are separated by at least one regular node.
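The priority-driven traversal of Dvořák et al. above replaces a fixed traversal order with a max-priority queue over open gates, keyed by how confidently the next vertex can be predicted. A schematic sketch with Python's heapq (`gates`, `candidates` and `score` are hypothetical stand-ins for the paper's data structures and quality metric; the real encoder also re-scores gates as the mesh grows):

```python
import heapq
import itertools

def traversal_order(gates, candidates, score):
    """Expand the most predictable gate first: the one with the largest
    margin between its best and second-best candidate vertex."""
    tie = itertools.count()                 # tie-breaker for equal margins
    heap = []
    for g in gates:
        s = sorted((score(g, v) for v in candidates(g)), reverse=True)
        margin = s[0] - s[1] if len(s) > 1 else float("inf")
        heapq.heappush(heap, (-margin, next(tie), g))   # max-margin first
    order = []
    while heap:
        _, _, g = heapq.heappop(heap)
        order.append(g)     # real encoder: emit symbol, open new gates here
    return order

# Toy usage: three gates; the single-candidate gate is the most certain.
cand = {"g0": [1, 2], "g1": [3, 4, 5], "g2": [6]}
print(traversal_order(cand, lambda g: cand[g], lambda g, v: float(v)))
```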
FP12 Materials and Textures
09.00 – 10.30 | CS Lecture Hall | Tobias Ritschel |
- In-the-wild Material Appearance Editing using Perceptual Attributes, José Daniel Subías and Manuel Lagunas
Intuitively editing the appearance of materials from a single image is a challenging task given the complexity of the interactions between
light and matter, and the ambivalence of human perception. This problem has been traditionally addressed by estimating additional factors
of the scene like geometry or illumination, thus solving an inverse rendering problem and subduing the final quality of the results to the
quality of these estimations. We present a single-image appearance editing framework that allows us to intuitively modify the material
appearance of an object by increasing or decreasing high-level perceptual attributes describing such appearance (e.g., glossy or metallic).
Our framework takes as input an in-the-wild image of a single object, where geometry, material, and illumination are not controlled, and
inverse rendering is not required. We rely on generative models and devise a novel architecture with Selective Transfer Unit (STU) cells that
allow us to preserve the high-frequency details of the input image in the edited one. To train our framework, we leverage a dataset with pairs
of synthetic images rendered with physically-based algorithms, and the corresponding crowd-sourced ratings of high-level perceptual attributes.
We show that our material editing framework outperforms the state of the art, and showcase its applicability on synthetic images, in-the-wild real-world photographs, and video sequences.
- A Semi-procedural Convolutional Material Prior, Xilong Zhou, Miloš Hašan, Valentin Deschaintre, Paul Guerrero, Kalyan Sunkavalli and Nima Khademi Kalantari (CGF)
Lightweight material capture methods require a material prior, defining the subspace of plausible textures within the large space
of unconstrained texel grids. Previous work has either used deep neural networks (trained on large synthetic material datasets)
or procedural node graphs (constructed by expert artists) as such priors. In this paper, we propose a semi-procedural differentiable
material prior that represents materials as a set of (typically procedural) grayscale noises and patterns that are processed by a
sequence of lightweight learnable convolutional filter operations. We demonstrate that the restricted structure of this architecture
acts as an inductive bias on the space of material appearances, allowing us to optimize the weights of the convolutions per-material,
with no need for pre-training on a large dataset. Combined with a differentiable rendering step and a perceptual loss, we enable
single-image tileable material capture comparable with state of the art. Our approach does not target the pixel-perfect recovery
of the material, but rather uses noises and patterns as input to match the target appearance. To achieve this, it does not require
complex procedural graphs, and has a much lower complexity, computational cost and storage cost. We also enable control over the results,
through changing the provided patterns and using guide maps to push the material properties towards a user-driven objective.
- Preserving the autocovariance of texture tilings by using importance sampling, Nicolas Lutz, Basile Sauvage and Jean-Michel Dischler
By-example aperiodic tilings are popular texture synthesis techniques that allow a fast, on-the-fly generation of unbounded
and non-periodic textures with an appearance matching an arbitrary input sample called the “exemplar”. But by relying on
uniform random sampling, these algorithms fail to preserve the autocovariance function, resulting in correlations that do
not match the ones in the exemplar. The output can then be perceived as excessively random. In this work, we present a new
method which preserves the autocovariance function of the exemplar well. It consists of fetching content with an importance
sampler that takes the explicit autocovariance function as its probability density function (pdf). Our method can be
controlled to increase or decrease the apparent randomness of the texture. Besides significantly improving synthesis quality
for classes of textures characterized by pronounced autocovariance functions, we moreover propose a real-time tiling and blending
scheme that permits the generation of high-quality textures faster than former algorithms with minimal downsides by reducing the number of texture fetches.
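The core sampling idea can be sketched compactly: compute the exemplar's autocovariance via the Wiener–Khinchin theorem, normalize its positive part into a pdf, and draw content offsets from it instead of uniformly (a simplification; the paper handles colour, blending, and real-time tiling on top of this):

```python
import numpy as np

def autocovariance(exemplar):
    """Autocovariance of a grayscale exemplar via FFT (Wiener-Khinchin)."""
    f = np.fft.fft2(exemplar - exemplar.mean())
    return np.real(np.fft.ifft2(f * np.conj(f))) / exemplar.size

exemplar = np.random.rand(64, 64)          # stand-in for a real texture
pdf = np.maximum(autocovariance(exemplar), 0.0)
pdf /= pdf.sum()                           # autocovariance as sampling density

# Importance-sample a content offset instead of drawing it uniformly,
# so strongly correlated offsets are picked more often.
flat = np.random.choice(pdf.size, p=pdf.ravel())
dy, dx = np.unravel_index(flat, pdf.shape)
```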
FP13 Capturing Human Pose and Appearance
11.00 – 12.30 | Günter Hotz Lecture Hall | Yiorgos Chrysanthou |
- Variational Pose Prediction with Dynamic Sample Selection from Sparse Tracking Signals, Nicholas Milef, Shinjiro Sueda and Nima Khademi Kalantari
We propose a learning-based approach for full-body pose reconstruction from extremely sparse upper body tracking data, obtained
from a virtual reality (VR) device. We leverage a conditional variational autoencoder with gated recurrent units to synthesize
plausible and temporally coherent motions from 4-point tracking (head, hands, and waist positions and orientations). To avoid
synthesizing implausible poses, we propose a novel sample selection and interpolation strategy along with an anomaly detection
algorithm. Specifically, we monitor the quality of our generated poses using the anomaly detection algorithm and smoothly transition
to better samples when the quality falls below a statistically defined threshold. Moreover, we demonstrate that our sample selection
and interpolation method can be used for other applications, such as target hitting and collision avoidance, where the generated motions
should adhere to the constraints of the virtual environment. Our system is lightweight, operates in real-time, and is able to produce temporally coherent and realistic motions.
- Scene-Aware 3D Multi-Human Motion Capture from a Single Camera, Diogo Luvizon, Marc Habermann, Vladislav Golyanik, Adam Kortylewski and Christian Theobalt
In this work, we consider the problem of estimating the 3D position of multiple humans in a scene
as well as their body shape and articulation from a single RGB video recorded with a static camera.
In contrast to expensive marker-based or multi-view systems, our lightweight setup is ideal for private
users as it enables an affordable 3D motion capture that is easy to install and does not require expert
knowledge. To deal with this challenging setting, we leverage recent advances in computer vision using
large-scale pre-trained models for a variety of modalities, including 2D body joints, joint angles, normalized
disparity maps, and human segmentation masks. Thus, we introduce the first non-linear optimization-based
approach that jointly solves for the absolute 3D position of each human, their articulated pose, their individual
shapes as well as the scale of the scene. In particular, we estimate the scene depth and each person's unique scale from
normalized disparity predictions using the 2D body joints and joint angles. Given the per-frame scene depth, we
reconstruct a point-cloud of the static scene in 3D space. Finally, given the per-frame 3D estimates of the humans
and scene point-cloud, we perform a space-time coherent optimization over the video to ensure temporal, spatial and
physical plausibility. We evaluate our method on established multi-person 3D human pose benchmarks where we consistently
outperform previous methods and we qualitatively demonstrate that our method is robust to in-the-wild conditions including challenging scenes with people of different sizes.
- Generating Texture for 3D Human Avatar from a Single Image using Sampling and Refinement Networks, Sihun Cha, Kwanggyoon Seo, Amirsaman Ashtari and Junyong Noh
There has been significant progress in generating an animatable 3D human avatar from a single image. However, recovering texture for the 3D human
avatar from a single image has been relatively less addressed. Because the generated 3D human avatar reveals the occluded texture of the given
image as it moves, it is critical to synthesize the occluded texture pattern that is unseen from the source image. To generate a plausible texture
map for 3D human avatars, the occluded texture pattern needs to be synthesized with respect to the visible texture from the given image. Moreover,
the generated texture should align with the surface of the target 3D mesh. In this paper, we propose a texture synthesis method for a 3D human
avatar that incorporates geometry information. The proposed method consists of two convolutional networks for the sampling and refining process.
The sampler network fills in the occluded regions of the source image and aligns the texture with the surface of the target 3D mesh using the
geometry information. The sampled texture is further refined and adjusted by the refiner network. To maintain the clear details in the given
image, both the sampled and the refined textures are blended to produce the final texture map. To effectively guide the sampler network to achieve its
goal, we designed a curriculum learning scheme that starts from a simple sampling task and gradually progresses to the task where the alignment
needs to be considered. We conducted experiments to show that our method outperforms previous methods qualitatively and quantitatively.
FP14 Good Visualization Practices
11.00 – 12.00 | CS Lecture Hall | Anna Vilanova |
- Learning Human Viewpoint Preferences from Sparsely Annotated Models, Sebastian Hartwig, Michael Schelling, Christian van Onzenoodt, Pere-Pau Vázquez, Pedro Hermosilla and Timo Ropinski (CGF)
View quality measures compute scores for given views and are used to determine an optimal view in viewpoint selection tasks.
Unfortunately, despite the wide adoption of these measures, they are based on computational quantities, such as entropy,
rather than on human preferences. To instead tailor viewpoint measures towards humans, view quality measures need to be able to capture human
viewpoint preferences. Therefore, we introduce a large-scale crowdsourced data set, which contains 58k annotated viewpoints for 3220
ModelNet40 models. Based on this data, we derive a neural view quality measure abiding to human preferences. We further demonstrate that
this view quality measure not only generalizes to models unseen during training, but also to unseen model categories. We are thus able
to predict view qualities for single images, and directly predict human preferred viewpoints for 3D models by exploiting point-based
learning technology, without having to generate intermediate images or sample the view sphere. We will detail our data collection
procedure, describe the data analysis and model training and will evaluate the predictive quality of our trained viewpoint measure on unseen
models and categories. To our knowledge, this is the first deep learning approach to predict a view quality measure solely based on human preferences.
- Decision Boundary Visualization for Counterfactual Reasoning, Jan-Tobias Sohns, Christoph Garth and Heike Leitte (CGF)
Machine learning algorithms are widely applied to create powerful prediction models. With increasingly complex models, humans’
ability to understand the decision function (that maps from a high-dimensional input space) is quickly exceeded. To explain a
model’s decisions, black-box methods have been proposed that provide either non-linear maps of the global topology of the decision
boundary, or samples that allow approximating it locally. The former loses information about distances in input space, while the
latter only provides statements about given samples, but lacks a focus on the underlying model for precise ‘What-If’-reasoning. In
this paper, we integrate both approaches and propose an interactive exploration method using local linear maps of the decision space.
We create the maps on high-dimensional hyperplanes—2D-slices of the high-dimensional parameter space—based on statistical and personal
feature mutability and guided by feature importance. We complement the proposed workflow with established model inspection techniques to
provide orientation and guidance. We demonstrate our approach on real-world datasets and illustrate that it allows identification of
instance-based decision boundary structures and can answer multi-dimensional ‘What-If’-questions, thereby identifying counterfactual scenarios visually.
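The central construction — a local map of the decision function on a 2D slice through an instance — can be sketched with any scikit-learn classifier standing in for the model under inspection (toy data, feature choices, and slice directions are assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy black-box model standing in for the classifier under inspection.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

# 2D slice through one instance, spanned by two mutable feature directions.
x0 = X[0]
d1, d2 = np.eye(5)[0], np.eye(5)[1]
u = np.linspace(-3.0, 3.0, 50)
grid = x0 + u[:, None, None] * d1 + u[None, :, None] * d2   # (50, 50, 5)
probs = model.predict_proba(grid.reshape(-1, 5))[:, 1].reshape(50, 50)
# Thresholding `probs` at 0.5 traces the decision boundary on this slice;
# moving x0 across it yields counterfactual ('What-If') candidates.
```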
FP15 Garment Design
15.30 – 17.00 | Günter Hotz Lecture Hall | Michael Guthe |
- Detail-Aware Deep Clothing Animations Infused with Multi-Source Attributes, Tianxing Li, Rui Shi and Takashi Kanai (CGF)
This paper presents a novel learning-based clothing deformation method to generate rich and reasonable detailed deformations for garments worn by bodies of various
shapes in various animations. In contrast to existing learning-based methods, which require numerous trained models for different garment topologies or poses and
are unable to easily realize rich details, we use a unified framework to produce high fidelity deformations efficiently and easily. Specifically, we first found
that the fit between the garment and the body has an important impact on the degree of folds. We then designed an attribute parser to generate detail-aware encodings
and infused them into the graph neural network, therefore enhancing the discrimination of details under diverse attributes. Furthermore, to achieve better
convergence and avoid overly smooth deformations, we proposed to reconstruct the output to mitigate the complexity of the learning task. Experimental results show
that our proposed deformation method achieves better performance than existing methods in terms of generalization ability and quality of details.
- Designing Personalized Garments with Body Movement, Katja Wolff, Philipp Herholz, Verena Ziegler, Frauke Link, Nico Brügel and Olga Sorkine-Hornung (CGF)
The standardized sizes used in the garment industry do not cover the range of individual differences in body shape for most people, leading
to ill-fitting clothes, high return rates and overproduction. Recent research efforts in both industry and academia, therefore, focus on
virtual try-on and on-demand fabrication of individually fitting garments. We propose an interactive design tool for creating custom-fit
garments based on 3D body scans of the intended wearer. Our method explicitly incorporates transitions between various body poses to ensure
a better fit and freedom of movement. The core of our method focuses on tools to create a 3D garment shape directly on an avatar without
an underlying sewing pattern, and on the adjustment of that garment’s rest shape while interpolating and moving through the different
input poses. We alternate between cloth simulation and rest shape adjustment based on stretch to achieve the final shape of the garment.
At any step in the real-time process, we allow for interactive changes to the garment. Once the garment shape is finalized for production,
established techniques can be used to parameterize it into a 2D sewing pattern or transform it into a knitting pattern.
- Directionality-Aware Design of Embroidery Patterns, Liu Zhenyuan, Michal Piovarči, Christian Hafner, Raphaël Charrondière and Bernd Bickel
Embroidery is a long-standing and high-quality approach to making logos and images on textiles. Nowadays, it can also be performed
via automated machines that weave threads with high spatial accuracy. A characteristic feature of the appearance of the
threads is a high degree of anisotropy. The anisotropic behavior is caused by depositing thin but long strings of thread.
As a result, the stitched patterns convey both color and direction. Artists leverage this anisotropic behavior to enhance
pure color images with textures, illusions of motion, or depth cues. However, designing colorful embroidery patterns with
prescribed directionality is a challenging task, one usually requiring an expert designer. In this work, we propose an
interactive algorithm that generates machine-fabricable embroidery patterns from multi-chromatic images equipped with user-specified
directionality fields. We cast the problem of finding a stitching pattern into vector theory. To find a suitable stitching pattern,
we extract sources and sinks from the divergence field of the vector field extracted from the input and use them to trace streamlines.
We further optimize the streamlines to guarantee a smooth and connected stitching pattern. The generated patterns approximate the color
distribution constrained by the directionality field. To allow for further artistic control, the trade-off between color match and
directionality match can be interactively explored via an intuitive slider. We showcase our approach by fabricating several embroidery paths.
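The pipeline step of extracting sources and sinks from the divergence of the directionality field and tracing streamlines from them can be sketched on a toy grid field (NumPy/SciPy; the field, step size, and stopping rule are assumptions, and the paper further optimizes the traced streamlines):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

n = 64                                              # toy directionality field
ys, xs = np.mgrid[0:n, 0:n] / (n - 1)
vx, vy = np.sin(2 * np.pi * xs), np.cos(2 * np.pi * ys)

# Sources and sinks are the extrema of the divergence field.
div = np.gradient(vx, axis=1) + np.gradient(vy, axis=0)
source = np.unravel_index(div.argmax(), div.shape)
sink = np.unravel_index(div.argmin(), div.shape)

# Trace one streamline from the source by explicit Euler integration.
fx = RegularGridInterpolator((np.arange(n), np.arange(n)), vx)
fy = RegularGridInterpolator((np.arange(n), np.arange(n)), vy)
p, path, h = np.array(source, dtype=float), [], 0.5
for _ in range(500):
    path.append(p.copy())
    v = np.array([fy(p)[0], fx(p)[0]])              # (row, col) velocity
    if np.linalg.norm(v) < 1e-6:
        break                                       # reached a stagnation point
    p = np.clip(p + h * v / np.linalg.norm(v), 0, n - 1)
```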
Friday, 12
FP16 2D Animation and Interaction
09.00 – 10.30 | Günter Hotz Lecture Hall | Holger Theisel |
- Non-linear rough 2D animation using transient embeddings, Melvin Even, Pierre Bénard, and Pascal Barla
Traditional 2D animation requires time and dedication since tens of thousands of frames need to be drawn by hand for a
typical production. Many computer-assisted methods have been proposed to automate the generation of inbetween frames from
a set of clean line drawings, but they are all limited by a rigid workflow and a lack of artistic controls, which is for the
most part due to the one-to-one stroke matching and interpolation problems they attempt to solve. In this work, we take a novel
view on those problems by focusing on an earlier phase of the animation process that uses rough drawings (i.e., sketches). Our
key idea is to recast the matching and interpolation problems so that they apply to transient embeddings, which are groups of strokes
that only exist for a few keyframes. A transient embedding carries strokes between keyframes both forward and backward in time
through a sequence of transformed lattices. Forward and backward strokes are then cross-faded using their thickness to yield rough
inbetweens. With our approach, complex topological changes may be introduced while preserving visual motion continuity. As demonstrated
on state-of-the-art 2D animation exercises, our system provides unprecedented artistic control through the non-linear exploration of movements and dynamics in real-time.
- Interactive design of 2D car profiles with aerodynamic feedback, Nicolas Rosset, Guillaume Cordonnier, Régis Duvigneau and Adrien Bousseau
The design of car shapes requires a delicate balance between aesthetics and performance. While fluid simulation provides
the means to evaluate the aerodynamic performance of a given shape, its computational cost hinders its usage during the
early explorative phases of design, when aesthetics are decided upon. We present an interactive system to assist designers
in creating aerodynamic car profiles. Our system relies on a neural surrogate model to predict fluid flow around car shapes,
providing fluid visualization and shape optimization feedback to designers as soon as they sketch a car profile. Compared to
prior work that focused on time-averaged fluid flows, we describe how to train our model on instantaneous, synchronized observations
extracted from multiple pre-computed simulations, such that we can visualize and optimize for dynamic flow features, such
as vortices. Furthermore, we architected our model to support gradient-based shape optimization within a learned latent
space of car profiles. In addition to regularizing the optimization process, this latent space and an associated encoder-decoder
allow us to input and output car profiles in bitmap form, without any explicit parameterization of the car boundary. Finally,
we designed our model to support pointwise queries of fluid properties around car shapes, allowing us to adapt computational cost
to application needs. As an illustration, we only query our model along streamlines for flow visualization, we query it in the
vicinity of the car for drag optimization, and we query it behind the car for vortex attenuation.
- Delaunay Painting: Perceptual image coloring from raster contours with gaps, Amal Dev Parakkat, Pooran Memari and Marie-Paule Cani (CGF)
We introduce Delaunay Painting, a novel and easy-to-use method to flat-colour contour-sketches with gaps. Starting from a Delaunay
triangulation of the input contours, triangles are iteratively filled with the appropriate colours, thanks to the dynamic update of
flow values calculated from colour hints. Aesthetic finish is then achieved, through energy minimisation of contour-curves and further
heuristics enforcing the appropriate sharp corners. To be more efficient, the user can also make use of our colour diffusion framework,
which automatically extends colouring to small, internal regions such as those delimited by hatches. The resulting method robustly handles
input contours with strong gaps. As an interactive tool, it minimizes the user’s effort and enables any colouring strategy, as the result does
not depend on the order of interactions. We also provide an automated version of the colouring strategy for quick segmentation of contour
images, which we illustrate with applications to medical imaging and sketch segmentation.
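The backbone of the method — colour propagation over the Delaunay triangulation of the contour samples — can be sketched with SciPy. This toy version blocks fills only at short (contour) edges; the paper's dynamically updated flow values are what make the propagation robust across large gaps:

```python
import numpy as np
from scipy.spatial import Delaunay

# Toy contour: a circle of samples with a gap (stand-in for raster contours).
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)[:-4]
pts = np.c_[np.cos(t), np.sin(t)]
tri = Delaunay(pts)

def flood(seed, colour, labels, step=0.2):
    """Propagate a colour hint over triangle adjacency, refusing to cross
    edges joining nearby samples (treated as pieces of contour)."""
    stack = [seed]
    while stack:
        s = stack.pop()
        if labels[s] != -1:
            continue
        labels[s] = colour
        for i, nb in enumerate(tri.neighbors[s]):
            if nb == -1 or labels[nb] != -1:
                continue
            # Shared edge is the two vertices other than vertex i.
            a, b = [v for j, v in enumerate(tri.simplices[s]) if j != i]
            if np.linalg.norm(pts[a] - pts[b]) > step:  # not a contour edge
                stack.append(nb)

labels = np.full(len(tri.simplices), -1)
seed = int(tri.find_simplex(np.array([[0.0, 0.0]]))[0])
flood(seed, colour=1, labels=labels)
```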
Education Program
Wednesday, 10 | Thursday, 11 | Friday, 12 |
---|---|---|
15.30 – 17.00 | 15.30 – 17.00 | 09.00 – 10.30 |
Wednesday, 10
EDU01 Panel
15.30 – 17.00 | MPI INF Room 024 | Bedrich Benes |
- A Constructivism-based Approach to Treemap Literacy in the Classroom
Elif E. Firat, Colm Lang, Ilena Peng, Bhumika Srinivas, Robert Laramee and Alark Joshi
Treemaps are a popular representation to show hierarchical as well as part-to-whole relationships in data. While most students
are aware of node-link representations / network diagrams based on their K-12 education, treemaps are often a novel representation
to them. We present our experience of developing software using principles from constructivism to help students understand treemaps
using linked, side-by-side views of a node-link diagram and a treemap of the same data. Based on the qualitative survey conducted at
the end of the intervention, students found the linked views to be beneficial for understanding hierarchical representation of data
using treemaps.
- Panel on Teaching AI for CG
The panel is open to all educators and researchers who want to share their experience teaching Artificial Intelligence techniques applicable to
Computer Graphics. We are seeking teachers who combine AI and CG education in one course or have plans to start such a course soon.
Thursday, 11
EDU02 Projects
15.30 – 17.00 | CS Lecture Hall | Alejandra Magana |
- Game-based Transformations: A playful approach to learning transformations in computer graphics
Martin Eisemann
In this paper we present a playful and game-based learning approach to teaching transformations in a 2nd-year undergraduate computer graphics
course. While the theoretical concepts were taught in class, the exercise consists of two web-based tools that help the students get a
playful grasp of the complex topic, which is the foundation for many of the concepts taught later in computer graphics. The final
students’ projects and feedback indicate that the game-based introduction was well received by the students.
- Draw-Cut-Glue: Comparison of Paper House Model Creation in Virtual Reality and on Paper in Museum Education Programme – Pilot Study
Ivo Malý, Iva Vachkova and David Sedlacek
In this paper we describe the incorporation of virtual reality (VR) into an educational program in a museum as part of research education with
the aim of interpreting cultural heritage. We have created a VR application in which students will experience the creation of Langweil’s model
of Prague, more precisely, they will virtually draw and cut out one facade of a house and insert it into the rest of the model. As part of the
educational program, students will also experience a similar activity in real life, which leads students to compare the creation in virtual reality
and in reality with real tools. The educational program we describe consists of 4 activities. In a pilot study, we tested it with 31 students and
describe the observed findings from both the students’ and the organization’s perspectives.
- Towards a Formal Education of Visual Effects Artists
Adam Redford and Eike Anderson
The rapid growth of the visual effects industry over the past three decades and the increasing demand for high-quality visual effects for film,
television and similar media, which in turn increase the demand for graduates in this field, have highlighted the need for formal education in visual
effects. In this paper, we explore the design of a visual effects undergraduate degree programme and discuss our aims and objectives in
implementing this programme in terms of both curriculum and syllabus.
Friday, 12
EDU03 Methods
09.00 – 10.30 | MPI SWS Lecture Hall | Jiri Zara |
- Project Elements: A computational entity-component-system in a scene-graph pythonic framework, for a neural, geometric computer graphics curriculum
George Papagiannakis, Manos Kamarianakis, Antonis Protopsaltis, Dimitris Angelis and Paul Zikas
We present the Elements project, a computational science and computer graphics (CG) framework, that offers for the first
time the advantages of an Entity-Component-System (ECS) along with the rapid prototyping convenience of a Scenegraph-based
pythonic framework. This novelty allows advances in the teaching of CG: from heterogeneous directed acyclic graphs and
depth-first traversals, to animation, skinning, geometric algebra and shader-based components rendered via unique systems
all the way to their representation as graph neural networks for 3D scientific visualization. Taking advantage of the unique
ECS in an underlying Scenegraph system, this project aims to bridge CG curricula and modern game engines, which are based on
the same approach but often present these notions as a black box. It is designed to actively utilize software design
patterns, under an extensible open-source approach. Although Elements provides a modern, simple to program pythonic approach
with Jupyter notebooks and unit-tests, its CG pipeline is not a black box, exposing unique and challenging scientific, visual
and neural computing concepts for teaching for the first time. (A minimal ECS sketch in this spirit follows this session's listings.)
- Towards Immersive Visualization for Large Lectures: Opportunities, Challenges, and Possible Solutions
Voicu Popescu, Alejandra Magana and Bedrich Benes
In this position paper, we discuss deploying immersive visualization in large lectures (IVLL). We take the position that IVLL has great
potential to benefit students and that IVLL implementation is now possible. We argue that IVLL is best done using mixed reality (MR)
headsets, which, compared to virtual reality (VR) headsets, have the advantages of allowing students to see important elements of the
real world and avoiding cybersickness. We argue that immersive visualization can be beneficial at any point on the student engagement
continuum. We argue that immersive visualization allows reconfiguring large lectures dynamically, partitioning the class with great
flexibility in groups of students of various sizes, or accommodating 3D visualizations of monumental size. We inventory the challenges
that have to be overcome to implement IVLL, and we argue that they currently have acceptable solutions, opening the door to developing
a first IVLL system.
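To make the entity-component-system idea from the Elements paper above concrete, here is a minimal ECS in plain Python (hypothetical names only; the real framework adds the Scenegraph, rendering and skinning systems, and Jupyter integration):

```python
from dataclasses import dataclass

@dataclass
class Transform:        # component: position only, for brevity
    x: float = 0.0
    y: float = 0.0

@dataclass
class Velocity:         # component
    dx: float = 0.0
    dy: float = 0.0

class World:
    """Entities are bare ids; components live in per-type tables."""
    def __init__(self):
        self._next = 0
        self.tables = {}                    # component type -> {entity: component}

    def entity(self, *components):
        eid, self._next = self._next, self._next + 1
        for c in components:
            self.tables.setdefault(type(c), {})[eid] = c
        return eid

    def query(self, *types):
        ids = set.intersection(*(set(self.tables.get(t, {})) for t in types))
        for eid in sorted(ids):
            yield tuple(self.tables[t][eid] for t in types)

def movement_system(world, dt):
    # A system is just a traversal over entities with matching components.
    for tf, vel in world.query(Transform, Velocity):
        tf.x += vel.dx * dt
        tf.y += vel.dy * dt

world = World()
world.entity(Transform(), Velocity(1.0, 0.5))
movement_system(world, dt=0.1)
```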
STAR Papers Program
Tuesday, 9 | Wednesday, 10 | Thursday, 11 | Friday, 12 |
---|---|---|---|
 | | 09.00 – 10.30 | 09.00 – 10.30 |
 | 11.00 – 12.30 | 11.00 – 12.30 | |
15.30 – 17.00 | | 15.30 – 17.00 | |
Tuesday, 9
ST01
15.30 – 17.00 | CS Lecture Hall | Gurprit Singh |
- A survey of Optimal Transport for Computer Graphics and Computer Vision, Nicolas Bonneel, Julie Digne
Wednesday, 10
ST02
11.00 – 12.30 | MPI SWS Lecture Hall | Rhaleb Zayer |
- A Survey of Indicators for Mesh Quality Assessment, Tommaso Sorgente, Silvia Biasotti, Gianmarco Manzini, Michela Spagnuolo
Thursday, 11
ST03
09.00 – 10.30 | MPI SWS Lecture Hall | Adrien Bousseau |
- State of the Art in Dense Monocular Non-Rigid 3D Reconstruction, Edith Tretschk*, Navami Kairanda*, Mallikarjun B R, Rishabh Dabral, Adam Kortylewski, Bernhard Egger, Marc Habermann, Pascal Fua, Christian Theobalt, Vladislav Golyanik
ST04
11.00 – 12.30 | MPI SWS Lecture Hall | Eduard Zell |
- A Survey on Discrete Laplacians for General Polygonal and Polyhedral Meshes, Astrid Bunge, Mario Botsch
ST05
15.30 – 17.00 | MPI SWS Lecture Hall | Petr Kellnhofer |
- Neurosymbolic Models for Computer Graphics, Daniel Ritchie, Paul Guerrero, R. Kenny Jones, Niloy Mitra, Adriana Schulz, Karl Willis, Jiajun Wu
Friday, 12
ST06
09.00 – 10.30 | MPI SWS Lecture Hall | Mohamed Elgharib |
- A Comprehensive Review of Data-Driven Co-Speech Gesture Generation, Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter, Michael Neff
Keynotes Program
Tuesday, 9
Zooming into the Details
14.00 – 15.00 | Günter Hotz Lecture Hall |
- Elmar Eisemann (TU Delft)
Abstract: For realistic image synthesis, simulating complex environments in all detail can lead to prohibitive rendering costs.
In visual analytics, large-scale datasets pose significant challenges for analysis, and a simple subsampling can result in missing
structures. While seemingly different contexts, both scenarios require scalable solutions. In this talk, we will discuss several
principles to handle complexity and will show examples for how data representations, algorithms, but also perception can be key
in overcoming such computationally intensive challenges.
Bio: Elmar Eisemann is a professor at TU Delft, heading the Computer Graphics and Visualization Group. Before that, he
was an associate professor at Telecom ParisTech (until 2012) and a senior scientist heading a research group in the Cluster of
Excellence (Saarland University / MPI Informatik) (until 2009). He studied at the Ecole Normale Superieure in Paris (2001-2005) and
received his PhD from the University of Grenoble at INRIA Rhone-Alpes (2005-2008). He spent several research visits abroad: at the Massachusetts
Institute of Technology (2003), the University of Illinois Urbana-Champaign (2006), and Adobe Systems Inc. (2007, 2008). His interests include real-time and perceptual rendering,
visualization, alternative representations, shadow algorithms, global illumination, and GPU acceleration techniques. He coauthored the
book “Real-time shadows” and participated in various committees and editorial boards. He was local organizer of EGSR 2010, 2012, HPG 2012,
and paper chair of HPG 2015, EGSR 2016, GI 2017, and general chair of Eurographics 2018 in Delft. His work received several distinction
awards and he was honored with the Eurographics Young Researcher Award 2011 and the Netherlands Prize for ICT Research 2019.
Wednesday, 10
A Trip Down the Generative Neural Graphics Pipeline
14.00 – 15.00 | Günter Hotz Lecture Hall |
- Gordon Wetzstein (Stanford University)
Abstract: Generative neural radiance fields offer unprecedented capabilities for photorealistic scene
representation, generation, and novel-view synthesis, among other tasks. In this talk, we discuss expressive scene representation
network architectures, efficient neural rendering approaches, and generative AI strategies that allow us to create
photorealistic multi-view-consistent digital humans.
Bio: Gordon Wetzstein is an Associate Professor of Electrical Engineering and, by courtesy, of Computer Science
at Stanford University. He is the leader of the Stanford Computational Imaging Lab and a faculty co-director of the
Stanford Center for Image Systems Engineering. At the intersection of computer graphics and vision, artificial intelligence,
computational optics, and applied vision science, Prof. Wetzstein’s research has a wide range of applications in next-generation
imaging, wearable computing, and neural rendering systems. Prof. Wetzstein is a Fellow of Optica and the recipient of numerous
awards, including an NSF CAREER Award, an Alfred P. Sloan Fellowship, an ACM SIGGRAPH Significant New Researcher Award, a Presidential
Early Career Award for Scientists and Engineers (PECASE), an SPIE Early Career Achievement Award, an Electronic Imaging Scientist of
the Year Award, an Alain Fournier Ph.D. Dissertation Award as well as several Best Paper and Demo Awards.
Thursday, 11
Capturing, Compressing, and Creating Neural Radiance Fields
14.00 – 15.00 | Günter Hotz Lecture Hall |
- Ben Mildenhall (Google)
Abstract: Over the past few years, neural volumetric rendering has proven to be a flexible and useful framework for a
wide variety of 3D reconstruction and inverse rendering scenarios. In this talk, I will discuss our work toward
creating and engaging with high-quality digital 3D content. To start, we extend NeRF’s ability to capture larger
and richer spaces, allowing for the realistic recreation of full immersive environments. Given that these high-fidelity
models can be slow to render, we also investigate methods for real-time rendering on consumer hardware. Finally, we explore
how it is possible to harness the power of 2D generative models to create new 3D content from only a text prompt.
Bio: Ben Mildenhall is a research scientist at Google, where he works on problems at
the intersection of graphics and computer vision, specializing in view synthesis
and inverse rendering. He completed his PhD in computer science from UC Berkeley
in 2020, advised by Ren Ng and supported by a Hertz Fellowship, and received the
ACM Doctoral Dissertation Award Honorable Mention and David J. Sakrison Memorial
Prize for his thesis work on neural radiance fields. He has received Best Paper
Honorable Mentions at ECCV 2020, ICCV 2021, and CVPR 2022.
Friday, 12
From curved to flat and back again: mesh processing for fabrication
11.00 – 12.00 | Günter Hotz Lecture Hall |
- Mirela Ben-Chen (Technion)
Abstract: Assume that for a craft project you were given a task: create a (doubly) curved surface.
What are your options? With applications varying from art and space exploration to health care and architecture,
making shapes is a fundamental problem. In this talk we will explore the challenges of creating curved shapes from
different materials, and describe the math and practice of a few solutions. We will additionally consider the limitations
of existing approaches, and conclude with a few open problems.
Bio: Prof. Ben-Chen is an Associate Professor at the Center for Graphics and Geometric Computing of the CS Department
at the Technion. She received her Ph.D. from the Technion in 2009, was a Fulbright postdoc at Stanford from 2009-2012, and then
started as an Assistant Prof. at the Technion in 2012. Prof. Ben-Chen is interested in modeling and understanding the geometry of
shapes. She uses mathematical tools, such as discrete differential geometry, numerical optimization and harmonic analysis, for
applications such as animation, shape analysis, fluid simulation on surfaces and computational fabrication. She has won an ERC
Starting grant, the Henry Taub Prize for Academic Excellence, the Science Prize of the German Technion Society and multiple best
paper awards.
Diversity and Inclusion Program
Wednesday, 10
She Lunch
12.30 – 14.00 | Ausländer Café |
- The 4th Eurographics She-Lunch is a celebration of “she” academic or industry professionals attending Eurographics 2023. This she-only event provides you with the opportunity to meet or reconnect with fellow students, graduates and experienced professionals, and discuss life and workplace challenges most pertinent to women.
Hosted by the Eurographics Diversity & Inclusion team and sponsored by the Eurographics association, guests will enjoy a delicious meal in the Ausländer Café.
Wednesday, 10
Diversity panel session: Diversity and inclusion in the publication selection process
15.30 – 17.00 | MPI SWS Lecture Hall |
- Published research work results from a selection process through peer reviewing. This age-old process is still recognized as essential to guarantee scientific quality. It also plays a major role in research dissemination. For many professionals, publication rate has become an important metric to evaluate achievements. Therefore, it is crucial to uphold objective selection.
To ensure equal chances, research has developed strategies and tools to ensure fairness and representativeness. In this panel, experts will share their experience with the publication selection process. On the one side, discussions will be conducted on how program committees and editors have taken into consideration the importance of diversity and inclusion. On the other side, questions will be raised on strategies to ensure access to everyone, both to publishing and in the decision making process, regardless of their origin and work conditions.
During the Eurographics diversity panel, attendees will be given the opportunity to share their experience. By collecting experiences and ideas, we aim to develop a road map to consolidate diversity and inclusion in future reviewing endeavors.
- Panel members:
Pierre Alliez, INRIA, France
Ursula Augsdorfer, TU Graz, Austria
Mathieu Desbrun, Ecole Polytechnique/INRIA, France
Niloy Mitra, University College London, UK
Nuria Pelechano, Universitat Politècnica de Catalunya, Spain
- Eurographics Diversity & Inclusion Program Chairs
Celine Loscos, Principal Research Engineer, Huawei France
Gurprit Singh, Senior Researcher, Max Planck Institute for Informatics, Germany