Eurographics

EUROPEAN ASSOCIATION FOR COMPUTER GRAPHICS


Image 0

The image shows a render of an iceberg made of compacted snow. The snow scatterers are positively correlated, and the iceberg therefore exhibits non-exponential, power-law transmittance, following our radiative model for correlated participating media.

Our model extends the classic uncorrelated radiative transfer equation, accounting for the non-exponential extinction probability that arises from the spatial correlation of scatterers. It handles multiple sources of correlation, boundary conditions, and mixtures of scatterers.
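
To make the contrast concrete (a hedged illustration using one common power-law form from the atmospheric-sciences literature, not necessarily our model's exact parametrization): classical transport assumes the Beer-Lambert exponential transmittance, whereas positively correlated media are often fit with a heavier-tailed power law,

$$T_{\mathrm{exp}}(s) = e^{-\sigma_t s}, \qquad T_{\mathrm{pow}}(s) = \left(1 + \frac{\sigma_t s}{a}\right)^{-a},$$

where $\sigma_t$ is the extinction coefficient and $a > 0$ controls the correlation strength; $T_{\mathrm{pow}} \to T_{\mathrm{exp}}$ as $a \to \infty$.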



Image 1

The original generalized winding number was used in geometry processing to robustly segment a shape's inside from its outside, even in the presence of holes and self-intersections. However, it was slow for disconnected triangle "soups" and had no definition for oriented point clouds. The fast winding number accelerates the generalized winding number, making triangle soups feasible inputs and point clouds possible ones. The arrows from the top row to the bottom row show that any of our inputs can be used to generate any of the outputs; the bold arrows indicate which input was used to generate the depicted output. The inputs (from right to left): a clean triangle mesh, a disconnected triangle soup, a regularly-sampled point cloud, and an irregularly-sampled point cloud. The outputs (from right to left): a 3D printer path, a voxelization, a watertight isosurface, and a signed distance field.
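
For intuition, the generalized winding number of a query point is the sum of signed solid angles subtended by each triangle, divided by 4π. A minimal sketch (the classic Van Oosterom & Strackee formula, not the fast hierarchical evaluation of the paper):

```python
import numpy as np

def solid_angle(p, tri):
    """Signed solid angle subtended by triangle `tri` (3x3 array of
    vertices) as seen from query point `p` (Van Oosterom & Strackee)."""
    a, b, c = tri[0] - p, tri[1] - p, tri[2] - p
    la, lb, lc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    numer = np.dot(a, np.cross(b, c))
    denom = (la * lb * lc + np.dot(a, b) * lc
             + np.dot(b, c) * la + np.dot(c, a) * lb)
    return 2.0 * np.arctan2(numer, denom)

def winding_number(p, vertices, faces):
    """Generalized winding number of point `p` w.r.t. a triangle soup:
    ~1 inside, ~0 outside, fractional near open boundaries."""
    total = sum(solid_angle(p, vertices[f]) for f in faces)
    return total / (4.0 * np.pi)

# Example: a unit tetrahedron with outward-oriented faces; the
# centroid should give a winding number of ~1.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
F = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(winding_number(V.mean(axis=0), V, F))  # ~1.0
```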

Image 2

The image is a diffraction pattern produced by light interference in a multilayer laptop LCD screen (Lenovo Yoga). It was directly measured on the screen and rendered under incandescent light.

Photorealistic rendering of light diffraction is very challenging in computer graphics, both due to the absence of practical measurements (diffraction appears when the microgeometry of a surface is at the micrometer scale, hence requiring expensive microscopes to measure) and due to the costly Fourier-optics computations that wave-optics simulations require. The method used to render this pattern tackles both issues: it only requires a simple flash light with a spectral filter to measure and render the pattern under any light spectrum, and the acquired data bypasses expensive Fourier computations, allowing real-time rendering of complex diffraction patterns. The full technical paper was accepted to ACM Transactions on Graphics and presented at SIGGRAPH 2017.


Image 3

The image is a real-time rendering of a holographic paper under environmental illumination.

Photorealistic rendering of spatially varying holographic papers has remained challenging in computer graphics until now. Here, the effect was measured on a real holographic paper using spectral illumination and polarized light. To our knowledge, this is the first wave-optics-based measurement setup that captures the anisotropy caused by surface variations at the wave-optics scale. The measured data allows real-time rendering under arbitrary illumination and captures intrinsic properties that can be observed on the real paper. The full paper was published at SIGGRAPH Asia 2018.

Image 4

This scene showcases a selection of high-quality materials synthesized by our learning-based technique, which learns a set of user preferences and recommends new, high-quality materials that can be visualized in real time. We endeavored to create a system that empowers novice users without any material modeling experience to perform mass-scale material synthesis for complex scenes. We credit Bhavin Solanki for the geometry of the scene.
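
The caption does not spell out the learning machinery, but as a minimal hedged sketch of preference-driven recommendation (hypothetical parameter vectors and scores, not our exact system), one can fit a Gaussian-process regressor to a user's scores over material parameters and recommend the highest-scoring unseen candidates:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Hypothetical setup: each material is a parameter vector
# (e.g. albedo, roughness, specularity, ...); the user scored a few.
rated = rng.uniform(size=(20, 5))          # 20 rated materials
scores = rng.uniform(size=20)              # the user's preference scores
candidates = rng.uniform(size=(1000, 5))   # unrated candidate materials

# Learn a preference function from the ratings...
gp = GaussianProcessRegressor().fit(rated, scores)

# ...and recommend the candidates the model predicts the user will like.
predicted = gp.predict(candidates)
top = np.argsort(predicted)[::-1][:10]
print("recommended material indices:", top)
```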

Image 5

This image depicts a complex cloud rendered using our framework for non-exponential transport. For this image, the underlying equations simulate photon interactions with scatterers that exhibit strong correlations at large scales and are approximately pink-noise distributed. Recent work from the atmospheric sciences shows that this is a more faithful model for real clouds than the Poisson model commonly used in classical radiative transport, and our framework allows us to incorporate such correlations in a heterogeneous medium for the first time.
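
Pink noise itself is easy to illustrate. A minimal hedged sketch (it only shows what a 1/f-correlated density field looks like, not our transport framework) shapes white noise in the Fourier domain:

```python
import numpy as np

def pink_noise_field(n=256, beta=1.0, seed=0):
    """2D correlated noise with a 1/f^beta power spectrum
    (beta=1 gives pink noise); returns values scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    fx = np.fft.fftfreq(n)
    f = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)
    f[0, 0] = 1.0                      # avoid division by zero at DC
    spectrum = np.fft.fft2(white) / f ** (beta / 2.0)
    field = np.real(np.fft.ifft2(spectrum))
    field -= field.min()
    return field / field.max()

density = pink_noise_field()           # toy heterogeneous-medium density
print(density.shape, density.min(), density.max())
```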

Image 6

Like the previous image, this cloud was rendered using our framework for non-exponential transport; the same correlated, pink-noise scatterer model described for Image 5 applies here.

Image 7

The image shows a distance field to the damaged areas of Lion 9 from the Court of the Lions in the Alhambra palace. It was generated using the CHISel 2 software developed by the Virtual Reality Lab of the University of Granada.

Image 8

Fluid Simulation rendered with Path Tracing
-------------------------------------------

The simulation models the impact of a solid object on a viscous fluid (progressing from left to right), during which the object dissolves. The image shows the density of the object's mass present in the fluid shortly after the impact. The shapes, colors, and positions of the light sources highlight the 3D structure of the density distribution.

The image was generated using the production renderer "Cycles", which uses physically-based path tracing to compute volume absorption and scattering.
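
As a hedged, generic illustration of how a path tracer samples such a volume (standard distance sampling in a homogeneous medium, not Cycles' internal code): free-flight distances are drawn by inverting the Beer-Lambert transmittance.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_t = 2.0                      # extinction coefficient (1/m)

# Invert T(d) = exp(-sigma_t * d): d = -ln(1 - u) / sigma_t
u = rng.uniform(size=100_000)
d = -np.log1p(-u) / sigma_t

# The mean free path should match 1 / sigma_t.
print(d.mean(), 1.0 / sigma_t)     # ~0.5 vs 0.5
```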

Image 9

This image shows a triangulation of three ballet dancers at sunset. This stylization was obtained with our interactive, user-centered image triangulation algorithm, based on an optimization that places triangles with constant colors or linear color gradients to fit a target image.
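
For instance, the best linear color gradient for one triangle can be found by least squares over the pixels it covers. A minimal hedged sketch (illustrative only, not our exact solver):

```python
import numpy as np

def fit_linear_gradient(xy, rgb):
    """Least-squares fit of color(x, y) = A @ [x, y, 1] over the pixels
    covered by one triangle. xy: (n, 2) pixel coords, rgb: (n, 3) colors.
    Returns the 3x3 gradient matrix A."""
    basis = np.hstack([xy, np.ones((len(xy), 1))])   # [x, y, 1]
    A, *_ = np.linalg.lstsq(basis, rgb, rcond=None)
    return A.T

# Example: pixels following a pure horizontal ramp are fit exactly.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
rgb = np.column_stack([xy[:, 0], xy[:, 0], xy[:, 0]])  # gray ramp in x
A = fit_linear_gradient(xy, rgb)
print(np.round(A, 3))   # first column ~ [1, 1, 1]: color grows with x
```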

Image 10

This image is taken from a decision support system visualizing a simulated river flood in an Austrian village. Many rendering techniques are combined to provide relevant information to the viewer; in this case, the efficacy of the deployed flood protection measures. The heightfield resulting from the GPU-based shallow-water simulation is rendered with additional surface effects to indicate flow directions, flow speed, and water depth. To show the currently selected flood protection walls without obstruction, adaptive cutaways are applied to the buildings in the foreground. Spatial context is preserved by showing thousands of street, land-use, and building polygons with stencil-buffer-based polygon rendering.

Image 11

This image is taken from a decision support system visualizing a simulated river flood in Tyrol, Austria. The heightfield resulting from the GPU-based shallow-water simulation is mapped to shades of blue; building damage resulting from flooding is colored from gray (no damage) to red (high damage). In this real-time visualization, a digital elevation model of the entire country of Austria, with tens of millions of cells at three static levels of detail, is rendered with adaptive, recursive hardware tessellation. It is combined with thousands of land-use and street polygons queried from open data services and rendered with a stencil-buffer approach. The time-dependent water data provided by the simulation is updated interactively and interpolated C1-continuously.
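
A C1-continuous interpolation of time-dependent data can be achieved, for example, with a Catmull-Rom (cubic Hermite) spline. A minimal hedged sketch of interpolating water heights between simulation timesteps (illustrative, not the system's actual code):

```python
import numpy as np

def catmull_rom(h0, h1, h2, h3, t):
    """C1-continuous cubic Hermite interpolation between h1 (t=0) and
    h2 (t=1), with tangents estimated from the neighboring timesteps."""
    m1 = 0.5 * (h2 - h0)               # tangent at h1
    m2 = 0.5 * (h3 - h1)               # tangent at h2
    t2, t3 = t * t, t * t * t
    return ((2 * t3 - 3 * t2 + 1) * h1 + (t3 - 2 * t2 + t) * m1
            + (-2 * t3 + 3 * t2) * h2 + (t3 - t2) * m2)

# Four consecutive water-height grids from the simulation (toy data).
steps = [np.full((4, 4), v) for v in (0.0, 1.0, 2.0, 3.0)]
print(catmull_rom(*steps, t=0.5)[0, 0])   # 1.5, halfway between 1 and 2
```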

Image 12

This image is a single frame of a dynamic scene compressed and decompressed using our compression method for rigid-body simulations. The method reproduces the complete input simulation while retaining all collision events and fine motions. Here, the bunny was thrown through a stacked block tower, and the resulting animation was compressed to a size over 300 times smaller than the input simulation.

The image was rendered within Blender.

Image 13

This image shows a forest consisting of about 30,000 trees, generated and rendered at interactive framerates using the approach described in our paper published in Computer Graphics Forum. The trees are constructed on the fly on the GPU at the requested level of detail.

Image 14

This image shows a forest generated according to a simulation of 100 years of secondary forest succession at a forest-fire site. The trees are generated on the GPU and rendered interactively using the technique described in our paper.

Image 15

This image depicts the visualization of a DNA nanorobot. Our technique smoothly transitions across semantic scales, from individual atoms to the target geometry, for effective modeling of these complex nanostructures. It demonstrates the application of computer graphics techniques in the emerging field of DNA nanotechnology.

Image 16

This image depicts the visualization of a dodecahedron nanostructure built from DNA. Our technique smoothly transitions across semantic scales, from individual atoms to the target geometry, for effective modeling of these complex nanostructures. It demonstrates the application of computer graphics techniques in the emerging field of DNA nanotechnology.

Image 17

The image shows a sequence of timesteps resulting from our parametrised method for drying and decaying vegetable matter in the fruit category, taking into account the biological characteristics of the decaying fruit. Our simulation primarily addresses mould propagation and volume shrinking, extending existing fruit-decay approaches. It improves the shrinking behaviour of decaying fruits, aiming for photorealistic outcomes with a greater degree of biological accuracy, as can be observed in the resulting renders. The image is a render of a full CG simulation created inside Houdini using Mantra PBR. The apple model was built using photogrammetry of a healthy fruit, and the final look was achieved by applying our decay method to the model.

Image 18

The image shows a color-coded, time-resolved render of a juice glass and a few ice cubes within a participating medium, using Progressive Transient Photon Beams. From early (blue) to later timings (red), light propagates through the scene, taking around 1200 picoseconds to reach all objects and media.
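
For scale, assuming propagation at the vacuum speed of light (a back-of-the-envelope check, not a measured value):

$$d = c\,t \approx (3\times10^{8}\,\mathrm{m/s}) \times (1.2\times10^{-9}\,\mathrm{s}) \approx 0.36\,\mathrm{m},$$

i.e. roughly 36 cm of optical path, consistent with a tabletop-scale scene.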

Image 19

This image shows one frame of an animation in which a bunny caustic is artistically controlled to "flow" into a bird shape. The motion of photons is described by a vector field that is controllable through constraints drawn by the user, allowing us to redirect the flow, change its appearance, and use additional standard tools from vector-field design, such as the synthetic turbulence made of small, fast vortices in this example. For image synthesis, we used a physically-based path tracer and progressive photon mapping.
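
A vector field that interpolates user-drawn constraints can be built, for instance, with radial basis functions. A minimal hedged sketch (a generic vector-field-design construction with made-up constraints, not the paper's exact method):

```python
import numpy as np

# Hypothetical user constraints: at point p_i the flow should be v_i.
P = np.array([[0.2, 0.2], [0.8, 0.8]])          # constraint positions
V = np.array([[1.0, 0.5], [-0.5, 1.0]])         # desired flow vectors

def rbf(r, eps=2.0):
    return np.exp(-(eps * r) ** 2)               # Gaussian kernel

# Solve for RBF weights so the field reproduces the constraints exactly.
K = rbf(np.linalg.norm(P[:, None] - P[None, :], axis=-1))
W = np.linalg.solve(K, V)

def field(x):
    """Smooth vector field evaluated at points x (n, 2)."""
    phi = rbf(np.linalg.norm(x[:, None] - P[None, :], axis=-1))
    return phi @ W

# Advect a few photon positions one Euler step along the field.
photons = np.random.default_rng(2).uniform(size=(5, 2))
photons += 0.1 * field(photons)
print(photons)
```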

Image 20

The image shows a physically-based rendering of a piece of fabric draped over a glass bowl, produced through path tracing in Mitsuba. The scene geometry was created in Blender via a physical simulation of a piece of silk dropped over a bowl. The appearance of the silk was rendered with a new plugin based on a compressed representation of Bidirectional Texture Functions using neural networks, from our unpublished work that is currently under review. The fabric material exhibits complex spatial variation, specularities, and strong anisotropy, all of which our compressed neural material representation accurately reproduces.
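
The caption does not describe the compressed representation itself; as a hedged sketch of the general idea of a neural material (hypothetical architecture, sizes, and random weights, not our plugin), a small MLP can decode a per-texel latent code plus light and view directions into reflectance:

```python
import numpy as np

rng = np.random.default_rng(3)
relu = lambda x: np.maximum(x, 0.0)

# Hypothetical compressed BTF: an 8-D latent code per texel, decoded by
# a tiny 2-layer MLP taking (latent, light dir, view dir) -> RGB.
latent_tex = rng.standard_normal((64, 64, 8))    # 64x64 latent texture
W1 = rng.standard_normal((8 + 3 + 3, 32)) * 0.1  # random demo weights
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 3)) * 0.1
b2 = np.zeros(3)

def shade(u, v, wi, wo):
    """Decode reflectance at texel (u, v) for light dir wi, view dir wo."""
    x = np.concatenate([latent_tex[v, u], wi, wo])
    return relu(x @ W1 + b1) @ W2 + b2

wi = np.array([0.0, 0.0, 1.0])                   # light from above
wo = np.array([0.3, 0.0, 0.95])                  # grazing-ish view
print(shade(10, 20, wi, wo))                     # RGB reflectance (demo)
```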

Image 21

The image shows a physically-based rendering of a piece of fabric draped over a glass sphere, produced through path tracing in Mitsuba. The scene geometry was created in Blender via a physical simulation of the cloth falling onto rigid objects. The appearance of the silk was rendered with the same neural-network-compressed Bidirectional Texture Function plugin from our unpublished work currently under review. The fabric material exhibits complex spatial variation, specularities, and strong anisotropy that our compressed neural material representation accurately reproduces. The glass sphere reflects some of the environment as well as close-ups of the textile.

Image 22

The image represents a sketchbook cover captured outdoors under natural illumination. The reflectance properties and normals are computed from polarisation imaging, and the reflectance maps (diffuse albedo, normals, specular roughness, and specular albedo) are used to render in the Uffizi Gallery light probe in real time with a custom OpenGL shader.
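
One standard building block of polarisation imaging (a hedged sketch of the general Stokes-parameter recipe, not necessarily this capture pipeline's exact processing) separates the unpolarised, mostly diffuse component from the linearly polarised, mostly specular one using captures taken through a polariser at 0°, 45°, 90°, and 135°:

```python
import numpy as np

def polarization_split(i0, i45, i90, i135):
    """Split intensity into unpolarised (~diffuse) and linearly
    polarised (~specular) parts from four polariser-angle captures."""
    s0 = i0 + i90                          # total intensity
    s1 = i0 - i90                          # Stokes parameter S1
    s2 = i45 - i135                        # Stokes parameter S2
    polarized = np.sqrt(s1 ** 2 + s2 ** 2) # linearly polarised power
    unpolarized = s0 - polarized
    return unpolarized, polarized

# Toy captures (single-pixel images) of a partially polarised signal.
imgs = [np.array([0.9]), np.array([0.6]), np.array([0.3]), np.array([0.6])]
diffuse, specular = polarization_split(*imgs)
print(diffuse, specular)                   # [0.6] [0.6]
```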