eXploreMaps

Visual Computing Lab ISTI-CNR


Visual Computing CRS4

Links

Viewer

What is this?

How is it done?

Paper (EG14 to appear)

Journal Log

2014/01/28: website is up

How it works.

An eXploreMap consists of a graph that spans the scene, outside and inside, so that every part of the model can be seen from at least one position on the graph. For each node of the graph (the red spheres in the image) we show a panoramic image, and for each path connecting two nodes we show a panoramic video. Images and videos are precomputed with a rendering engine (mental ray in these examples), so that when you browse the scene the rendering is as photorealistic as you like.
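The sketch below shows, in Python, the kind of data an eXploreMap boils down to on the viewer side: each probe (node) carries a precomputed panorama and each arc carries a precomputed video. All class and field names are illustrative assumptions, not the actual viewer code.

from dataclasses import dataclass, field

@dataclass
class Probe:                      # a node of the graph
    position: tuple               # (x, y, z) of the viewpoint
    panorama: str                 # file/URL of the precomputed panoramic image

@dataclass
class Path:                       # an arc of the graph
    start: int                    # index of the source probe
    end: int                      # index of the destination probe
    video: str                    # file/URL of the precomputed panoramic video

@dataclass
class ExploreMap:
    probes: list = field(default_factory=list)
    paths: list = field(default_factory=list)

    def panorama_at(self, probe_id):
        # panorama to show while the user sits on a node
        return self.probes[probe_id].panorama

    def video_between(self, a, b):
        # video to play while moving between two connected nodes
        for p in self.paths:
            if (p.start, p.end) in ((a, b), (b, a)):
                return p.video
        return None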

How it is built.

Input. The input is a 3D scene (at present only the .obj format is supported). The scene is loaded and stored locally on the server in an ad hoc data structure that makes the next step of the pipeline more efficient.
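As a rough illustration, and under the assumption that the ad hoc structure is something like a uniform grid over the scene's triangles (our assumption, not necessarily the real one), the preprocessing could look like this:

from collections import defaultdict

def load_obj(path):
    # parse vertices and triangular faces from a .obj file
    verts, faces = [], []
    with open(path) as f:
        for line in f:
            tok = line.split()
            if not tok:
                continue
            if tok[0] == "v":
                verts.append(tuple(float(x) for x in tok[1:4]))
            elif tok[0] == "f":
                faces.append(tuple(int(t.split("/")[0]) - 1 for t in tok[1:4]))
    return verts, faces

def build_grid(verts, faces, cell=1.0):
    # bucket each triangle by the grid cell containing its centroid,
    # so that later visibility queries only touch nearby geometry
    grid = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        cx, cy, cz = (sum(verts[v][k] for v in (a, b, c)) / 3.0 for k in range(3))
        grid[(int(cx // cell), int(cy // cell), int(cz // cell))].append(fi)
    return grid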



Exploration. An unassisted algorithm explores the scene to determine "good" points of view. It starts with one probe and adds new probes until all the visible surface has been seen. The probe positions are chosen taking perceptual criteria into account. Then nearby probes are connected by smooth paths and the graph is ready.
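A highly simplified sketch of such a greedy loop is given below. Here candidate_positions and visible_from (returning the set of triangle indices seen from a position) are placeholders, and the real selection uses the perceptual criteria described in the paper rather than a plain surface-coverage count.

def explore(num_triangles, candidate_positions, visible_from):
    # greedy loop: keep adding probes until every triangle is seen
    uncovered = set(range(num_triangles))
    probes = []
    while uncovered:
        # pick the candidate that reveals the largest amount of unseen surface
        best = max(candidate_positions,
                   key=lambda p: len(visible_from(p) & uncovered))
        newly_seen = visible_from(best) & uncovered
        if not newly_seen:            # no candidate sees anything new: stop
            break
        probes.append(best)
        uncovered -= newly_seen
    return probes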



Rendering. For each node of the graph a panoramic rendering is produced (made of 6 axis-aligned renderings, i.e. a cube map). For each arc of the graph a panoramic video is created by densely sampling the path and making a panoramic rendering for each sample. These renderings can be simple OpenGL renderings (that is, a local illumination model plus simple effects such as shadow mapping or ambient occlusion) or full global illumination. It all depends on what we want the final result to be, how much computational power we have, and how much time we are willing to wait.
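As a sketch of how this precomputation could be driven: six axis-aligned 90-degree views per probe make up one panorama, and the same is repeated for every dense sample along an arc. Here render_view stands in for whatever renderer is plugged in (plain OpenGL or an offline engine such as mental ray); the names and face conventions are illustrative, not the actual pipeline.

CUBE_FACES = {                 # face name -> (view direction, up vector)
    "posx": (( 1, 0, 0), (0, 1, 0)),
    "negx": ((-1, 0, 0), (0, 1, 0)),
    "posy": (( 0, 1, 0), (0, 0, -1)),
    "negy": (( 0, -1, 0), (0, 0, 1)),
    "posz": (( 0, 0, 1), (0, 1, 0)),
    "negz": (( 0, 0, -1), (0, 1, 0)),
}

def render_panorama(position, render_view):
    # one 90-degree rendering per cube face = one panoramic image
    return {face: render_view(position, forward, up, fov=90.0)
            for face, (forward, up) in CUBE_FACES.items()}

def render_arc(path_samples, render_view):
    # one panorama per dense sample along the path = the frames of the panoramic video
    return [render_panorama(p, render_view) for p in path_samples]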