DEAK Software

Panoramic Rendering

Dominik Deák

1 Introduction

Panoramas, or pictures with a wide field of view, are not an entirely new concept; such visualisation has existed since the last decades of the 18th century. There are many methods for creating panoramas. For instance, early artists hand-painted panoramic pictures in a meticulous manner. Later, the creation of panoramas became more straightforward with the advent of optics and photography. These days, we can easily experiment with panoramic imagery with the help of computers, and panoramas can be rendered cheaply in many different variations, forms and shapes.

Most computers use a traditional perspective rendering system built around a 2D display. Objects are displayed by projecting 3D geometric shapes onto a 2D view plane, which corresponds to the area of the display screen.

There are other methods for rendering objects. Ray-tracers fire incident ray vectors through the view plane and perform ray-object intersection tests in the virtual scene. The colour at the object's intersection point is stored at the screen location where the ray crossed the view plane. Similar techniques can be employed for panoramic rendering; however, in this case the viewing surface is no longer a plane, but a curved surface that can provide a greater field of view.
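
As a rough illustration of this idea (not the project's own code), the sketch below shows how a conventional ray-tracer might construct a primary ray through a pixel of a planar view plane. The camera placement and field-of-view parameter are assumptions made purely for the example.

    #include <cmath>

    // Minimal vector type for the sketch.
    struct Vec3 { double x, y, z; };

    // Build the primary ray direction through pixel (px, py) of a planar view
    // plane, assuming a camera at the origin looking down +Z with the given
    // vertical field of view. Illustrative only; not the project's code.
    Vec3 PrimaryRay(int px, int py, int width, int height, double fovRadians)
    {
       double aspect     = double(width) / double(height);
       double halfHeight = std::tan(fovRadians * 0.5);

       // Map the pixel centre into normalised [-1, 1] view plane coordinates.
       double u = (2.0 * (px + 0.5) / width - 1.0) * halfHeight * aspect;
       double v = (1.0 - 2.0 * (py + 0.5) / height) * halfHeight;

       // The ray passes through the view plane at Z = 1; normalise it.
       double len = std::sqrt(u*u + v*v + 1.0);
       return {u / len, v / len, 1.0 / len};
    }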

2 Project Details

This project explored techniques for generating and displaying panoramic images on concave surfaces of revolution. The implementation included a ray-tracer and a real-time rendering system. Only a brief overview of the project is presented here. Those who wish to delve into the technical details of this project may download the thesis: Panoramic Rendering for Concave Surfaces of Revolution (PDF, 3.2 MB).

A surface of revolution can be constructed by revolving a 2D curve around a line, the principal axis. The geometric shape of the symmetrical surface is governed by the 2D function, the profile curve. Since most curved displays can be considered symmetrical about their principal axis, 2D profile curves provide a convenient way of modelling a display's shape.
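
As a small, hedged sketch of this construction (the function and variable names are my own, not from the thesis), a point on a surface of revolution can be generated from a profile curve z = f(r) and an angle of revolution about the Z axis:

    #include <cmath>
    #include <functional>

    struct Vec3 { double x, y, z; };

    // Sketch: sample a point on a surface of revolution whose profile curve is
    // z = f(r), revolved about the Z (principal) axis. 'radius' is the distance
    // from the axis and 'theta' is the angle of revolution.
    Vec3 SurfacePoint(const std::function<double(double)>& f,
                      double radius, double theta)
    {
       return {radius * std::cos(theta), radius * std::sin(theta), f(radius)};
    }

    // Example profile curve used later in the results: f(r) = -r^2 + 1.
    double Parabola(double r) { return -r*r + 1.0; }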

Figure 1: Profile Curve.
Figure 2: Virtual Camera.

The symmetrical property of the surface allows 3D points to be transformed into the 2D profile curve space. To illustrate this idea, the projection surface (Figure 1) can be represented with an infinite number of 2D profile curve "slices" revolving around the Z axis. This means that any point in 3D space will lie in the plane of one such slice, the profile curve space.
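
A minimal sketch of this reduction, assuming the principal axis is the Z axis (the names are illustrative only): the point's distance from the axis and its height give the 2D profile-space coordinates, while the azimuth angle identifies the slice it lies in.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Profile curve space: 'r' is the distance from the principal (Z) axis,
    // 'z' the height along it, and 'theta' identifies the slice of the surface
    // of revolution in which the point lies.
    struct ProfilePoint { double r, z, theta; };

    // Sketch: transform a 3D point into the 2D profile curve space of its slice.
    ProfilePoint ToProfileSpace(const Vec3& p)
    {
       return {std::sqrt(p.x*p.x + p.y*p.y), p.z, std::atan2(p.y, p.x)};
    }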

Figure 3: Relationship between the 2D image plane and the panorama.

The view origin for the virtual camera and the user is illustrated in Figure 2. The rendered panorama is displayed on the surface by a projector.

When rendering a panoramic image, the resulting picture is eventually stored in a 2D image space (or screen space). To be more precise, the panoramic transformation of a 3D object is treated as a two-step process: first the object is projected onto a surface of revolution, then it is re-projected onto a 2D image plane.

The final projection is represented in 2D image space because most display technologies, such as overhead projectors, are inherently based on two-dimensional raster image planes.

After the two-step transformation process, the resulting 2D image plane is assumed to be coincident with the XY-plane of the viewing coordinate system. The view origin and the 3D surface are centred on the image plane, and the principal axis runs parallel to the plane's normal. This arrangement is intended to produce an orthographic projection of the surface onto the image plane.
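
The sketch below is one way the two-step transformation could be written, under a few assumptions that are mine rather than the thesis': the view origin sits at the coordinate origin, the surface is z = f(r) revolved about the Z axis, and the ray-surface intersection is found numerically (a coarse scan followed by bisection) rather than by whatever scheme the thesis actually uses. The orthographic re-projection then simply drops the Z coordinate of the hit point.

    #include <cmath>
    #include <functional>
    #include <optional>

    struct Vec3 { double x, y, z; };
    struct Vec2 { double x, y; };

    // Sketch of the two-step panoramic transformation. Step 1: intersect the
    // ray from the view origin towards 'point' with the surface z = f(r).
    // Step 2: re-project the hit point orthographically onto the XY image
    // plane by dropping its Z coordinate. Returns nothing if no hit is found
    // within the search range 'tMax' (an assumed, scene-dependent bound).
    std::optional<Vec2> PanoramicProject(const std::function<double(double)>& f,
                                         const Vec3& point, double tMax = 10.0)
    {
       double len = std::sqrt(point.x*point.x + point.y*point.y + point.z*point.z);
       if (len <= 0.0) return std::nullopt;

       Vec3   d   = {point.x / len, point.y / len, point.z / len};
       double rho = std::sqrt(d.x*d.x + d.y*d.y);

       // Signed gap between the ray and the surface along the ray parameter t.
       auto g = [&](double t) { return t * d.z - f(t * rho); };

       // Coarse scan for a sign change, then refine the root by bisection.
       const int steps = 1000;
       double t0 = 1e-6, t1 = 0.0;
       bool bracketed = false;
       for (int i = 1; i <= steps; i++)
       {
          double t = tMax * i / steps;
          if (g(t0) * g(t) <= 0.0) { t1 = t; bracketed = true; break; }
          t0 = t;
       }
       if (!bracketed) return std::nullopt;

       for (int i = 0; i < 50; i++)
       {
          double tm = 0.5 * (t0 + t1);
          if (g(t0) * g(tm) <= 0.0) t1 = tm; else t0 = tm;
       }

       double t = 0.5 * (t0 + t1);
       return Vec2{t * d.x, t * d.y};   // Orthographic drop of the Z coordinate.
    }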

2D image planes have a finite resolution when rasterised; therefore, in most cases they will clip the 3D surface. Figure 3 illustrates the 2D image plane and the visible portion of the surface.

3 Results

Figure 4 demonstrates how the shape of the profile curve affects the final panoramic image. Each curve (E to H) illustrates the actual cross-section of the surface (red highlights the visible regions of the surface), accompanied by its respective ray-traced panorama (A to D). The table below defines the functions used for each profile curve.

Curve Function
E \( f(r) = -r^3 + r^2 + r + 1 \)
F \( f(r) = -0.1 r^4 + 2.25 r^2 + 0.2 \cos(8r) + 1 \)
G \( f(r) = \sqrt{1 - r^2} \)
H \( f(r) = -2.5 r^2 \)

Rendered image B in Figure 4 shows an example where the view vectors intersect the surface more than once, giving the ray-traced image a warped appearance. In real-world situations it would be impossible to project this panorama onto a physical surface, because the surface would cast shadows on itself.

Figure 4: Output images A to D corresponding to curves E to H respectively.

The following set of images, Figures 5 to 8, demonstrates the real-time rendering system in wire frame and in Gouraud shading mode. The projection surface is no longer planar, and hence rendering flat polygons on a curved surface would look incorrect. Therefore, it was necessary to subdivide polygons to approximate the curvature of the surface. Refer to the table below for the curve functions. The wire frame views in Figures 5 and 6 illustrate how the polygons were subdivided. The sub-images in the top-right corner show the perspective view of the scene with no polygonal subdivision.

Panorama Function
Figure 5 \( f(r) = -r^2 + 1 \)
Figure 6 \( f(r) = -\log_e(r + 0.1) \)
Figure 7 \( f(r) = -r^2 + 1 \)
Figure 8 \( f(r) = -2r + 1 \)
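
As a hedged sketch of the kind of subdivision involved (the recursion criterion here, a fixed maximum edge length, is my own simplification rather than the thesis' adaptive scheme), a triangle can be split into four at its edge midpoints until its edges are short enough to follow the curvature of the projection surface:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3     { double x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    static Vec3 Midpoint(const Vec3& p, const Vec3& q)
    {
       return {(p.x + q.x) * 0.5, (p.y + q.y) * 0.5, (p.z + q.z) * 0.5};
    }

    static double EdgeLength(const Vec3& p, const Vec3& q)
    {
       double dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
       return std::sqrt(dx*dx + dy*dy + dz*dz);
    }

    // Sketch: recursively split a triangle into four at its edge midpoints
    // until every edge is shorter than 'maxEdge', so that each small triangle
    // approximates the curved projection surface more closely.
    void Subdivide(const Triangle& t, double maxEdge, std::vector<Triangle>& out)
    {
       double longest = std::max({EdgeLength(t.a, t.b),
                                  EdgeLength(t.b, t.c),
                                  EdgeLength(t.c, t.a)});
       if (longest <= maxEdge) { out.push_back(t); return; }

       Vec3 ab = Midpoint(t.a, t.b);
       Vec3 bc = Midpoint(t.b, t.c);
       Vec3 ca = Midpoint(t.c, t.a);

       Subdivide({t.a, ab, ca}, maxEdge, out);
       Subdivide({ab, t.b, bc}, maxEdge, out);
       Subdivide({ca, bc, t.c}, maxEdge, out);
       Subdivide({ab, bc, ca}, maxEdge, out);
    }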

Using polygon subdivision to approximate the curvature of the surface introduced a discontinuity problem, which manifested itself as cracks within the geometric shape of the models (see Figures 7 and 8). A crack would develop when the midpoint divisions along the shared edge of two neighbouring polygons were not coincident. One possible solution is to represent geometric objects in a winged-edge data structure, which would simplify the tracking of neighbouring polygons affected by a midpoint division. Unfortunately, there was not enough time to implement a solution.
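
One lightweight alternative to the winged-edge approach, shown here purely as an illustration (the project did not implement it), is to cache every computed edge midpoint in a map keyed by the edge's vertex indices. Two triangles sharing an edge then split it at the exact same vertex, so no T-junction, and hence no crack, can form along that edge.

    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // Sketch: a shared-edge midpoint cache. Keying midpoints by the sorted
    // vertex index pair guarantees that neighbouring triangles reuse the same
    // midpoint vertex when subdividing their common edge.
    struct MidpointCache
    {
       std::vector<Vec3>& vertices;   // Shared vertex pool of the mesh.
       std::map<std::pair<std::uint32_t, std::uint32_t>, std::uint32_t> cache;

       explicit MidpointCache(std::vector<Vec3>& v) : vertices(v) {}

       std::uint32_t MidpointIndex(std::uint32_t i, std::uint32_t j)
       {
          std::pair<std::uint32_t, std::uint32_t> key = std::minmax(i, j);
          auto it = cache.find(key);
          if (it != cache.end()) return it->second;

          const Vec3& p = vertices[i];
          const Vec3& q = vertices[j];
          vertices.push_back({(p.x + q.x) * 0.5,
                              (p.y + q.y) * 0.5,
                              (p.z + q.z) * 0.5});
          std::uint32_t index = std::uint32_t(vertices.size() - 1);
          cache[key] = index;
          return index;
       }
    };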

Figure 5: Adaptive triangle subdivision for a parabolic panorama.
Figure 6: Adaptive triangle subdivision for a panorama based on a logarithmic profile curve.
Figure 7: Cracking artefacts for a parabolic panorama.
Figure 8: Cracking artefacts for a conic panorama.

Figures 9 to 12 illustrate how the panoramas may be used in practical situations. Each figure shows the profile curve (top right) that was used to render the panorama (left). The profile curve was also used to model a 3D panoramic display (bottom right), onto which the panorama was projected. The user's viewpoint inside the panoramic display was simulated by a perspective camera, whose view is shown in the centre right; the camera sees a perspective-correct view of the scene.

Figure 9: Spherical panorama.
Figure 10: Parabolic panorama.
Figure 11: Hyperbolic panorama, ray-traced.
Figure 12: Hyperbolic panorama, rendered in real-time.

Appendix

Documentation

Video Clips

Source Code

The source code and the sample program used in this project are available to download under the MIT License. Pre-compiled binaries and the necessary 3D models are also included. The source code is not exactly a shining example of good C++ programming practices, but it did the job. (The code is actually hilariously bad; it was written decades ago, when I was still a C++ novice.)