Environmental Imaging --
Photographic Capture of Visual Environments for Virtual Reality Systems

Richard A. Ainsworth, Ainsworth & Partners, Inc.
January 19, 2010



Conventional digital cameras can be used to capture images in full stereo vision for use in virtual reality systems.  These images can cover any field of view, including 360° by 180° spherical panoramas. Two photographic cylinders are created with identical dimensions, capturing the perspective as seen from each eye position.  When these cylinders are mapped to a spherical surface, full stereo in all directions is achieved.  Image resolution is the same everywhere in the final presentation, and this resolution can be as high as desired.


CAVE - Cave Automatic Virtual Environment

The CAVE, the first virtual reality theatre, was invented in 1992 at the Electronic Visualization Laboratory in Chicago, Illinois.



Science and art

CAVE technology has traditionally been used by both scientists and artists, using similar techniques, but with differing purposes. For scientists, virtual reality allows visual exploration of data in new ways. With this added perspective, computationally intense subjects can be investigated perceptually, often providing insights that are not otherwise possible. Artists find these same visual capabilities to be equally liberating in terms of potential modes of expression. With the advent of digital photographic imaging, a third application is created, allowing photographers to join scientists and artists in exploring the possibilities of the virtual experience.

StarCAVE - molecular structure of protein
StarCAVE - architectural rendering

It has always been fairly easy to place photographs and other flat objects in virtual space. What has been added with this new development is the use of photography to create the virtual environment itself. In practice, a complete spherical image is created that can be viewed in any direction that is supported by the CAVE design. Full stereo imaging is available in all directions, without exception, and is automatically configured to match standard interocular distance.



StarCAVE - Inside the Wisconsin State Capitol. This is a picture of the first photographic rendering of a virtual environment. The person in this photo is standing in the center of the StarCAVE. His view is identical to what he would see if he were standing on the balcony in the original location. The background photographic image is in full stereo and may be viewed in any direction.

In order to create a virtual environment photographically, it is necessary to combine two conventional techniques. Stereo vision supplies the depth information, and spherical panorama imaging covers the entire visual field. Unfortunately, these two methods are not readily combined in a single operation.


Stereo Imaging

This early stereopticon slide combines left and right images.

Stereo photography came into being shortly after the invention of the camera. This stereopticon slide dates from the mid 19th century and was used with a hand held viewer. Two separate photographs capture the perspective as seen by each eye. When combined optically, these images give a realistic stereo view.

This contemporary stereo image shown below also combines two overlapping views of the same scene. In principle, this stereo image is the same as the classic form, with separate images representing the views from the left and right eye.

The technique for shooting conventional stereo images is straightforward. With a single camera, the stereo pair is created by moving the camera laterally between two successive exposures. When two cameras are used, they are separated by a similar distance and create the two images simultaneously. The stereo imaging process used in creating VR environments, however, is considerably more complex than simply capturing two photographs of a scene.

Rollover Image. This illustration shows how the left and right images present a slightly different perspective.


For creating a pair of stereo images, dual cameras may be used or a single camera may be moved left and right.

The interocular distance, the distance that separates the left and right images, varies depending on the desired effect. An interocular distance of 70mm is generally used to duplicate the spacing of human eyes.

Increasing this distance beyond 70mm is sometimes used in landscapes and in "boosted stereo" to enhance the depth effect. An interocular distance of less than 70mm may be used in close-up photography.

Various methods are used to move a single camera left and right when capturing a stereo pair. The main problem with this approach is relative movement of objects within the field of view during the time between exposures. Using dual cameras solves this problem, but requires careful synchronization to create a realistic stereo view. People wandering through the scene while the photographs are being taken present a particularly difficult challenge.


Anaglyph stereo

This stereo anaglyph image can be viewed with red/cyan glasses.
The anaglyph method uses special viewing glasses to separate the left and right images by color. The image shown here is best viewed with a red/cyan pair of stereo glasses. The left image is modified by removing all green and blue from the original, leaving only red. Similarly, the right image is created by removing red, leaving only green and blue. This manipulation is easily done in Photoshop®, using the Curves and Layers options. More elaborate color separation techniques can provide even better separation of the left and right views in the overlapping images. Depending on the color scheme selected for separating and printing the left and right images, glasses in red/cyan, green/magenta, amber/blue, and other color combinations are used for viewing the result.
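The same channel manipulation can be sketched in a few lines of code. This is a minimal illustration using NumPy rather than Photoshop; the function name and the 8-bit RGB array layout are assumptions for the example:

```python
import numpy as np

def make_anaglyph(left, right):
    # Red/cyan anaglyph: keep only the red channel of the left image
    # and the green and blue channels of the right image.
    # Both inputs are H x W x 3 uint8 RGB arrays of equal size.
    anaglyph = right.copy()
    anaglyph[..., 0] = left[..., 0]  # red from the left eye's view
    return anaglyph
```

Viewed through the glasses, the red filter passes only the left image's red channel to the left eye, while the cyan filter passes the right image's green and blue channels to the right eye.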

When this image is viewed with a red filter over the left eye and a cyan filter over the right eye, the original photographs are combined to form a stereo image.

Anaglyph stereo is less effective than methods that separate the left and right images electronically or optically. For one thing, anaglyph images only work well with scenes of low color saturation. In addition, red objects in the original scene are sometimes rendered incorrectly due to lack of image information in the cyan channel.

The primary advantage of anaglyph images is the ability to present a full stereo effect using only a single image, which can be displayed on a screen or printed, much like a conventional photograph. Electronic presentations of an anaglyph image can range from conventional computer displays to more elaborate tiled displays of almost unlimited dimensions. When print media is the appropriate choice, the price of anaglyphs, compared with other stereo presentation systems, is hard to beat.



Panorama Imaging

Rotating a single camera around the entrance pupil or zero parallax point can create seamless overlapping images.
With the advent of photo stitching software, creating panoramic images by combining multiple photographs has become relatively easy to do by simply rotating the camera while taking several images. Adjacent images overlap from one-third to one-fourth of their width so that the stitching software can combine these images in one large rectangle. If the images cover a complete circle, this rectangle can be viewed as a continuous cylinder, covering a 360 degree field of view.
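The number of images required for a full rotation can be estimated from the lens field of view and the desired overlap. The sketch below assumes a full-frame camera turned to portrait orientation (24mm of sensor across the direction of rotation); the helper names are hypothetical, and published shooting tables are often a little more conservative:

```python
import math

def hfov_deg(focal_mm, frame_mm=24.0):
    # Horizontal field of view of a rectilinear lens; frame_mm is the
    # sensor dimension across the direction of rotation (24mm for a
    # full-frame camera in portrait orientation).
    return math.degrees(2 * math.atan(frame_mm / (2 * focal_mm)))

def images_per_360(focal_mm, overlap=0.25, frame_mm=24.0):
    # Each rotation step advances by the un-overlapped part of the frame.
    step_deg = hfov_deg(focal_mm, frame_mm) * (1 - overlap)
    return math.ceil(360 / step_deg)
```

With a 24mm lens and one-fourth overlap, this estimate gives ten images for a full circle; one-third overlap raises the count to eleven.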

In the example below, 36 separate photographs are combined to form a cylindrical image. When this image moves left and right, the cylinder automatically maps to the rectangle shown on this page. This mapping causes some distortion, shown as stretching of the image at the sides. If this same cylindrical image were viewed on a curved surface, the image would match the original scene exactly.

If the original photographic subject is distant, as with a typical landscape, the alignment of adjacent images is not as critical. In this case, stitching software easily creates a seamless image.

Movable Image The courtyard at Frank Lloyd Wright's Taliesin near Spring Green, Wisconsin, in 360° panorama.


Panorama rotation point

For precise panoramas, and for any panoramas to be viewed electronically or in anaglyph form, rotation of the camera/lens combination around the point of zero parallax is essential. This location is often called the nodal point of the lens - a designation that, while commonly used, is not strictly correct. Every lens actually has two nodal points, one front and one rear, both located on the lens axis, and neither is the point about which rotation eliminates parallax. The entrance pupil of the lens is the location about which the lens can be rotated freely in both horizontal and vertical axes without introducing parallax error. The terms "zero parallax point" and "entrance pupil" can be used interchangeably to specify the correct rotation point for creating seamless panoramas from overlapping images.


Panorama lens calibration - the zero parallax point

The lens/camera system must be calibrated to determine the exact rotation point for zero parallax error. How critical these measurements and the subsequent camera adjustments are remains an open question. In practice, this point can be found experimentally to within one millimeter. Somewhat less precision is probably acceptable; much less precision will produce panoramas that simply cannot be stitched accurately.

The entrance pupil is easily seen with light entering the lens from the rear. This is the exact location that must remain stationary while the lens is moved or rotated for zero parallax imaging.

In most cases, the entrance pupil of a lens can be seen by looking at the front of the lens with the rear cap removed and light entering from behind the lens. The entrance pupil, also defined as the point of zero parallax, can be seen as a small white dot located inside the lens and behind the front element.

If you move the camera carefully you will be able to pivot the lens in any direction while keeping the zero parallax point fixed in space. This precise camera movement is required when taking complex panorama images with zero parallax error.

Both focal length and focus distance must be the same for all images in a panorama. If you are using a zoom lens, note that the entrance pupil moves when the focal length is changed. This movement is usually nonlinear. Select a focal length you plan to use and then adjust the focus from infinity to its minimum distance while watching the entrance pupil.

If the entrance pupil remains stationary while the focus is moved, a single calibration is sufficient for that focal length. If the entrance pupil moves with variations in focus, you will need to calibrate for both the focal length and the focus distance you plan to use.


Determining the zero parallax point experimentally

The goal is to adjust the rotation point so that near objects do not move relative to the background when the camera rotates horizontally or vertically. With a zoom lens, the zero parallax point will only be accurate for the focal length selected. If the entrance pupil location varies with focus at that focal length, the focus distance setting also needs to be duplicated. Taking two photographs and comparing the results will refine this technique to a high degree of precision.

Panorama image A
Panorama image B

These two test images A and B were taken by rotating the camera around the zero parallax point.  The large red square is a Post-It® note fastened to the window, about one foot in front of the camera lens.

The rotation point of the camera has been adjusted so that there is no relative motion between the red square in the foreground and the trees in the background when the camera is rotated left and right.

The two white rectangles have been added for emphasis in these test images.  The enlarged images inside these rectangles show that the foreground object (the red square) and the background trees appear identical in these two photos.  The white rectangles are in the areas that would overlap when these images are combined in a single panorama. When the zero parallax point has been adjusted correctly, all images taken in this configuration will show zero parallax error.

Image A enlarged
Image B enlarged

For ultimate precision in setting the zero parallax point, the test images may be enlarged to show detailed alignment.  In these examples on the right, the area inside the two white squares in the test photos above is enlarged in Photoshop® to show actual pixels.

Comparing the two images at this level of magnification verifies that the zero parallax point is set correctly.  The two images are slightly blurred because the depth of field is insufficient to capture both foreground and background at this level of magnification.

Using this alignment method, the zero parallax point (also defined as the entrance pupil of the lens) may be experimentally adjusted to a precision of one millimeter.


Panorama stitching

Stitching software is remarkably efficient at creating a single, seamless image from several individual photographs.  Stitching or blending of adjacent images is managed by carefully adjusting images that overlap so minor differences are unnoticed. The stitching software used in these examples is PTGui Pro, available from PTGui.com

The amount of blending required in the stitching process is dependent on the care with which the original images were created.  A tripod-mounted camera pivoted around the zero parallax point will create a panorama that requires minimal adjustment by the stitching software.  Even hand-held images can be stitched successfully, however, if the parallax errors are small.

Ideally, the stitching software will produce a perfect panorama automatically.  In practice, however, considerable hand alignment and adjustment may be necessary to create a convincing image.  Natural scenes are easiest to photograph.  Architectural imaging can be difficult to render in true perspective.


Single-channel panorama vs. stereo panorama stitching

Creating single-channel panoramas is almost trivial compared to creating two matching panoramas that represent the correct perspective for each eye.  While blending manipulations created by the stitching software will often go unnoticed in a single image, these same adjustments may cause stereo pairs to fail.  That is, the differences introduced by the stitching software itself will disrupt the stereo illusion when the two panoramas are viewed together.

When stitching stereo panoramas, the left and right images must be carefully matched throughout.  This provides similar, if not identical, matching points in adjacent images when the resulting panoramas are stitched.  Alternately, it may be necessary to opt for the manual mode in the stitching software and individually select identical matching points in the left and right panoramas instead of using the automatic options.


Camera geometry for stereo-panorama imaging

The basic goal is to create a blended image that captures both depth information and wide field information.  The resulting images may be used in projected VR environments or printed in anaglyph form.

Stereo photography is based on capturing two separate images of a scene, with each image preserving the exact perspective of each eye.  A single camera can be shifted laterally between exposures or a dual camera setup can be used.  Perfect stereo information may be preserved with either technique.

Panorama photography is based on creating a horizontal row of multiple images as the camera is precisely rotated.  When combined in a stitching program, these images can form a seamless rectangular image of any width, including wide field images up to 360°.

Stereo-panorama photography is based on a combination of stereo and panorama configurations.  While the procedures for creating either stereo images or panorama images are well understood, combining these techniques and creating stereo panoramas is considerably more complex. In practice, spherical panorama images to 360° by 180° in perfect stereo alignment can be developed. The left and right camera positions remain separated by the interocular distance.  The pair is rotated about a common axis, located midway between the left and right positions and on a line connecting the zero parallax points. 


Interocular distance error

Effective stereo separation decreases as the point of view is shifted from a point directly in front of the cameras.

The problems associated with combining stereo and panorama views of a scene stem from changes that occur in the effective interocular distance, which is the separation distance between the left and right views. As the viewpoint is moved either left or right of the center position in the image, this effective interocular distance begins to collapse.

In this diagram, the green lines show how the interocular distance is preserved when looking straight ahead, viewing the exact center of the visual field. When the angle of view rotates from the center, as shown by the red lines, the separation distance between left and right views is decreased. If we were to rotate to the 90° position, the stereo separation would be zero and the stereo information would vanish completely.

If two 360° panorama views were taken with the center rotation point of the panoramas shifted left and right, as with conventional stereo photography, the stereo effect would be preserved when looking straight ahead at the combined view. Looking 90° either left or right would collapse the two images and eliminate the stereo effect. Shifting to 180° would actually reverse the left and right images. This would create a situation where distant objects would appear closer, and close objects would appear to be distant.
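The collapse described above follows a simple cosine relationship: for two fixed panorama centers separated by distance d, the effective baseline shrinks as the view direction rotates away from straight ahead. A minimal sketch (function name assumed):

```python
import math

def effective_interocular(d_mm, view_angle_deg):
    # Full separation d_mm is preserved only when looking straight
    # ahead (0 degrees); the effective baseline falls off as the
    # cosine of the viewing angle, reaching zero at 90 degrees.
    return d_mm * math.cos(math.radians(view_angle_deg))
```

At 90° the result is zero (no stereo information), and at 180° it is negative, which corresponds to the reversed left/right views described above.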

It is unclear why the effect of reduced stereo separation is not noticed when viewing a scene in a real-world situation. For example, when the head is held stationary, the effective separation distance between the eyes is reduced when looking either left or right of the center position. Since we are able to tolerate these small changes in the effective interocular distance without perceiving a reduction in depth information, it's probable that our internal visual processing is programmed to compensate for minor losses of this type. If the photographic virtual representation is within similar limits, the depth illusion can be retained.


Stereo-panorama imaging

Capture geometry for creating stereo-panorama images. The twin camera positions rotate on the zero parallax point and remain separated by the desired interocular distance.

The solution is to create dual panoramas in which the full stereo separation distance is perceived throughout. This applies equally to cylindrical panoramas and to full 360° by 180° spherical imaging.

In the diagram shown here, the left and right camera positions are rotated about the zero parallax point, shown by the red dot. The separation distance between the two camera positions, shown by the green dots, remains constant.

If you examine the geometry of the stereo-panorama technique carefully you will see that, strictly speaking, it doesn't quite work. That is, there are inherent errors in this stereo-panorama setup that are not present with either stereo or panorama images alone. Fortunately, the "virtual" part of VR means that we do not have to be precise in creating an environment that is acceptable. With the technique shown here, the errors introduced are within the envelope of believability, allowing us to create panoramic or spherical images that the eye/brain accepts as real.


Stereo separation vs angle of rotation

The interocular distance set by the equipment geometry is only matched in the center of each stereo pair in the panorama. At the overlapping edges where stitching occurs, the interocular distance is slightly less. Our eye/brain is apparently able to overlook or simply ignore this difference if all other factors are correct. That fortunate adaptability allows this stereo-panorama technique to capture a wide variety of photographic environments.

The rollover image below shows a 360° panorama consisting of eight images. This would normally be cropped to eliminate the scalloped effect at top and bottom. In this configuration, however, it is easy to identify the centers of the original images, shown by the green lines. If this image were one half of a stereo pair, then the exact centers of the original images would be separated by the desired interocular distance. The overlapping edges of adjacent images, shown by the red lines, would exhibit effective interocular distance reduction.

Rollover Image. Eight stereo images are combined in this 360° panorama. The green lines are at the centers of overlapping stereo images. At these centers, the interocular distance is preserved. The red lines are at the edges of overlapping stereo images, where both the effective interocular distance and the stereo information are decreased.



Camera moves 35mm left and right of the zero parallax point.

In these examples, a single Nikon D3 camera with AF-S Nikkor 24-70mm lens is used for all images.  The camera position is moved left and right to match the 70mm interocular distance. 

The stitching software is PTGui Pro, available from PTGui.com. Other stitching software may be used, but this particular product is exceptionally reliable.

A standard Manfrotto 303SPH QTVR Spherical Panoramic Pro Head is used as the basis for positioning the camera.  In the original configuration, the Manfrotto head can only be adjusted horizontally by moving from the center to the left of the center position. This panorama head must be modified from the standard configuration to allow the camera to be positioned both left and right of the central rotation point.

In creating the modification required, the horizontal rail used for positioning the camera is remounted to the base and offset, as shown in the photo.  After this modification the camera may be moved as necessary to create two complete images for each position as the panorama head is rotated. A standard interocular distance of 70mm is used for all images. This requires that the camera be positioned 35mm to the left and 35mm to the right of the central rotation point.

Modifying the Manfrotto assembly is not trivial and the necessary machining must be done carefully to preserve the inherent precision of the original equipment.


Rotation parameters

Focal length   Rotation angle   Images for a 360° panorama   Circumference in pixels
24mm           36°              10 images                    17.9K
28mm           30°              12 images                    20.6K
35mm           24°              15 images                    25.8K
50mm           20°              18 images                    36.7K
70mm           15°              24 images                    50.5K

This table summarizes the parameters for creating a single stitched image.  If a full 360° rotation panorama is photographed, the resulting stitched rectangle will form a cylinder.  The circumference of the final image will match the pixel resolution shown.  If the cylinder is mapped to a spherical surface, both vertical and horizontal resolution will match the figure shown. 

In practice, even longer focal lengths and smaller rotation angles could be used, producing final stereo images with as much resolution as desired.
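The circumference figures can be approximated from first principles: a 360° cylindrical image wraps roughly 2πf of focal-plane distance around its circumference, and dividing by the pixel pitch gives pixels. The sketch below (function name assumed) uses the D3's 2832 pixels spanning 24mm in portrait orientation; the results land within a few percent of the table values:

```python
import math

def cylinder_circumference_px(focal_mm, pixels=2832, sensor_mm=24.0):
    # A 360-degree cylindrical panorama covers 2*pi*f of focal-plane
    # distance around its circumference; dividing by the sensor's
    # pixel pitch converts that distance to pixels.
    pixel_pitch_mm = sensor_mm / pixels
    return 2 * math.pi * focal_mm / pixel_pitch_mm
```

Note that doubling the focal length doubles the circumference, and with it the resolution of the final cylinder.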

The resolution of the final image, whether in a rectangle as produced by the stitching software or the final image as mapped to a sphere, is dependent on two factors.  The horizontal resolution of the camera sensor sets the basic resolution of the system.  With the Nikon D3, the horizontal dimension is 2832 pixels with the camera positioned vertically.  A camera with more or less native resolution will change the final resolution accordingly.

The other parameter affecting final image resolution is the focal length of the lens.  This sets the number of images per 360° rotation and fixes the resolution of the final image in both horizontal and vertical dimensions.  The vertical resolution in the completed image will be identical to the horizontal resolution shown.

The rotation angle must be selected so that adjacent images overlap enough for the stitching software to find matching control points; with insufficient overlap, stitching simply fails.  If the rotation angle is excessive, alignment errors can also occur between the left and right stereo views of the same scene, because the stitching software may select different sets of control points.  An overlap equal to one-fourth the width of the image is a reasonable compromise.


Image parameters

Many of the camera adjustments and parameters familiar in conventional photography need to be modified or even abandoned for photographic capture of VR environments.  Other than selecting a location that’s worth photographing in the first place, few skills learned from conventional photography remain intact when shooting for VR systems. More than this, many of the artistic considerations familiar to photographers no longer apply -- or are applicable in new ways.

Composition is often the first consideration in creating a conventional photograph. With environmental imaging, however, composition as such is not even relevant, since there is no "frame" or boundary to define the experience. This doesn't eliminate composition in the broad sense from consideration: it translates this aspect of the creative experience into more than two dimensions. The challenge becomes one of composing an image without reference to any boundary or edge. In this connection, selecting an environment for photographic capture has more in common with the ways we view architecture than with the artistic parameters associated with conventional photography.

Depth of field is often used as a creative element in conventional photography, rendering background or other information slightly softer and shifting the viewer's attention to a specific area of interest.  This creative tool does not work the same way in VR because the viewer expects everything to always be in focus, just as it appears in the real world. In practice, maximum depth of field, typically using f/22 as an aperture setting and carefully setting hyperfocal distances, is often the best we can do.

With precise aperture setting to preserve maximum depth of field for all visible objects we can present a virtual environment that appears in focus, regardless of the subject or portion of the image being scrutinized. If an object in virtual space does not immediately appear in perfect focus, the illusion is lost and virtual reality fails.

Dynamic range is challenging in conventional photography and even more problematic when capturing VR environments because of the wide range of light values. With a 360° by 180° spherical panorama in an outdoor setting, it's common to be facing directly into the sun in one direction and into deep shade in the other. Exposure bracketing combined with HDR (high dynamic range) processing is the only solution. Fortunately, stitching software is designed to process HDR images automatically in both true HDR and image fusion modes. There are, however, drawbacks to this approach. With each additional set of bracketed images, processing time increases dramatically. This increased data collection taxes everything from camera buffers and storage media to post production. Computer processing time can easily stretch to more than a day to calculate and display a single image, using current techniques and conventional computers.
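A bracketed exposure sequence of the kind described above is just a geometric series of shutter times. This hypothetical helper assumes a fixed aperture and ISO, with only shutter speed varied:

```python
def bracket_shutter_times(base_s, stop_step=2, frames=5):
    # Shutter speeds (in seconds) for an exposure bracket centered on
    # base_s, spaced stop_step EV apart; doubling the exposure time
    # adds one stop of light.
    half = frames // 2
    return [base_s * 2 ** (stop_step * i) for i in range(-half, half + 1)]
```

A five-frame bracket at two-stop spacing spans eight stops end to end, which illustrates why bracketing every position of a 75-image panorama multiplies the data volume so quickly.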

The dynamic range of human vision is extensive, allowing people to easily adjust to a wide range of light values. Photographs, by comparison, are extremely limited. As a result, the same loss of detail in highlights and shadows that is typical of many conventional photographs will appear unreal in a virtual setting. This places additional demands on the dynamic range that must be captured in order to provide a believable representation of what we would normally expect to see. With conventional photography, a major challenge involves adjusting the dynamic range of the image to match the very limited dynamic range of the photographic print. When digital displays are used, this restriction may no longer apply, and both dynamic range and overall image sharpness can be extended.

Image resolution is unlimited in stereo panoramas. Longer focal lengths require more images for covering a given area or angle of view. Adding more images directly increases resolution in the final product. This would be good news to those who are convinced that more is better where megapixels are concerned, except that processing demands quickly limit image capture to practical levels. A reasonable approach would be to determine the maximum resolution the display media can accommodate and set the focal length and number of images required accordingly.

Patience is always a virtue, of sorts, and with photographic imaging it is a necessity. Several hundred separate photographs may be required in creating a single pair of environmental images in full stereo.  Inadvertently bumping the tripod while shooting or committing any one of several dozen other minor errors can easily destroy the final result.  The unhappy consequence of this and similar catastrophes will often not be apparent until long after the shoot and well into post production processing. 


Camera settings

Automatic settings make modern digital cameras easy to use and give the photographer creative freedom to explore subject matter instead of being constantly immersed in myriad dials and details. Unfortunately, many of these settings must be disabled when shooting for VR capture.

Focus can no longer be automatic. It's fine to use auto-focus to initially set the distance, but this feature must be turned off when shooting panoramas. Shifting focus during capture can result in major confusion in post production.

Exposure should remain constant. Variations in light level should be accommodated by bracketing all the images, not by changing exposure settings of individual images that will be combined in the final result. There is some latitude for individual exposure adjustment if you are shooting a panorama as a single row of horizontal images. For spherical panoramas, however, exposure must remain fixed.

Focal length must remain constant. The more advanced stitching software calculates the actual focal length used instead of relying on the EXIF focal length data encoded with the image file. If this parameter varies among images, the stitching software becomes completely befuddled.

White balance must be consistent for the complete scene. Allowing white balance to adjust automatically for each image can result in wide variations that are extremely difficult to correct later.  


Image capture – Wisconsin Capitol Dome

A single rotation of the panorama head creates a horizontal row of images.  Changing the vertical angle creates additional rows of images as shown below.  In this example, fifteen images are required to cover the full 360° horizontal and five rows of images are required to cover the vertical field.  A separate set of images is created for each eye position, separated by the interocular distance desired.  This distance is typically 70mm. 

This composite image shows all 75 individual photographs.  It represents either the left or the right component of the stereo image.  The entire stereo environment for the VR output is constructed from at least 150 individual photographs. If higher resolution is desired or if bracketing is required, this number rises dramatically.

These 75 separate images will be combined to create a single channel of the stereo view covering 360° by 180°.
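The shot counts above reduce to simple arithmetic (the helper name and its defaults are illustrative, matching the 15-column, 5-row capture described here):

```python
def total_exposures(columns=15, rows=5, eyes=2, bracket=1):
    # Fifteen horizontal positions times five vertical rows, captured
    # once per eye position; exposure bracketing multiplies the count.
    return columns * rows * eyes * bracket
```

With the defaults this gives the 150 photographs mentioned above; adding a five-frame exposure bracket at every position raises the total to 750.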


Image stitching

The stitching software combines the separate images into a 360° by 180° equirectangular projection. This projection will subsequently be mapped onto a sphere for viewing in a VR environment.  The focal length is 35mm.  The five horizontal rows were shot at vertical angles of +70, +35, 0, -35, and -70 degrees.

The 75 separate images are combined in stitching software to create a complete equirectangular or spherical view.
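Mapping the equirectangular result onto a sphere is a simple change of coordinates: the horizontal pixel position becomes longitude and the vertical position becomes latitude. A minimal sketch in Python (the function name and axis conventions are my own, not taken from any stitching package):

```python
import math

def equirect_to_direction(u, v, width, height):
    """Convert a pixel (u, v) in a 360° by 180° equirectangular image
    to a unit direction vector on the viewing sphere.
    (0, 0) is the top-left corner: longitude -180°, latitude +90° (zenith)."""
    lon = (u / width) * 2 * math.pi - math.pi     # -pi .. +pi
    lat = math.pi / 2 - (v / height) * math.pi    # +pi/2 (zenith) .. -pi/2 (nadir)
    x = math.cos(lat) * math.sin(lon)             # right
    y = math.sin(lat)                             # up
    z = math.cos(lat) * math.cos(lon)             # forward
    return (x, y, z)

# The image center looks straight ahead along +z:
print(equirect_to_direction(500, 250, 1000, 500))  # (0.0, 0.0, 1.0)
```

This is the same mapping the QTVR test described below relies on: if the stitch is correct, every pixel lands on the sphere so that straight lines in the scene look straight again.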


QuickTime VR (QTVR) test - Wisconsin State Capitol

The PTGui stitching software includes an option for creating a QuickTime VR image.  This is an ideal test of the complete stitching system when spherical panoramas are created.  In this configuration, the equirectangular image is mapped onto a sphere, much as in the complete VR presentation.  Only one of the two stereo images can be presented at a time with this technique. While this test image does not show depth, it does give a true indication of how the equirectangular photographic image appears when mapped to a spherical surface: that is, whether normal perspective is restored and all the lines appear straight.

This QTVR test view shows the equirectangular image mapped onto the surface of a sphere.

The black object seen when looking straight down (nadir) is the panorama head and tripod.  The image of the tripod and spherical panorama head can be removed, if desired, by adding a series of hand-held images of the floor without the tripod in place.  Adding these images to the final stitched image will effectively erase the tripod from the final product. First attempts at this procedure usually result in also adding photos of your own feet.

The small black circle seen when looking straight up (zenith) is a mistake.  The vertical angle on the topmost cylinder was not quite large enough to include the full 90° vertical.  This is one reason why running a spherical QTVR test image like this while still in the stitching software is a good idea. 


NextCAVE - Environmental Imaging

NextCAVE experimental setup. This high resolution display presents polarized stereo images.



Tiled Display - Maximum resolution imaging

Tiled displays provide extreme resolution images in standard format, anaglyph format, and stereo.



QuickTime VR (QTVR) test - House on the Rock

This view of House on the Rock is captured in extreme resolution for tiled displays or VR.



StarCAVE presentation - House on the Rock

A StarCAVE presentation of House on the Rock near Spring Green, Wisconsin. Only one of the stereo channels is shown in this photograph of a person viewing the VR environment. The complete image may be viewed in full stereo and extends to cover the floor.

Image resolution - summary

Photographic panoramas may be created in any resolution, in full stereo, and to any dimension desired.  Images in the several-gigapixel range are easily achieved.  The practical resolution limit is set by the size of the storage media for image capture and by the computer processing time available for the stitching software to create the final image.

It is essential that a panorama head with indexed rotation points be used.  Both horizontal and vertical resolution of the final image are based on the focal length of the lens and the width of the image sensor in the camera.  The rotation angle of the panorama head and the vertical angles used to cover the vertical field of view must be selected so there is sufficient overlap between adjacent images for the stitching software to work.  The overlap is typically 1/3 to 1/4 of the image.  Increasing the focal length and reducing the angle of rotation allows more images per 360º rotation and, hence, a higher resolution in the final product. 

Once the rotation angle of the panorama head is set, the vertical angle for each row of images is adjusted for a similar image overlap.  With smaller rotation angles, a corresponding increase in the number of horizontal cylinders is required to cover the vertical field of view.  In the Wisconsin State Capitol Dome example above, a 35mm lens and a rotation angle of 24° are used, requiring 15 images to cover the full 360° and a total of five horizontal rows to cover the vertical field from 0° to 180°.
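These relationships can be checked numerically. The sketch below assumes the camera was in portrait orientation on the full-frame sensor, so 24mm of sensor width spans the direction of rotation, and uses the D3's published 4256 by 2832 pixel resolution; neither assumption is stated explicitly in the text:

```python
import math

def angle_of_view(sensor_mm, focal_mm):
    """Angle of view across a sensor dimension for a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

fov = angle_of_view(24, 35)      # per-image coverage across rotation, ~37.8 degrees
step = 360 / 15                  # 24 degrees between images, as in the example
overlap = (fov - step) / fov     # fraction shared with the next image, ~0.37

# Final panorama width follows from pixels per degree times 360 degrees:
pano_width = round(2832 / fov * 360)   # roughly 27,000 px for one channel

print(round(fov, 1), round(overlap, 2), pano_width)
```

Under these assumptions the sketch reproduces the roughly 1/3 overlap described above; a longer lens with a smaller rotation angle raises the pixels-per-degree figure and, with it, the final resolution.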


Reality -- virtual and otherwise

Our mechanism of visual perception abhors ambiguity, and may go to extreme lengths to create reasonableness and continuity out of whatever visual information is present. Under normal circumstances, this cognition is the primary contributor to our sense of what is real.

But our perceptions can be fooled, as with the Necker Cube illusion first examined by Swiss crystallographer Louis Albert Necker in 1832. Is the cube turning right or left? There are at least two possible realities represented by this animated figure. With insufficient data to definitively select one over the other, our actual perception of this event may waver. This dichotomy is precisely the circumstance that allows us to perceive optical illusions of various sorts, and also allows us to intentionally manipulate the visual realities of others. Examine Necker's cube carefully before you decide which interpretation is real and which is not.

Necker Cube - Optical illusion observed by Louis Albert Necker in 1832 shown here as an animation.


To create a virtual or alternate view of reality, sufficient information must be presented to make that view the preferred choice. When photo realism is added to the VR environment via environmental imaging, the data can be sufficiently compelling.

It is no accident that we use the phrase "I see" when we actually mean comprehension and understanding. We live, after all, in a visually defined universe -- as opposed to a dolphin's principally auditory environment and the canine's olfactory world. To us, seeing actually is believing -- even if it's only a virtual perception.

Kayahara's Spinning Dancer is a beautiful illustration that shows how you can create a reality to match your perception. Look carefully at this figure and notice that you begin to see "clues" that affirm the direction she is turning. These clues will be subtle at first, but will gain in strength as you continue watching. Some time after you are convinced that you have figured out the correct interpretation of her movements, you may be surprised to see that she has changed completely and is now turning the other way.

Nobuyuki Kayahara's Spinning Dancer

This figure is carefully designed and created to be perfectly symmetrical. There is absolutely no difference between the interpretation that she is turning to her left, and the equally valid interpretation that she is turning to her right. It is possible to experience her facing you and swaying back and forth. And equally "correct" to see the same movement with her facing away from you.

While the Necker Cube illustration is often confusing and ambiguous, many people view Spinning Dancer with firm conviction that their perception is the correct one, and all others must be false. Once we have made a choice, we continue to assemble visual evidence to support this contention.

Our inherent ability to create a sense of reality that is based entirely on visual information is a valuable asset. The learned ability to manipulate the visual world and create realities of our own choosing is the essence of all VR systems.


Conventional photography and environmental imaging

As display facilities and galleries devoted to virtual reality and other electronic arts become more prevalent, environmental imaging and similar multi-dimensional photographic forms will be explored further. Research now in progress will provide smaller and more portable image capture systems than the one described here. Additional development will also provide fully robotic systems for automatic environmental imaging, using the techniques and concepts already described.

The basic principles that attract artists to photography in the first place are unchanged when the prospect of another vehicle of expression is added. All photography begins with the desire to share a visual experience. Conventional photographs allow others to experience things that the photographer has seen. With environmental imaging, the emphasis shifts to recreating places the photographer has been.


Notes and References

Calit2 --
California Institute for Telecommunications and Information Technology

CAVE Cave Automatic Virtual Environment --

High Resolution Multi-tiled Displays --

"Poverty Island with Digital Skies" --


Taliesin Courtyard Panorama © 2009 Dick Ainsworth and the Frank Lloyd Wright Foundation. Used by permission.

Necker Cube in Rotation -- Centre for Cognition, Donders Institute for Brain, Cognition, and Behavior

Don't call it a "nodal point"

Wikipedia "nodal point"

Kerr, Douglas A. "The Proper Pivot Point for Panoramic Photography" 

van Walree, Paul. "Misconceptions in photographic optics"

Littlefield, Rik. Theory of the “No-Parallax” Point in Panorama Photography

Equipment list

Nikon D3
Nikkor AF-S 24-70mm f/2.8 lens
Manfrotto 303SPH Spherical Panoramic Pro tripod head (modified)
Manfrotto 438 leveling head
Manfrotto 055MF4 tripod
Manfrotto two-axis camera mount level

Software list

DxO image correcting software - www.DxO.com
Photoshop CS2 - www.Adobe.com
PTGui Pro stitching software - www.PTGui.com