
2nd Edition of the Kiwizone Stereo web site. 1st edition here

 

Making Stereoscopic Pairs

by John Wattie (kiwizone)

This page presents basic concepts which will be expanded on later pages.
First time here? Just read the left columns and avoid the right column complexity.

Two pictures are needed to make a stereoscopic pair, one for each eye.

These are later fused, by various methods, in the observer's brain to create a realistic 3D impression.

Horizontal disparity between the images is the essence of stereoscopy but vertical disparity is not permitted.

People I talk to often imagine the two images for stereo are identical, but that would never work.

So the real cameras, or virtual cameras, are side by side, separated by a horizontal base-line, and the distance between them is the stereo base.

The image pair must be taken with some care: human eyes are horizontal and cannot comfortably fuse vertical differences (disparity). The amount of horizontal difference that can be fused varies between people and is covered later in "choosing a stereoscopic baseline."
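The geometry behind the baseline can be sketched numerically. For parallel (non-toed-in) cameras, a point at distance Z produces an on-sensor shift of focal length times stereo base divided by distance. This is standard stereo geometry rather than a formula from this page, and the numbers below are illustrative:

```python
# On-sensor parallax for parallel (non-toed-in) cameras.
# All lengths in metres; a thin-lens approximation.
def parallax(focal_length, stereo_base, distance):
    """Horizontal image shift of a point at `distance` between the two views."""
    return focal_length * stereo_base / distance

# 50 mm lens, 65 mm base, subject 2 m away -> 1.625 mm shift on the sensor
p = parallax(0.050, 0.065, 2.0)
```

Doubling the subject distance halves the on-sensor parallax, which is why distant scenes often need a wider base (hyperstereo).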

The stereoscopic pair can be obtained by:

1. a single camera, moved sideways between the two sequential exposures.
(Or, a static camera and the subject moved, or rotated, between the exposures).

2. a pair of cameras, mounted side by side, exposing nearly simultaneously, ideally linked electronically.

3. a single dedicated stereo camera, which is essentially two cameras mounted together in a box with the shutters, focus and lens diaphragms precisely linked, often mechanically. This method has a fixed stereo base.

4. Mirror or prism beam splitters using a single camera, or a pair of cameras.

5. Macro stereo methods: discussed later.

6. Computer Generated Image (CGI):
a. Originally used for special effects, but currently most 3D movies are CGI animation, entirely or in part. Many stereoscopic controls, like variable depth budget at three different levels, are difficult without CGI.
b. 2D to 3D conversion: often effective in stills, but a controversial topic in movies.

7. Advanced systems, often with multiple cameras and extensive computer manipulation, used in science, robotics, face recognition etc. (examples: Samuel L Hill, Varrier).

 

Two Olympus E410 digital cameras on a Manfrotto tripod bar:

Cross-eye stereo pair: X.
Alec Kennedy, shown driving this rig, hates anaglyphs, so you will have to go cross-eyed to see him in stereo!
Alec is using a "digital shutter release" (two fingers!). Click the picture to comment.

The two images are identical except for a sideways shift to produce stereoscopic disparity or parallax.

Two Olympus E 410 stereo camera rig

Alec Kennedy using two Olympus E 410 cameras on a Manfrotto bar and "digital" shutter release.

Stereo photograph of Alec taken by John Wattie, using paired Sony V3 cameras on a home-made aluminium bar.
Shutter release was true digital: using a Rob Crockett Lanc Shepherd.


 

 

Alignment and Convergence

Alignment must be perfect. The two images must be identical except for the essential slight image shift in the horizontal plane (x axis), which gives stereoscopic parallax (disparity). There must be no difference between the images in the vertical plane (y axis).

Convergence (stereo window level) is discussed below.

 

(Some effects, such as slant recognition, can be based on very slight y axis disparity, as studied in the psychology of stereopsis, but y disparity is consciously avoided in stereoscopic photography.)

Geometrical requirements for making stereo photographs

1. The base line between the cameras must be horizontal.

(The x axis is level).

One camera cannot be higher than the other
(y axis difference is forbidden).




 

 

Usually, the two cameras are mounted on a horizontal bar, or an elaborate rig.

When the cameras are widely separated, a laser level is ideal for alignment. At least, sight along one camera to the other to ensure both are on the same level and parallel.

Cha-cha stereo photography (anaglyph movie: red-cyan stereo glasses needed)

Cha-cha stereo, where one camera is moved sideways with sequential exposures, is very prone to a tilted base-line. The Karate girl, shown above in an anaglyph movie, is tilting her camera.

Vertical image disparity, from one camera being higher than the other, is impossible to fully correct during post-processing. You can only fudge it by rotating both images and ending up with a sloping horizon.


Victor Reiss discusses this in more detail.

2. No rotation between the two cameras:

Both should show a horizontal horizon.

Rotation can be easily corrected in post processing, with loss (cropping) of some of the image.
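As a sketch of that post-processing fix, the roll between the two images can be estimated from one pair of matched features in each image; the feature coordinates below are hypothetical, and a real workflow would then rotate one image by the returned angle in an editor:

```python
import math

# Estimate the roll (rotation) between two images from one pair of
# matched features in each: the line joining the features should have
# the same angle in both views.
def roll_angle(p1, p2, q1, q2):
    """Degrees to rotate image 2 so its feature line matches image 1."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])  # angle in image 1
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])  # angle in image 2
    return math.degrees(a1 - a2)

# Same two features: level in image 1, rising 45 degrees in image 2,
# so image 2 needs -45 degrees of rotation.
angle = roll_angle((0, 0), (10, 0), (0, 0), (10, 10))
```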

3. Neither camera may look up or down relative to the other.

Not only does this shift the images vertically, it causes a change in perspective.

Correctable by vertical perspective transformation during post processing, usually followed by vertical size distortion, often with cropping loss.

4. The image sizes must be identical.


This means matched focal length lenses and the cameras truly side by side: not one camera forward of the other.

Size difference is not very critical because it is correctable.

However one camera forward of the other not only makes a size difference, but introduces a perspective difference in depth (z plane), which is not correctable. Z plane perspective distortion is worse for objects near at hand and nearby objects are those best shown stereoscopically. The problem is less severe in hyperstereoscopy.

Some people set up one camera forward of the other (overlapped) to bring the two lenses closer together, closer to the ideal 65mm inter-axial distance, which matches the human inter-ocular distance.

5. The focus and depth of field of the two images should be identical.

Not so vital. A difference in focus has even been advocated to increase depth of field over-all.

However, precision workers do not break this rule, because it causes visual stress, especially if the images are magnified onto a large screen.

No post processing repair available.

6. The exposures must be synchronised.

Nothing can move between the two exposures.

Synchrony is critical in high-speed stereoscopy, but no problem at all if the subject is static.

Remember clouds are not usually static. For cha-cha stereo, move in the direction of the clouds.

Asynchrony between movie cameras causes really strange 3D effects, like people walking in space rather than on the road. Synchrony between two video or digital movie cameras is achieved by genlock.

7. The exposure and image post-processing must be identical

The pictures must have the same contrast, brightness and sharpness.

Difference in brightness causes retinal rivalry, producing a flicker or lustre that is unpleasant to view.

Post processing for brightness difference usually means correcting for contrast and often colour balance as well.

Colours can be very different, with minor stereoscopic problems, as long as the luminance of the two images is identical. Otherwise anaglyphs would be impossible. Luminance is adjusted in LAB mode in Photoshop because this does not change colours.

For stereo pairs, other than anaglyphs, it is usual to match colours carefully.
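A minimal numpy sketch of brightness matching; this page's workflow uses Photoshop's LAB mode, so this gain-based version is only an illustrative stand-in (a uniform gain scales colour along with brightness, unlike a pure L-channel adjustment):

```python
import numpy as np

# Rec.601 luma weights for an RGB image in the 0..1 range.
def luminance(img):
    return img @ np.array([0.299, 0.587, 0.114])

# Scale the right image so its mean luminance matches the left image.
def match_brightness(left, right):
    gain = luminance(left).mean() / luminance(right).mean()
    return np.clip(right * gain, 0.0, 1.0)

left = np.full((4, 4, 3), 0.6)    # brighter exposure
right = np.full((4, 4, 3), 0.3)   # darker exposure
fixed = match_brightness(left, right)
```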

8. The stereo base must be chosen to match the pre-determined depth budget

Setting the stereo base is not necessary in a dedicated stereo camera, where it is fixed. When paired camera rigs are used, stereo base becomes a big subject.

Discussed later

It is very hard and time-consuming to correct stereo base errors during post-processing, although they can be fudged using depth maps.
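One widely quoted rule of thumb for choosing the base (the "1/30 rule", common among stereo photographers but not stated on this page) is a base of about one thirtieth of the distance to the nearest object in the scene:

```python
# The "1/30 rule" of thumb: stereo base is roughly 1/30th of the
# distance to the nearest object. The ratio is an assumption that
# varies with focal length and viewing conditions.
def stereo_base(nearest_distance_m, ratio=30):
    return nearest_distance_m / ratio

# Nearest object 2 m away -> about 67 mm base, close to the 65 mm
# human inter-ocular distance.
base_mm = stereo_base(2.0) * 1000
```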

Convergence onto near objects

[Neophyte column on the left, Expert column on the right]

The stereo image pair must be:
1. accurately aligned and then
2. converged onto the desired stereo window distance.

Convergence is essential to move objects in virtual 3D space to just behind the stereo window. (z plane correction).

It is ideal to both align and converge during image capture. Various stereoscopic rigs fail to achieve this to varying degrees, leaving some 3D distortion.

Fortunately, alignment and convergence can both be achieved, or refined, during post-processing.


The stereo window

stereo frame

Right eye --- Left eye

A cross-eye stereo pair of a cocktail glass, sitting in front of a map of New Zealand, photographed through a picture frame.

This illustrates correct stereo window.

The tumbler is slightly displaced relative to the picture frame. The right eye image shows the tumbler about central, while the left eye image has it displaced slightly to the left. That means the glass is just behind the frame.

New Zealand is displaced even more to the left in the left eye image. That means it is even deeper in the frame than the tumbler.

As a stereo photographer you need to learn how to tell the right and left images apart.

If more distant objects move to the left between the two pictures, then you are looking at the left image.

Distant objects move to the right, relative to closer objects, in the right image.

Just the same way things move when you look out of the side window of a car.

Stereoscopic 3D, using two eyes, works just the same as motion 3D. No wonder the combination of stereo and motion 3D is a powerful tool for indicating depth.

You can convince yourself the rules are correct by viewing the two pictures cross-eyed, when they will fuse into a single 3D image. Or they will if you have learned the optical gymnastics required. Looking cross-eyed at an image pair is the quickest way to tell which is left and which is right.
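The left/right rule above can be written as a tiny decision function. The x-coordinates below are hypothetical measurements of the same near and far feature in each image; in the left-eye image, distant objects sit further left relative to near objects:

```python
# Decide which image is the left-eye view from the x positions (pixels)
# of one near and one far feature measured in images A and B.
def left_image(near_a, far_a, near_b, far_b):
    rel_a = far_a - near_a   # far-feature offset relative to the near feature
    rel_b = far_b - near_b
    # Smaller (more leftward) relative offset of the far feature = left image.
    return "A" if rel_a < rel_b else "B"

# Tumbler-and-map example: in A the map sits 10 px left of the tumbler,
# in B they are aligned, so A is the left-eye image.
which = left_image(near_a=95, far_a=85, near_b=100, far_b=100)
```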


Dahlia puzzle

Here is a Pompom Dahlia stereoscopic pair for you to decide which is the left image and which the right:

Pompom Dahlia X stereo

There is no stereo window edge, so you must use the rules given above, rather than the method in the right column. This is reality. When you have taken the pictures there is no correct window edge until you have post-processed the two images, and you cannot do that without deciding which is which.


Wide screen

The real world does not have a visual frame because the wide angle view of our eyes seems to have no margins: it just fades out. Very large movie screens (Imax) attempt to present the illusion of a frameless world.

Stereoscopic pairs made for computer screens do not allow a frameless world. The edge of the screen makes a picture frame. We must make certain the frame is not broken by objects in the picture crossing through it. Everything at the picture edge must sit behind the frame as we view it in 3D. Otherwise the brain is confused by a physical impossibility. Eye strain and nausea result if this is continued for a long period, the very thing we must avoid if our stereo pictures are to be accepted. There are few rules in stereo photography, but avoiding window violations is one of them.

Very wide movie screens succeed if the action is in the center of the screen. Small window violations at the periphery of vision are not noticed, except by stereo enthusiasts who look at the edges purposely to find window violations, instead of behaving themselves and watching the central actor.

Moving the picture frame in front or behind the screen surface is a common 3D trick, but objects must still not break the shifted frame. So get the window level right first, then move the frame second. Floating windows are used extensively in movies and you will see many examples in the still pictures on this web site too.

Still stereoscopic photographers do not have the cinematographic tool of diverting the audience attention (like a magician does to disguise his tricks). Still 3D must avoid any window violation. When this rule is followed, it allows viewers to look anywhere in the picture without breaking the stereoscopic illusion.


Answer to the Dahlia problem:

The Pompom is a sphere, so the edge of the flower is further away than the more central parts.

In the left of the two images, the central parts of the flower (closer) are moved to the left compared with the flower's edge (further away).

The left image is for the right eye.
The picture is a cross-eye stereo pair (X).

If you look at the image with parallel eyes, the dahlia will turn inside out and become a bowl rather than a sphere. This is called pseudo stereoscopy.

 

Different terms for "convergence"

The term convergence comes from the 3D movie world, where toed in cameras are frequently used and converged on closer objects, just as our eyes converge on something nearby.

Alternative terms include:

  • HIT (Horizontal image translation).
    One image is moved within the frame relative to the other until the closest objects are superimposed on each other. Objects aligned on top of each other end up sitting on the screen (or print) surface when seen stereoscopically.
  • setting the stereo window depth or
  • ZPS, which is "Zero Parallax Setting," from the world of 3D image processing.
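HIT can be sketched in a few lines of numpy for a parallel-camera pair: cropping the left edge of the left image and the right edge of the right image reduces every object's parallax by the crop width, so an object that started with d pixels of disparity lands exactly on the window. This is a minimal sketch, not a complete convergence tool:

```python
import numpy as np

# Horizontal Image Translation: crop opposite edges so that an object
# with `d` pixels of parallax ends up at zero parallax (on the window).
def hit(left, right, d):
    assert d >= 0
    if d == 0:
        return left, right
    return left[:, d:], right[:, :-d]   # both keep the same width

left = np.zeros((4, 10))
right = np.zeros((4, 10))
l2, r2 = hit(left, right, 3)
```

Works unchanged for colour arrays of shape (height, width, 3), since only the width axis is sliced.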

Obscuration

Obscuration of one object by another is the most powerful optical indicator that a distant object, partially hidden by an adjacent object, must be further back.

Obscuration does not need two eyes and is so powerful that it overwhelms stereoscopic information about depth.

Stereo Window.

The edges of a 3D image act as a window, through which we view the stereo scene. The window is part of the stereo image. Window edges will obscure objects further away than the window. Usually the window frame is on the screen surface. However, the window can be moved in front or behind the screen (floating window).

Window violation

If the window frame fails to obscure any object behind it, there is a window violation (WV). When the image pair is set up during post-processing, it is essential that objects supposed to be obscured by the window frame are seen stereoscopically sitting behind the frame.

  1. The right eye can look around the left frame and must see more than the left eye.
  2. The right frame obscures more from the right eye than it does from the left.

Any window violation breaks these rules and partially destroys the stereoscopic illusion. This leads to visual stress in the audience. Audience stress and headache mean pictures get rejected, and so convergence is an essential skill for stereo-photographers.
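When a disparity map is available, the window-violation check can be automated. The sketch below flags negative disparity along the frame edges, under the assumed sign convention that disparity >= 0 means "at or behind the stereo window" (the convention is not from this page):

```python
import numpy as np

# Flag a window violation: any negative disparity (in front of the
# window, by our assumed sign convention) in the border rows/columns
# means an object pierces the frame.
def window_violation(disparity, border=2):
    edges = np.concatenate([
        disparity[:border].ravel(),  disparity[-border:].ravel(),
        disparity[:, :border].ravel(), disparity[:, -border:].ravel(),
    ])
    return bool((edges < 0).any())

d = np.zeros((8, 8))
ok = window_violation(d)    # nothing in front of the window
d[0, 3] = -1.5              # object breaks the top frame edge
bad = window_violation(d)
```

Negative disparity confined to the middle of the frame (like the violin below) would not trip this check, which matches the rule: things may come through the middle, never through the edges.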

 

Motion parallax

People and animals frequently move their heads sideways to produce motion parallax and confirm that a near object is obscuring a distant one. If an obscured object shows up from behind a near object as our head is moved, that is the clue it must lie further back.

Look out the side window of a car to see motion parallax in action. Near objects whiz backwards while distant objects seem to keep up with the car's motion.

Slowly shifting a movie camera sideways is a powerful way to show 3D without the complexity of stereoscopic filming. Just panning the movie camera around on a fixed point fails to produce 3D. A good cameraman will combine panning with sideways motion and this is a very potent combination for 3D, especially when done with a stereo camera rig.

Our inability to use head movement in a stereoscopic picture is one important reason we say stereoscopic images are only an illusion of 3D. Motion parallax is about as good as stereoscopic vision for detecting three dimensions and even uses the same part of the visual cortex.

Mounting the Stereo Cameras

Inition bolt rig

(Image from Inition, used with permission)

This sophisticated 3D camera rig has many advantages, achieved with mechanical ingenuity. You may notice a couple of possible movements that I recommend against later in this web site. This rig (or a similar one by Inition) is what I currently favour, if only I could afford it :-(

 

Inition 3D Camera rig using
Silicon Imaging SI-2K Mini cameras

A pair of cameras or camcorders are mounted side-by-side.
The extensive movements, provided for professional 3D 2K movies, can be motorised to allow remote control.

Inition Bolt Rig

SI-2K mini Camera

Being only 65mm wide, Silicon Imaging mini cameras can be mounted for stereo 3D, separated by the human interocular distance, without need for prisms or mirrors. They use 16mm movie lenses (cheaper than 35mm).

 

Stereo picture of a violinist at Howick Historical Village, in anaglyph format.

Red-cyan stereo glasses

Howick Historical Village, Violinist, 3D anaglyph

Below is the full colour version from which the anaglyph was made, in cross-eye stereo format.

Stereo Pair in cross-eye format
Anaglyph

The violin is coming through the window.
That is perfectly acceptable and in fact is preferred by naive people, who think it is not stereo unless things come at them.
The edge of the frame must not be penetrated, but things can come through the middle, just like real-life open window frames.

Note that the anaglyph colours on the right are close to the original colours, but not quite.

Skin colour is not as pink in the anaglyph as in the original pictures.
If parts of one image, when seen through the red/cyan filters, are darker than the other, the effect can be unpleasant (retinal rivalry).
In the anaglyph, red is being used to code the left eye image (red image) for stereo disparity (not really for colour), but without causing retinal rivalry.
The brightness difference caused by using only the red part of the image is overcome by adding some green and blue luminance (brightness) to the red (left) image. Not the blue and green colours themselves, you understand, just their luminance component.
Some red luminance is added to the cyan (right) image. That additional, colourless luminance, dilutes the red and cyan colours. We will learn more about these tricks when describing how to make an anaglyph.

But red colour turns to black when seen through the cyan filter over the right eye.
Bright red turns to white when seen through the red filter over the left eye, while dark red turns grey. So the red part of the anaglyph is actually black and white (monochrome).
Green plus blue (cyan) hopefully turns to black (or grey) through the left eye filter but stays a pale cyan through the right eye filter.
The red filter may leak some cyan, depending on the quality of the anaglyph goggles.

So red is never seen correctly in a red/cyan anaglyph.
That is why many stereo-photographers, like Alec, dislike anaglyphs.
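The channel mixing described above can be sketched directly: the red channel carries a luminance version of the left image (so green and blue brightness contribute, reducing retinal rivalry), while green and blue come from the right image. The exact weights are assumptions, since real anaglyph recipes vary:

```python
import numpy as np

# Minimal red-cyan anaglyph from two RGB images in the 0..1 range.
# Red channel <- luminance of the left image (Rec.601 weights, so
# green and blue brightness leak in deliberately);
# green and blue channels <- the right image unchanged.
def anaglyph(left, right):
    luma_left = left @ np.array([0.299, 0.587, 0.114])
    out = right.copy()
    out[..., 0] = luma_left
    return out

rng = np.random.default_rng(0)
left = rng.random((4, 4, 3))
right = rng.random((4, 4, 3))
ana = anaglyph(left, right)
```

Because the red channel is a monochrome rendering of the left image, pure reds in the scene cannot survive, which is exactly the complaint against anaglyphs above.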

Next Page : Practical and theoretical methods to align two cameras for stereo.

Oversize sensor with digital convergence
