
Digital Stereo Camera Rigs:
Existing and Theoretical

By John Wattie

Five ways to align two cameras and introduction to post-processing

1 Toed-in

2 Parallel

3 Horizontal sensor shift

4 Mirror or prism stereo rigs
How to set up a cine mirror rig

5 Over-size sensors and digital convergence.

 


Toed-in optical axes

Toed-in cameras

1. Toed-in optical axes.

Both cameras tilt towards the same foreground object.

Toe-in sets the convergence at the time of taking the picture. This avoids a window violation (wv) because the cameras converge at the distance of the stereo window.

Objects can sit in front of the stereo window, without causing eye pain, if they avoid the areas marked as visible to only one camera in the above diagram. Otherwise you have a "window violation" (wv).

Objects beyond the stereo window that are seen by only one camera lie in "monocular areas." These are inevitable and commentators rarely object to them, but it is better to keep all objects of interest out of monocular areas.

Unfortunately, toe-in causes keystone distortion because objects are seen obliquely and parts closer to the camera are magnified more than parts further away. The magnification differences at the two sides of the pictures are exactly opposite in the two views: the left camera magnifies the left edge while the right camera enlarges the right edge. The result is a pain in the eyes at the edges, because the mismatched vertical sizes make the image pair difficult to fuse.

Landscape aspect ratio

Modern 3D movies, computer screens and TV have a horizontal, rectangular shape, not the old square or portrait aspect ratio of earlier stereoscopic images.

Toe-in keeps a landscape aspect ratio, because nothing needs to be cropped off the sides of the images during post processing.

Toe-in angling of the cameras must occur around the front nodal point of the lenses, just as a panorama must rotate around that point. Otherwise the stereo base changes at the same time as the stereo window distance, and we want to vary those two independently. Unfortunately, changing the focus also moves the nodal point, leaving three things for obsessive cameramen to worry about. Hmm: how about going with parallel geometry?
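
For a feel of how small the angles involved are, here is a quick sketch in Python (my own illustration, assuming a symmetric rig and simple trigonometry; the 65 mm base and 2 m window distance are only example numbers):

    # Rough sketch: the toe-in angle each camera needs so that both optical
    # axes cross at the intended stereo window distance.
    # Assumes a symmetric rig rotating about the lens front nodal points.
    import math

    def toe_in_angle_deg(stereo_base_mm: float, window_distance_mm: float) -> float:
        """Angle each camera is rotated inwards, in degrees."""
        half_base = stereo_base_mm / 2.0
        return math.degrees(math.atan2(half_base, window_distance_mm))

    # Example: 65 mm base, window 2 m away -> about 0.93 degrees per camera.
    print(toe_in_angle_deg(65, 2000))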

Beam splitters

These usually work with a toed-in axis. The latest Loreo 3D 9005 lens in a cap (beam splitter) for digital SLR cameras combines parallax compensation with focus, so that the focused object always has ZPD (zero parallax displacement). However, it produces portrait format images, since both pictures must fit side by side within the single frame of an SLR.

Stereo Base

The distance between the cameras can be changed and the lenses zoomed while keeping the degree of toe-in constant. This maneuver follows the PePax principle (see later).

 

 

 

Keystone distortion

Post processing is essential for toed-in images, to correct the keystone distortion. The correction is a horizontal perspective adjustment on each image, which restores objects at the edges of the image pair to identical heights.

In stereo movies, keystone correction is done frame by frame, but can be automatic (as in StereoPhotoMaker software: SPM). The correction is time consuming and many workers say it is better to shoot parallel.

If the lenses have barrel distortion, it is necessary to correct that first in software.

In Photoshop, perspective correction is a Transform function. For keystone distortion the perspective correction is applied horizontally, and is best shared between the two images, so that each takes half the correction (details later).
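
As a rough sketch of this kind of horizontal perspective adjustment (my own illustration using OpenCV, not the Photoshop workflow; the 2% figure and the file names are made up), each image takes half of the correction, applied to its over-magnified outer edge:

    # Minimal sketch: share a horizontal perspective ("keystone") correction
    # between the two images. The vertical size difference at the outer edges
    # is assumed to have been measured already (hypothetical 2% here).
    import cv2
    import numpy as np

    def correct_keystone(img, edge_shrink, shrink_left_edge):
        """Shrink one vertical edge of the image by edge_shrink (0.02 = 2%)."""
        h, w = img.shape[:2]
        d = h * edge_shrink / 2.0          # pixels trimmed top and bottom at that edge
        if shrink_left_edge:
            dst = np.float32([[0, d], [w, 0], [w, h], [0, h - d]])
        else:
            dst = np.float32([[0, 0], [w, d], [w, h - d], [0, h]])
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        M = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(img, M, (w, h))

    left = cv2.imread("left.jpg")      # placeholder file names
    right = cv2.imread("right.jpg")
    # The left camera enlarges its left edge, the right camera its right edge,
    # so each image is squeezed on the opposite edge, half the correction each.
    left_fixed = correct_keystone(left, 0.02, shrink_left_edge=True)
    right_fixed = correct_keystone(right, 0.02, shrink_left_edge=False)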

Focus and stereo base (inter-ocular, inter-axial, separation) are impossible to fix properly in post processing, so it makes sense to get them right in the camera. (Depth maps in post processing are not as easy as getting it right while taking the image!)

Toed-in Movie Cameras


Toed-in cameras are common when taking movies, and the convergence can be varied during the shot by an assistant, the "convergence puller," who changes the degree of toe-in to keep the main actor at constant stereo depth.

Another assistant is the "focus puller" who keeps the two cameras focused on the main actor.

In advanced rigs there may even be an "IO&C puller," also called "Con/IO" (Inter-Ocular and Convergence puller: two jobs one person)

Disparity Tagger is a computer from Binocle which colour codes disparity live, during filming, and adjusts the camera stereo base to correct for out of range parallax. The parallax limit can be set for different screen sizes. Amateur stereoscopists must learn to do this without help from a fast computer!

Even cameras working through a mirror box, a rig that usually shoots parallel, may be optically toed in for convergence correction.

Depth of field

Large movie cameras with full size sensors often aim to reduce depth of field as a tool to direct the eyes of the audience onto the active subject. Everything else goes out of focus. (Extensively used in Avatar)

Low depth of field works well as a dramatic tool in a movie but fails badly in a still stereoscopic picture. The depth of field must be deep, because the audience for a static view like to explore all the depth planes in the image and are unhappy if there are blurred bits. Documentary movies in 3D are often made sharp all over because the audience will explore the scenery in unpredictable ways.

Live 3D broadcast TV

This does not have the luxury of post processing, so camera corrections for alignment and convergence are preferred. There are many traps in live 3D TV sports broadcasting as described here

 

 

Stereo Camera Rig with Parallel Optical Axes

Parallel Cameras with infinity stereo window

 

2. Parallel optical axes.

This is the main stereo rig used by stereo photographers at present.

Both cameras look straight ahead, aimed at infinity objects. If the stereo pair is mounted just as it comes from the cameras, the stereo window is at infinity and everything else lies in front of the window, which is a massive stereo window violation.

Digital stereo cameras, with both channels looking straight ahead and parallel, may allow convergence correction in the camera by cropping off the insides of both images, reducing the horizontal width in the process. This is done while viewing the image, in stereo, on the back of the camera. (Fuji).

The photographer must remember, while taking the stereo pair with parallel cameras, that the inner parts of both pictures are going to be cropped off during convergence.

Loss of wide aspect ratio

The aspect ratio is not preserved with a parallel rig and landscape format becomes square or even portrait format during post processing for "convergence" (HIT).

The image height can also be reduced in post processing to regain a wide aspect ratio lost during convergence. This means the original picture must be over-framed to allow for all this cropping.
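
To get a feel for how much width the convergence crop costs, here is a back-of-envelope estimate (my own sketch using a thin-lens approximation; the 65 mm base, 35 mm lens and 36 mm sensor width are only example numbers):

    # Rough estimate of how much image width HIT removes with a parallel rig.
    def width_lost_fraction(base_mm, focal_mm, window_mm, sensor_width_mm):
        """Fraction of the frame width cropped away when the stereo window
        is set at window_mm by HIT (thin-lens approximation)."""
        disparity_mm = focal_mm * base_mm / window_mm   # on-sensor parallax of the window plane
        return disparity_mm / sensor_width_mm

    for window in (500, 1000, 2000, 5000):              # window distance in mm
        print(window, round(width_lost_fraction(65, 35, window, 36), 3))
    # Roughly 13% of the width goes at 0.5 m, but only about 1% at 5 m.

The closer the stereo window, the bigger the crop, which is why over-framing matters most for close subjects.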

Practical example showing how the stereo window was set on a macro subject, where image loss can be severe.


 

See Waterman Polyhedra: a good learning exercise for understanding the concepts involved in post processing a stereo pair. Polyhedra on a sphere are presented in stereo anaglyph format, with real-time controls to change how the polyhedra are seen in 3D space.

  • Separation adjustment will vary convergence and change the size illusion, although the image actually stays the same size.
  • Roundness adjustment will change stereo base length, which eventually introduces squeeze and compression.


Convergence correction

Parallel twin camera rigs that do not have parallax compensation built in must have the closest objects superimposed during post-processing, by HIT (Horizontal Image Translation) and cropping, until the desired stereo window is at the screen surface. This may be done by automatic batch processing of a whole folder of images.

HIT (Horizontal Image Translation) is often done manually, with the left and right images on separate layers, in an image processing program such as Photoshop (PS) or StereoPhotoMaker (SPM).

The top image is made transparent, or blended as a difference image (PS), or converted to anaglyph (SPM). Any of these methods lets both images be seen simultaneously.

As the image pair came out of the cameras, the most distant objects are superimposed on themselves and nearest objects are widely separated. This is exactly how you do not want the stereo pair to be, because the stereo window is at infinity, behind everything, when it should be in front of everything.

The images are moved relative to each other, in the horizontal or x axis (HIT), until the closest objects are superimposed on themselves. Getting nearest objects superimposed at the edges of the combined image is always the first step and floating windows, if they are to be used, are set up after that.
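
Expressed as a minimal sketch in Python (my own illustration with plain numpy arrays, not SPM's code; in practice the disparity figure is measured from the nearest object in the raw pair):

    # Sketch of HIT on a pair from parallel cameras: slide the images apart
    # until the nearest object is superimposed, then keep only the columns
    # the two frames still share.
    import numpy as np

    def hit_and_crop(left: np.ndarray, right: np.ndarray, near_disparity_px: int):
        """near_disparity_px: horizontal separation, in pixels, of the nearest
        object in the pair as it came out of the cameras."""
        d = near_disparity_px
        left_cropped = left[:, d:]                       # drop the first d columns of the left frame
        right_cropped = right[:, : right.shape[1] - d]   # drop the last d columns of the right frame
        return left_cropped, right_cropped

    # Example: if the closest object sits 120 px apart in the raw pair,
    # hit_and_crop(left_img, right_img, 120) puts it on the stereo window.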

Anaglyph is excellent for doing HIT because you can quickly see from the colours if a near object is not on the screen plane. Objects on the screen plane no longer have a coloured fringe to them. You can also quickly see if objects are rotated or not the same size or have keystone distortion (which shows up as different magnification on each side of the image). These defects are not always so obvious when the anaglyph is seen through the goggles, in stereo, by an experienced 3D viewer who has learned to correct such defects in his brain. It is always wise to check the anaglyph, without goggles, before declaring it is correctly aligned and converged.
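
A red/cyan check image of this kind can be knocked up in a few lines (a generic sketch assuming 8-bit RGB numpy arrays and red for the left eye, not SPM's internals):

    # Quick red/cyan anaglyph preview: objects sitting on the screen plane
    # show no coloured fringes, so mis-converged near objects are easy to
    # spot before putting the goggles on.
    import numpy as np

    def anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
        """Red channel from the left image, green and blue from the right."""
        out = right_rgb.copy()
        out[..., 0] = left_rgb[..., 0]
        return out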

After HIT, the most distant objects are widely separated, which is the opposite of how they were after filming with parallel cameras.

Correct Stereo Window
Cross Eye View above
Parallel view below

Correction is often also made at this stage for vertical misalignment, rotation, unequal size, perspective errors and colour differences.

Once all this is done, the images are ready for subsequent special effects (CGI, VFX) and floating windows.

Cropping

Cropping the stereo pair down by cutting off redundant parts may limit subsequent choices in post processing, such as floating windows and aspect ratio adjustment, so it is often good to keep a set of aligned pairs which have not been converged or cropped. Cropping sets the stereo window. Setting the window by removing the parts which cause window violations may narrow a landscape aspect ratio towards a square or even portrait shape.

If the image height is cropped down to make the picture wider, remember that when the picture is resized to full screen or monitor height, this is equivalent to a digital zoom in (mild telephoto).

Over framing

To allow for these sorts of changes, it is often wise to over-frame the stereo image. In other words do not always zoom to fill the frame while taking the picture. This move will leave extra picture space to allow aspect ratio changes and advanced window effects in post processing.

See later for through-the-window and differential floating window effects, for which the aligned images may need to be kept as full size separate files, with convergence only done once the desired effect is achieved. Batch processing for alignment is good, but automatic convergence may mess up advanced window alignment later.

Parallel cameras with infinity stereo window
Parallel cameras lose width

The closer the stereo window is to the parallel cameras,
the more image width is lost when the stereo window is set during post processing.


Parallel cameras with sensor shifted sideways, away from the optical axis.

The image width is retained because
very little or nothing is cut off during post processing

Moveable sensors for varying the convergence
Parallel cameras with sensor off-set from the optical axis

 

3. Horizontal sensor shift.


Both cameras look straight ahead (parallel), but the image sensors are moved outwards. This keeps the two sensors in the same plane while converging the view.

  1. Because the sensors are not tilted, there is no keystone distortion.
  2. Because very little is cut off during post-processing (HIT) most of the aspect ratio of the receptor is retained.
  3. If the sensor shift is adjustable, even better, because no pixels are lost during post processing.

Sensor shift is the best method for converging parallel camera stereo rigs.

Variable sensor shift is HIT done in the camera and not during post processing.

(Horizontal lens shift has a similar effect, but cannot be used, because that changes the stereo base.)
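
As a rough feel for the numbers (my own thin-lens sketch, not a manufacturer's specification), the outward shift each sensor needs is tiny:

    # Back-of-envelope: how far each sensor shifts outwards to converge on a
    # chosen window distance, doing in the camera what HIT does on the files.
    def sensor_shift_mm(stereo_base_mm, focal_mm, window_mm):
        """Outward shift of EACH sensor; the pair together move by twice this,
        which equals the HIT displacement otherwise needed in post processing."""
        image_distance = focal_mm * window_mm / (window_mm - focal_mm)  # thin lens
        return (stereo_base_mm / 2) * image_distance / window_mm

    # Example: 65 mm base, 35 mm lens, window at 2 m -> about 0.58 mm per sensor.
    print(round(sensor_shift_mm(65, 35, 2000), 2))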

Built in stereo window

Dedicated film stereo cameras like the Stereo Realist used this method, by keeping the two image frames further apart than the lens separation. This was called "built in stereo window." The convergence was fine for objects 2 meters away, but could not be adjusted for other distances because it was not convenient to move the film gate when using roll film.

Double depth slide mounts

Stereo Realist slides could have further window correction by using slide mounts with narrow apertures, aligned to cut off the inner parts of the image pairs. This is the same as digital HIT, described above, but was combined with reduced window separation to float the stereo window forwards.

We will discuss the digital version of floating windows later.

 

 

 

Adjustable sensor shift

Currently I can find no reference to any camera using adjustable sensor position, but I bet some clever manufacturer is working on it!

Horizontal sensor shift in a digital stereo rig should be adjustable, so that convergence can be varied to suit the subject matter.

Image stabilisation is done in some digital cameras by shifting an oversized sensor with a linear motor.

Variable convergence would use the same ability to move the sensor in relation to the camera optical axis.

Because image stabilisation changes both convergence and vertical alignment, it should be turned off for stereo work, in case it gets out of phase between the two cameras.

For movies, precise sensor shift by a stepper motor in the horizontal (x) direction only, under the control of the camera man (or the convergence puller), is very desirable and avoids a lot of post processing corrections. The aspect ratio is not changed (very important at 16:9 HD format) and there is no keystone distortion.

Vertical sensor shift

Vertical shift of one image only is provided in stereo cameras, such as the Fuji, in case one of the channels is slightly out of alignment with the other.
(Pitch correction).

Vertical shift of both images simultaneously will be discussed later under cameras with electronic alignment and convergence control.

Window violation versus
Monoscopic areas

In the top diagram, objects seen by only one camera in front of the stereo window cause a window violation.

Objects seen by only one camera beyond the window are monoscopic (2D). Monoscopic areas also occur within the picture, when a close object obscures a more distant one (obscuration).

There is argument about how objectionable monoscopic areas are, since they are an essential part of a 3D scene. In normal life they are resolved by moving your head, which is impossible with a stereo image. They can only be reduced by keeping the depth budget low (defined later). If you align the images for the stereo window (HIT) but do not crop afterwards, a wide image is retained but the outer parts are not in stereo. Some folks advocate this for cross-eye viewing of uncropped stereo pairs to give a more immersive experience. Personally I prefer proper cropping of a wide format pair to have full stereo, full width.

(See 3D Big Pairs)

Stereo window continuity

In movies and to a lesser extent in stereo slide shows it is a pain in the eyes if the stereo convergence changes rapidly so that objects seem to dance back and forth in the stereo window. Movies like Avatar often keep convergence centered on the actor who is speaking. If you take the 3D glasses off, the active actor is pin sharp and does not have a double image. In a slide show it is helpful to keep the main object converged which also reduces ghosting.

Some stereoscopists prefer to set convergence during post processing, because at the time of shooting it is not always clear which scenes will be edited together before and after, and a mismatch may cause a convergence jump. Post processing of convergence is possible if over-framing is used during filming, giving ample room for a HIT move to correct depth continuity.

In live TV it is better to have convergence done in the camera, without over-framing and this is where a moving sensor really comes into its own.

Special Case: Moveable sensors when magnification is 1:1

 

 

 

Mag one

Stereo Macro Photography with moving sensors

As the sensors are moved out from the optical axes, the stereo window gets closer to the cameras.

The lenses have to be racked out to keep the window in focus. When the distance from the lens to the sensor equals the distance from the lens to the stereo window, magnification is 1:1.

 

Special Case:
Magnification 1:1

At 1:1 magnification, the stereo window is the same size as the sensor.

The advantages of macro stereo photography with moving sensors are that no HIT is needed during post processing and the sensor aspect ratio is maintained.

Wide Screen Macro: excellent!

At the moment macro stereo is often done with a single camera moved parallel between the two exposures. The sensors cannot be moved in current cameras. At 1:1 magnification, half the pixels on each sensor are lost during window adjustment. This makes it more difficult to arrange a wide aspect ratio in macro 3D, unless over framing is done. As we will see, over-framing in macro 3D has another big advantage: it increases depth of field.

Digital camera sensors are very small and 1:1 is big magnification, once frame magnification is added in. Don't worry, we will clarify all this in the digital macro-photography pages.

Oh no: algebra

(F+E)/N = M

When M = 1:1,
E=F
F + E = N
and the sensors are separated by one sensor width.

F : focal length
E: extension
N: distance to nearest object and stereo window
M: Magnification.
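
A quick numeric check of the formula, with my own example values:

    # With a 50 mm lens racked out by a 50 mm extension, the nearest object
    # and stereo window sit at 100 mm and the magnification is exactly 1:1.
    F, E, N = 50.0, 50.0, 100.0   # mm
    M = (F + E) / N
    print(M)                      # 1.0, i.e. 1:1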

 


Stereo camera mirror rigs

3D Cine Camera rig using two Sony cameras and a Mirror Box on a Steadicam

3D Movie Making [X stereo 1080p]

Karl Schodt describes how to
set up a cine mirror rig

4. Mirror or prism stereo rigs

Half silvered mirror rigs are commonly used for 3D movies, because professional movie cameras are too big to set up parallel, with a 65mm or less stereo base, any other way.

Mirror rigs have problems with dust on the mirrors, light leaks, reflections, ghosting and a long time spent aligning the cameras before the shoot can begin. Less than half the light reaches each camera, so very bright lighting is essential (which gets hot and is liable to blow fuses).

Mirror rigs are very good for macro stereo.

A host of different mirror rigs are so well reviewed by
Donald E Simanek
that I really have nothing to add! Here is his ingenious prism rig for macro 3D on the popular Fuji W1 digital stereo camera: Fuji macro

Superb high speed stereo insects and water figures, with discussion of the toed-in set-ups, which eventually evolved to front surfaced mirrors:
Frans (fotoopa)

John Hart mirror rig: John built mirror rigs years ago for macro stereo.

Jim Metcalf has assembled a mirror rig for macro photography and his Flickr site shows the results

 

 


 

5. Over-size sensors with
digital convergence and
alignment correction

Discussed on the next page.

Link to digital convergence

Introduction to mounting stereo cameras: Previous page.

Next Page

Stereo Contents (1st edition)

 

Comments

Frans (fotoopa)


I have read your nice explanation again. It prompts me to review my mirror calibration and to measure the parameters.

The picture frame now used for my in-flight insects stays at 100mm. I use a combination of toe-in and parallel settings. The current stereo base is about 32mm (the minimum possible value, measured at the mirror position with the AF 105mm macro lens). At the focus point the 2 cameras have 15mm of space; the rest (32-15 = 17mm) is the toe-in value. This gives me a keystone error, but if the background is black or blurred by the extra out-of-focus depth, the keystone error can be disregarded. I have used this compromise to limit the extra crop of 32mm from the 100mm picture frame.

With high-speed macro and the current shutter lag of 58 msec, most insects are too far off centre or out of the frame. Setting the 2 cameras purely parallel, you lose an extra 17mm, and this again results in missing the insects, or you have to set the picture frame at 100+17 = 117mm for the same result. But 117mm gives fewer pixels for small insects, due to the heavy crop.

But reading your articles now, I believe I will test the purely parallel method to see if the results with backgrounds are better, and set the picture frame to 120mm (after the crop needed for the 32mm stereo base, this gives me the same net picture without the keystone effect).

Setting the mirrors again takes me half a day, but I believe it will give me a valuable test. The disadvantage is the lower resolution for smaller insects. Once the setup is adjusted I have to work with it for several days, due to the time-consuming adjustment of cameras and mirrors.

Once I have a setting, I can make an action in Photoshop to normalise the 2 cameras to the same pixel density (D300 - D200) and to crop the picture correctly for the parallel camera setting. There is no need to do this in SPM because the setup stays fixed, so all values are always the same.

In high-speed macro the net pixels on the frame are a very important factor. Most web sites neglect this element. The shutter lag is also an important factor, because of the delay it introduces.

But just now my first results of the insects in-flight are not too bad.

 

 

 

 
