
 

 

Oversized CCD 3D Camera

by John Wattie (kiwizone) This version: March 19, 2012


Mounting stereo cameras: introduction

  1. Toed-in
  2. Parallel
  3. Horizontal sensor shift
  4. Mirror or prism stereo rigs
  5. Over-size sensors and digital convergence
  6. Cameras using digital convergence
  7. Uses for variable convergence

 

 


Basic Concept

The idea of moving the camera receptor sideways for convergence control, adjusted by a linear motor, was mentioned on the previous page. There is no reason to use mechanical receptor motion if it can be replaced by electronic control of:

  1. convergence,
  2. alignment and
  3. vertical parallax correction.

Variation in stereo base has to be mechanical (albeit motorised), since it involves moving the two cameras sideways, but the remaining stereo geometry control is electronic.

Oversize receptors in the two cameras are set up parallel, so there is no keystone distortion at any stage.

A large 2K or even 4K CCD chip is suggested, but only part of the CCD is used for the image. This portion might be:

  • HDTV size of 1920 x 1080 instead of the native 2048 x 1152 pixels (2K), or
  • 2048 x 1152 instead of 4096 x 2048 (4K).


Since the convergence adjustment on this non-mechanical version is done electronically on the oversize CCD itself, the CCD offset from the optical axis could be zero.

HDTV example

1/30 rule for stereo base would be 1920/30 = 64 pixels needed for convergence.
1/15 rule for close up stereo: 1920/15 = 128 pixels needed for convergence.
Excess pixels available for convergence = 2048 - 1920 = 128 pixels on a 2K chip.
This means even a 2K chip is big enough for HDTV with electronic sensor shift.
(This is not strictly the correct way to apply the 1/30 rule, since it gives too many pixels on a wide aspect-ratio sensor, so it is a worst-case scenario.)
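The arithmetic above can be written out as a short check, with the 2K sensor and HDTV output widths from the text as parameters. The helper name is mine, not from any camera SDK:

```python
# Worst-case pixel budget for electronic convergence on an oversize sensor.
def convergence_budget(sensor_w, output_w, rule=30):
    """Pixels needed by the 1/rule convergence rule vs. spare sensor columns."""
    needed = output_w // rule      # e.g. 1920 // 30 = 64
    spare = sensor_w - output_w    # e.g. 2048 - 1920 = 128
    return needed, spare

print(convergence_budget(2048, 1920, rule=30))  # (64, 128): 1/30 rule fits easily
print(convergence_budget(2048, 1920, rule=15))  # (128, 128): 1/15 rule just fits
```

Even the close-up 1/15 case exactly consumes the 128 spare columns of a 2K chip, which is the point of the example.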

Convergence is done by electronically moving the smaller portion of the CCD used for imaging (instead of moving the whole sensor mechanically).

It is even possible with an oversize chip to move the two image reception zones vertically, to overcome converging verticals, without the need for shift lenses.

The alignment corrections could be viewed in real time during filming, in stereo if desired.
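A minimal sketch of this electronic shift, treating each captured frame as a 2-D array and moving only the read-out window. The function and offset values are illustrative, not any camera's firmware:

```python
# Electronic "sensor shift" as a read-out window crop on an oversize frame.
def crop_window(frame, out_w, out_h, dx=0, dy=0):
    """Return an out_w x out_h window, offset (dx, dy) from centre.
    Nothing moves mechanically; only the read-out region changes."""
    full_h, full_w = len(frame), len(frame[0])
    x0 = (full_w - out_w) // 2 + dx
    y0 = (full_h - out_h) // 2 + dy
    if not (0 <= x0 <= full_w - out_w and 0 <= y0 <= full_h - out_h):
        raise ValueError("window shifted off the sensor")
    return [row[x0:x0 + out_w] for row in frame[y0:y0 + out_h]]

# 2K sensor (2048 x 1152) yielding HDTV frames (1920 x 1080).
sensor = [[(x, y) for x in range(2048)] for y in range(1152)]

# Opposite horizontal offsets converge the pair; a shared dy would
# correct vertical parallax or converging verticals.
left  = crop_window(sensor, 1920, 1080, dx=+32)
right = crop_window(sensor, 1920, 1080, dx=-32)
```

Each eye keeps the full 1920 x 1080 output while its window slides within the spare columns, so the optical axes stay parallel and no keystone is introduced.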

Big sensors mean many pixels, doubled because we are outputting stereo, so a big computer is needed, hopefully without hitting the "2K limit" (2048 x 2048).

The advantages are:

  1. No post-processing is needed to correct stereo geometry, because all corrections are made during filming.
  2. Full 16:9 aspect ratio is retained, despite using parallel cameras.
  3. No keystone distortion, despite convergence control.
  4. Since the convergence is electronic, there is no problem with jerky mechanical movements during convergence-pulling on 3D movie cameras.
  5. This control would be ideal for 3D live television broadcasts. Adjustments are made in the camera and the image is ready to broadcast as it is recorded.
  6. Vertical perspective control is also provided, with no shift lenses needed.

Disadvantages include:

  1. Oversize lenses are needed to ensure the image circle is big enough to cover the large sensor.
  2. Variable stereo base still has to be achieved mechanically, by moving the cameras sideways while keeping them parallel.
  3. Unlike mirror rigs, the stereo base cannot be less than the camera width (say 65mm). This is fine for standard images and only a problem for macro stereo, but even then it could be cured - by using mirrors!
  4. Big camera sensors mean big computers to handle the data.
 

 

Two cameras with parallel optical axes and offset receptors for convergence control

This is the same as the old Stereo Realist film camera. (Asymmetrical frustum with no keystone distortion.)

Improvements possible are:

  1. Moving the receptors sideways under linear motor control for convergence adjustment (stereo window setting).
  2. Using oversize receptors which do not move, but the small area used for image reception moves electronically.
  3. Arranging to move both the small receptor areas vertically, for perspective control.

Parallel stereo camera axes with offset CCD receptor

Cameras using digital convergence on an oversize sensor in 2010

 

 

 

Panasonic

Looking at this picture of the prototype Panasonic 3D movie camera I cannot see any correction for stereo base. That is not good, because I use base adjustments extensively. A small stereo base will not do for sports 3D TV.

Meduza

A new camera (2011) which does have variable stereo base and convergence, interchangeable lenses and even interchangeable 4K sensors. It can be controlled from an iPad or Android device: YouTube

 

The Silicon Imaging SI-2K is said to have digital convergence because it outputs directly to a Windows computer running post-processing software. But buy two of those for 3D and you are into big bucks.

There are rumours the Panasonic Full HD 3D camcorder, which is not yet available on the market, will be using the digital convergence and alignment system described here.

Panasonic Full HD 3D camcorder

The FinePix Real 3D W1 stereo camera

This is the first digital stereo camera on the market.

Victor Reijs has done experiments on the Fuji camera and found parallax variation was more complex than initially suspected.

  1. Convergence.
    The camera uses optical convergence. This is fixed. Optical convergence, as expected, causes slight keystone distortion in the right channel. Convergence is done by tilting the right lens by 2.2 degrees.
  2. HIT
    In addition there is Horizontal Image Translation (HIT) on an over-size sensor. This is variable. The variation has an automatic and manual mode.

 

 

 

So the Fuji camera is converging using two methods at once.

In the camera's automatic mode, the stereo window is placed at the focussed distance (like the Loreo beam splitter does). Since the optical convergence angle is fixed, the variable convergence must be done by HIT on the sensors.
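Under a simple pinhole model (Fuji's actual firmware is undocumented, so this is only a sketch), the HIT offset needed to place the stereo window at the focused distance follows from the standard disparity relation d = f * B / Z. The focal length in pixels here is an assumed example value; 77 mm is the published lens separation of the W1:

```python
# Pinhole-model sketch: how much HIT puts the stereo window at the
# focused distance. Illustrative only, not Fuji firmware.
def hit_pixels(focal_px, base_mm, distance_mm):
    """Total disparity (pixels) of a point at distance_mm: d = f * B / Z.
    Shifting each image by d/2 in opposite directions gives that point
    zero on-screen parallax, i.e. places the window at that distance."""
    return focal_px * base_mm / distance_mm

# Assumed values: ~1000 px focal length, 77 mm base, subject focused at 2 m.
d = hit_pixels(1000, 77, 2000)
shift_each = d / 2   # pixels of HIT per channel
```

Closer focus means more disparity, so the window-at-focus behaviour requires a variable shift, which is why it must be done by HIT on the sensor rather than by the fixed optical convergence.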

Convergence can also be done manually. However, it seems this only works on the 3D viewer on the back of the camera; still, that is great for checking the stereo image when big-convergence close-ups are taken. Convergence needs to be done again in post-processing, which is no problem with the Stereo Photo Maker program.

Uses for variable convergence

 

Having the object of interest in sharp focus and also at the convergence point is a method favoured in movies, including Avatar.

The advantage is the object of interest is on the screen surface, meaning no stereoscopic disparity, which is easy on the eyes, especially for an audience which is not trained in stereoscopic viewing. No conflict occurs between focus and convergence in the eyes of the audience.

Convergence and focus are linked, making it easier to vary them both simultaneously and keep a moving actor at zero disparity.

The object of interest has no disparity ghosting.

The director can draw attention to the main feature by having it in focus, on the screen and ghost-free, even if it is moving.

Convergence on the focus point causes a window violation, unless floating windows are used, or if it is considered acceptable to have objects overlapping the lower edge of the screen.

The animated 3D movie, Toy Story 3, does use floating windows but at times also overlaps the lower screen edge to project objects into the movie theatre space. But the projected object is usually dark, and so merges into the dark of the theatre and it takes an astute observer to realise what has happened.

Overlap of the upper screen edge is not acceptable, since we do not enjoy having a character's head split by the screen.

 

Comments

If you do not know my email address, please send comments via Flickr or Linkedin. (This avoids spam in my personal email, a big problem before.)

 

 

Jay Gharavi: 20/4/10

The hardware design and picture-processing pipeline of digital-camera-style sensors, including those with video capture capability, assume that the entire effective area of the sensor yields a single frame per capture. This means it may not be feasible to force the system to capture a frame mapped only from part of the sensor in real time. The implication is that the desired mapped frame may have to be extracted from a fully captured frame.
This may not be a disaster, but it certainly requires some buffering and processing that may reduce some of the advantages you listed.

Another point that worries me a little is the offset axis of the lens (though the offset may not be large enough to cause trouble). Lenses with large image circles are necessarily more bulky and complicated, adversely affecting the design choices (zoom range, max aperture, minimum focus, etc.) and bringing into question their usefulness as the normal lens.

Mathew Orman

An over-sized sensor means over-sized lens distortions, which makes toe-in a superior method, since only barrel and keystone correction is required.
In the off-axis method there are asymmetric lens-related distortions like chromatic aberration, vignetting, decreased DOF, etc.
Still, electronic convergence will most likely win over mechanical, as it produces more profits for manufacturers.

Mathew Orman

For state of the art stereoscopic vision systems with: eyestrain free, distortion free and realistic immersion stereoscopic experience contact me
at:
http://www.tyrell-innovations-usa.com/

Stanslav Evstiev

I'm not sure I got the whole idea, because for HDTV cameras a 36mm sensor is now oversized, while for 36mm a 6x6 would be, etc... but here is something as a quick overview:

Well, there are some nice points here. But custom sensors are expensive :-)
Still if we just keep it on paper - let say we are aiming at a space of 36 mm sensor. If You think about two areas of around 2/3" size (commonly used in HDTV equipment) You can place four of these areas side by side on a 36mm, meaning that if You want to use just two - they can be moved by their FULL lenght in half the sensor. The shift is 1 pixel, so it will not be noticed (that's 0.05% of the size). Well for having 2K for each 2/3" we need 8K, but only horizontally - meaning that we can pass by having 8000 x 1500 (or for the math sake - 2000 :-) - resulting in 16Megapixel sensor within the area of 36mm. The sensor density is around 200 per milimeter which is less than the current Canon 7D (550D) sensors (they have 232 photo sensors per milimeter and are quite good at capturing light). So that can already be possible, but not made.
But if anyone need something to test right now he can just use a Canon 5D MKII for experimental rig. There You cannot have so much area - but still if You separate two 2K rectangles from the sensor You can move them in the rest 1500 pixels - so each area can be moved by 750 pixels left/right. Simple test could be done by just taking pictures - not shooting video, so just the optical part is in question here :-)

Nice one and keep going!

Phil Streather

"If the cameras are thin (such as the 65mm SI-2K Mini) there is no need for a mirror box because the stereo base (I.O.) can be set at the normal 65mm."
This is one of the biggest and most dangerous myths in 3D. Anyone who thinks that 65mm is an optimum i/o for shooting 3D should not be allowed to shoot 3D. Any film could use from 5mm to 5ft depending on the shot. Sorry to be tough, but these misunderstandings and misrepresentations must be nipped in the bud before newbies to the game take such nonsense at face value and think they can make good 3D with a camera pair that can only achieve i/os of 2.5" or greater. Or, heaven forbid, buy the Panasonic 3D camera (1/4" chips indeed!!) and wave that about in anger!

Oversize!? The chips are 1/4", this is not enough to qualify as HD for ANY channel (Sky, nat Geo, Discovery), even if you use all of the chip. If there is extra room on a 1/4" chip then how much is left? Now, the EX3D camera (with 1/2" chips and HDSDI out at 10 bit off the sensor) is a much more interesting proposition - particularly, as I hear, it will use interchangeable C mount lenses and thus be able to get a min i/o of 1.5" as opposed to the 2.5" of the Panasonic

For a professional, full-size movie, you are of course correct. For 3DTV or smaller screens, bigger inter-axials are needed. But for macro work, even on TV, smaller interaxials are required. It all depends on how the 3D is to be viewed. For hyperstereo, even 5 feet can be rather too small with distant scenes! John Wattie.

Avenir Sniatkov

I'd say that electronic transformation of the picture information captured by the camera sensor could be thoughtfully done in post, or at the TV station. The more functions are built into the camera, the less interactive control you have, and the more consumer (vs. professional) the camera appears.
The idea itself is bright, I think. All you describe can easily be incorporated in a 3D animation software camera rig (and it is often done that way, I believe).

Peter Wilson

Parallel cameras with digital post-processing work fine, with the added benefit that it's extremely easy to align the rigs with a checkerboard.
For a 3D production workshop I ran at Pinewood Studios a couple of years ago, I had a friend make a monitoring box, now called the Stereobrain, to drive micropolarised displays without horizontal resolution loss and invert the mirrored channel for recording and display.
I also had Snell and Wilcox (now Snell) program their Kahuna production switcher DVEs to simulate toe-in by using complementary rotating perspective. This works fine with parallel cameras with an interaxial around the normal 60-70mm. The latest version shown at NAB has this ability and can invert mirrored images. The advanced Snell architecture can process many channel pairs live by invisibly switching resources behind the scenes.
The Quantel Pablo can also do this, and it is quite entrenched in the high-end Hollywood post houses.
I am looking at lower-cost solutions for live multichannel production.
Changing the interaxial will directly affect scale and volume. Smaller works well with model shots and small objects; wider can extend the stereo effect but, if overblown, makes the objects look ridiculous. An extremely wide interaxial will affect object volume and turn a soccer ball, end on, into a rugby ball.
For TV, particularly sport, the mechanical rigs will need to get simpler, and digital processing with some level of automation will take over.
A secondary problem with the small-chip cameras is depth of field, which, if used with a lot of toe-in, can cause backgrounds which are impossible to fuse.

 

Introduction to Mounting cameras for stereo

Practical and theoretical stereo rigs ( more details than given on this page )
