ARIEL DYNAMICS WORLDWIDE   

Software GenLock

When two or more unsynchronized cameras are used to simultaneously collect video data for 3D analysis, a software algorithm can be employed to calculate the relative time offset between the cameras.

The simplest case to consider is a two-camera setup with a single point in the field of view. The two camera projection centers, along with the object point, define a plane. If there is considerable motion out of this plane, then a time-base error will translate into an increase in the residual when transforming to 3D, as will be shown below. One can interpolate between frames and find the time shift of one camera versus the other that minimizes the residual, yielding the "best" time offset.
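To make the procedure concrete, here is a minimal sketch in Python (not the actual APAS implementation; `residual_for_offset` is a hypothetical caller-supplied function that applies a fractional frame shift to one camera's digitized track, reconstructs the 3D point, and returns the summed residual):

```python
import numpy as np

def estimate_offset(residual_for_offset, search_range=1.0, step=0.01):
    """Scan candidate time shifts (in frame units) and return the one
    minimizing the summed 3D reconstruction residual."""
    shifts = np.arange(-search_range, search_range + step, step)
    residuals = [residual_for_offset(s) for s in shifts]
    return shifts[int(np.argmin(residuals))]

def interpolate_track(points_2d, shift):
    """One way to apply a fractional shift: linearly interpolate a
    per-frame 2D track (an N x 2 array) at frame index + shift."""
    n = len(points_2d)
    t = np.clip(np.arange(n) + shift, 0, n - 1)
    i = np.floor(t).astype(int)
    j = np.minimum(i + 1, n - 1)
    frac = (t - i)[:, None]
    return (1 - frac) * points_2d[i] + frac * points_2d[j]
```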

Consider the example of a ball falling to the ground, viewed by two horizontally pointing cameras. Each camera determines a ray from its principal point through the object point. In a perfect world these two rays, one for each camera, would intersect at the object. Now imagine introducing a time shift in one of the cameras: the ray for that camera would aim higher or lower than the ray for the other camera, and the two lines would no longer intersect. Instead there would be some "distance of closest approach" between the two lines. This distance is related to the residual in the 3D calculation. In this example, the greater the time offset, the greater this distance. By minimizing this distance with respect to the time shift, one can calculate the actual time difference between the unsynchronized cameras.
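The distance of closest approach between two skew lines has a simple closed form: with ray origins p1 and p2 and unit directions d1 and d2, it is |(p2 - p1) · (d1 × d2)| / |d1 × d2|. A short sketch (the function name and example geometry are illustrative only, not taken from APAS):

```python
import numpy as np

def closest_approach_distance(p1, d1, p2, d2):
    """Gap between two rays, each defined by a camera principal
    point p and a unit direction d through the object point."""
    n = np.cross(d1, d2)              # direction of the common perpendicular
    n_norm = np.linalg.norm(n)
    if n_norm < 1e-12:                # rays (nearly) parallel
        w = p2 - p1
        return np.linalg.norm(w - np.dot(w, d1) * d1)
    return abs(np.dot(p2 - p1, n)) / n_norm

# Two horizontal cameras at right angles; a slight vertical aiming
# error on the second ray opens a gap of about 0.1 units.
p1, d1 = np.array([0.0, -5.0, 1.0]), np.array([0.0, 1.0, 0.0])
d2 = np.array([1.0, 0.0, 0.02])
d2 /= np.linalg.norm(d2)
print(closest_approach_distance(p1, d1, np.array([-5.0, 0.0, 1.0]), d2))
```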

Since this method relies on minimizing the residual of a point moving out of the plane defined by the cameras and the object, any other effect with the same result will incorrectly be interpreted as a time offset between the cameras. One such effect is systematic digitizing error: for example, if one camera view were systematically digitized low and the other high, the resulting residual could be misinterpreted as a time shift. Another is camera lens distortion. For the algorithm to work successfully, the error due to the time offset between the cameras must be larger than these other contributing errors. Consider a ball dropped from a height of 2 m and videotaped at 60 Hz: the ball reaches a velocity of about 6.2 m/s, corresponding to 10.3 cm of motion between frames. The motion of the ball in 0.1 to 0.2 frame times should therefore be larger than the other errors mentioned. When this residual is summed over all frames, it is reasonable to expect the relative time offset of multiple cameras to be determined to within 0.1 frame time.
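These kinematic figures can be checked with v = sqrt(2gh). A quick sketch (using g ≈ 9.81 m/s², so the results differ slightly from the rounded values in the text):

```python
import math

g = 9.81     # gravitational acceleration, m/s^2
h = 2.0      # drop height, m
fps = 60.0   # video frame rate, Hz

v = math.sqrt(2 * g * h)       # impact velocity, ~6.3 m/s
per_frame = v / fps            # motion between frames, ~10.4 cm
print(f"velocity: {v:.2f} m/s")
print(f"motion per frame: {per_frame * 100:.1f} cm")
print(f"motion in 0.1 frame: {per_frame * 10:.2f} cm")  # ~1 cm
```

At 0.1 frame time the ball moves about 1 cm, which must exceed the combined digitizing and distortion errors for the offset to be recoverable.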

However, the APAS System is not limited to software genlock. You can use any hardware you wish to genlock your cameras. But why spend the money when you can achieve the same result with software?
