ARIEL DYNAMICS WORLDWIDE   


 

Panning


Hello Andrew,

Thank you for your message.  The APAS system supports two different methods
for the panning cameras.  Both methods are used for 3-D analysis.  The
original method consisted of a panning head that mounted to the tripod
(between the camcorder and the tripod).  This panning head had a cable that
connected to the character generator port of the camcorder and was used to
superimpose a horizontal line on the video image.  The length of this
horizontal line was proportional to the panning angle of the camera.  During
the digitizing process, instead of digitizing the "fixed point" (as with a
stationary camera) the user was required to digitize the endpoint on the
"paning bar."  Information on this algorithm was first presented at the
1993 ISB Congress in Paris.  The reference information is listed below.

   Stivers, K.A.; Ariel, G.B.; Vorobiev, A.; Penny, M.A.; Gouskov, A.;
Yakunin, N.; "Photogrammetric Transformation With Panning"; XIV ISB
Congress, Paris, France, July 4-8, 1993.
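
As a purely illustrative sketch (Python, with hypothetical names and
made-up calibration numbers; not the APAS implementation), the
proportionality described above means the pan angle can be recovered from
the digitized bar length with a single scale factor obtained in a one-time
calibration:

def pan_angle_from_bar(bar_length_px, ref_length_px, ref_angle_deg):
    """Pan angle implied by a digitized panning-bar length.

    ref_length_px and ref_angle_deg come from a one-time calibration in
    which the camera is panned through a known angle and the resulting
    bar length is recorded.
    """
    degrees_per_pixel = ref_angle_deg / ref_length_px
    return degrees_per_pixel * bar_length_px

# Example: if a 30-degree pan produced a 240-pixel bar, a 160-pixel bar
# implies a pan of about 20 degrees.
print(pan_angle_from_bar(160.0, ref_length_px=240.0, ref_angle_deg=30.0))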

While the "panning head" method was very functional, Ariel Dynamics research
and development improved on the panning method and eliminated the need for
the panning head hardware.  The new algorithm handles this task entirely
within the software, thus allowing any camera to be used for the panning.
The software algorithm requires that there be two calibration cubes.  Each
cube must have a minimum of 8 control points (though 12 or more are highly
recommended) and ALL points are still measured relative to a single origin.
In essence, we are telling the software that there is one very large
calibration cube.  In between the two calibration fixtures, we use "panning
points" instead of the fixed point.  The user has the option of specifying
(and then digitizing) any of the panning points as they come into and go out
of the field of view.
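
To make the "one very large calibration cube" idea concrete, here is a
rough Python sketch (hypothetical coordinates and helper names; not the
APAS code) that merges the control points of two cubes, both measured from
the same origin, into one set suitable for a standard 11-parameter DLT
calibration:

import numpy as np

def dlt_calibrate(world_xyz, image_uv):
    """Least-squares solution for the 11 standard DLT parameters of one
    camera, given N >= 6 control points (world_xyz, shape (N, 3)) and
    their digitized image coordinates (image_uv, shape (N, 2))."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world_xyz, image_uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.extend([u, v])
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return L

# Two 1 m cubes measured from a single origin, the second shifted 6 m down
# the runway -- together they behave as one large calibration "cube".
cube = np.array([[x, y, z] for x in (0.0, 1.0)
                           for y in (0.0, 1.0)
                           for z in (0.0, 1.0)])
control_points = np.vstack([cube, cube + [6.0, 0.0, 0.0]])  # 16 points

# image_uv would come from digitizing these 16 points in the video:
# dlt_params = dlt_calibrate(control_points, image_uv)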

Additional information on the panning procedures is listed in the pull-down
help menus of the Digitize software module.  Open the Digitize module and
select HELP, INDEX, PANNING CAMERAS to access this.

I hope this information is helpful.

Sincerely,

John Probe
Ariel Dynamics, Inc.
Email:  ARIEL1@ix.netcom.com



----- Original Message -----
From: "Andrew Lyttle" <ALyttle@wais.org.au>
To: "'Ariel Dynamics'" <ariel1@ix.netcom.com>
Sent: Sunday, July 22, 2001 5:57 PM
Subject: RE: 2D Panning Algorithm Required


> Hi Gideon,
> Thanks for the reply to my posting on Biomech-L.  I am looking for the
> algorithm and calibration method you use for your panning camera and
> instrumented tripod.  We will not be commercializing the system and already
> have an instrumented tripod and most of the software written for data
> collection so we are just after the most accurate way of calibrating the
> run-way space.  We currently have the latest version of the APAS system at
> WAIS but I could not find any details in the manual on the algorithm.  Any
> help you could provide would be greatly appreciated.
> Regards,
> Andrew Lyttle
>
> Andrew Lyttle
> Sports Biomechanist
> Western Australian Institute of Sport
> Stephenson Ave, Mt Claremont WA 6910
>
> Tel:  (08) 9387 8166
> Fax: (08) 9383 7344
> Email: alyttle@wais.org.au
>
>
> -----Original Message-----
> From: Ariel Dynamics [mailto:ariel1@ix.netcom.com]
> Sent: Saturday, 21 July 2001 0:36
> To: Andrew Lyttle
> Subject: Re: 2D Panning Algorithm Required
>
>
> Check at:
> /
>
>
> ----- Original Message -----
> From: "Andrew Lyttle" <ALyttle@WAIS.ORG.AU>
> To: <BIOMCH-L@NIC.SURFNET.NL>
> Sent: Friday, July 20, 2001 12:00 AM
> Subject: 2D Panning Algorithm Required
>
>
> > I am in the process of developing a system to determine a T&F athlete's
> > foot position along the run-way using a single panning camera.  The system
> > would be used for competition analysis and hence will be non-invasive in
> > nature (ie. can not have any markings on the runway during competition).
> > The system will have to be able to accurately calculate the foot position
> > along the length of the runway irrespective of where the foot lands along
> > the width of the run-way.  Panning angle information would be available via
> > a potentiometer in the tripod which can be pre-calibrated with markers on
> > the runway prior to competition.  I am looking for a 2D algorithm or method
> > of calibration to define the object space along the length of the runway so
> > that the angle information from the tripod and digitised coordinates of the
> > foot can be combined with the previously collected calibration file.  The
> > only 2D panning algorithm that I can find is from Chow, J. (International
> > Journal of Sport Biomechanics: 1987, Vol 3. pp.110-127), although this
> > requires markers along the runway during competition.  I am also aware of
> > systems using an instrumented tripod to provide real-time analysis (such as
> > used in swimming at the Australian Institute of Sport), however, these
> > require the athlete to be directly in the centre of the screen during the
> > whole pan.  I have also searched through the Biomech-L archives for 2D
> > panning algorithms with no success.  Any assistance would be greatly
> > appreciated.  The address for replies is alyttle@wais.org.au and I will post
> > a summary of replies.
> > Many Thanks, Andrew Lyttle
> >
> > Andrew Lyttle
> > Sports Biomechanist
> > Western Australian Institute of Sport
> > Stephenson Ave, Mt Claremont WA 6910
> >
> > Tel:  (08) 9387 8166
> > Fax: (08) 9383 7344
> > Email: alyttle@wais.org.au
> >


Dear Jim,
John Probe, director of technical support for Ariel Dynamics, has used
the panning and was involved in the implementation of the panning
software module, so he could answer your questions. He provided my
technical training on panning when I was in California.

I have been using my Hi8 camcorder for the panning unit and a
Panasonic VHS AG195 camera for the stationary camera view. I manually
panned the Hi8 for the high jumping and hurdling using a fluid tripod
head. There is on the market a Sony camera used in distance education
which will automatically track and pan/tilt on the subject, but I
have not had the money to purchase one for the lab yet. My cameras are
about 20 meters from the plane of movement and the panning cube
setup is about 20 ft long with a cube on each end and about 50
control points visible. Only 6-12 points per cube are necessary,
and I have about 6-8 points on the panning pole.  The distances between
the known markers are used in calculating the positions of the
cameras in 3D space, and then the DLT transforms the coordinate
information. Typically a 30-degree separation is needed when using
the two-stationary-camera DLT, but I am not sure what the requirements
are for the Ariel panning module. I would suggest that you email Dr.
Ariel about these questions.

Also, you have to make sure that you have a trial/subject indicator in
the field of view for all cameras so you know you are using the same
trial, and I use a camera strobe that is discharged in the views for
field/frame synchronization. If you play with the size of a mask over
the strobe, you'll be able to see the flash in just one field and the
auto iris will not start adjusting for the brighter light.

Let me know if I can provide any other information.
Al Finch


Hi Jim,
I am using the Ariel APAS panning software for the panning of high
jumping, hurdling and long jumping. In the background I have two
calibration cubes that have a PVC pole with markers along it to be
able to calculate the degree of pan. The PVC pole is in sections such
that I can have a 6, 12, or 18 ft separation. With this flexibility
I have calibration point information for at least 6 points per cube
(18 points are available) plus 5-10 points on the poles. The system then
calculates the panning using the 3D DLT to determine the camera
position. I am presently working on a project comparing the panning
method and the standard two-camera fixed 3D DLT, assessing their
accuracy during high jumping.
I hope this gives you some ideas, please feel free to contact me if
you have any other questions.
Sincerely,
Al Finch, Ph.D.
Director Biomechanics Lab
Indiana State University


Hello Normand,

The only difference with the panning camera is the requirement for two
calibration structures.  The left and right calibration structures should
encompass the area that will be used for the analysis.  Each structure
should have a minimum of 8 control points (though 14 or more are
recommended), with all points measured relative to a single origin.

The calibration for each camera is handled independently.  Therefore, if
the stationary camera only "sees" the left or the right cube, you will
experience errors as the subject moves out of the calibrated space.  The
same concept holds true for one or more cameras viewing only one calibration
structure.

The recommended use of the panning option is that the panning camera see
both cubes.  The stationary cameras should see at least some points on BOTH
calibration structures, though seeing all the points on both structures is
ideal.  When the stationary cameras only see one calibration structure, you
are losing the depth of the calibrated area and errors will increase as the
subject moves out of the calibrated space.  Any number of stationary
cameras can be used in combination with the panning.
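
One rough way to check for the extrapolation problem described above is to
flag reconstructed points that leave the volume spanned by the two
calibration structures. A small Python sketch (bounding-box approximation,
hypothetical helper name; not part of APAS):

import numpy as np

def outside_calibrated_volume(points_xyz, control_points_xyz, margin=0.0):
    """Boolean mask, True where a reconstructed 3-D point falls outside the
    box enclosing all control points (optionally expanded by `margin`).
    Points flagged True are extrapolations and tend to carry larger errors."""
    lo = control_points_xyz.min(axis=0) - margin
    hi = control_points_xyz.max(axis=0) + margin
    return np.any((points_xyz < lo) | (points_xyz > hi), axis=1)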

I hope this information is helpful.  Please contact us for any additional
information.

Sincerely,

John Probe
Email:  ARIEL1@ix.netcom.com


At 11:27 AM 06/16/2000 -0400, you wrote:
>Dear John/Gideon,
>
>We're trying to master the panning technique and we're facing a few
>problems. We've already gone through all the documentation on panning. Here
>are a few questions that should help us find our way around:
>
>1) For the stationary cams, is there any prerequisite with regard to the
>calibration? More specifically, is it necessary that each stationary cam
>sees both calibration structures? Or can the right cam see only the right
>calib frame and the left cam only the left frame?
>
>2) Can we have more than 1 stationary cam/view per calibration structure
>(e.g., 2 stationary cams for the right and 2 stationary cams for the left
>frame) and one (or 2) panning cameras?
>
>Thanks in advance,
>
>Normand
>
>
>
>
>Normand Teasdale (Normand.Teasdale@kin.msp.ulaval.ca)
>Université Laval, Laboratoire de performance motrice humaine, PEPS
>Faculté de médecine
>Département de médecine sociale et préventive
>division de Kinésiologie
>Québec, Québec G1K 7P4
>Tél: (418) 656-2147
>Fax: (418) 656-2441
>


Following is a summary of replies from my 2D panning algorithm query last
week.  I am grateful for all of the responses I received.

Original Message:
I am in the process of developing a system to determine a T&F athlete's
foot position along the run-way using a single panning camera.  The system
would be used for competition analysis and hence will be non-invasive in
nature (ie. can not have any markings on the runway during competition).
The system will have to be able to accurately calculate the foot position
along the length of the runway irrespective of where the foot lands along
the width of the run-way.  Panning angle information would be available via
a potentiometer in the tripod which can be pre-calibrated with markers on
the runway prior to competition.  I am looking for a 2D algorithm or method
of calibration to define the object space along the length of the runway so
that the angle information from the tripod and digitised coordinates of the
foot can be combined with the previously collected calibration file.  The
only 2D panning algorithm that I can find is from Chow, J. (International
Journal of Sport Biomechanics: 1987, Vol 3. pp.110-127), although this
requires markers along the runway during competition.  I am also aware of
systems using an instrumented tripod to provide real-time analysis (such as
used in swimming at the Australian Institute of Sport), however, these
require the athlete to be directly in the centre of the screen during the
whole pan.  I have also searched through the Biomech-L archives for 2D
panning algorithms with no success.

Summary:
Most of the replies I received provided references related to methods of
calibrating 2D or 3D panning cameras which involved control markers
remaining in the field of view throughout filming and/or required the
precise surveyed location of the camera to be known.  Ideally, given that
this sort of testing would be done in competition settings, we would prefer
this type of analysis to be as non-invasive as possible (hence we would like
to calibrate the run-way prior to competition and then remove all of the
markers from the field).  Dr. Jim Walton and Dr. Young-Hoo Kwon were
particularly helpful in providing information on panning techniques that
could be accomplished without markers remaining in the field of view during
filming.  The main issue of perspective error in 2D or 1D analyses remains
and needs to be accounted for if possible.  This problem is greatly reduced
using two cameras.

Replies:
----------------------------------------------------------------------------
Jim Walton:
Have you given any thought to using IR lighting to create calibration
markers?  Your camera could "see" it, but nobody else could ... except
perhaps, ABC  :-)
If you want a simple demonstration of this "phenomenon", point a TV remote
at the lens of your camera ... you can see the "little lights" blinking
their codes out as the buttons are pushed.
Expand on this concept ... draw lines with IR lighting and you can "project"
a "permanent" calibration grid onto your object-space that others can't
"see".

I described how to calibrate a two-dimensional object-space in my doctoral
work ...
Walton, J.S.  "Close-Range Cine-Photogrammetry:  A Generalized Technique for
Quantifying Gross Human Motion." Penn State, 1981.
Basically, I described how to reduce the DLT to a 2-D algorithm that can be
used to calibrate and track motion in a plane with a single camera. [This
can also be found in the Proceedings of the International Congress of Sports
Sciences. Edmonton, Canada, August, 1978 under the title of "Close-range
Cine-photogrammetry: Another approach to motion analysis"]
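
For readers who want to experiment, here is a generic planar ("2-D") DLT
sketch in Python: calibration of the 8 parameters from at least 4
non-collinear control points in the movement plane, and reconstruction of
plane coordinates from digitized image coordinates. It follows the standard
2-D DLT form and is not a transcription of the work cited above.

import numpy as np

def dlt2d_calibrate(plane_xy, image_uv):
    """Solve u = (L1*X + L2*Y + L3) / (L7*X + L8*Y + 1),
             v = (L4*X + L5*Y + L6) / (L7*X + L8*Y + 1)
    for L1..L8, given >= 4 control points in the plane (plane_xy) and
    their digitized image coordinates (image_uv)."""
    A, b = [], []
    for (X, Y), (u, v) in zip(plane_xy, image_uv):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y])
        b.extend([u, v])
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return L

def dlt2d_reconstruct(L, u, v):
    """Map one digitized image point (u, v) back to plane coordinates."""
    L1, L2, L3, L4, L5, L6, L7, L8 = L
    M = np.array([[L1 - u * L7, L2 - u * L8],
                  [L4 - v * L7, L5 - v * L8]])
    rhs = np.array([u - L3, v - L6])
    return np.linalg.solve(M, rhs)   # (X, Y) in the calibrated plane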

************************************************************
*   JAMES (Jim) S. WALTON, Ph.D., President, 4DVIDEO   *
*      825 Gravenstein Highway North, Suite 4      *
*       SEBASTOPOL, California 95472 USA       *
************************************************************
*   PHONE:  (707) 829-8883    FAX:  (707) 829-3527   *
*         INTERNET:  Jim@4DVideo.com         *
************************************************************
----------------------------------------------------------------------------

Young-Hoo Kwon:
2-D panning, I believe, is no different from 3-D panning, as long as you do a
series of calibrations and express the DLT parameters as functions of the
panning position. Based on observations from simulated calibrations,
the parameters do not change radically as the panning position changes, so
cubic spline interpolation of the DLT parameters over panning position would
be sufficient. Obtaining the panning position from the instrumented tripod
makes the whole process much simpler.
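
A minimal Python sketch of this interpolation idea (data and names are
placeholders; the calibrated parameter sets would come from the range-pole
procedure described below): each DLT parameter is splined against pan angle
so that a full parameter set can be generated for any pan reading from the
instrumented tripod.

import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical example: the camera is calibrated (e.g. by 2-D DLT) at a few
# known pan angles; each of the eight parameters is then splined against
# pan angle so a parameter set exists for any tripod reading.
pan_angles = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])    # degrees
dlt_sets = np.random.default_rng(0).normal(size=(5, 8))   # placeholder; in
# practice, one row of calibrated 2-D DLT parameters per pan position

spline = CubicSpline(pan_angles, dlt_sets, axis=0)

def dlt_at(pan_angle_deg):
    """Interpolated 2-D DLT parameter set for an arbitrary pan angle."""
    return spline(pan_angle_deg)                           # shape (8,)

print(dlt_at(7.5))   # parameters for a pan reading between calibrated angles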

The key is the use of the 2-D DLT method. I would put several poles of known
length (range poles) at different locations along the track. As long as I
know where I put the poles, I will be able to come up with the real-life
coordinates of the control points marked on the poles (see the sketch at the
end of this reply). With these, I can perform a series of 2-D DLT
calibrations and develop a set of parameter prediction equations. The only
problem is how to sync the video images and the panning position signal from
the tripod.
If you deal with the foot only, you may even use the 1-D DLT instead of the
2-D. Combining the foot and hip definitely requires a 2-D DLT-based approach.
Anyway, the process explained above is one of the standard features of my
motion analysis software, Kwon3D 3.0. I am finishing up the upgrade now. It
will be really interesting if I can have a chance to test the program with
your data.
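
As a small illustration of the range-pole idea above (positions, pole
length and mark counts are hypothetical), the real-life control-point
coordinates for the series of 2-D DLT calibrations could be generated like
this:

import numpy as np

def range_pole_control_points(pole_positions_m, pole_length_m=4.0,
                              marks_per_pole=5):
    """(N, 2) array of (X, Y) control points: X is the pole's position
    along the runway, Y is the height of each evenly spaced mark on it."""
    heights = np.linspace(0.0, pole_length_m, marks_per_pole)
    return np.array([(x, h) for x in pole_positions_m for h in heights])

# Poles every 5 m along a 40 m runway; the resulting points feed the 2-D DLT
# calibrations performed at the different pan positions.
control_xy = range_pole_control_points(np.arange(0.0, 41.0, 5.0))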

- Young-Hoo Kwon, Ph.D.
- Biomechanics Lab, PL 202
- Ball State University
- Muncie, IN 47306  USA
- Phone: +1 (765) 285-5126
- Fax: +1 (765) 285-8596
- Email: ykwon@bsu.edu <mailto:ykwon@bsu.edu>
- Homepage: http://kwon3d.com <http://kwon3d.com>
- Korean kwon3d eGroup: http://kwon3d.com/korean/eGroup_kr.html
<http://kwon3d.com/korean/eGroup_kr.html>
- Int'l kwon3d eGroup: http://kwon3d.com/eGroup_i.html
<http://kwon3d.com/eGroup_i.html>
----------------------------------------------------------------------------

Gideon Ariel / John Probe:
The APAS system supports two different methods for the panning cameras.
Both methods are used for 3-D analysis.  The original method consisted of a
panning head that mounted to the tripod
(between the camcorder and the tripod).  This panning head had a cable that
connected to the character generator port of the camcorder and was used to
superimpose a horizontal line on the video image.  The length of this
horizontal line was proportional to the panning angle of the camera.  During
the digitizing process, instead of digitizing the "fixed point" (as with a
stationary camera) the user was required to digitize the endpoint on the
"paning bar."  Information on this algorithm was first presented at the
1993 ISB Congress in Paris.  The reference information is listed below.
   Stivers, K.A.; Ariel, G.B.; Vorobiev, A.; Penny, M.A.; Gouskov, A.;
Yakunin, N.; "Photogrammetric Transformation With Panning"; XIV ISB
Congress, Paris, France, July 4-8, 1993.
While the "panning head" method was very functional, Ariel Dynamics research
and development improved on the panning method and eliminated the need for
the panning head hardware.  The new algorithm handles this task entirely
within the software, thus allowing any camera to be used for the panning.
The software algorithm requires that there be two calibration cubes.  Each
cube must have a minimum of 8 control points (though 12 or more are highly
recommended) and ALL points are still measured relative to a single origin.
In essence, we are telling the software that there is one very large
calibration cube.  In between the two calibration fixtures, we use "panning
points" instead of the fixed point.  The user has the option of specifying
(and then digitizing) any of the panning points as they come into and go out
of the field of view.

Additional information on the panning procedures is listed in the pull-down
help menus of the Digitize software module.  Open the Digitize module and
select HELP, INDEX, PANNING CAMERAS to access this.

John Probe
Ariel Dynamics, Inc.
ARIEL1@ix.netcom.com
----------------------------------------------------------------------------

David Rath:
A couple of 2D papers you may want to follow up: this paper,
http://www.orst.edu/hhp/exss/research/labs/BioMech/abstracts/panning.html,
and the one referenced by Hay and Koh should be useful.  Another 2D
article of interest is Gervais et al. (1989), Kinematic Measurement from
Panned Cinematography, Canadian Journal of Sport Science, 14(2), 107-111.
APAS has a panning head with their system which uses a pot and interfaces
with the viewfinder jack on some cameras (works with Panasonic MS4 and 5
from memory) and outputs a white bar onto the recorded video, the length of
which is relative to camera angle; this point is digitised instead of a
fixed point and negates the need for track markers.  We have this unit but
have never got reliable data from it; it's 3D not 2D, but the pot side of
things may be relevant.

David Rath
AIS Biomechanics
RathD@ausport.gov.au
----------------------------------------------------------------------------

Michael Feltner:
Jesus Dapena and I used 2D panning in this research.
Dapena, J. & Feltner, M. E. (1987). The effects of wind and altitude on the
times of 100 meter sprint races. International Journal of Sports
Biomechanics, 3(1), 6-39.
If you check the references in the manuscript, a paper that Jesus authored
previously in Sciences et Motricite, "Three-dimensional cinematography with
horizontally panning cameras" (1978, 1(3), 3-15) is listed.
Together both papers should answer your questions.

Michael Feltner
Michael.Feltner@pepperdine.edu
----------------------------------------------------------------------------

Andrew Lyttle
Sports Biomechanist
Western Australian Institute of Sport
Stephenson Ave, Mt Claremont WA 6910
Australia

Tel:  +618 9387 8166
Fax: +618 9383 7344
Email: alyttle@wais.org.au

---------------------------------------------------------------
To unsubscribe send SIGNOFF BIOMCH-L to LISTSERV@nic.surfnet.nl
For information and archives:   http://isb.ri.ccf.org/biomch-l
---------------------------------------------------------------


 

 

 
