CREATING A NEW DIGITIZING VIEW FILE
What we call views here can also be referred to as camera views. Each camera view represents one digitizing view. You can have up to 255 cameras with the Windows version of the digitizing program and 9 cameras with the older DOS programs. However, for all practical purposes, the most you will probably ever use is 6 cameras. In a well-known study comparing different motion analysis systems, the investigators used 5 cameras to evaluate the APAS system. However, for most gait and sports analyses, our users use 2 to 5 cameras. In many cases 2 cameras are good enough. Many of our customers use two cameras at 45 degrees to each other and get excellent results. At the Atlanta Olympic Games, we used 5 cameras to collect data and used only 3 of them for the analysis.
The view information includes the following parameters:
Title - The view title should be used to help identify this view when it is subsequently referenced. It is suggested that you enter a label descriptive of this particular view, for example, "Subject 1 Gait, Left View" or "Left Camera".
Frame Rate - The Frame Rate is the number of images per second that are captured in the image file. This item is always required and should be as accurate as possible in order to yield accurate velocity and acceleration computations. The frame rate is normally 60 fields per second. With the Buz or Marvel capture cards, it is always 60 fields per second, which is equivalent to 30 frames per second. (You must input 60, not 30!) With the Redlake capture card it can go up to 1000 frames per second, and since the Redlake capture card is not interlaced, the frames and the fields are at the same rate. This needs additional explanation. Normally, an NTSC signal from any video camera is sent at 30 frames per second. However, each frame consists of two fields which are interlaced. If you are not familiar with this, I very highly recommend reading the following article on our web site. Additional information is available here. The one I use the most is: The Idiot's Guide to Desktop Video. It is a fantastic source for smart people. Idiots will not read it...
For most APAS applications, the Frame Rate can be calculated by dividing the video speed by the skip factor plus one. For example, if 300 images were captured from a 60 Hz video system with a skip factor of 1, the Frame Rate will equal 30, since 60 / (1 + 1) = 30 Hz. Refer to the chart below for additional examples.
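The calculation above can be sketched in a few lines of Python (the function name is mine for illustration, not part of the APAS software):

```python
def frame_rate(video_speed_hz, skip_factor):
    """Effective digitizing frame rate: video speed divided by (skip factor + 1)."""
    return video_speed_hz / (skip_factor + 1)

# 60 Hz video with a skip factor of 1 yields 30 Hz, as in the example above.
print(frame_rate(60, 1))  # 30.0
```

With a skip factor of 0 (no images skipped), the frame rate simply equals the video speed.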
Camera ID - Camera ID is an optional field provided to identify the type of camera used to record this view.
Camera X, Y, Z - Camera X, Y, and Z define the coordinate values for the location of the camera in the frame of reference of your control points. The camera location is not required information for performing an analysis; however, if the location is known, you may request a validity check of the transformation from digitizing space to image space. This may be of value when publishing studies and demonstrating the validity of the method used. Just remember, these values are not necessary, but they can be used for accuracy measurement. These parameters were introduced in 1968, when we needed to know the distance of the cameras from the focal point to be able to calculate 3D measurements. This was before the DLT was introduced in 1972. At the Mexico City Olympic Games I collected data on 16 mm cameras and had to input the distance of the cameras from the center of the calibration frame to calculate the 3D measurements. Also, at that time the 2 cameras had to be orthogonal to each other. Today, of course, this is not necessary. The most important factor is to have an accurate calibration frame at the movement location.
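One way such a validity check can be scored is to compare the surveyed camera location against the location recovered from the calibration. This is only an illustrative sketch (the function is mine, not part of APAS):

```python
import math

def camera_position_error(known_xyz, recovered_xyz):
    """Euclidean distance between the surveyed camera location and the
    location recovered from the calibration; a small value supports
    the validity of the transformation."""
    return math.dist(known_xyz, recovered_xyz)

# Hypothetical example: surveyed at (3, 0, 4) m, calibration recovered (0, 0, 0) m.
print(camera_position_error((3.0, 0.0, 4.0), (0.0, 0.0, 0.0)))  # 5.0
```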
View Type - View Type is used to specify each individual camera view as either STATIONARY or PANNING. Stationary views are cameras that record the activity of interest from a "fixed" camera. Fixed cameras cannot be altered in any manner (zoomed, focused, moved, etc.) while recording the calibration points and activity data. Panning views are cameras that record the calibration points and activity by panning the camera from left to right or vice versa. As with the Stationary views, the zoom and focus should remain constant while recording the calibration points and activity. Please refer to the PANNING CAMERAS section for additional information on digitizing panning views.
3. Open the Image File
The next video file will show you, step by step, all the processes up to this point. (Digi_automatic2.avi, 0.8MB)
4. Begin the Digitizing Process.
The following video file shows this process. (Digi_automatic3.avi, 1MB)
OPENING MULTIPLE VIEW FILES
In the Digitizing phase of the analysis, certain joints may often be obscured from a particular view. In the past, this situation often meant that the joint had to be "estimated" with limited information. The DIGI4 program allows up to four views to be opened and digitized either simultaneously or individually. This is a useful feature in cases of obscured joints. While one view may have a joint that is not visible, other views will show the same joint from a different perspective. This allows a more educated estimate to be made for digitizing the joint in the view where it is obscured.
The following video file shows this process. (Digi_automatic4.avi, 2MB)
Up to 4 views can be digitized simultaneously. The APAS is the only system in the world that can achieve this.
LOCKING MULTIPLE IMAGES
1. Select the LOCK command from the IMAGES menu.
The following video file illustrates the Lock function and its effect: Lock_Function0001.avi (2MB)
CORRECTING DIGITIZED POINTS
Points that are digitized incorrectly can be corrected by one of three methods:
ENTERING MISSING POINTS
Points that cannot be digitized should be entered as Missing by selecting the Missing command from the Images menu. An example of this function would be the case of a baseball batter hitting a ball. If the ball is digitized as one of the points, it may move beyond the boundary of the image. In this case the ball would be entered as Missing once it is no longer visible. This command is useful for points that are missing in only a few images. For points not visible for extended periods, please refer to the help section on INVISIBLE POINTS.
The following video illustrates the Missing Point function: Missing_point0001.avi (0.5MB)
If the missing point can be detected from at least 2 other cameras, then no interpolation will be needed. However, if a missing point is not seen by any other camera, the APAS will use linear interpolation to estimate the location of this point.
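The linear interpolation used for such gaps can be sketched as follows (a minimal illustration of the idea, not the actual APAS code), shown here for a single coordinate track where missing frames are marked with None:

```python
def interpolate_missing(track):
    """Fill interior None gaps in a 1-D coordinate track by linear interpolation.
    Gaps at the start or end of the track are left as-is (no extrapolation)."""
    filled = list(track)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] is None:
            j = i
            while j < n and filled[j] is None:
                j += 1  # find the end of the gap
            # interpolate only if known values exist on both sides of the gap
            if i > 0 and j < n:
                left, right = filled[i - 1], filled[j]
                gap = j - (i - 1)
                for k in range(i, j):
                    t = (k - (i - 1)) / gap
                    filled[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return filled

# Two missing frames between known positions 0.0 and 3.0:
print(interpolate_missing([0.0, None, None, 3.0]))  # [0.0, 1.0, 2.0, 3.0]
```

In practice the same fill would be applied independently to the x and y image coordinates of the missing point.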
OPENING A PREVIOUSLY CREATED VIEW
Again in this tutorial, the Mouse buttons are controlled as follows:
MOUSE BUTTON FUNCTIONS
The Digitizing phase of the analysis makes extensive use of the mouse. The arrow keys can be utilized for "fine-tuning" the digitized point; however, most users rely solely on the mouse. The APAS computer is supplied with either a two- or three-button mouse. There are Left (L), Middle (M) and Right (R) buttons. Several functions have been incorporated into the mouse buttons to simplify the Digitizing process. These functions are listed when you select the ? icon from the DIGI4 Toolbar.
Left - Selects the current position of the cursor as the digitized point location
Middle - Corrects the last digitized point. When a joint is digitized incorrectly, the Middle mouse button can be used to correct the point. The cursor will reverse one joint each time the Middle button is pressed.
Right - Moves the cursor to the "estimated" location of the next point to be digitized. Since this requires previous information to "estimate" the point location, this function has no effect in the first frame.
Ctrl-Left [Drag & Drop] - Press the Control key and Left mouse button simultaneously to Drag the point nearest to cursor to desired screen location then release the mouse button. If you desire to Abort the Drag/Drop feature, simply drag off the client area and release the mouse button.
Shift-Left [ Redigitize] - Pressing the Shift key and Left mouse button simultaneously will cause a Dialog Box to appear selecting a joint. After a joint is selected, if AutoDigitizing is active, the program will perform an Initial Locate for the spot associated with the specified joint starting at the cursor location of the mouse click. If AutoDigitizing is inactive, then the specified joint is re-digitized at the cursor location of the click.
Double-Click Right - When all the points for the current frame have been digitized and the Status Bar indicates that the image is Complete, the Right mouse button can be "double-clicked" to advance to the next image.
The Active "Hot Keys" are as follows:
ACTIVE HOT-KEYS FOR DIGITIZING
Even though the Digitizing phase of the analysis makes extensive use of the mouse, some users may prefer to use the keyboard for several of the more common commands. The arrow keys can be utilized for "fine-tuning" the digitized point. Pressing the ENTER key is identical to the Left mouse button and will enter the cursor location as the digitized point. Several other common commands are listed below:
Correct Previous Point
ALT-Right Arrow - When all the points for the current frame have been digitized and the Status Bar indicates that the image is Complete, pressing the ALT key and the Right Arrow key will advance to the next image.
ALT-Left Arrow - When all the points for the current frame have been digitized and the Status Bar indicates that the image is Complete, pressing the ALT key and the Left Arrow key will reverse to the previous image.
DEL - Pressing the DEL key will erase the digitized location for the current point.
The following video files will demonstrate the digitizing process up to this point, utilizing as many as possible of the functions discussed so far. The case studies that will be demonstrated include the following:
To save disk space, only part of the full sequence was digitized. Later on, I will use a full sequence to produce results. These sequences were made to demonstrate the process. Because of the high compression factor, the video may look less clear at times. Normally the video is not compressed to such a high factor and is much clearer.
This was a simple sequence with only 35 frames, since I tried to keep the file small. Also, you may notice degradation in the video quality and the markers because of the high compression. To see the "real thing", you must capture and digitize on your own system. However, for educational purposes and demonstration of the process, I think it is adequate. The following illustration shows a typical laboratory setup.
From the Gait lab of:
The gait lab is equipped with five video cameras (Panasonic) synchronized to each other through genlock, two AMTI multi-component force platforms and telemetric EMG equipment (BTS).
Two sets of photocells are used to measure the walking speed and put an electronic marker on each of the five video tapes. This marker is a white horizontal bar on top of a video field (see video clips below).
Three dimensional coordinates are derived from the five cameras by the APAS video analysis system.
As you can see in this illustration, 5 cameras are used in order to create an environment where each marker can be seen from at least 2 cameras. This is a requirement for creating the 3D coordinates. The APAS also allows you to miss points, as you will see later, and then interpolate between positions.
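The reason at least 2 cameras must see each marker is that a single image gives only a ray, not a 3D position; two views pin the point down. A minimal sketch of this reconstruction step, using a standard DLT-style least-squares triangulation with NumPy (illustrative only; the projection matrices and image points here are made up, and this is not the actual APAS code):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3-D point from two camera views.
    P1, P2 are 3x4 projection matrices; uv1, uv2 are the (u, v)
    image coordinates of the same marker in each view."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each observation contributes two linear constraints on the
    # homogeneous 3-D point X: u * (P[2] @ X) = P[0] @ X, etc.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Toy setup: camera 1 at the origin, camera 2 shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.array([[1.0, 0.0, 0.0, -1.0],
               [0.0, 1.0, 0.0,  0.0],
               [0.0, 0.0, 1.0,  0.0]])
# A marker at (0, 0, 5) projects to (0, 0) in view 1 and (-0.2, 0) in view 2.
print(triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0)))  # ~ [0. 0. 5.]
```

With more than two cameras seeing the marker, the same system simply gains extra rows, which is why additional views improve accuracy.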
2. Automatic Digitizing process where some of the markers are not visible.
This case is the most typical one. If you are using only 2 or 3 cameras, then throughout the movement, any movement, some markers will be obscured. Some may ask: "Well, how many cameras do I need to be able to see the markers from at least two cameras at a time?" The answer is that you will need at least 5, and in most cases 6. In a laboratory situation this is not a problem. You can set up the cameras and the video area, and then you will have a fully automatic system with the APAS. You can see an example of how the APAS is used in this kind of environment here. However, in most cases, when you video athletic events outside or in some remote area, it will be difficult to place 6 cameras and avoid some kind of disturbance. I have been doing this for over 30 years and have the experience of collecting data at 7 different Olympics. I have probably digitized more sequences than most people in the world, and have been involved with many other systems. My conclusion is that it is most efficient to use fewer cameras and combine automatic and semi-automatic digitizing. In this example I will show you how.
3. Combination of Automatic and Manual digitizing.
During the World Championships in Athletics in August 1995 in Gothenburg, Sweden, a research group had access to videotape the final of the men's triple jump. In this competition Jonathan Edwards broke the existing world record twice. The research group consisted of Per Aagaard, Morten Havkrog, Erik B. Simonsen, Gideon Ariel and Leif Dahlberg. The hop, step and jump were recorded by separate cameras with a shutter of 1/1000 sec. Later, the 9 best athletes of the final were analysed with the APAS system.
The world record in triple jump of 18.29m by J. Edwards, UK.
(The video clips must be viewed with MS Internet Explorer)
Hop Step Jump
In this case, only 3 cameras were used, since it was impossible to place more on the field under the circumstances. Some may say: "Well, it is not scientific enough", or: "The level of error may be more than one millimeter...". My answer to these so-called "scientists" is: "How many world records have you analyzed?"
What I mean here is that in some cases you have no choice but to compromise on the number of cameras and other factors in order to analyze a great performance. Of course, in the lab you do not have this problem.
4. Fully Manual Digitizing with markers.
5. Full Manual Digitizing without markers.