Digitizing

 


Hello,
 
Thank you for your message.  I will provide answers (in Bold Italics) below each of your questions.  Please contact me for any further information.
 
Sincerely,
 
John Probe
Ariel Dynamics, Inc.
Email:  ARIEL1@ix.netcom.com
 
 
 
----- Original Message -----
From: "nikodelis thomas" <nikmak@phed.auth.gr>
To: <gideon@arielnet.com>
Sent: Thursday, November 14, 2002 2:27 AM
Subject: 2questions


> Hallo Dr. Gideon,
> How are you, I am one of the Ph.D. students of Dr. Kollias from Greece. I
> hope you remember us from your visit here.
> We are working on Apas and we would like you to clarify a few points for
us.
> 1. First of all in the digitization module, we noticed that the movement
of
> a limb for example, goes backward and forward, but that doesn't appear in
> the raw data. Does this has to do with the frequency from 25/30 frames and
> the dinterlace of the frames to 50/60 fields?
>
ADI:  This is caused by the interlaced fields being displayed in the wrong order and is easily corrected.  The details can be found in the "pull-down" TRIM module help screen.
   Open the TRIM module and select HELP, HELP TOPICS, OPTIONS, VIDEO OPTIONS.  The first description is the Field Order.  You simply need to set this to the opposite setting for your computer.  For example, if your software is currently set to NORMAL, then change it to REVERSE.  If it is set to REVERSE, then set it to NORMAL.  This is a one-time setting for your computer, so you should not have to change it again.  Make certain to select SAVE TRIMMING so the correct image order will be saved with the file.  Then all other APAS modules will display the video in the correct order.
 
 

> 2. Another very important matter for us is to be able to read with apas,
> raw data, digitized with our applications in order to transform and
analyse
> them. Such a possibility exists? If so that would be great!
>
ADI:  We do not have a program that automatically inputs data collected with other systems into the APAS; however, we do provide the file structure format for the APAS data files on the Ariel internet site.  This would allow you to write your own programs to convert your data into APAS format.  The direct link to the APAS file format is:
 
   /adi2001/adi/services/support/manuals/apas/dos/adw-56w.asp
 
For video data, you would need to convert your data into the *.3D format.  For analog data, you would need to convert your data into the *.ana format.
 
 

> Thank you very much.
> Looking forward to hearing from you.
>
>

Hi Kim,

You will not have to re-digitize the projects. We only need to figure out the correct rate to enter in the View Information menu. Then re-transform & smooth and you are good-to-go!

Now as far as the correct rate to enter, I think you should be using a value of 60 Hz. If you are capturing every single image (which I think you are) then you should use 60. Even if you are skipping images in the digitizing, the "capture rate" should be 60. You will notice that when you advance without digitizing, the Time value is still incremented by the duration of one image. At 60 pictures per second, this increment is 1/60, or about 0.017 seconds, for every advanced image.

If you are capturing one picture, skipping one or more, capturing a picture, etc... then the rate would be different from 60 (based on the formula in the help files).

Looking at your data, the velocity of the ball at release is approximately 350 cm/sec. This equates to approximately 54 miles per hour. If the Rate is changed to 60, then the ball velocity becomes 163 mph (not realistic).
Do you have any idea of the player's pitching speed? I would expect velocities between 50-70 mph depending on the level of the players.
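
As an aside, velocities computed by finite differences scale linearly with the rate entered in View Information, so entering the wrong rate scales every velocity by the same factor. A minimal sketch (plain Python, illustrative only, not APAS code; the 54 mph and 20/60 Hz figures are the ones discussed above):

# Velocity computed with an assumed frame rate, rescaled to the true rate.
def rescale_velocity(velocity, assumed_rate_hz, true_rate_hz):
    return velocity * (true_rate_hz / assumed_rate_hz)

# 54 mph computed at 20 Hz becomes roughly 162 mph if the true rate is 60 Hz
# (rounding explains the small difference from the 163 mph quoted above).
print(rescale_velocity(54.0, assumed_rate_hz=20.0, true_rate_hz=60.0))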

Did you capture at 60 Hz and then use a value of 2 for a skip factor (See attached picture)? That would be one method to do this. Another method would be to capture at 60, a skip factor of 0 and then digitize one image, advance two, digitize one etc.

Let me know what you think and the exact process you used.

John Probe
Ariel Dynamics, Inc.
Email: ARIEL1@ix.netcom.com




----- Original Message -----
From: "Kim Cox" <kcox@bpcc.edu>
To: "Gideon Ariel" <ariel1@ix.netcom.com>
Sent: Thursday, November 13, 2003 6:48 AM
Subject: RE: Kim Cox - BPCC - problem fixed


> John,
>
> Ohhhh Nooooo !!! I think we did do it wrong. I followed the
> instructions (sort of) on the help screen in digitize, but didn't read
> very closely, obviously. We captured at 60 frames/second. The video
> clip we digitized contained 105 frames (after trimming) not the 300
> frames used in the "example". I should have therefore, I think
> entered 105/(2 + 1) = 35 instead of 20... is that right??
>
> Attached are the files you requested. If I have messed this up is
> there a way to correct without completely re-digitizing? We're
> analyzing 5
pitchers
> with 3 camera views each and are in a bit of a time crunch...
>
> Your truly,
>
> "A little panicked"
>
>
> -----Original Message-----
> From: Gideon Ariel [mailto:ariel1@ix.netcom.com]
> Sent: Wednesday, November 12, 2003 4:49 PM
> To: Kim Cox
> Cc: gideon@arielnet.com
> Subject: Re: Kim Cox - BPCC - problem fixed
>
>
> Hi Kim,
>
> The frame rate is actually determined by the rate at which the video
> is captured (not digitized). This means that if you are capturing one
> image, skipping two, capturing one etc... the rate would be 20 Hz. If
> you are capturing all the images, and then digitizing one image,
> skipping two, digitizing another etc.... the Frame Rate should still be 60 Hz.
>
> With standard video cameras, the recording rate is 60 images per second.
> This equates to 1/60 = 0.0167 seconds between each image. Thus when
> you specify the Frame Rate, you are actually inputing the time
> interval that will be used to calculate velocity and acceleration
> measures. That is the reason that I asked if your velocity values
> appeared reasonable. One way
to
> check this would be to compare your results with other research. You
could
> also digitize the ball and compute the velocity of the ball. I would
think
> that measure would be fairly well known. I don't have any softball
> data that I could compare to.
>
> Would you mind sending a sample file that I could look at? The files
would
> include the *.cf, *.1t, *.2t, *.3d files. I do not need the video
> files
>
> Let me know if you have any questions.
>
> John
>
>
> ----- Original Message -----
> From: "Kim Cox" <kcox@bpcc.edu>
> To: "Gideon Ariel" <ariel1@ix.netcom.com>
> Sent: Wednesday, November 12, 2003 1:37 PM
> Subject: RE: Kim Cox - BPCC - problem fixed
>
>
> > John,
> >
> > I used 20 for the frame rate... which equates, I believe to a skip
factor
> of
> > 2. Is that correct?
> >
> > Kim
> >
> >
> > Also... in response to your other question. The velocities do make
sense,
> I
> > think. I'm getting maximum velocities of the upper arm around 1,000
> > to 1,500 degrees per second and ball velocities at release around
> > 55-60
mph.
> >
> > Kim
> >
> > -----Original Message-----
> > From: Gideon Ariel [mailto:ariel1@ix.netcom.com]
> > Sent: Wednesday, November 12, 2003 3:07 PM
> > To: Kim Cox
> > Subject: Re: Kim Cox - BPCC - problem fixed
> >
> >
> > Hello Again,
> >
> > In the View Information menu, I understood that you used a value of
> > 20
for
> > the Frame Rate. Is that correct? Or, did you enter a Skip Factor of 2?
> >
> > John
> >
> >
> >
> > ----- Original Message -----
> > From: "Kim Cox" <kcox@bpcc.edu>
> > To: <ARIEL1@ix.netcom.com>
> > Sent: Wednesday, November 12, 2003 12:24 PM
> > Subject: Kim Cox - BPCC - problem fixed
> >
> >
> > > John,
> > >
> > > I played around with that synch problem I had in the digitize
> > > module
and
> > > figured it out. After selecting VIEW and "synch point", a small
window
> > pops
> > > up with 6 available frames to choose from. Selecting frame 6
> > > resulted
> in
> > a
> > > negative time value like we discussed, but selecting frame "3"
> > > gave me
> the
> > > correct 0.000 time value. I still haven't completely resolved "why"
> this
> > > worked, but trial/error did the trick!
> > >
> > > Kim
> > >
> >
>
 

 

____________________________________________________________________________________

Hello Tamra,
 
Thank you for your message.  Yes, in theory, you are correct.... if it takes one minute to digitize a single image and you wish to digitize one second of video at 60 images per second, then it would take 60 minutes to digitize this sequence.  However, while this works out mathematically, this is not a realistic scenario.  I will explain below.
 
The "manual" mode of digitizing is also referred to as "semi-automatic" digitizing.  In the first image of a sequence, the software does not have any idea where the desired points are located, therefore, the user must move the cursor to each point and press the mouse button to digitize that point.  The APAS software incorporates a prediction algorithm that will position the cursor in the expected location based on previous information.  In the second image, the software only has one previous image to work with (Image #1) so the software will automatically place the cursor in the same location for each point that was digitized in the first image.  The user then makes any minor adjustments (if necessary) and clicks the mouse button to digitize the location of the point in the second image.  When the image is advanced to the third image in the file, the software now has two previous digitized locations to work with (Image #1 and #2) so the software uses both position and velocity measures to predict the location of each point in the third image.  As with the second image, the user makes any minor adjustments and clicks the mouse button to confirm the location of each point.  After 3 or 4 images have been digitized, the prediction algorithm uses position, velocity and acceleration measures and works very well and the user is only clicking the mouse for the digitizing process.  This "semi-automatic" mode greatly increases the speed of the digitizing process.
 
What this means for practical purposes....  An experienced user can manually digitize a full golf swing, or tennis swing from 3 cameras (60 images each) in 15 or 20 minutes.
 
There are other options that can be used to increase the speed of digitizing, though the use of these will depend upon your exact applications.
 
   -  The APAS does not require that every image be digitized.  The user can digitize every second or third image during portions of the video where the movements are relatively slow.  For example, on a simple gait analysis, if the subject is walking very slowly, then the user could digitize one image, skip one or more images, digitize another image, skip one or more images, etc.  The APAS will automatically use a linear interpolation to fill in the missing data.
 
   - The APAS-XP software option includes the CAPDV software module where video can be captured directly to the computer hard disk drive from up to five cameras simultaneously.  This significantly reduces the time required to capture video individually from multiple cameras and then synchronize the video together.
 
I hope this helps to answer your questions.  Please feel free to contact us for any additional information.
 
Sincerely,
 
John Probe
Ariel Dynamics, Inc.
Email:  ARIEL1@ix.netcom.com
 
 
 
----- Original Message -----
From: Gideon Ariel
To: JP
Sent: Friday, December 03, 2004 8:09 PM
Subject: FW: manual digitizing

 

Hi John:

  Please help her here.

Gideon

 

 


From: Tamra Meier [mailto:TJPT@msn.com]
Sent: Friday, December 03, 2004 11:21 AM
To: gideon@arielnet.com
Subject: manual digitizing

 

Dear Dr. Gideon Ariel,

 

I sincerely appreciate your time on the phone yesterday. Could you please answer one question for me. If I want to manually digitize one second of video at 60 frames per second, assuming it takes one minute to digitize each frame; will it take 60 minutes to complete this task?

 

You mentioned that you could provide me with the email for Vic Braden.  That would be greatly appreciated.

 

Thanks again,

 

Tamra Meier 

 

 

 

 

 



John-
 
The Control | Read only reads for the active view [there is only 1 AVI specified in the dialog which follows]. Use the Control | ReadMultiple to read for more than 1 view.
 
Jeremy
 
----- Original Message -----
From: John Probe
To: jeremy@arielnet.com
Sent: Friday, November 08, 2002 6:17 PM
Subject: Control Point "Read" Questions

Hi Jeremy,
 
I have a question concerning reading the digitized location of the control points. 
 
When I select the READ option to read the digitized locations of the control points with multiple windows open, the points only seem to be displayed on the active window.  However, if I then select CONTROL -> FINISH and then CONTROL -> VIEW, the points are displayed in all views.
 
Is this a "normal" condition?  Or, is this a feature?  Unless one knows better, it is easy to think that the Read function only worked on a single window.  Plus it does not provide the option to verify the point locations.
 
Let me know what you think on this.  Thanks! 
 
John

Hello Tom,

Thanks for the message.  There are several ways to handle this situation.
First, you can leave the rate set to a value of 60 images per second and
simply advance images without digitizing.  The APAS software will perform a
linear interpolation between the "known" points to fill in the missing
points.
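
As an illustration of that interpolation step, here is a minimal sketch (plain Python, illustrative only, not the APAS implementation) that linearly fills in the coordinates of images that were advanced without digitizing:

# Linearly interpolate (x, y) for skipped images between digitized ones.
def fill_missing(frames, coords):
    # frames: sorted image numbers that were digitized; coords: matching (x, y) pairs.
    filled = {}
    for (f0, (x0, y0)), (f1, (x1, y1)) in zip(zip(frames, coords), zip(frames[1:], coords[1:])):
        for f in range(f0, f1):
            t = (f - f0) / (f1 - f0)
            filled[f] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    filled[frames[-1]] = coords[-1]
    return filled

# Example: images 1 and 4 digitized, 2 and 3 skipped.
print(fill_missing([1, 4], [(100.0, 50.0), (130.0, 62.0)]))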

Another method to perform this would be to enter a "Skip Factor" in the menu
that precedes the digitizing process.  If you enter a skip factor of 1, then
every time you press the advance button, the software will skip one image
(thus making the speed equal to 30 Hz).  A skip factor of 2 would give a 20 Hz
rate, a skip factor of 3 would give 15 Hz, and so on.

The effective rate is determined by the original frame rate (60 Hz) divided by
(one plus the skip factor).
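
For reference, a minimal sketch of that arithmetic (plain Python, not APAS code):

# Effective digitizing rate = capture rate / (1 + skip factor).
def effective_rate(capture_rate_hz=60.0, skip_factor=0):
    return capture_rate_hz / (1 + skip_factor)

for skip in (0, 1, 2, 3):
    rate = effective_rate(60.0, skip)
    print(f"skip={skip}: {rate:.0f} Hz, {1.0 / rate:.4f} s between digitized images")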

I hope this answers your questions.  Please contact us for any additional
information.

Sincerely,

John Probe
Ariel Dynamics, Inc.
Email:  ARIEL1@ix.netcom.com


----- Original Message -----
From: "Dr. Tom Cairns" <cairns@ens.utulsa.edu>

Subject: Question


> John,
> If I wish to digitize at fewer than 60 frames per second, when do I tell
> this ti the digitize module?
> Tom


 

Hello Nancy,

I will look into whether there are any facilities close to you where you
could see the APAS in action.  I know that we have systems at
University of Missouri-Columbia (older APAS) and also at Washington
University Medical School-St. Louis but I do not know if they are willing
to provide any training or demonstrations.

In general, the 2D analysis should be fairly straightforward.  I will
provide a brief description below.  Also remember that each software module
has a lot of information in the associated help screens!

Filming
=======
You should record a "calibration fixture" with a minimum of 4 coplanar
points that will encompass the plane of the activity that you intend to
analyze.  For example, the four corners of a door frame might work.  Then
record the activity of interest as it takes place in this same plane.

CAPTURE MODULE
==============
Using one of the video cameras, or any VCR, connect the video output to the
video input of the Iomega Buz frame grabber and open the Capture module.
Give the file a name and set the Capture Parameters.  Then play the video
tape and select the GO icon when you are ready to capture the desired
portion of the video to an AVI file.  Each camera view should have two AVI
files: one for the calibration fixture and another for the sequence to be
analyzed.

TRIM Module
============
Since the computer captures video to an AVI file in real-time, you probably
captured much more data than is required for the analysis.  The TRIM module
is used to "clip-out" the desired portion that will be used for your study.

DIGITIZE Module
===============
Select FILE, SEQUENCE and NEW to begin a new sequence file for analysis and
specify the requested information.  Select FILE and NEW VIEW, specify the
information and retrieve the desired AVI file for digitizing.  Then select
CONTROL, DIGITIZE to retrieve and digitize the control fixture.  Select
CONTROL and FINISH and begin digitizing the AVI data file.

TRANSFORM Module
================
Open up the Transform module and select the desired data file to transform
into image space coordinates.  The software will automatically detect that
it is a 2D file based on the calibration points.  Select the 2D icon to
complete the transformation.

FILTER Module
==============
Open the Filter module and select the desired file to perform filtering of
the data.  The filter module is used to remove "random digitizing" error.
Proceed through each joint for the smoothing.

DISPLAY Module
==============
The Display module is used to present the results.  Select the NEW
icon and the 3D icon.  Then select the NEW 3D button to specify the parameters to
display/graph.  You can also display the stick figure and numerical data by
using the desired icons.

I hope this information is helpful.  I recommend that you also refer to the
Quick Reference section of the associated help screens.  This provides a
step-by-step approach to the basic APAS functions.  If you require a more
detailed explanation, you can find this in the help screens also.

Another option would be to perform the on-line tutorials from the Ariel
internet site.  The direct address is:

http://24.10.158.215/topics/Tutorials/default.htm

Please contact me for any additional information.






At 01:48 PM 11/11/1999 -0600, you wrote:
>
>
>We are trying to use Ariel for 2D applications, and have run into a spot of
>trouble.  It occured to me that our best bet for learning the system might
>be to watch someone who already uses it.  Could you tell us if any
>Universities or anyone else near the St. Louis area uses the Ariel systems?
>We could then contact them and perhaps speed up the learning curve!
>
>I'm glad Bob Proffer could help you with figuring out the payment situation.
>
>Thanks,
>
>Nancy
>
>
>
>Nancy Getchell, Ph.D.
>Division of Teaching and Learning
>University of Missouri-St. Louis
>8001 Natural Bridge Rd.
>St. Louis, MO  63121
>
>Phone: (314)516-5220
>fax:  (314)516-6442
>email:  Nancy_Getchell@umsl.edu
>
>


Hello Göran Sandström,

Thank you for your message.  The feature you are referring to can be found
in the TRIMMER module.  I assume that you are using the Ulead software to
capture the video.  The next step would be to open the captured AVI files in
the TRIMMER module.  Then select OPTIONS, VIDEO to open the Video Options
menu.  You will see an option in the upper right corner of the menu to
Separate Fields.  This option can be turned On/Off.  Once you have made the
necessary changes, then select FILE, SAVE TRIMMING.  This will allow the
Digitizing module to read the file as you specified.

If you do not have this option in the TRIM module, you can upgrade your
software by following the Upgrade Instructions posted on the internet.  The
direct address is:

   /topics/FAQ/APAS_upgrading_instructions.htm

As long as you follow the posted instructions, you will not affect the APAS
software license.  If you do not follow the instructions, the license may be
damaged/erased and result in additional charges for a new license.

Please feel free to contact me for any additional information.

Sincerely,

John Probe
Ariel Dynamics, Inc.
Email:  ARIEL1@ix.netcom.com




----- Original Message -----
From: "Gran Sandstrm" <goran.sandstrom@niwl.se>
To: <support@arielnet.com>
Sent: Wednesday, January 23, 2002 4:08 AM
Subject: APAS frame handling


> Hi
>
> We have a APAS system complete with dual 120Hz  DV-cameras, firewire (IEEE
> 1394) and APAS software (We bought it rougly two years ago).
>
> In a present project were we have collected the data from other video
> sources, the spatial resolution is of much higher importance then the
> temporal resolution. In fact we have even reduced the frame rate to reduce
> the total amount of data, since it involves manual marking of joints.
>
> Now, in the Digitize module the software seems to split up the odd and
> even line frames into individual frames in order to increase temporal
> resolution. The drawback is of course the reduced spatial resolution.
> We would like to have the possibility to select in the program that it
> should treat "Full frames" instead,  e.i. the combined odd and even line
> frames, as is done in ordinary video capturing, editing and displaying
> softwares (Adobe Premiere, MS media player etc). It is very evident that
> the spatial resolution is much better when viewing the video files in
> those softwares than in the Digitze module.
>
> Is there any updated version of the Digitize program with such features.
> If not, could it be implemented?
>
> Sincerely,
>
> Göran Sandström
> Res. Eng.
>
> National institute of working life
> Center for musculoskeletal Research
> Box 7654
> 907 41 Umeå
> Sweden
>
> Tel. +46 90 176070
> Fax. +46 90 176116
>
>


 

Sun-

Thank you for your list of questions. I am happy to answer them! Indeed it is
very important to understand these variables if one is to get the most from
the autodigitizing.

> > < in Global options >
> > 1. Initial #Pix Min/Max and should be set to
When the software performs the initial locating of a spot, these numbers set
limits on the size a spot can be during this initial locate phase. If the
spot is very small and one sets the min too high, the software may reject an
obvious spot because it does not contain a sufficient number of pixels, and
similarly for large spots if the max is set too low. These are outer
limits and should be set with a large margin of error. Since one points to
the image location before this initial locate operation takes place, we suggest
setting Min#=2 and Max#=500. If the spots are very small you might set Min#=1.

> > 2. #Pix Min/Max : There is only one input
This is the default percentage that a spot may grow or shrink by when
advancing to the next frame. For example, if this is 50% and in one frame the
spot has 50 pixels, then in the next frame the spot cannot have more than
75 pixels or fewer than 25 pixels. This means that the software will reject
any spot that is outside these limits. If this percentage is too large, the
software may find extraneous spots that are closer to the expected location.
If it is too small, the software may reject an obvious spot because lighting
changes or motion relative to the camera have resulted in the spot
growing/shrinking by too much. This is particularly important for small
spots, where a few pixels of change may be a sizable percentage.
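
A minimal sketch of that limit (plain Python, illustrative only, not APAS code):

# Allowed spot size (in pixels) for the next frame, given a +/- percentage limit.
def allowed_pixel_range(previous_size, pct=50.0):
    return previous_size * (1.0 - pct / 100.0), previous_size * (1.0 + pct / 100.0)

# A 50-pixel spot with a 50% limit may have between 25 and 75 pixels in the next frame.
print(allowed_pixel_range(50, 50.0))   # (25.0, 75.0)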

> > 3. AutoCalc Min/Max
If this option is checked, then for every frame the software dynamically
calculates a new Min/Max # pixels based on the size of the spot in the previous
frame. If it is NOT checked, then the Min/Max # pixels is static, meaning that
it does not change from frame to frame but stays fixed unless explicitly
changed.

> > 4. Relative
If this is checked, then the number entered in the #Pix Min/Max is a
percentage of the spot size; otherwise it is a fixed number of pixels. If this
is not checked, then the spot size can grow or shrink by a fixed number of
pixels from frame to frame.

> > 5. Auto enhance : may be Gamma enhance?
Yes, it may, but within the confines of the three spot locator levels. When
"Standard" is selected, no image processing is applied to the image when the
spot locating takes place. When "Enhanced" is selected, the following series
of filters is applied to the image area of interest before the spot locating
takes place: Median followed by Gamma, with the Gamma factor selected using
the AutoEnhance slider bar. On the slider, the range 0 to 100 corresponds to a
Gamma of 1 to 2. If "Enhanced+Edge" is selected, an edge filter is added as
well. The enhancements are applied to the region of the image where the spot
is expected, so one does not see the whole image transformed. However, the
before/after images shown in the Auto-Locate Properties
are indeed the filtered images.
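
A small sketch of the slider mapping (plain Python; the power-law form of the gamma adjustment below is my assumption for illustration, not a statement of how the APAS filter is implemented):

# Map the AutoEnhance slider (0..100) onto a gamma of 1..2, per the description above.
def slider_to_gamma(slider_value):
    return 1.0 + slider_value / 100.0

# Conventional power-law gamma adjustment of an 8-bit pixel value (assumed form).
def gamma_enhance(pixel_value, gamma):
    return round(255.0 * (pixel_value / 255.0) ** (1.0 / gamma))

g = slider_to_gamma(50)            # slider at 50 -> gamma 1.5
print(g, gamma_enhance(60, g))     # a dark pixel is brightened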
> >
> > < in the small window poping up on the first frame of auto digi >
> > 1. Min/Max
These are the percent of spotsize [#Pixels] that will be imposed on the next
frame for this spot if relative, or absolute #pixels if not. These numbers
initially are the same as those entered in the Global Options but may be
changed point by point if desired.

> > 2. Threshold
This is the minimum brightness that a pixel must have to be considered part
of a spot. Pixel brightness ranges from 0 to 255. The higher the threshold,
the fewer pixels will be considered part of a spot.
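
For illustration (plain Python, not APAS code), counting the pixels that survive a given threshold:

# Only pixels at or above the threshold brightness count as part of the spot.
def spot_pixels(image, threshold):
    return [(r, c) for r, row in enumerate(image)
            for c, value in enumerate(row) if value >= threshold]

patch = [[10, 40, 35],
         [30, 220, 200],
         [20, 180, 25]]
print(len(spot_pixels(patch, 150)))   # 3 pixels survive a threshold of 150
print(len(spot_pixels(patch, 200)))   # only 2 survive a threshold of 200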


I hope these responses help explain how the spot locating algorithm is
controlled. It is a pleasure to explain the system.

Best, Jeremy

> ----- Original Message -----
> From: Sun G. Chung <suncg@medicine.snu.ac.kr>
> To: Jeremy Wise <Wise80x86@aol.com>
> Cc: Gideon Ariel <gideon@arielnet.com>
> Sent: Wednesday, October 27, 1999 4:19 PM
> Subject: Help!!!
>
>
> > Hi Jeremy,
> >
> > How are you?
> > I do the whole process of gait analysis from capturing to report
> generation,
> > at least one patient per week. And the auto-digi works so fine. But I
> found
> > that the auto digitization ability varies very much according to the
> setting
> > on Global Options. Sometimes it is very excellent and sometimes it is
not.
> >
> > This kind of variation seems to be increased in V4.4.  So, if I should
> know
> > the meaning and effects of all the variables in Global options and if I
> > could use and handle at ease, our work will be done more easily.
> >
> > The list of things that I would like to know is
> >
> > < in Global options >
> > 1. Initial #Pix Min/Max
> > 2. #Pix Min/Max : There is only one input
> > 3. AutoCalc Min/Max
> > 4. Relative
> > 5. Auto enhance : may be Gamma enhance?
> >
> > < in the small window poping up on the first frame of auto digi >
> > 1. Min/Max
> > 2. Threshold
> >
> > If I can understand the above variables, I will be able to maximize the
> > ability of auto digi for each patient with various digitizing condition.
> > Small or large, bright or dark skin, ... ...
> >
> > If you have no time to write them, please direct me other webpages or
> > references. But your words would be the best.
> >
> > Please, help me.
> >
> > Sun


From: Wise80x86@aol.com
Date: Thu, 4 Feb 1999 10:40:14 EST
To: m.almond@ucsm.ac.uk
Cc: gideon@arielnet.com, J.Brond@mfi.ku.dk, john@arielnet.com
Subject: APAS Digi4 problem

Gideon asked me to check into the problem you reported regarding the Digi4_32
module when switching back to normal data upon completion of digitizing
control points. Thank you for reporting the problem. It is always our policy
to fix problems as quickly as possible.

This problem has been fixed and will be available to you as soon as the next
software revision is available. You will be able to download it from the net
at that time. I suggest that you keep in contact with Gideon and he can let
you know as soon as it is available.

In the meantime there are several workarounds for this problem. The problem
occurs when a second AVI file is opened from the "Control" menu item for use
with the control point digitizing while having an AVI file open for digitizing
data. There is a hitch in switching back to the original AVI file. Any one of
the following should circumvent the problem.

1) Before you go to digitize control points, select the AVI file which
contains the image for the control points as if you were selecting the AVI for
your regular data tracing. Then after you go to digitize your control points,
use the "Select Image" submenu item to select the image containing the control
points. After you have selected "finish" and are back to tracing data you will
need to reselect in the usual way the AVI file appropriate for data.

2) Digitize your control points first. When you first start a view, before you
open an AVI for tracing your data, digitize your control points.

3) After you have finished digitizing your data, close the view, reopen it
without opening an AVI and proceed to digitize your control points. At this
point there would not be a problem continuing to digitize your data.

I hope this helps & we apologize for any inconvenience.

Sincerely,
       Jeremy Wise
       Dir R&D, Ariel Dynamics



>Hi Gideon
>
>I seem to be experiencing a exception fault in the digitise module.
>I've copied the info that it gave me I don't know if it will help.
>
>The problem occurs when I've finished digitising the movement and then I
>open the control frame, and digitise that (the control frame is on a
>separate avi file) then when I come to close the file down I gives me
>the exception fault
>
>DIGI4_32 caused an exception 10H in module DIGI4_32.EXE at
>0137:00414975.
>Registers:
>EAX=01242c80 CS=0137 EIP=00414975 EFLGS=00010206
>EBX=00000001 SS=013f ESP=005af614 EBP=005af648
>ECX=01242c80 DS=013f ESI=01242e70 FS=2f6f
>EDX=005af678 ES=013f EDI=01242c80 GS=0000
>Bytes at CS:EIP:
>dd 5d ec 8b 55 dc db 82 ac 02 00 00 8b 45 dc da
>Stack dump:
>013f2999 bff74277 11988628 6d2417b7 01242c80 00288670 bff72999 bff62376
>01242c64 2f6f013f 0059864a 005af828 00448c70 005af6a4 0041b8b5 005af670
>
>Hope this helps
>Matt
>


Dear Dr. Dohle,

Gideon Ariel has asked me to answer your inquiry about the opposition
movement and automatic tracking of that.

We have had the apas system here for several years and use it on a daily
basis for 3D analysis of walking. It should be perfectly OK to use the
system for tracking the thumb movement in 3 dimensions. After experimenting
a little with the type of markers and their size, it will be possible to
track the movement automatically. You will have to use at least two
videocameras and a rather small calibration cube. You may build the cube
yourself, but I guess it is easier to order one from Ariel Dynamics.

The 3D coordinates can be exported in ascii, but you may also export the
apas files directly to Matlab, we always do that. Then it is easy to perform
an FFT in Matlab.

The apas system can also perform a frequency (FFT) analysis, but I think it
will be better to use Matlab or another signal processing program.

You are welcome to visit my lab if you want to see how we use the system
here. We have been very satisfied with the system and also with the service
from the company.

Sincerely yours

Erik B. Simonsen, associate prof. Ph.D.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Erik B. Simonsen, Associate Professor, M.Sc. Ph.D.
Institute of Medical Anatomy section C.
Panum Institute. University of Copenhagen
Blegdamsvej 3., DK-2200 Copenhagen N
DENMARK
Phone:  +45 35 32 72 30 (work)  Fax: +45 35 32 72 17
Phone:  +45 45 80 93 04 (home)  
http://www.biomechanics.mai.ku.dk/ebs.htm
E-mail: E.Simonsen@mai.ku.dk
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



 

Sun-

You do not need to select anything in the global options to activate centroid detection. The program logic has been changed so that any time both the Control & Shift keys are held down when a point is digitized with the mouse, the program will search for a marker in the general location of the cursor. The only item in the global options you might want to change is the "Initial Locate Dialog". If this is checked, every time this Control & Shift capability is used an "Initial Locate" dialog will appear giving information about the spot locating process. If unchecked, this dialog is skipped.

I would suggest checking the "weighted averages" options but it is not essential.

Best, Jeremy

 


Gideon-

There is a misunderstanding here. The Weighted Average factors in the "brightness" of the pixels when locating the centroid. Bright pixels count more than not-so-bright ones. Under no circumstances is the "brightest" pixel used as the location of the marker. If the pixels are UnWeighted, all pixels considered part of the marker [above the marker threshold] are weighted equally in locating the centroid.
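
A minimal sketch of the two modes (plain Python, assumed arithmetic for illustration, not the APAS source):

# pixels: list of (row, col, brightness) for pixels above the marker threshold.
def centroid(pixels, weighted=True):
    weights = [b if weighted else 1.0 for (_, _, b) in pixels]
    total = sum(weights)
    r = sum(w * p[0] for w, p in zip(weights, pixels)) / total
    c = sum(w * p[1] for w, p in zip(weights, pixels)) / total
    return r, c

marker = [(10, 10, 255), (10, 11, 120), (11, 10, 120), (11, 11, 120)]
print(centroid(marker, weighted=True))    # pulled toward the brighter pixel, not placed on it
print(centroid(marker, weighted=False))   # plain average of positions: (10.5, 10.5)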

Best, Jeremy



Hi Erik,

Nice to hear from you.

Yes, the contrast of the AVI is very good and we can digitize an APAS view in
about 1 or 2 minutes. It is a secret how to get the contrast, but I will tell
you in appreciation of the book introduction. Hahaha.

We use light sources for each of the cameras. The light should be directed
along the direction of the camera shot, and we use reflective markers. You may
do the same as we do.

The secret is to increase the shutter speed to 1/2000 or 1/1000. Then you can
get a very high-contrast marker shape even if you capture the video at a
compression rate of 50-60 kB/frame.

Hope this would be helpful.

Sun


-----Original Message-----
From: Erik B. Simonsen <E.Simonsen@spam.mai.ku.dk>
To: suncg@medicine.snu.ac.kr <suncg@medicine.snu.ac.kr>
Sent: Thursday, November 11, 1999 7:47
Subject: markers


>Dear Sun,
>
>I am happy to hear that I have introduced you to the book of Kit V. Now I
>have a question for you. Looking at pictures from your gaitlab on Gideons
>homepage, I would VERY MUCH like to know, how you obtain such contrast for
>the white markers. I often dress the subjects in black to obtain better
>contrast, but the pictures from your lab look fantastic and easy to
digitize
>automatically. Are the pictures actually photographs or video ?????
>
>Looking very much forward to meet you.
>
>Erik
>
>++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>Erik B. Simonsen, Associate Professor, M.Sc. Ph.D.
>Institute of Medical Anatomy section C.
>Panum Institute. University of Copenhagen
>Blegdamsvej 3., DK-2200 Copenhagen N
>DENMARK
>Phone:  +45 35 32 72 30 (work)  Fax: +45 35 32 72 17
>Phone:  +45 45 80 93 04 (home)
>http://www.biomechanics.mai.ku.dk/ebs.htm
>E-mail: E.Simonsen@mai.ku.dk
>+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


Digitizing PCX Files:

Hello Mario,

Gideon has asked me to provide a better description of the digitizing
process using the PCX files.  We do not normally use the PCX format for
digitizing.  This was an option provided as a transition to the Windows
environment.  The older APAS-DOS software could capture in either VID or
PCX formats while the APAS-WINDOWS software could capture in either PCX or
AVI.  The PCX format was the "common format" between the two operating
environments.  Anyway, I will list the instructions below.

NOTE:
The following description assumes the hard disk drive has been partitioned
to the C, D and E drives.  The APAS program files are on C, the data (stick
figure) files are on the D drive and the video (PCX) files are on the E
drive.  This also assumes that you have already captured the desired files
in PCX format.

CONFIGURING THE PCX FILES
=========================
The PCX files must be configured properly so the APAS software can read
them.  When images are captured in the PCX format using the "older" APAS
hardware and software, this is done automatically.  However, since the
APAS was not used to capture these images, it must be performed manually.

The individual PCX pictures must reside in a directory with the same name
as the capture file.  Using the files that I sent yesterday, there are 5
PCX images and the Capture file was named REACH1.  Therefore, the REACH1
directory contains the following files.

E:\REACH1\REACH1.PCX
E:\REACH1\REACH2.PCX
E:\REACH1\REACH3.PCX
E:\REACH1\REACH4.PCX
E:\REACH1\REACH5.PCX

There must also be a *.PCL file on the root directory.  This is just a text
file that tells the APAS software the order of the PCX images.  This file
must also have the same name as the PCX directory.  The E:\REACH1.PCL file
contains the following information:

e:\Reach1\Reach1.pcx
e:\Reach1\Reach2.pcx
e:\Reach1\Reach3.pcx
e:\Reach1\Reach4.pcx
e:\Reach1\Reach5.pcx

This PCL file tells the APAS software the path and order of the PCX images
to be retrieved.
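
If you have many captures to prepare, a small script can build the PCL index for you.  This is just a sketch (plain Python, written around the REACH1 example above; sorting by file name assumes your images are numbered so that name order is image order):

import os

# List the .PCX images in a capture directory, in name order, into a .PCL text file.
def write_pcl(pcx_dir, pcl_path):
    names = sorted(f for f in os.listdir(pcx_dir) if f.lower().endswith(".pcx"))
    with open(pcl_path, "w") as pcl:
        for name in names:
            pcl.write(os.path.join(pcx_dir, name) + "\n")

# write_pcl(r"E:\REACH1", r"E:\REACH1.PCL")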


READY TO DIGITIZE
==================
1.  Open the DIGITIZE module from the APAS SYSTEM folder.
2.  Select FILE, SEQUENCE and NEW to name a new sequence.
3.  Enter the Sequence Parameters (Title, Units,#Pts, # Ctrl Pts, Type,
Height, Weight, Point IDs)
4.  Select FILE, NEW VIEW and enter the View Information.
5.  Select FILE, OPEN PCX Images (or click the PCX icon).
6.  Select the desired PCL file for digitizing.
7.  Select OK at the View File Information menu.
8.  The first PCX image will be displayed and you are now ready to begin
digitizing.

I have written these steps as I went through the process myself using the
current APAS Revision 4.9 software and the REACH1 PCX files.  The PCX files
appeared without any problems.

I hope this information is helpful.  Please contact us for any additional
information.

Sincerely,

John Probe
Email:  ARIEL1@ix.netcom.com




----- Original Message -----
From: Mario Lamontagne <mlamon@uottawa.ca>
To: Gideon Ariel <gideon@arielnet.com>
Sent: Wednesday, January 19, 2000 7:24 AM
Subject: RE: PCX files


> Gideon,
>
> We must be very stupid but it is not working on all the systems we have in
> the lab. I have tried on my laptop which you have installed tha APAS. What
> do you mean we are doing wrong in the setup?
> Which information do you put in the sequence?
> What frame rate do you put in the view file?
> Any Help it would be appreciated.
>
> Thanks in advance
>

++++++++++++++++++++++++++++++++++
Ariel Dynamics, Inc.
4891 Ronson Court
Suite F
San Diego, California  92111  USA
(858) 874-2547 Tel
(858) 874-2549 Fax
Email:  ARIEL1@ix.netcom.com
Web Site:  /
++++++++++++++++++++++++++++++++++

-----Original Message-----
From: Ariel [mailto:ariel1@ix.netcom.com]
Sent: Tuesday, February 15, 2000 1:39 PM
To: mlamon@uottawa.ca
Cc: malha017@uottawa.ca
Subject: New Digitize Module

Hello Mario and Mouafak,

There are two options for getting the new Digitize file (that supports BMP format). You can download the latest version from the Ariel internet site, or you can simply replace the "old" file with the "new" file. As always, we strongly recommend that you make a backup (or save a copy) of the "old" file prior to any updates.

I have attached a copy of the new file. You should install it in the exact same location as the current DIGI4_32.EXE program and then follow the instructions as if it were a PCX file. The software now detects either PCX or BMP format.

Please let me know if you have any additional questions.

Sincerely,

John Probe
Email: ARIEL1@ix.netcom.com



========================================================
X-From_: ariel1@ix.netcom.com Tue Feb 15 11:57:28 2000
Reply-To: "ariel1 at netcom" <ariel1@ix.netcom.com>
From: "ariel1 at netcom" <ariel1@ix.netcom.com>
To: <mlamon@uottawa.ca>
Cc: "John Probe" <john@arielnet.com>,
"Jeremy - Sportsci" <jeremy@sportsci.com>
Subject: new version
Date: Tue, 15 Feb 2000 09:50:06 -0800
X-MSMail-Priority: Normal
X-MimeOLE: Produced By Microsoft MimeOLE V5.00.2314.1300

Hi Mario: The new version is on the net Gideon




Hello Susan,

I will provide answers below each of your questions.
Please contact me for any additional information.

John Probe
Email: ARIEL1@ix.netcom.com
============================

At 04:56 PM 03/31/2000 -0500, you wrote:
>I have been exploring how to use our somewhat new system and I have
>several questions/problems:
> 1. Can you graph the raw data (prior to filtering)? If so, how? If
>not, why not?
>
Yes! These procedures can be found in the Display program help file. Raw data can be graphed before or after filtering using the following steps:
a. Open the display program
b. Select the 3D icon and select the desired sequence.
c. Select the NEW 3D button to specify the data trial.
d. You will see the option to select RAW POSITION in the QUANTITY column.
e. Select the OK button to graph the data.


> 2. Can you export the raw data (before filtering)? If so, how? If
>not, why not?
>
Yes! These procedures can be found in the Display program help file. Raw data can also be exported using the following steps.
a. Follow steps a through e listed above
b. Select GRAPH, DATA to display the graph options. Make certain that RAW ONLY is selected in the Curves section.
c. Select OK to graph the data.
d. Select EXPORT, WORKSHEET, NEW and name the worksheet.
e. Select the OPEN button to create the worksheet.
f. Select EXPORT, WORKSHEET, SAVE to display the Export Channel Options menu.
g. Enter the Caption, X-Axis Start/End and X Increment values and press the OK button to save the data to the worksheet.


> 3. Why do some files for digitizing start at a time other than 0.0
>sec?
>
The 0.0 time is determined by the Synchronizing Point specified in the Digitizing Module. If no Synch Point is specified by the user, the software assumes that the first frame is the Synch Point and therefore sets the time for the first image equal to 0.0 seconds.


> 4. How can you reset the timer to start at 0.0 s?
>
The Time Value can be set to 0.0 only by specifying the Synch Point at the point where you wish the time to equal 0.0 seconds.


> 5. Why won't the export feature in 'Display' export the data points
>if the time does not start at 0.0? (I have a file that starts at 0.700
>sec going to 1.2 s. When I try to export, I get header only, no data)
>
I have tried to repeat this situation and the data exports as expected. Therefore, there is either something wrong in the procedure or something different in your data file. I suggest you follow the steps listed in Questions 1 & 2 above. If you are still experiencing problems exporting data, then you should attach the files to an Email and send them to me for examination. We would need the *.CF, *.1t (and all other related *.t files), and the *.3d file where * indicates your sequence name.


> 6. I tried to pad and manipulate the time in 'Display' and
>'Transform' but to no avail.
>
See Answers 3 & 4 above.


> 7. HEEEELLLLLLPPPPPPPP!!!!!!!!!!
>
We are always happy to assist! Hope you have a great weekend!


>Sue Chinworth
>Elon College Dept of Physical Therapy Education
>2085 Campus Box
>Elon College, NC 27244
>336-538-6861
>chinwort@elon.edu
>
>


Hello Jim,

Thank you for your Email message.  If you look at the SEQUENCE information
for a project and click the SEGMENTS button, you will see, one by one, all the
segments defined in the project along with the joints defining the segments, the
fractional distance between the endpoints for the segment CG, and the mass
of the segment, normally expressed as a percent of the total body mass.  One
can use the default values or the user can specify whatever segmental model
they wish.

The center of mass for the entire body is then defined as the average of the
segmental centers of mass weighted by the fractional mass of each segment.
Thus:
      Xcog = SUM(mass[i]*x[i]) / SUM(mass[i])
      Ycog = SUM(mass[i]*y[i]) / SUM(mass[i])
      Zcog = SUM(mass[i]*z[i]) / SUM(mass[i])

where the sum is over all segments, x[i], y[i], z[i] are the [x,y,z]
coordinates of the ith segment's center of mass, and mass[i] is the mass of the ith segment.
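
In code form, the same weighted average (plain Python, illustrative only):

# segments: list of (mass, (x, y, z)) for each segment's center of mass.
def body_cog(segments):
    total_mass = sum(m for m, _ in segments)
    return tuple(sum(m * cm[axis] for m, cm in segments) / total_mass
                 for axis in range(3))

# Two hypothetical segments; relative masses (fractions of body mass) work the same way.
print(body_cog([(0.50, (0.0, 1.0, 1.2)), (0.05, (0.3, 1.4, 0.9))]))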

I hope this answers your questions.

Sincerely,

John Probe
Email:  ARIEL1@ix.netcom.com



At 10:53 PM 04/11/2000 -0800, you wrote:
>Dear Dr. Ariel,
>
>I am Jim from the Chinese University of Hong Kong. I would like to ask some
>question about the APAS system. As we have a "deal" before, I start to use
>the APAS-1999 system before you come to Hong Kong at June.
>
>Firstly, I would like to give a excellent comment to APAS system which
>design was easy to learn and user friendly. I only spent two days time for
>learning and working with the instruction, now I can perform the 2-D motion
>analysis under APAS-1999 system. However, I would like to solve a problem
>before I go further use of APAS-system.
>
>My problem is:
>When I creat a new sequence under the DIGI4 module, I need to creat a new
>model by selecting "Type" to "User-Defined" and then select "Segments" to
>define the segment name, segment connections, and segement mass
>information. I would like to get more instruction about the creation of
>model by "User-Defined".
>
>Sometimes, I would like to calculate the CG of the upper body only or creat
>some model for non-human study, therefore I need to creat the User-Defined
>model and to indicate the calculation method of CG. How can I get more
>information about the model design, method of CG calculation, and segment,
>joint defination method under new sequence creation ? Please let me know.
>
>I am looking forward to hearing form you. Thank you.
>
>Best regards,
>
>Jim
>
>
>
>
>
>Biomechanics Laboratory
>Department of Sports Science and Physical Education
>The Chinese University of Hong Kong
>Shatin
>Hong Kong
>Tel : (852) 2609 6079
>Fax : (852) 2603 5781
>
>


Dear John,

Thank you for your reply. I got a lot of ideas on the model design.
Moreover, I have downloaded the user manual, which is very helpful to me.

Thank you for your attention.

Best regards,

Jim


At 03:02 PM 4/11/00 -0700, you wrote:
>Hello Jim,
>
>Thank you for your Email message.  If you look at the SEQUENCE information
>for a project & click the SEGMENTS button, one by one all the segments
>defined in the project along with the joints defining the segments, a
>fractional distance between the endpoints for the segment CG, and the mass
>of the segment normally expressed as a percent of the total body mass. One
>can use  default values or the user can specify whatever segmental model
>they wish.
>
>The center of mass for the entire body is then defined as the average of the
>segmental center of masses weighted by the tfractional mass of each segment
>Thus:
>      Xcog=SUM(mass[i]*x[i]) / SUM(mass[i])
>      Ycog=SUM(mass[i]*y[i]) / SUM(mass[i])
>      Xcog=SUM(mass[i]*z[i]) / SUM(mass[i])
>
>where the sum is over all segments and x[i],y [i], z[i]  are the [x,y,z]
>coordinates of the ith segment cm, and mass[i] is the mass of the ith
segment.
>
>I hope this answers your questions.
>
>Sincerely,
>
>John Probe
>Email:  ARIEL1@ix.netcom.com
>
>
>
>At 10:53 PM 04/11/2000 -0800, you wrote:
>>Dear Dr. Ariel,
>>
>>I am Jim from the Chinese University of Hong Kong. I would like to ask some
>>question about the APAS system. As we have a "deal" before, I start to use
>>the APAS-1999 system before you come to Hong Kong at June.
>>
>>Firstly, I would like to give a excellent comment to APAS system which
>>design was easy to learn and user friendly. I only spent two days time for
>>learning and working with the instruction, now I can perform the 2-D motion
>>analysis under APAS-1999 system. However, I would like to solve a problem
>>before I go further use of APAS-system.
>>
>>My problem is:
>>When I creat a new sequence under the DIGI4 module, I need to creat a new
>>model by selecting "Type" to "User-Defined" and then select "Segments" to
>>define the segment name, segment connections, and segement mass
>>information. I would like to get more instruction about the creation of
>>model by "User-Defined".
>>
>>Sometimes, I would like to calculate the CG of the upper body only or creat
>>some model for non-human study, therefore I need to creat the User-Defined
>>model and to indicate the calculation method of CG. How can I get more
>>information about the model design, method of CG calculation, and segment,
>>joint defination method under new sequence creation ? Please let me know.
>>
>>I am looking forward to hearing form you. Thank you.
>>
>>Best regards,
>>
>>Jim
>>
>>
>>
>>
>>
>>Biomechanics Laboratory
>>Department of Sports Science and Physical Education
>>The Chinese University of Hong Kong
>>Shatin
>>Hong Kong
>>Tel : (852) 2609 6079
>>Fax : (852) 2603 5781

Biomechanics Laboratory
Department of Sports Science and Physical Education
The Chinese University of Hong Kong
Shatin
Hong Kong
Tel : (852) 2609 6079
Fax : (852) 2603 5781


Hello Jim,

I will provide answers below each of your questions.

John Probe
+++++++++++++++++++

At 11:12 PM 04/12/2000 -0800, you wrote:
>Dear John,
>
>Thank you for your reply. I got a lot of idea on the model design.
>Moreover, I have downloaded the user manual which is very helpful to me.
>
>I would like to ask several questions about the User-Defined procedure
>under the SEQUENCE information.
>
When using the APAS to digitize points, the user has the option of
selecting System Defined or User Defined point types.  The System
option allows the specification of body joints from a predefined standard
list of names.  The User-defined sequence type requires that the user enter
a name for each joint being digitized.  The System option should be used
whenever human subjects are being digitized.  User-defined would then be
used for non-human subjects, such as for analysis of race horses.  A
sequence may initially be defined using System joint names; then the
sequence type can be changed to User-defined to specify the names of
certain non-standard joints and segments.


>1. What is the meaning of "RadGyr"? It didn't mention in the user manual.
>
>2. There was "Abs" and "Rel" under the "Type". What does it mean ?
>
When using System-defined units, the parameter information is automatically
entered (based on Dempster's algorithm) for each segment.  However, when
User-defined points are used, the APAS does not know this information and
it must be manually entered by the user.  This information is entered by
selecting the SEGMENTS button in the Sequence Parameter menu.

When the SEGMENTS button is selected, the Point Connection Table will be
displayed.  The top line indicates the current point.  The second line
displays the connection information for the current point.  Each point may
connect to as many as 5 other (lower numbered) points.  For example, Point
#2 can connect to Point #1; however, Point #1 cannot connect to Point #2
because 1 is lower than 2.

In the connection information there are several parameters that must be
entered.
ConnectTo specifies the point to which the current point will connect.
Segment is used to name the segment defined by the connection of these two
points.
Mass is the distribution information that may be entered for each segment if
center of gravity and kinetic data are required for analysis.
CGFrac (Center of Gravity Fraction) is entered as a percentage of the
distance between the previous point and the current point that define the
segment.  This information is always entered in Relative terms.
RadGyr is the radius of gyration for the defined segment and is entered
in the same format and terms as the segment CG information.
Type is used to specify the Mass information as Relative or Absolute.
Relative Mass is entered as a percentage of the total body mass.  Absolute
Mass is entered in weight units (kilograms or pounds).
Color is used to specify the color of the defined segment (currently not
implemented).
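
To make the CGFrac definition concrete, here is a short sketch (plain Python; the endpoint coordinates and the 0.433 fraction are made-up illustrative values, not APAS defaults):

# The segment CG lies cg_frac of the way from the previous point to the current point.
def segment_cg(previous_point, current_point, cg_frac):
    return tuple(p + cg_frac * (c - p) for p, c in zip(previous_point, current_point))

# Hypothetical thigh endpoints (hip, knee) in metres with an assumed CGFrac of 0.433.
print(segment_cg((0.00, 0.00, 0.90), (0.05, 0.02, 0.48), 0.433))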


>3. What is the meaning of C.G. if I designed a football model which is a
>football player and the football ? Does the C.G. is the combine C.G. of the
>player and the ball ?
>
As stated above, the CG is the fractional distance between the two
endpoints for the center of gravity of the defined segment.  Therefore, the
answer to this question depends on how the user defines the segments.  If
the arm and ball are defined as separate segments, then each segment would
have its own segment information.  If the ball is included as part of the
arm of the player, then the ball information should be added to the arm
information to calculate the segment information for the single segment.


>4. If question 3 is yes, how can I calculate the football player C.G. and
>the ball C.G. separately?
>
See answer to #3 above.


>5. What is the fractional distance between the endpoints for the segment
>CG, and the mass
>of the segment you are using? Any references?
>
When System-Defined units are specified, the APAS software automatically
uses the equations from W. Dempster 1955.  Dempster's segment information
is based on the Height and Weight of the subject.  This information is
entered in the Sequence Information menu.  The segment information table
allows the user to enter the segment data from any desired source.

>I am looking forward to hearing form you. Thank you for your attention.
>
>Best regards,
>
>Jim
>
>
>
>
>At 03:02 PM 4/11/00 -0700, you wrote:
>>Hello Jim,
>>
>>Thank you for your Email message.  If you look at the SEQUENCE information
>>for a project & click the SEGMENTS button, one by one all the segments
>>defined in the project along with the joints defining the segments, a
>>fractional distance between the endpoints for the segment CG, and the mass
>>of the segment normally expressed as a percent of the total body mass. One
>>can use  default values or the user can specify whatever segmental model
>>they wish.
>>
>>The center of mass for the entire body is then defined as the average of the
>>segmental center of masses weighted by the tfractional mass of each segment
>>Thus:
>>      Xcog=SUM(mass[i]*x[i]) / SUM(mass[i])
>>      Ycog=SUM(mass[i]*y[i]) / SUM(mass[i])
>>      Xcog=SUM(mass[i]*z[i]) / SUM(mass[i])
>>
>>where the sum is over all segments and x[i],y [i], z[i]  are the [x,y,z]
>>coordinates of the ith segment cm, and mass[i] is the mass of the ith
>segment.
>>
>>I hope this answers your questions.
>>
>>Sincerely,
>>
>>John Probe
>>Email:  ARIEL1@ix.netcom.com
>>
>>
>>
>>At 10:53 PM 04/11/2000 -0800, you wrote:
>>>Dear Dr. Ariel,
>>>
>>>I am Jim from the Chinese University of Hong Kong. I would like to ask some
>>>question about the APAS system. As we have a "deal" before, I start to use
>>>the APAS-1999 system before you come to Hong Kong at June.
>>>
>>>Firstly, I would like to give a excellent comment to APAS system which
>>>design was easy to learn and user friendly. I only spent two days time for
>>>learning and working with the instruction, now I can perform the 2-D motion
>>>analysis under APAS-1999 system. However, I would like to solve a problem
>>>before I go further use of APAS-system.
>>>
>>>My problem is:
>>>When I create a new sequence under the DIGI4 module, I need to create a new
>>>model by setting "Type" to "User-Defined" and then selecting "Segments" to
>>>define the segment names, segment connections, and segment mass
>>>information. I would like to get more instruction about the creation of a
>>>model by "User-Defined".
>>>
>>>Sometimes, I would like to calculate the CG of the upper body only or create
>>>some model for a non-human study, therefore I need to create the User-Defined
>>>model and to indicate the calculation method of the CG. How can I get more
>>>information about the model design, method of CG calculation, and segment and
>>>joint definition method under new sequence creation? Please let me know.
>>>
>>>I am looking forward to hearing from you. Thank you.
>>>
>>>Best regards,
>>>
>>>Jim
>>>
>>>
>>>
>>>
>>>
>>>Biomechanics Laboratory
>>>Department of Sports Science and Physical Education
>>>The Chinese University of Hong Kong
>>>Shatin
>>>Hong Kong
>>>Tel : (852) 2609 6079
>>>Fax : (852) 2603 5781
>>>
>>>
>>
>>
>
>Biomechanics Laboratory
>Department of Sports Science and Physical Education
>The Chinese University of Hong Kong
>Shatin
>Hong Kong
>Tel : (852) 2609 6079
>Fax : (852) 2603 5781
>
>


 

Hello Rick,

Thank you for your message.  In order to obtain 3-D information, each
digitized point must be seen simultaneously by a minimum of two cameras.
These cameras should be approximately 90 degrees apart. 

Each camera must also record a calibration fixture.  This calibration
device can be made of almost anything as long as the X, Y, Z coordinates of
the calibration points are precisely known (relative to a single origin).

In general, the process for a 3D analysis is listed below.
1.  You should capture the AVI files from each camera.  Each camera should
have two AVI files; one for the sequence to be analyzed and another for the
calibration points.
2.  Use the TRIM module to "clip" the desired portion of the AVI file that
will be used for the analysis.
3.  Use the DIGITIZE module to digitize the sequence to be analyzed as well
as the control point information from each view.
4.  Use the TRANSFORM module to transform the individual 2D images into a
single 3D image.
5.  Use the FILTER module to remove "random digitizing" errors.
6.  Use the DISPLAY module to present and/or analyze the results.

I recommend that you refer to the pull-down Help menu associated with each
of these programs.  Each module has a section named QUICK REFERENCE that
lists step-by-step directions for a basic analysis.

Please feel free to contact us for any additional information.

Sincerely,

John Probe
Email:  ARIEL1@ix.netcom.com





----- Original Message -----
From: <essner@greenapple.com>
To: <gideon@arielnet.com>
Sent: Wednesday, August 02, 2000 1:38 PM
Subject: APAS


> Gideon,
>
> I'm currently learning how to use the APAS and have run into an obstacle.
> I would like to calculate 3-D coordinates from two camera views.  How
> exactly do you determine the values for the control point coordinates in
> DIGI4?  Also, I would like to calculate angles from the 3-d coordinates.
> Will APAS do this calculation?  How can I view the 3-D coordinates after
> they've been transformed?
>
> Thanks!
> -Rick
>
> Rick Essner
> Department of Biological Sciences
> Ohio University
> Athens, OH 45701
> (740)593-9510
> essner@greenapple.com
>
>
>
>


Hello Rick,

The X,Y,Z coordinates would have to be precisely measured for each point
(relative to a single origin and following the right-hand-rule
orientation).  Once these numerical values are known, they can be entered in
the APAS software using the Digitizing module.  This is performed by
following the steps listed below.

1.  Open the Digitizing module
2.  Select FILE, SEQUENCE, NEW to create a new sequence file.  Name the
file and select the OPEN button to proceed.
3.  You will see the Enter Sequence Parameters menu.  Enter the Title,
Units of measure, #Points, # Control Points (8 in your case), and Type of
points.  Height and Weight are optional and only required for kinetic
measures; it is usually a good idea to enter a non-zero number rather
than leave 0.
4.  Select POINT IDS button to define each of the #Points entered in step 3
above.  NOTE:  This option is only available when System Defined points are
used.
5.  Select SEGMENTS button to make segment connections between points.
NOTE:  This is automatically done when using SYSTEM Type Points.
6.  Select CONTROL XYZs button to enter the coordinates for each of the
calibration points.  If you are using the 8 corners of the 6 cm cube, then
your coordinates would look something like this (see also the sketch after
step 10 below):

Point   X   Y   Z
  1     0   0   0
  2     6   0   0
  3     6   0   6
  4     0   0   6
  5     0   6   0
  6     6   6   0
  7     6   6   6
  8     0   6   6

7.  Select FILE, NEW VIEW and enter the View Information for the first view.
8.  Select FILE, OPEN AVI to open the AVI file for the first view.
9.  Select FILE, NEW VIEW and enter the View Information for the second view.
10. Select FILE, OPEN AVI to open the AVI file for the second view.
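
The sketch below (Python with numpy) shows the standard role these control
point coordinates play: each camera is calibrated from the known X,Y,Z values
and its own digitized image locations, and the calibrated views are then
combined to reconstruct 3D points.  This is a generic DLT-style illustration,
not the APAS source code, and the function names and any image coordinates
are hypothetical.

import numpy as np

# The 8 corners of the 6 cm cube entered in step 6 (X, Y, Z).
CONTROL_XYZ = np.array([
    [0, 0, 0], [6, 0, 0], [6, 0, 6], [0, 0, 6],
    [0, 6, 0], [6, 6, 0], [6, 6, 6], [0, 6, 6],
], dtype=float)

def dlt_calibrate(xyz, uv):
    """Solve 11 DLT parameters for one camera from six or more non-coplanar
    control points with known xyz and digitized image coordinates uv."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.extend([u, v])
    L, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return L

def reconstruct(dlt_params, uvs):
    """Recover one 3D point from its digitized location in two or more views."""
    A, b = [], []
    for L, (u, v) in zip(dlt_params, uvs):
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.extend([u - L[3], v - L[7]])
    xyz, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return xyz

# Per camera: params = dlt_calibrate(CONTROL_XYZ, digitized_uv_for_that_view)
# Per digitized point seen in both views:
#     xyz = reconstruct([params_view1, params_view2], [uv_view1, uv_view2])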

At this point, you should have two AVI files open and ready for digitizing.
I usually recommend that the user get into the habit of digitizing the
Control points first.  The steps for this are listed below.

11. Click on the first view window to make it the active window.
12. Select CONTROL, DIGITIZE to let the software know you would like to
digitize the control information.
13. Select CONTROL, OPEN VIDEO, AVI and open the AVI file with the VIEW 1
Calibration fixture.
14. Digitize the Calibration points in the same order entered in step #6. 

NOTE:  Each image will require that you digitize a "Fixed" point as the
first point.  This point can be anything visible in the video that does not
move and will not be obstructed by the movement being analyzed.  This must
be the same point within each view but does not have to be the same between
views.  For example, the fixed point must be the same point in the View 1
calibration and data files, but another point could be used for view 2.

15.  After digitizing the calibration points for view #1, click on the View
2 window to make it the active window.
16.  Select CONTROL, OPEN VIDEO, AVI and open the AVI file for the second
view.
17. Digitize the Calibration points for view #2.
18. When finished, select CONTROL, FINISH and the display will be refreshed
with the first view from each camera.  You can now proceed with the
digitizing process for the data.
19.  When each image is digitized, both views can be "locked" together for
advancing by selecting the IMAGES, LOCK command.  The current status of the
LOCK command will be displayed in the lower right corner of the Digitize
window.

I hope this information is a little more helpful.

Sincerely,

John Probe
Email:  ARIEL1@ix.netcom.com




At 08:40 PM 08/03/2000 GMT, you wrote:
>Thanks for the help.  I'm still not sure how I determine the x,y,z
>coordinates for the calibration points.  I planned on using a cube that has
>8 intersecting points, with 6 cm sides.  In the program I was using
>previously, you digitized the cube in two views and entered a distance
>scale.  How do I get these coordinates in APAS?  I haven't been able to
>find this information in the APAS manuals.
>
>-Rick
>
>Rick Essner
>Department of Biological Sciences
>Ohio University
>Athens, OH 45701
>(740)593-9510
>essner@greenapple.com
>
>> Hello Rick,
>>
>> Thank you for your message.  In order to obtain 3-D information, each
>> digitized point must be seen simultaneously by a minimum of two cameras.
>> These cameras should be approximately 90 degrees apart. 
>>
>> Each camera must also record a calibration fixture.  This calibration
>> device can be made of almost anything as long as the X, Y, Z coordinates of
>> the calibration points are precisely known (relative to a single origin).
>>
>> In general, the process for a 3D analysis is listed below.
>> 1.  You should capture the AVI files from each camera.  Each camera should
>> have two AVI files; one for the sequence to be analyzed and another for the
>> calibration points.
>> 2.  Use the TRIM module to "clip" the desired portion of the AVI file that
>> will be used for the analysis.
>> 3.  Use the DIGITIZE module to digitize the sequence to be analyzed as well
>> as the control point information from each view.
>> 4.  Use the TRANSFORM module to transform the individual 2D images into a
>> single 3D image.
>> 5.  Use the FILTER module to remove "random digitizing" errors.
>> 6.  Use the DISPLAY module to present and/or analyze the results.
>>
>> I recommend that you refer to the pull-down Help menu associated with each
>> of these programs.  Each module has a section named QUICK REFERENCE that
>> lists step-by-step directions for a basic analysis.
>>
>> Please feel free to contact us for any additional information.
>>
>> Sincerely,
>>
>> John Probe
>> Email:  ARIEL1@ix.netcom.com
>>
>>
>>
>>
>>
>> ----- Original Message -----
>> From: <essner@greenapple.com>
>> To: <gideon@arielnet.com>
>> Sent: Wednesday, August 02, 2000 1:38 PM
>> Subject: APAS
>>
>>
>> > Gideon,
>> >
>> > I'm currently learning how to use the APAS and have run into an obstacle.
>> > I would like to calculate 3-D coordinates from two camera views.  How
>> > exactly do you determine the values for the control point coordinates in
>> > DIGI4?  Also, I would like to calculate angles from the 3-d coordinates.
>> > Will APAS do this calculation?  How can I view the 3-D coordinates after
>> > they've been transformed?
>> >
>> > Thanks!
>> > -Rick
>> >
>> > Rick Essner
>> > Department of Biological Sciences
>> > Ohio University
>> > Athens, OH 45701
>> > (740)593-9510
>> > essner@greenapple.com


Hello Slobodan Jaric,

Thank you for your message.  I am glad to hear that you are making progress
with the APAS system.

When using the Automatic Digitizing option, there are several methods I can
think of to handle the situation you described.

1.  Select AUTOMATIC, GLOBAL OPTIONS to display the Auto-Digitize
parameters.  In the section labeled General Options, you will see an option
for Auto-Advance.  When this option is selected, the digitizing will
automatically proceed to the next image.  If this option is not selected,
then the user must select the advance key to advance to the next image.
This provides the option of checking the digitizing prior to proceeding to
the next image.

2.  If you are monitoring the auto-digitizing as it takes place, you can
select the AUTOMATIC, SUSPEND command to temporarily stop the
autodigitizing process.  This would allow you to make corrections on the
current image.  Then select advance or reverse to continue with the
autodigitizing.

3.  Select the AUTOMATIC, STOP command to stop the autodigitizing process.
When this command is selected, the software assumes that you have ended the
autodigitizing; therefore, if you wish to start again, you must select
AUTOMATIC, START and digitize each of the points again.

I hope this information is helpful.  Please contact us for any additional
questions.

Sincerely,

John Probe
Email:  ARIEL1@ix.netcom.com



At 09:37 AM 08/03/2000 +0200, you wrote:
>Dear Dr. Ariel,
>I guess we mainly solved our problem with proper markers using table tennis
>balls, as well as higher illumination. We collected our first experimental
>file today and we had very few problems within the total number of 10,000
>frames recorded by two cameras.
>
>However, there is one problem we could not solve: switching from automatic to
>manual tracking, and back to automatic. Namely, what we need is the
>following: when automatic tracking switches to a wrong marker in a
>particular frame, to correct it manually and thereafter continue
>automatic tracking from the following frame.
>
>Thank you very much in advance.
>
>Regards
>****************************************
>Slobodan Jaric
>Centre for Musculo-Skeletal Research
>National Institute for Working Life
>Box.7654
>S-907 13 Umea
>Sweden
>
>Tel: /46-90-176121
>Fax: /46-90-176116
>****************************************
>


Hello Rebecca,

I had a chance to look over your data.  It is hard to tell exactly without
the video; however, it appears that the 4 control points are in a different
plane than the activity being analyzed.  The 4 control points seem like
they might be on a treadmill while the activity is taking place above the
calibrated area.  Is this correct?  If so, this could account for the
erratic results.

For 2-D analysis, a minimum of 4 control (or calibration) points must be
used.  Ideally, these four points should encompass the area of the activity
to be analyzed and also lie in the same plane as the activity.  The Z
coordinate must be equal to zero, so the activity should take place in the
XY plane.
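
As an illustration of why the four points must lie in the plane of the
movement (this is only a generic planar-calibration sketch, not the APAS
routine, and all coordinates are hypothetical): with four coplanar control
points of known X,Y (Z = 0), a projective mapping from digitized image
coordinates to real-world coordinates in that plane can be solved directly.

import numpy as np

def planar_calibration(image_uv, world_xy):
    """Solve the mapping image (u, v) -> plane (X, Y) from 4 control points."""
    A, b = [], []
    for (u, v), (x, y) in zip(image_uv, world_xy):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v])
        b.extend([x, y])
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)

def map_point(H, u, v):
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Hypothetical example: a 1.0 m x 0.5 m rectangle of markers in the XY plane.
image_uv = [(102, 480), (615, 492), (600, 230), (110, 221)]   # digitized pixels
world_xy = [(0.0, 0.0), (1.0, 0.0), (1.0, 0.5), (0.0, 0.5)]   # metres, Z = 0
H = planar_calibration(image_uv, world_xy)
print(map_point(H, 350, 360))   # any digitized point lying in the same plane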

Is there any method to use 4 or more points to make a "calibration box"
around the turkey leg?

It appears that you are performing the correct procedures but just need to
rearrange the calibration points.  Would it be possible to also send a
small video file (maybe 3 to 5 images)?

Also, do not forget to contact Dr. Ariel in Brisbane.  You can Email him at
the address listed below to find his location or schedule a meeting with him.

I look forward to your reply.

Sincerely,

John Probe / Dr. Gideon Ariel
Email:  ARIEL1@ix.netcom.com



At 04:52 PM 9/5/00, you wrote:
>Hi John,
>
>OK - I got the additional options up in the display module after I filtered
>the data - I had thought that the filtering done in the transformation
>module was enough.
>
>Anyway, I still can't make any sense of the data once I'm into the display
>section.
>
>I have attached new files.
>
>My problem lies at this point:
>I open the 3D file in display, then I have been choosing Joint Angles,
>displacement, and then I don't understand what to do - I chose distal toe
>and prox toe, prox toe and TMT, and TMT and TT, in the hope that it would
>give me the JA info for digits, IT and ankle respectively.  Also chose the
>3D rather than x,y,z because I need the whole point.
>
>But then the thing that is graphed doesn't make any sense to me, nor do the
>headings on the tabulation that shows the data points that the graphs are
>made from.
>
>All I need to obtain is the joint angles from the digitised data, both as a
>stick figure and as raw data.
>
>I hope that you can help!
>
>Is there an Ariel display at the Pre-Olympics conference that is happening
>here in Brisbane this weekend?  Would there be someone there that I could
>talk to about this???
>
>Thanks again,
>
>Rebecca.
>
>Attachment Converted: "F:\MAIL\ATTUCH~1\t43sp4.1t"
>
>Attachment Converted: "F:\MAIL\ATTUCH~1\t43sp4.3d"
>
>Attachment Converted: "F:\MAIL\ATTUCH~1\t43sp4.cf"
>
>
>
>
>
>***************************************************
>Rebecca Campbell
>Department of Anatomical Sciences
>The University of Queensland
>QLD
>AUSTRALIA 4072
>
>Ph + 61 7 3365 2961
>Fax + 61 7 3365 1299
>
>Email:  Rebecca.Campbell@mailbox.uq.edu.au
>
>**************************************************


Hello Eric,

The DIGITIZE module can display and digitize up to 4 views simultaneously.

Synchronizing can be performed in numerous ways.  First, one can have a
"synchronizing event" in the field of view.  This can be as simple as a 35
mm camera flash, a foot making contact with the ground, or even a falling
object.  If you desire to spend more money on a gen-lock camera system,
the APAS will fully support that also, but then you lose the advantage of
portability.  Cameras can also be synchronized using a "software
genlock" algorithm that is integrated in the TRANSFORM module.  You can
access a full description of this algorithm by opening the TRANSFORM module
and selecting HELP, INDEX and SYNCHRONIZING.
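
The arithmetic behind a "synchronizing event" is straightforward (this sketch
only illustrates the idea; it is not the software genlock algorithm, and the
field numbers are hypothetical): if the flash first appears at field 143 in
one view and field 151 in the other, the recordings are offset by 8 fields,
i.e. 8 / 60 = 0.133 seconds at 60 fields per second.

def sync_offset(event_field_view1, event_field_view2, fields_per_second=60.0):
    """Return (field offset, time offset in seconds) between two views."""
    field_offset = event_field_view2 - event_field_view1
    return field_offset, field_offset / fields_per_second

print(sync_offset(143, 151))   # -> (8, 0.1333...)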

Reliability and validity information is available from the Ariel internet
site.  The exact address to one such study is:

/topics/comparison/default.htm

You can also view the Bibliography section that provides a selection of
published articles using the APAS.

/Main/adw-86.html

I hope this answers your questions.  Please contact us for any additional
information.

Sincerely,

John Probe
Email:  ARIEL1@ix.netcom.com



At 01:15 PM 12/1/00 +0800, you wrote:
>Hello John,
>
>Thanks for your kind help. And I know how to use APAS system now.
>
>I think the APAS system is nice software, but it could be better if it could
>display 1-4 cameras at the same time and capture them synchronously to 1-4
>AVI files. It's a big problem if the images from two cameras are not
>synchronized. I think the APAS could provide a more convenient process to
>make the images synchronous, like the suggestion above.
>
>Finally, could you tell me some information about the reliability and
>validity?  And has someone used the APAS system in a published paper?
>Could you tell me the title of the paper? It's very important for me if I
>want to use it in my study.
>
>Eric Cheng
>
>
>


Hello Barry,

It sounds like you have things working now with the DV files.  Thanks for
the update!

Based on the information below, it seems that you want to digitize the
control points once and then use the same points for additional analysis in
other subjects.  This is perfectly acceptable as long as the camera does not
move once the calibration points are filmed and digitized.  There are
several options that allow this and I will try to provide a highlight of
each one below.

1.  In the Sequence Parameter menu, one of the buttons is "READ".  If this
button is selected, the user has the option to "Read" the sequence
information from a previous project.  This will read everything from the
previous file including Title, #Points, #Control, Type of Points, Height,
Weight and calibration coordinates.  This will NOT read the digitized
locations of those coordinates though.  Usually this option is selected and
then the Title is changed for the current sequence.

2.  From the same Sequence Parameter menu, you can select the Control X,Y,Z's
button for the calibration coordinates.  You will notice another "Read"
button at this point.  If this button is selected, you can "read in" the
calibration coordinates from a previous sequence, but this will not affect any
of the other information (i.e. title, #points, point labels, etc.).  Once
again, this is only the coordinates and not the digitized locations.

3.  If you wish to "import" the digitized locations from a previous
analysis, this can be accomplished using the CONTROL, READ option in the
pull-down menus.  This option will read and import the previous digitized
locations into the current sequence.

I hope this helps to clear up the questions in your message and the
information in the Ariel help files.

Please feel free to contact me for any additional information.  I would be
happy to assist.

Sincerely,

John Probe
Ariel Dynamics, Inc.
Email:  ARIEL1@ix.netcom.com





----- Original Message -----
From: "barry wilson" <barrydwilson@hotmail.com>
To: <ariel1@ix.netcom.com>
Sent: Monday, June 18, 2001 2:55 AM
Subject: Re: APAS and digital video


> Hi John
>
> Thanks for your patience. APAS 3d is now working if we use trimmed files
> of the same DV avi length. APAS produces 3D images in digitiser and transform
> modules.
>
> When we first trialed APAS back in January 2001, with analogue video
> capture on one trial of golf, one of Takraw and one of hammer throw, everything
> worked fine (but not digital video).
>
> This time we climbed right into a project of a weight lifting competition
> with 6 sessions and 4 setups of the calibration trials. I was expecting to
> be able to digitise the control points once and then the three views for
> the 10 subjects (to get ".xt" files) without having to redigitise the control
> points with each subject t file. Our interpretation of the digitising
> instructions printed from the HELP menu was obviously not correct!!!
> Thankfully, your email instruction was most explicit.
>
> Our capture process is to open the Ulead Studio V4 software, start up the
> VSP project with appropriate project, location and subject descriptions
> (the Global command selects MS Digital Manager). Project template is DV PAL. We
> use the capture icon to start and stop collect and the FINISH button to
> close Ulead.
>
> I think we have identified a problem with some of the computers working
> through the LAN. We keep the AVIs on a computer running Windows ME and
> access via others (some of which are Windows 98 2ndEd) thro the network. If
> accessed thro the net the trimmer operates slowly and sometimes will not
> save files ("Insufficient disk space") or trimmed files are saved incorrectly
> with no error message. Any comments regarding this observation would be
> appreciated.
>
> Temporarily storing files and trimming on the same computer seems to solve
> our trimmer problems.
>
> Sahar and I are off to ISBS for about a week so hopefully our problems are
> solved, and the rest of the team can analyse the trials while we are away.
>
> Thank you for your prompt assistance. I will email you again in a week or
> so to let you know how we are getting on.
>
> regards Barry
> ***************
>
>
> >From: "Gideon Ariel" <ariel1@ix.netcom.com>
> >Reply-To: "Gideon Ariel" <ariel1@ix.netcom.com>
> >To: "barry wilson" <barrydwilson@hotmail.com>
> >CC: "Gideon Ariel" <gideon@arielnet.com>
> >Subject: Re: APAS and digital video problems
> >Date: Fri, 15 Jun 2001 21:03:26 -0700
> >
> >Hello Barry,
> >
> >Thank you for sending the files.  I do not understand exactly what you
> >have done with the control points.  They should be digitized and included
> >in the same *.xt file as the data.  For example, every time that the user
> >selects the "New View" option, a sequential *.xt file is created (where x
> >is the view number).  The first time will be the *.1t, the second time will
> >be the *.2t etc...  As you are digitizing the data, you can select CONTROL,
> >DIGITIZE and then CONTROL, OPEN VIDEO, AVI and select the video file with
> >the control points.  Digitize the control points and then select CONTROL,
> >FINISH.  The "digitized" control points should not be in a separate file
> >because the software will not be able to find them.
> >
> >Based on your messages, you should digitize the control points for the
> >*.2t,
> >*.3t and *.4t files.  The message that you are receiving "Must have at
> >least
> >2 stationary views and control points for 3d" simply means that it cannot
> >find the control point data.
> >
> >Also, could you provide step-by-step instructions for the capture process
> >you are performing?  This will enable us to attempt to duplicate your
> >situation exactly and find a solution to the reported problem.
> >
> >I hope this information is helpful.
> >
> >Sincerely,
> >
> >John Probe
> >Ariel Dynamics,Inc.
> >Email:  ARIEL1@ix.netcom.com
> >
> >
>
> _________________________________________________________________________
> Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.
>


Hello Randy,

Thank you for your message.  Basically, there are two methods for digitizing
data with the Ariel Performance Analysis System (APAS).  Manual (aka
semi-automatic) digitizing requires the user to digitize the point
locations.  The APAS software assists in this process by using information
from previous images to "predict" the current location, however, the exact
location is entered by the user.  For this mode, there are no special
rquirements for lighting.  As long as the desired points are visible in the
recorded video, accurate analysis can be easily performed.

The second digitizing mode is "automatic" digitizing and requires the use of
high contrast markers.  These markers can be light colored against a dark
background or vice-versa.  Most APAS users utilize retro-reflective markers
attached to the desired point locations.  These markers reflect light back
along the same axis as it is transmitted.  Therefore, it is best to have a
photographic light mounted on the camera or tripod as close to the camera
axis as possible.
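
To illustrate why the contrast matters (this is only a sketch of the general
principle, not the APAS tracking code, and the frame below is synthetic): a
bright retro-reflective marker against a dark background can be located by
thresholding the image and taking the centroid of the bright pixels.

import numpy as np

def find_marker(gray_image, threshold=200):
    """Return the (row, col) centroid of pixels brighter than threshold,
    or None if the marker is not visible."""
    rows, cols = np.nonzero(gray_image >= threshold)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()

# Synthetic 8-bit frame with one bright blob near row 120, column 310:
frame = np.zeros((480, 640), dtype=np.uint8)
frame[118:123, 308:313] = 255
print(find_marker(frame))   # approximately (120.0, 310.0)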

Also check the following links for additional information.

/adi2001/adi/services/support/tutorials/gait/chapter3/3.4.asp

/adi2001/adi/services/support/manuals/apas/3dkin/doc2.asp

/adi2001/adi/services/support/tutorials/gait/chapter3/3.3.asp


Please feel free to contact us for any additional information.

Sincerely,

John Probe
Ariel Dynamics, Inc.
Email:  ARIEL1@ix.netcom.com


----- Original Message -----
From: "Rowell, Randy" <Randy.Rowell@tenethealth.com>
To: <ariel1@ix.netcom.com>
Sent: Tuesday, July 17, 2001 6:36 AM
Subject: Ariel APAS system lighting requirements


> Dear Representative,
> I will be purchasing your APAS system to use in a Swing/Gait Analysis Lab
> in a medical setting. I need to know if there are any lighting requirements.
> We are planning to use down lighting that will be on a track system.
> Thank you,
>
> Randy Rowell
> Director, Institute for Human Performance & Orthopedic Surgery
> Graduate Hospital
> 1800 South Lombard St.
> Atrium Suite
> Philadelphia, PA  19146
> Office: 215-893-4535
> Fax:   215-893-6785
>  <<Rowell, Randy.vcf>>
>


Hello Clint,

The calibration frame is used to "calibrate" the video field from each
camera view.  This information is absolutely necessary from all camera views
utilized for the analysis.  Therefore, the calibration frame must be
digitized from ALL camera views that are used.

When digitizing the calibration frame, the user must manually digitize the
points.  The automatic digitizing process requires that the first frame be
manually digitized so the software knows where to look for the points.
Beginning with the second image, the software uses the previous information
to find the marker.  Since only one image is required for the
calibration points, the ability to autodigitize these points is not even an
option.
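
A sketch of what "uses the previous information" amounts to in principle
(illustration only, not the APAS tracking code): restrict the marker search
to a small window centered on where the point was found in the previous
image.

import numpy as np

def track_in_window(gray_image, previous_rc, half_window=20, threshold=200):
    """Look for the bright marker only near its previous (row, col) location."""
    r0, c0 = int(previous_rc[0]), int(previous_rc[1])
    r_lo, r_hi = max(r0 - half_window, 0), r0 + half_window + 1
    c_lo, c_hi = max(c0 - half_window, 0), c0 + half_window + 1
    patch = gray_image[r_lo:r_hi, c_lo:c_hi]
    rows, cols = np.nonzero(patch >= threshold)
    if rows.size == 0:
        return None        # marker not found in the window
    return r_lo + rows.mean(), c_lo + cols.mean()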

I hope this answers your questions.

John Probe
Ariel Dynamics, Inc.
Email:  ARIEL1@ix.netcom.com

----- Original Message -----
From: "Clint Mitchell" <clintmitchell@hotmail.com>
To: <ariel1@ix.netcom.com>
Sent: Friday, July 20, 2001 9:22 AM
Subject: Autodigitizing


> John Probe:
>
> We have solved the image size problem and are now currently attempting to
> autodigitize subjects w/ two different camera views.  We have an established
> calibration frame around the walking area and are not sure whether to
> include both views (the frame acting as the control points) in the
> digitizing process or will one view suffice?
>
> We are currently using both views of the calibration frame and have both
> views of the subject open and go to automatic --> start  and a message
> appears that reads "All Views must be at fixed point"
>
> Thanks,
>
> Clint Mitchell
> Virginia Commonwealth University
>
> _________________________________________________________________
> Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp
>


 

 


 
