Geenaianni

Junior Member
Posts: 27
Reply with quote  #1 
Hi all,
Two questions -- one specific and one general.

1) What is the mathematical transformation that occurs in the 2D spatial transformation? I am looking for a description similar to the one for the origin/gain eye calibration method: degree of visual angle = (voltage at a fixed point - voltage at origin) x gain.

2) Has anyone used ISCAN in conjunction with MonkeyLogic successfully, and would you be willing to provide info about your procedure for calibration? We are attempting to do so and are having difficulty getting an adequate calibration using either method in ML. A bit of background -- our ISCAN represents eye position as outputs of +/-5 V, and our (unsuccessful) calibration procedure is as follows --

NOTE: a screenshot of ISCAN is attached below, for clarity.

A) Obtain good pupil tracking, and with the subject fixating on the center of the screen, hit "CENTER" in ISCAN (analogous to "set origin" in ML).
B) Adjust the X & Y gains in ISCAN (via the "IN/OUT" buttons) so that when the subject saccades left/right/up/down across the visual field, the resulting pupil-parameter traces (seen in graphs A/B) span the full y-range of the graph. According to the ISCAN documentation, the analog output signal is represented full-scale over the entire height of the graph; for example, the +/-5 V range of the analog output spans the graph from top to bottom.
C) The scaling controls for the graph parameter are then deactivated by clicking "No Active Param Scale" in ISCAN. The analog output signals remain scaled with the values loaded into the scaling controls.
D) At this point, we proceed with the ML eye calibration exactly as outlined in the ML documentation, including entering the screen diagonal and subject-to-screen distance in ML to generate a PPD (pixels-per-degree) value (see the sketch after this list).
E) The origin/gain method appears to be inadequate because the voltages generated by the eye position are not linearly spaced across the screen. The result is that even with a very large X gain, the subject cannot reach left targets as easily as right targets.
F) The 2D spatial transformation appears to work better, in the sense that the eye trace is able to reach left targets after 3-4 rounds of calibration at each of the 9 points -- but the tracking is just not very accurate. A human subject known to be fixating on a 1-degree target within a 1.5-degree radius, for example, will show an eye trace (in the ML 2D spatial transformation screen) in the general neighborhood of the target, but not reliably on target. In addition, the trace appears erratic/jumpy, occasionally wandering off the screen entirely for periods (I am assuming this has to do with the gains previously set in ISCAN?).
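
(For reference, the PPD sketch mentioned in step D -- just the geometry, with made-up numbers; ML's exact computation may differ:)

    % Back-of-the-envelope pixels-per-degree from screen diagonal,
    % resolution, and viewing distance. All numbers are hypothetical.
    diag_in = 24;  dist_cm = 57;  res = [1920 1080];

    diag_cm  = diag_in * 2.54;                            % diagonal in cm
    w_cm     = diag_cm * res(1) / hypot(res(1), res(2));  % physical screen width
    deg_wide = 2 * atand(w_cm / (2 * dist_cm));           % angle subtended by the width
    ppd      = res(1) / deg_wide                          % average pixels per degree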


If anyone successfully uses ISCAN and ML eye calibration in conjunction, any information you can provide regarding your procedure or settings would be incredibly helpful.

Thanks, all.
Best,
Geena

[Attached screenshot: Screen Shot 2018-05-03 at 4.38.22 PM.png]




Jaewon

Administrator
Posts: 689
Reply with quote  #2 
1) I referred to the following document to implement the function. Google "projective transform" and you will find a lot of related documents.

http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/BEARDSLEY/node3.html
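
In outline (my paraphrase of that page, not necessarily ML's exact code): the raw voltage pair (vx, vy) is mapped to a screen position in degrees (dx, dy) through a 3x3 matrix T, using homogeneous coordinates:

    [u v w] = [vx vy 1] * T
    dx = u/w,  dy = v/w

T is only defined up to scale, so it has 8 free parameters; four point correspondences determine it exactly, and with more points (e.g., the 9 fixation points in the calibration tool) the fit is overdetermined and can be solved in a least-squares sense.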
Geenaianni

Junior Member
Posts: 27
Reply with quote  #3 
Thanks, Jaewon.

Right now when I open the 2D spatial transformation to calibrate the eye, all of the numbers overlapping the fixation targets are already blue (even though I have not done any calibration). I need to right-click each number twice, so that it disappears and then reappears in red, before starting calibration. I'm wondering if this is a bug in our version (Jan 2nd 2018, build 53) or indicative of something wrong in how I'm going about this.

Thanks,
Geena
Jaewon

Administrator
Posts: 689
Reply with quote  #4 
No, that is not a bug. Initially, some values are assigned to the pre-selected fixation points so that raw eye signals can be displayed without calibration. You don't need to de-select and re-select them. Just do the calibration as if they are all red; new values will overwrite the old ones.

Your version number doesn't seem correct. The build number was already larger than 100 around Jan 2018. Anyway, please use the latest version if you don't have any particular reason to keep the old one. I am trying to fix things as quickly as possible rather than wait to release a cumulative patch.
Geenaianni

Junior Member
Posts: 27
Reply with quote  #5 

Hi Jaewon, 
Thanks for the info. I updated to the latest version of ML. Regarding the "calibration matrix" referenced in the ML documentation of the 2D spatial transform -- where can I find this? I am interested in seeing the recorded x/y coordinates of the eye position as we go through the calibration, in order to ascertain how the transformation is made, and perhaps how accurate it is. Does this info have any relation to the values in the "MLConfig.EyeTransform" structures, or what exactly is that info? Thanks.
best,
Geena

Jaewon

Administrator
Posts: 689
Reply with quote  #6 
Hi Geena,

The matrix is in MLConfig.EyeTransform{3}.tdata. The structure is built to be the same as what the Image Processing Toolbox creates, for compatibility. "The recorded x/y coordinates of the eye position as we go through the calibration" are always indicated by the eye tracer that you choose on the menu, so I don't understand what your intention is. If the eye tracer does not fall on the chosen fixation point, it means that the calibration is off. I am not going to explain how to convert the values with the matrix, since that is already in the document I linked and this is not a math class board.
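
For those who just want to plug the stored struct into the Toolbox, a minimal sketch (assuming MLConfig is loaded in the workspace and the struct is TFORM-compatible, as described above; the voltage values are made-up placeholders):

    T = MLConfig.EyeTransform{3}.tdata.T;   % 3x3 projective matrix

    vx = -1.2;  vy = 0.8;                   % hypothetical raw eye voltages
    p   = [vx vy 1] * T;                    % [u v w], row-vector TFORM convention
    deg = p(1:2) / p(3);                    % homogeneous divide -> [x y] in degrees

    % Equivalently, if the struct is fully TFORM-compatible:
    % deg = tformfwd(MLConfig.EyeTransform{3}, [vx vy]);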
Geenaianni

Junior Member
Posts: 27
Reply with quote  #7 
Hi Jaewon, 
I see -- and yes, thanks for the link to the projective transform info. So I guess what I am asking is: are the x/y positions (in degrees of visual angle) of the eye and of the targets presented during calibration stored anywhere for later examination? Or are these positions indicated only by the eye tracer during calibration, and not stored in the config file?
Jaewon

Administrator
Posts: 689
Reply with quote  #8 

Of course, all the information necessary to reconstruct the calibration matrix is stored in the config, since NIMH ML needs it when you start the calibration tool the next time. The positions of the fixation points and their associated voltages are stored in MLConfig.EyeTransform{3}.fixed_point and MLConfig.EyeTransform{3}.moving_point, respectively, but you don't need to read them. They are already displayed in the calibration tool, as shown in the screenshot below. The filled blue circles in the figure are the fixation points and the red open circles are the input voltages.

[Screenshot: mapping.png]

The X/Y positions of the eye are what you get as a result of the calibration, so they don't need to be saved.
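
If you do want to examine the calibration offline, though, those two fields are enough to rebuild the mapping and check its residual error. A sketch, assuming a loaded MLConfig and that the stored points are n-by-2 arrays usable with the Toolbox's control-point functions:

    deg_pts  = MLConfig.EyeTransform{3}.fixed_point;    % fixation points, n-by-2, degrees
    volt_pts = MLConfig.EyeTransform{3}.moving_point;   % matching raw voltages, n-by-2

    tform = cp2tform(volt_pts, deg_pts, 'projective');  % needs >= 4 point pairs

    recovered = tformfwd(tform, volt_pts);              % voltages mapped to degrees
    err = sqrt(sum((recovered - deg_pts).^2, 2))        % per-point error, in degrees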


aboharbf

Member
Posts: 49
Reply with quote  #9 
Hey Jaewon,

A quick related question -- once you've performed a calibration with the 2D Transform (after hitting save), is there a way to call up a grid of yellow blocks to observe the result of the calibration without beginning an experiment? Thank you for the help.
Jaewon

Administrator
Posts: 689
Reply with quote  #10 
You can restart the calibration tool and set the "Reward" option to "On Fixation". Then the reward is delivered automatically when the subject fixates on the selected point, and you don't need to press the space key. That way you can test the calibration without resetting the recorded voltage values.
0