Jaewon (Administrator) #26
I still don't get it. Are those two subjects participating in the experiment together? If so, you can record the eye positions of the second subject via General Input, as you said, and manually calibrate them later. You don't have to calibrate before recording, since the second subject doesn't do trials during the experiment.
kms (Member) #27
Yes, the two will be participating in the experiment together. How do you suggest I do manual calibration without reassigning Eye x and y? 

Thanks.
Jaewon (Administrator) #28
You need to know the gaze angle and the voltage reading when the subject is looking at a particular point. Is there any fixation point or an object that the subject likes to watch? What is the subject doing during trials?
kms (Member) #29
Thanks, Jaewon. The subject is free to look anywhere during the task but I want to make sure that all his saccades to an experimental area of interest [subject 1] are tracked well.

I think I now understand what you mean by manual calibration. I will give this a try, though of course I would prefer something more automatic, like the origin and gain method.

Thanks.
Jaewon (Administrator) #30
Eye position data saved in the BHV are calibrated values represented in degrees, not raw voltages. Since General Input doesn't get that kind of conversion, you have to do the conversion manually.

You can still get the calibration parameters of the second subject easily with the calibration tools in ML. You just need to save them in a different CFG file. Make a copy of the conditions file under a different name so that your original _cfg.mat file is not overwritten. Then load that conditions file and revise the I/O settings so that the X & Y channels of the second subject (the ones you previously assigned to General Input) are mapped to Eye X & Y on the menu panel. Then calibrate the eye signals of the second subject from there and save the settings. Now the calibration parameters of the second subject are in the new cfg file.

Then load the original conditions file and do the experiment. The eye signals of the second subject will be recorded in General Input, uncalibrated. Afterwards you can read the second cfg file in MATLAB, get the origin and gain, and apply them to the General Input signals. The locations of the origin and gain variables in the CFG are:

MLConfig.EyeTransform{2}.origin
MLConfig.EyeTransform{2}.gain
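
For illustration, here is a minimal MATLAB sketch of that read-out step. The file name 'subject2_cfg.mat' and the variables volts_x / volts_y are placeholders (your copied cfg file and however you pull the General Input traces out of your data), and the [x y] layout of origin and gain is assumed from the paths above:

cfg    = load('subject2_cfg.mat');               % the second CFG saved for subject 2
origin = cfg.MLConfig.EyeTransform{2}.origin;    % assumed [x y] voltage offsets
gain   = cfg.MLConfig.EyeTransform{2}.gain;      % assumed [x y] volts-to-degrees scales

% volts_x / volts_y: raw voltage traces recorded on General Input
deg_x = (volts_x - origin(1)) * gain(1);         % degree = (voltage - origin) * gain
deg_y = (volts_y - origin(2)) * gain(2);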

Does it make sense?
kms (Member) #31
That sounds great; exactly what I was looking for as a work-around for my specific purpose without having to change the original ML code! 

Thanks a lot, Jaewon!
Jaewon (Administrator) #32
FYI, the origin and gain calibration works like this.

degree = (voltage - origin) * gain
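
For example, with a hypothetical origin of 0.2 V and a gain of 5 deg/V, a reading of 2.2 V corresponds to (2.2 - 0.2) * 5 = 10 degrees.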
castiel (Junior Member) #33
Hello, I have some questions about the calibration in MonkeyLogic. Please help me.

Thank you very much!

I'm very confused about this: "Eye position data saved in the BHV is calibrated numbers that are represented in degrees, not raw voltages." What does this mean? I also see the formula:
degree = (voltage - origin) * gain
and you said the origin and gain calibration works like this.

So my question is: should I use the formula to calculate the gain if my degree is 10°, since I can get the voltage with the space bar? Or should I just drag the Gain X and Gain Y sliders to adjust it? And does the degree mean the angle between the upper circle and the middle circle?

Looking forward to your reply; I would be very grateful for your help.
Jaewon (Administrator) #34
Hi castiel,

I am sorry, but I don't understand what you are trying to do, so I will just tell you what you should know.

1. In the main menu, enter the correct diagonal size of the selected monitor and the correct viewing distance between the monitor and the subject (together with the screen resolution, these determine the pixels-per-degree conversion; see the sketch after step 6).
[Screenshot: ppd.png]

2. Start the origin-gain calibration tool and set the interval between fixation points as you want. It is 2 deg in the figure below, which means that the yellow squares (fixation points) on the left are 2 degrees apart from one another.
[Screenshot: 3158636.png]

3. Click the fixation point at the center (or click the "Show Center" button). It will display a yellow square at the center of the subject screen; have the subject look at it. If the eye cursor (red dots on the left) does not overlap with the center fixation point at the moment the subject looks at the point, your calibration is off. In that case, either use the "Origin X" and "Origin Y" slider bars or click the "Set Origin (Space)" button to make the eye cursor overlap with the center fixation point while the subject looks at the center.

4. Choose any fixation point other than the center point and click it. Then adjust only "Gain X" and "Gain Y" to make the eye cursor overlap with the point you just clicked when the subject looks at it.

5. Click the "Save" button.

6. Now the voltage readings will be converted to visual angles and saved in the data file.
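
As a side note on step 1, here is a minimal MATLAB sketch (with made-up numbers) of how the monitor size, resolution, and viewing distance translate into pixels per degree; this is general display geometry, not ML code:

diag_inch = 19;  res = [1280 1024];  dist_cm = 57;   % placeholder monitor and distance
diag_cm   = diag_inch * 2.54;                        % diagonal size in cm
px_cm     = diag_cm / hypot(res(1), res(2));         % size of one pixel in cm
ppd       = dist_cm * tand(1) / px_cm;               % pixels per degree of visual angle
fprintf('%.1f pixels per degree\n', ppd);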



kaciedougherty (Junior Member) #35

We have been using ML1 for several years and have modified it for use with a stereoscope. Essentially, we split the screen in half and angle the mirrors so that the animal can fuse what is shown on the two sides of the screen. For eye tracking, we can rely on tracking just one eye. The center fixation target becomes the quarter-way point (instead of the halfway point) along the horizontal axis of the monitor. We have an offset board between our EyeLink system and the DAQ board to adjust the X voltage to ~0 V for central fixation.

We recently tried a similar approach in ML2 with the 2D transformation but were unable to get a successful calibration. Below are the points we selected (only points to be shown to the left eye); the calibration was not solvable.


Any ideas on how to proceed or get a good eye calibration? 


[Attached images: calibration_points.png, bad_calibration.png]


Jaewon (Administrator) #36
Is that how you calibrated eye signals in ML1? That doesn't sound right. Please describe how you calibrate in ML1 and how you use the offset board in the procedure.
kaciedougherty (Junior Member) #37
In ML1, we calibrate one half of the monitor with a 9-point grid in EyeLink, using code they provided to us. Then, in ML1, we show a 9-point grid to the left eye only, and another 9-point grid to the right eye only. Because the eyes move together, and the mirrors on the stereoscope are angled to allow fusion of the left and right sides of the display, the eye ends up in the same position when a target is shown at the center-left and center-right positions, since it should appear to the animal to be in the same place.

Without the offset board, the X voltage for the center point would be -2.5 V (if the range was -5 V to 5 V and the left eye was tracked). We use the offset board only to adjust the voltage so that X = ~0 V corresponds roughly to the center of the visual field (i.e., the center of the right or left half of the monitor). We have gone without the offset board, though; I don't think it's essential.

In the end with the mirrors calibrated we get approximately matching X, Y voltages for corresponding points on the left and right sides [shown below]. 

I'm not sure what might be different in ML2.

Thanks for your help!

[Attached image: stereocalibration.jpg]


Jaewon (Administrator) #38
When you analyze data, where do you read eye traces from? From the BHV or from EyeLink's file?

What are the coordinates of the 9 points that you show in ML1, in visual angles?

How do you present your stimuli? Do you present two images on the left and right? Or do you have just one big image that covers both left and right, presented at the center of the screen?


kaciedougherty (Junior Member) #39
We read eye traces from BHV files.

We wrote a function to change the calibration points, and we usually make a 9-point grid 8 dva out from center (-8,0; 8,0; 0,8; etc.).
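
For reference, a small MATLAB snippet that generates the grid described above (the spacing value is just the 8 dva mentioned in the post):

spacing  = 8;                                % dva out from center
[gx, gy] = meshgrid([-spacing 0 spacing]);   % 3 x 3 grid of x and y positions
points   = [gx(:) gy(:)];                    % 9 [x y] pairs: (-8,0), (8,0), (0,8), ...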

We present two images, one on the left and one on the right. We wrote all of our stimulus code so that odd-numbered task objects correspond to the left eye and even-numbered ones to the right eye.

We realized yesterday that using the raw signal in ML2 might be okay for now. We really are looking to use ML2 immediately only for the new RF mapper, and for that we would only need to know when the animal looks at one point on the screen (the fixation target). One thing we're struggling with, though, is that we don't see the fixation task object during the task. Does it hide behind the stimulus, I wonder? Would there be a way to keep the fixation spot in one place on the screen?

Thanks again for all of your replies!
Jaewon (Administrator) #40
Please let me know the coordinates that the left and right images are presented at (in visual angles). If you could send me your conditions file and timing script, it would be better.
kaciedougherty (Junior Member) #41
To find the X,Y coordinates, we used the info in the TrialRecord structure to get the size of the monitor and convert it to visual degrees. Then we divide by 4 and add or subtract this value from 0 to get a new center. We did this in the attached findScreenPos.m function.
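
A hypothetical reconstruction of that idea in MATLAB (not the actual findScreenPos.m; the screen width value is a placeholder and would really come from the monitor info in TrialRecord):

screen_w_deg = 40;                   % placeholder: full screen width in dva
offset       = screen_w_deg / 4;     % quarter-screen shift
center_left  = [-offset 0];          % new center for stimuli shown to the left eye
center_right = [ offset 0];          % new center for stimuli shown to the right eye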

I attached the other files that I think go with our basic fixation task in the stereoscope. In the conditions file, there's an image called "background" that we center on the monitor (the actual center) to provide vergence cues. It's made for a monitor with a resolution of 1024x768.

[Attached image: background.jpg]

 
Attached Files:
findScreenPos.m
SHOW_FixCross_di.rtf
gFixCross.m
tFixCross_di.m

kaciedougherty (Junior Member) #42
Not sure if this is helpful, but just in case, I'm attaching the code we use to get the coordinates we want for the calibration (left side, right side, or both), and the code to reset the targets in the ML1 calibration.

 
Attached Files:
setTargetPointsXY.m
getStereoCalPoints.m

Jaewon (Administrator) #43
How did you find that the calibration you did with ML2 was not good? Didn't the eye cursor position match the fixation points in the calibration tool?
kaciedougherty (Junior Member) #44
The main problem was that the command line would print something saying the calibration matrix was not solvable.

We realized yesterday, though, that the RF mapper task we were attempting to run doesn't seem to show a stable fixation spot. It might still work despite that command line warning; that's something we could try. I had assumed (I think wrongly) that the reason the fixation spot wasn't appearing was related to the bad calibration. 
Jaewon (Administrator) #45
The error message is printed when the number of fixation points you have registered voltages for is not large enough. The 2D spatial transform requires 4 fixation points or more (and they should not all be on the same straight line). So you might see the error right after you start calibrating with new fixation points, but it shouldn't appear once you have completed 4 or more fixation points.

The RF mapper task is an example. If you want a fixation point or additional scenes, please modify the code and create your own task.

From the questions I asked and the code you sent me, I believe the calibration you did in ML2 worked just fine. The functions that you posted above may help you pretend that (0,0) is the center of the left (or right) visual field, but they don't actually change ML's coordinate system. So, in the eye data saved in the BHV, (0,0) is still not the center of the left (or right) visual field, and you have to apply a similar correction again when you read out the data. In my opinion, those functions just make things less intuitive. For example, without knowing what those functions really do, there is no way for me to tell how the calibration tool will behave in your setup.

I think it will be better if you just remember that the centers of the left and right visual fields are offset by an angle corresponding to a quarter of the screen width, and add or subtract that angle to the positions of the images depending on whether they will be presented on the right or the left. In other words, you just say that the center of the left field is, for example, (-10,0) rather than say it is (0,0).
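
A sketch of that convention in MATLAB with made-up numbers (the quarter-screen offset of 10 deg and the stimulus position are placeholders):

left_center   = [-10 0];                     % center of the left visual field, in deg
right_center  = [ 10 0];                     % center of the right visual field
stim_pos      = [2 3];                       % position relative to the fused center
pos_left_eye  = left_center  + stim_pos;     % where to draw the left-eye copy
pos_right_eye = right_center + stim_pos;     % where to draw the right-eye copy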
kaciedougherty (Junior Member) #46
You're right about the ML1 code; we don't change ML's coordinate system, but we keep track of where the stimuli are presented in the visual field from the perspective of the animal. It's important for the experiments, and it's a system that has worked for us for a few years.

Thank you for helping me understand the error message, and for the RF mapper. We'll ignore the error message and try to make a task, which will hopefully lead us to a solution.