Jaewon
Administrator
#1
I am starting a new thread to explain how to calibrate eye and joystick signals with NIMH MonkeyLogic. NIMH ML currently provides two calibration methods: 1) Origin & Gain and 2) 2-D Spatial Transformation. The former is a new method that requires calibrating only 2 fixation points and is much easier to use with untrained subjects. The latter is the method used in previous versions of MonkeyLogic and requires sampling voltages for at least 4 fixation points. Both methods come with a tool that you can conveniently manipulate with the mouse. If you click any object on the control screen (e.g., the yellow squares below), a fixation point is shown at the corresponding location on the subject screen.

* The Origin & Gain method (2-point calibration)

[Image: gain.png — the Origin & Gain calibration tool]


1) Click the "Show Center" button and wait until the subject looks at it.
2) Click the "Set Origin" button while the subject is fixating. It will register the current voltage reading. Alternatively, you can click the center point with the mouse and press the space bar.
3) Click one of the peripheral points. Increase (or decrease) the X & Y gains if the subject's saccade undershoots (or overshoots).
4) Click the "Save" button. Clicking the "Cancel" button reverts any change that has been made.
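(In effect, these four steps fit a linear mapping from voltage to visual degrees. A minimal sketch of the idea, with hypothetical numbers rather than the actual NIMH ML code:)

    origin = [0.13 -0.25];             % voltage read while the subject fixates the center
    gain   = [5.0 5.2];                % degrees per volt, X and Y
    volt   = [1.1 0.8];                % a new eye-signal sample
    deg    = (volt - origin) .* gain   % gaze position in degrees from the center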


* 2-D Spatial Transformation

[Image: tform.png — the 2-D Spatial Transformation calibration tool]

1) Choose the fixation points (FPs) to use for calibration with right-clicks. Once you pick all FPs, you can turn them on and off with the keyboard (N key & P key) or left-clicks.
2) Hit the space key. Then the voltage reading at that moment is registered for the last FP shown to the subject.
3) Repeat 1) and 2) until the calibration looks good. Then click the "Save" button.
4) The eye trace does not appear until at least 4 FPs have been calibrated.
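(Under the hood, this method fits a 2-D spatial transform from sampled voltages to screen positions, as the file name xycalibrate_tformfwd.m suggests. A minimal sketch with the Image Processing Toolbox, using made-up sample points and a projective fit as one possible choice:)

    volts = [0.1 -0.2; 1.2 -0.1; 0.2 1.1; -1.0 -1.3];  % voltages sampled at 4 FPs
    degs  = [0 0; 5 0; 0 5; -5 -5];                    % the FPs' true positions (deg)
    T  = cp2tform(volts, degs, 'projective');          % fit the voltage -> degree mapping
    xy = tformfwd(T, [0.6 0.4])                        % calibrate a new voltage sample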

stremblay
Member
#2
Hi Jaewon,

These new calibration tools look very useful.

I am wondering how easy it would be to replace the spacebar press with a touchscreen touch on the fixation point during calibration.
This would allow a non-eye-trained monkey to perform a calibration task on the touchscreen, leveraging the fact that the monkey always looks where he touches.

In other words, the question is: could the calibration task detect touchscreen touches to record the eye voltage samples used for calibration, rather than requiring the user to press the spacebar when he believes the monkey is looking at the correct spot?

This would be crucial for touchscreen tasks which do not require fixations.

Thanks again
emadreza
Junior Member
#3
Hi Jaewon,

I am using the October 11, 2016 version of ML on 64-bit MATLAB. I tried to test the new calibration methods. With the first method, 'origin and gain', I have no problem running the memory-guided saccade task and also the DMS task. But when I calibrate with the 2-D transformation method, there is an error during the task when the eye moves.
The image is attached. Our system is not connected to the web, so I took a photo of the screen with a cell phone.

Attached: 20161102_191348.jpg

Jaewon
Administrator
#4
Hi stremblay,

Implementing what you described is possible, but I don't see why it would be useful, since there is no guarantee that the subjects touch the fixation point exactly at the moment they are looking at it. I think some human judgment is inevitable here.

I am a little confused about why you would need eye calibration for touchscreen tasks that do not require fixation, but try the 'origin & gain' calibration. It is easy even for untrained subjects and will take less than 10 seconds to complete once they get used to it.
Jaewon
Administrator
#5
Hi emadreza,

Thanks for the feedback. But I don't recognize the version number you are referring to. If you recently downloaded NIMH MonkeyLogic, could you try again with the latest version? It could be due to something that I already fixed.

http://forums.monkeylogic.org/post/nimh-monkeylogic-8118700?pid=1293849388
emadreza
Junior Member
#6
Hi Jaewon,
Sorry for the delay. I was using the last version in which the new eye calibration methods were introduced. I will test the new version too and send my feedback.
Thanks
 
stremblay
Member
#7
Quote:
Originally Posted by Jaewon
Hi stremblay,

Implementing what you described is possible, but I don't see why it would be useful, since there is no guarantee that the subjects touch the fixation point exactly at the moment they are looking at it. I think some human judgment is inevitable here.

I am a little confused about why you would need eye calibration for touchscreen tasks that do not require fixation, but try the 'origin & gain' calibration. It is easy even for untrained subjects and will take less than 10 seconds to complete once they get used to it.



Hey Jaewon,

Having a touch trigger the registration of a fixation point would be pretty useful in our case. Even if the monkey is not using eye movements to perform the touchscreen task, it is important to know what he was looking at (saccading to different options on the screen) before he made a choice. Eye movements can reveal a lot about the monkey's attention.

Now, if you doubt that the monkey would look where he touches, I can guarantee you that he does, especially if the touch targets are small enough. Next time you play with your iPad and want to select a small link, notice how closely your eyes follow your finger!

We can try to implement the touch trigger ourselves. It would be helpful if you could let us know where to start (which line in xycalibrate).

Thanks!
Jaewon
Administrator
#8
Hi stremblay,

No, I don't doubt that the mk would look where he touches. I just don't think that he will keep fixating on his fingertip until the touch is registered. When I touch, my finger usually occludes the target I am looking at, so by the time my finger actually contacts the glass, I am already looking away or somewhere around the finger. My sight is needed only until I move my finger to the right location; I don't need it to lower my finger from there.

For analysis, you can still find out the mk's gaze position at the time of the touches from the BHV file. I just don't think it is a good way to calibrate the eye signal, for the above reason.

If you have to implement something like that yourself, you can take a look at the run_scene() function in either xycalibrate_gain.m or xycalibrate_tformfwd.m. But I have to warn you: the program design of MonkeyLogic is not great, so you will have to modify everything related to it, from the UI code to the BHV format.
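To illustrate only the shape of the change (a hypothetical sketch, not actual ML code; read_touch and read_eye_volts are function handles standing in for whatever run_scene() really uses):

    function volts = wait_for_touch_sample(fp_pos, win_radius, read_touch, read_eye_volts)
    % Block until a touch lands inside the FP window, then return the eye
    % voltage at that moment -- the role the space bar plays in the stock tool.
    while true
        [tx, ty, touched] = read_touch();                  % current touch state
        if touched && norm([tx ty] - fp_pos) < win_radius  % touch inside the window?
            volts = read_eye_volts();                      % the sample to register for fp_pos
            return
        end
    end
    end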

Wing
Junior Member
#9
Hi Jaewon,

Is it true that we are not allowed to change the fixation wait/hold time (which is 2000/200 msec, respectively) in the Origin & Gain method?

It would be helpful for fixation training if we could edit the hold time in particular.

Other parameters, like the fixation window radius, work fine.

And thanks for addressing the touchscreen issue. I just started with ML, so sorry for these naive questions. [smile]

Jaewon
Administrator
#10
I am confused. Did someone tell you that you are not allowed to change them? Or did you try but it didn't work? I wouldn't have added edit boxes there if they were supposed to be fixed. Please try again. For MATLAB UIs, you should press ENTER after making any change; otherwise, the changes won't be registered properly.
Wing
Junior Member
#11
Hi Jaewon,

    Sorry for the late reply.

Yes, I tried to edit the time parameters in the edit boxes (all the time parameters and the reward numbers) but failed. An error was reported after I pressed the Enter key, pointing to

              xycalibrate_tformfwd/UIcallback (line 483)
                 if 0<val, tform.(hObject.Tag) = val; end

The same error happens in the 2-D spatial map calibration method.

Is something wrong with the way I edit?
   
Jaewon
Administrator
#12
Hi Wing,

No, this is totally on me. I work mostly with R2016a and keep forgetting which is the new syntax that doesn't work with old versions. It is fixed now. I will upload it either later today or tomorrow. Sorry for the inconvenience and thank you for letting me know.
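(For reference, the failing line used dot access on a graphics handle, hObject.Tag, which only works in R2014b and later. A version-safe form, which is a guess at the shape of the fix rather than the actual patch, reads the properties with get():)

    val = str2double(get(hObject, 'String'));            % read the edit box contents
    if 0 < val, tform.(get(hObject, 'Tag')) = val; end   % avoids the R2014b+ dot syntax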

Jaewon
Wing
Junior Member
#13
Hi Jaewon,
 
I found that during the calibration task (newest version), it sometimes happens that I can't trigger my reward machine through a remote controller.

Currently I have added a function key that calls the reward() function, which can replace the remote controller.

But since the remote controller should work in parallel, I wonder which part disturbs the signal it sends?

More information:
DAQ board: NI PCI-6220
Reward machine: 5-RLD-E1 Liquid Reward System from Crist Instrument Co.
I use a digital port to send the signal.
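For reference, pulsing a digital port looks roughly like this with the session-based DAQ interface (the device ID, port/line, and pulse width below are placeholders, not my exact settings):

    s = daq.createSession('ni');                               % session-based NI interface
    addDigitalChannel(s, 'Dev1', 'Port0/Line0', 'OutputOnly'); % line wired to the reward system
    outputSingleScan(s, 1);                                    % raise the line: valve opens
    pause(0.1);                                                % hold for a 100-ms pulse
    outputSingleScan(s, 0);                                    % lower the line: valve closes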
     
Jaewon
Administrator
#14
Hi Wing,

I have to ask some questions to understand the situation. The first question is: what is "a remote controller" and how does it work? The second is: how did you add the reward() function, and did it solve the problem? Or did you add the function and then the problem occurred?
Wing
Junior Member
#15
Hi Jaewon,
 
First, for the second question: I added the function to replace the remote controller, and it works quite well (the problem occurred before I added the function).

As for the remote controller, here is a link to my reward machine, the 5-RLD-E1 (https://www.yumpu.com/en/document/view/31097047/crist-instrument-co/67). The remote controller port I referred to is item 7), a remote control double banana jack input, under the heading 5-RLD-E1B.

So adding the reward() function meets my needs, even though the problem (the remote controller doesn't work during the task) still exists.
Actually, the remote controller doesn't work for a certain period of time even after I exit the task.

By the way, I've been using the calibration task for several days, and it works very well!
Really appreciate your efforts!
Jaewon
Administrator
#16
Is this a new problem that you haven't had before?

If the remote controller input is connected in parallel with the NI board, it may just be getting drained to the NI ground, but I cannot tell for sure without knowing what kind of signal the remote controller generates and how it is wired.

----------

Now I understand some of your previous comment. So you added some code to call my reward() function when a function key is pressed, didn't you?
Wing
Junior Member
#17

Hi Jaewon,

Sorry for my bad explanation. I was trying to describe the same thing in #13 and #15.

Let me describe again what happened, along the timeline.

1. I tried the calibration task and found that I couldn't trigger the reward machine through the remote controller, which I usually use to give reward manually, during the task.
2. I added a function key to call the reward() function so that I can give reward manually by pressing this key during the calibration task. It works fine!
So currently, one problem is solved (I can give reward manually during the calibration task) and one problem remains (I can't give reward through the remote controller during the calibration task).

Also, the remote controller input is connected directly to the reward machine: the remote controller and the NI board send signals to the reward machine independently.

Here are two other pieces of information that might be useful:
1. The remote controller doesn't work even after I exit the task; it recovers only after I run the corresponding hardware test once on the ML main interface (e.g., give a pulse to the port & line I assigned as the reward port).
2. In our lab, another member is using the GitHub version of ML, and the remote controller works during the calibration task. Our hardware settings are the same.

I suppose the signal sent through the remote controller is like a TTL high signal; I'll check this point further.

Jaewon
Administrator
#18
Hi Wing,

Thanks for the additional information. I think I understand fully now.

Since ML and the remote controller both work fine when they are used separately, I don't think this is a software problem that I can fix. If your remote controller is not just a short-circuit switch but sends out an active TTL, then it is likely a ground problem, as I mentioned. You can try connecting the NI board's digital GND to the remote controller's ground.
Wing
Junior Member
#19
Thank you Jaewon!

    You're right! Problem solved!
kms
Junior Member
#20
Hi Jaewon, 

I have noticed that after I use the Origin & Gain method, the eye position data looks restricted to the display screen used for calibration, i.e., the signal appears cut off or saturated at the perimeter of the display (see the attached picture for reference, where red is the eye signal).

While this may not be a concern in most cases, I am interested in examining where the subject looks after making a decision on the screen, e.g., an object of interest next to the screen that could influence her future decision-making. For this, I would like my task to register signals outside the perimeter of calibration as well (as far as the eye tracker allows). Is it already possible to do that using any of the calibration routines?

Thanks.

Attached: 1.jpg

Jaewon
Administrator
#21
It is not the calibration method that cuts off the signal. Either that is the limit your eye tracker allows, or the eye tracker's output is beyond the NI board's input range (usually ±10 V). You can decrease the output gain of your eye tracker and do the calibration again.
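As a quick worked example (hypothetical numbers), the largest eccentricity that survives the ±10 V limit is set by the tracker's volts-per-degree output gain:

    tracker_gain = 0.5;               % tracker output, volts per degree
    deg_max = 10 / tracker_gain       % only +-20 deg reach the NI board before clipping
    % Halving the output gain to 0.25 V/deg extends the usable range to +-40 deg;
    % then just redo the calibration in ML.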
kms
Junior Member
#22
Thanks so much, Jaewon! Scaling the x and y range in the tracker settings seems to have made a difference!
kms
Junior Member
#23
Hi Jaewon, 

I would like to calibrate more than one subject's eye position in ML. Would it be possible to manually assign signals from channels other than Eye Signal X and Y (e.g. general input 1 and 2) for this purpose?

Thanks.


Jaewon
Administrator
#24
I don't know what you want to do with it, but the entire ML is hard-coded to use Eye X and Y, so it will be difficult for now.
kms
Junior Member
#25
The idea was just to calibrate the eye signal from the second subject before recording, and then save it for the duration of the task. Since this subject will not be actively performing any trial, Eye X and Y can still refer to the signals from subject 1.

Thanks.
