#1  kpurpura (Junior Member)
Suppose you want to give the subject a choice between two targets to look at. Since the choice is up to the subject, you won't know before the target is acquired which target to assign to eyejoytrack for checking how long fixation is maintained on either of the two targets. My understanding, and what I gather from the error messages when ML crashes, is that you cannot use something like the following:

heldfix = eyejoytrack('holdfix', [taskObject1 taskObject2], threshold, duration);

i.e., you can't use a vector of targets to test whether the eye is in either of two regions of the screen.
What's interesting is that the following code does not crash ML:

onfix = eyejoytrack('acquirefix', [taskObject1 taskObject2], threshold, duration);
The other available form for eyejoytrack,

heldfix = eyejoytrack('holdfix', taskObject1, threshold, duration, 'holdfix', taskObject2, threshold, duration);

also crashes ML.
Am I correct that one cannot use a vector of targets to check multiple potential fixations?
Any ideas about how to execute a free choice paradigm in ML?
Thanks.
#2  Wael.Asaad (Administrator)
You need to find out which target was acquired (using the acquirefix option with a vector of possible targets, and checking the returned value, which tells you which one was chosen), then apply the holdfix option to the one that was chosen. You cannot use a vector for holdfix, because that would imply the subject is simultaneously holding fixation on all of them. (It could have been designed to mean "any" of them, but that was judged less clear.)
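A minimal sketch of that acquire-then-hold pattern might look like the following. The variable names (threshold, acquire_duration, hold_duration) and the trialerror code are illustrative assumptions, not part of the thread:

```matlab
% Sketch of a free-choice trial: acquire fixation on either target,
% then hold the one that was chosen. Assumes eyejoytrack returns 0 if
% no target was acquired, or the index of the acquired target otherwise.
targets = [taskObject1 taskObject2];
chosen = eyejoytrack('acquirefix', targets, threshold, acquire_duration);
if chosen == 0
    trialerror(4);    % 'no fixation' (error code is an assumption)
else
    held = eyejoytrack('holdfix', targets(chosen), threshold, hold_duration);
end
```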

#3  ryklin (Administrator)
Like Wael said, you have to acquire fixation before you can hold it. If you only checked holdfix, the function would always return 0 unless the participant was already looking at the target before the trial started, which is not good methodology. You should always first check that fixation is acquired, even if only for one sample, and then move on to holdfix. Also as Wael said, you can only pass a vector of targets to acquirefix, not holdfix.

When you pass 1 target to acquirefix such as this:
ontarget = eyejoytrack('acquirefix', targetNum1, windowSize, fixDuration);

ontarget will be 1 if fixation was acquired, or 0 if not.

However if you pass a vector of targets such as

ontarget = eyejoytrack('acquirefix', [targetNumA targetNumB targetNumC], windowSize, fixDuration);

Then ontarget will be 0 if no target is acquired, or the index (1, 2, 3, ...) of the target that was chosen.

So be a little careful when coding here: when testing one target you typically look for a positive return value, e.g. if ontarget, ... end. But with a vector of targets you need to branch on the index, e.g. if ontarget == 1, ... elseif ontarget == 2, ... end, etc.
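Spelled out, the branching might look like this (the per-target actions are placeholders):

```matlab
ontarget = eyejoytrack('acquirefix', [targetNumA targetNumB targetNumC], windowSize, fixDuration);
if ontarget == 0
    % no target acquired within fixDuration
elseif ontarget == 1
    % subject chose the first target in the vector
elseif ontarget == 2
    % subject chose the second target
else
    % subject chose the third target
end
```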

There are examples in the tasks folder, in the eyetracking subfolder. Eyetracking.m tests one target at a time, while Eyetracking2.m tests a vector of targets. Please take a look there for further guidance, and let us know if you have any other questions that weren't answered.



#4  kpurpura (Junior Member)
Thanks very much. The solution was staring me in the face; I just hadn't fully appreciated the vector output structure of the acquirefix function. I'm glad that the ability to monitor several targets for the acquisition of fixation was already a fundamental design feature of ML. It's also great that this forum was established and that the response time was so fast.
#5  Wael.Asaad (Administrator)
It's also useful to know that, as a subject's eye position or cursor moves into a target, 'acquirefix' might return a 1 at that instant, but an immediately following 'holdfix' could nevertheless return 0 because the signal has fallen back out of the target radius. There is typically some noise in the signal, so it can bounce in and out of the target window until it moves far enough inside that a noisy fluctuation no longer takes it back out, even for a very brief instant. Because of this, we sometimes insert a small "idle" time (using the "idle" function) of a few tens of milliseconds before applying the 'holdfix'.
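As a sketch of that settle-time trick (the 30 ms value and variable names are illustrative):

```matlab
% Acquire, then give the noisy signal a brief settle time before holdfix,
% so an instantaneous bounce out of the window doesn't abort the trial.
on = eyejoytrack('acquirefix', fixPoint, threshold, acquire_time);
if on
    idle(30);    % a few tens of ms; exact value is task-dependent
    held = eyejoytrack('holdfix', fixPoint, threshold, hold_time);
end
```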

Hope that's helpful-
#6  ryklin (Administrator)
That's an interesting solution, inserting an idle(10). In the past, I have solved this problem differently. What would you do if, during a 300 ms holdfix, the participant blinked? Under most circumstances you would abort the trial and score it as a premature loss of fixation. This can lead to many unnecessary aborts, so real-time blink compensation is a useful feature. Furthermore, if your task requires lengthy fixations, say 3 to 10 seconds, blinking is a behavioral certainty. Another issue is that the noise you describe can appear at any moment during the holdfix, not just the start. With a large enough error window you can typically absorb small noise via the spatial threshold, but a really noisy signal (e.g. from a low-voltage device) can still cause a problem. Anyway, this is an interesting topic to discuss, but I won't harp on about it.
#7  Wael.Asaad (Administrator)
Typically, our subjects naturally learn not to blink after performing the task for a bit. But you could certainly build in some sort of blink-detection algorithm...
#8  sballesta (Junior Member)
Hi,

I would like to implement a real-time blink compensation during relatively long fixations.

Earlier in this thread, ryklin mentioned:

"In the past, I have solved this problem differently. What would you do if during a 300 ms holdfix, the participant blinked?"

Could you share your solution for that in MonkeyLogic?

Thanks
#9  SeanC (Junior Member)
Hi all,

Did anyone come up with a real-time blink compensation method in the end? I don't want to penalise my subject for blinking during a fixation period.

Thanks
#10  Jaewon (Administrator)
Hi SeanC,

Currently MonkeyLogic does not provide a convenient way to detect complex behavioral patterns, like blinks, so you need to program it yourself with the given functions. (It is on my to-do list, but I need some time to work on it.)

You can run another eyejoytrack() with 'acquirefix' and your tolerance as the duration (for example, 200-300 ms) when your subject breaks fixation, and let her/him finish the rest of the fixation period if she/he successfully re-fixates in time.
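One way to sketch that grace-period loop is below. The durations, variable names, and the use of trialtime() for elapsed-time bookkeeping are illustrative assumptions:

```matlab
% Hedged sketch: tolerate brief fixation breaks (e.g. blinks) during a
% long hold by granting a short re-acquisition window before aborting.
total_hold = 3000;   % ms of required fixation (illustrative)
grace      = 250;    % ms allowed to re-fixate after a break (illustrative)
elapsed    = 0;
success    = false;
while elapsed < total_hold
    t0 = trialtime();    % ms since trial start
    held = eyejoytrack('holdfix', fixPoint, threshold, total_hold - elapsed);
    elapsed = elapsed + (trialtime() - t0);
    if held
        success = true;  % completed the remaining hold
        break
    end
    if ~eyejoytrack('acquirefix', fixPoint, threshold, grace)
        break            % did not recover within the grace period
    end
end
```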
#11  SeanC (Junior Member)
Hi Jaewon,

Thanks for this, 

That would be a really useful addition to MonkeyLogic. In my task the monkey must maintain central fixation while objects are presented on either side, so using an idle command and then checking for re-acquisition, say within 50 ms, would affect the temporal consistency of stimulus presentation. However, this would be a good solution if the only goal were to maintain central fixation for a set period without other stimuli being presented.


#12  ryklin (Administrator)
You need to make sure that your program can discriminate between blinks or artifacts, and short saccades. Otherwise the monkey will be able to cheat. 
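One hedged way to tell the two apart: during a blink most trackers report a large, brief off-screen excursion, whereas a saccade to another on-screen location stays within screen bounds. The helper last_eye_position() below is hypothetical, a stand-in for however your setup exposes the current eye sample:

```matlab
% Hypothetical sketch: after a fixation break, classify it before granting
% the re-acquisition grace period. Blinks typically drive the signal far
% outside the screen; a saccade elsewhere stays on-screen.
xy = last_eye_position();   % HYPOTHETICAL helper, not a MonkeyLogic function
if abs(xy(1)) > screen_half_width || abs(xy(2)) > screen_half_height
    % likely a blink or artifact: allow re-acquisition
else
    % likely a saccade away from the target: abort the trial
end
```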