kpurpura (Junior Member) #1

Suppose you want to give the subject a choice between two targets to look at. Since the choice is up to the subject, you won't know before the target is acquired which target to assign to eyejoytrack for checking how long fixation is maintained on either of the two targets. My understanding, from what I can see in the error messages when ML crashes, is that you cannot use something like the following:

heldfix = eyejoytrack('holdfix', [taskObject1 taskObject2], threshold, duration);

i.e., you can't use a vector of targets to test whether the eye is in either of two regions of the screen.

What's interesting is that the following does not crash ML:

onfix = eyejoytrack('acquirefix', [taskObject1 taskObject2], threshold, duration);

The other available form for eyejoytrack,

heldfix = eyejoytrack('holdfix', taskObject1, threshold, duration, 'holdfix', taskObject2, threshold, duration);

also crashes ML.
Am I correct in that one cannot use a vector of targets for checking multiple potential fixations?
Any ideas about how to execute a free choice paradigm in ML?
Thanks.

Wael.Asaad (Administrator) #2

You need to find out which target was acquired (using the acquirefix option with a vector of possible targets; the returned value tells you which one was chosen), then apply the holdfix option to the one that was chosen. You cannot use a vector for holdfix, as that would imply the subject is simultaneously holding fixation on all of them (it could have been designed to mean "any" of them, but that was decided against for the sake of clarity).
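A minimal sketch of this two-step approach (target1, target2, fix_radius, choose_time, and hold_time are placeholder variable names, not ML built-ins):

chosen = eyejoytrack('acquirefix', [target1 target2], fix_radius, choose_time);
if chosen == 0
    trialerror(1); return;             % no target acquired in time
end
% hold fixation on whichever target was acquired
if chosen == 1
    held = eyejoytrack('holdfix', target1, fix_radius, hold_time);
else
    held = eyejoytrack('holdfix', target2, fix_radius, hold_time);
end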

Edward (Administrator) #3

Like Wael said, you have to first acquire fixation before you can hold it. If you were to check only for holdfix, the function would always return 0 unless the participant was already looking at the target before the trial started, which is not good methodology. You should always first check that fixation is acquired, even if only for a single sample, then move on to holdfix. Furthermore, as Wael also said, you can only pass a vector of targets to acquirefix, not holdfix.

When you pass one target to acquirefix, like this:

ontarget = eyejoytrack('acquirefix', targetNum1, windowSize, fixDuration);

ontarget will be 1 if fixation was acquired, or 0 if not.

However, if you pass a vector of targets, such as

ontarget = eyejoytrack('acquirefix', [targetNumA targetNumB targetNumC], windowSize, fixDuration);

then ontarget will be 0 if no target is acquired, or the index (1, 2, 3, ...) of the target that was acquired.

So be a little careful when coding here: when testing one target you typically look for a positive return value, such as if (ontarget), ... end. But with a vector of targets you need to branch on the returned index, e.g., if (ontarget == 1), ... elseif (ontarget == 2), ... end, and so on.
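For instance, a minimal branching sketch (the eventmarker codes 11-13 and the trialerror code 2 are arbitrary placeholders):

switch ontarget
    case 0
        trialerror(2);                 % no target acquired
    case 1
        eventmarker(11);               % targetNumA was chosen
    case 2
        eventmarker(12);               % targetNumB was chosen
    case 3
        eventmarker(13);               % targetNumC was chosen
end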

There are examples in the tasks folder, in the eyetracking subfolder: Eyetracking.m tests one target at a time, while Eyetracking2.m tests a vector of targets. Please take a look there for further guidance, and let us know if you have any other questions.



kpurpura (Junior Member) #4

Thanks very much. The solution was staring me in the face; I just hadn't fully appreciated the vector output structure of the acquirefix function. I'm glad that the ability to monitor several targets for the acquisition of fixation was already a fundamental design feature of ML. It's also great that this forum was established and that the response time was so fast.

Wael.Asaad (Administrator) #5

It's also useful to know that, as a subject's eye position or cursor moves into a target, 'acquirefix' might return a 1 at that instant, but an immediately following 'holdfix' could nevertheless return 0 because the signal has fallen back out of the target radius. There is typically some noise in the signal, so it can bounce in and out of the target until it moves far enough inside the target radius that the noisy fluctuation doesn't, even for a very slight instant, take it back out again. Because of this, we sometimes insert a small "idle" time (using the idle function) of a few tens of milliseconds before applying the 'holdfix'.
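As a minimal sketch of that sequence (fix_point, fix_radius, acquire_time, and hold_time are placeholder task variables):

ontarget = eyejoytrack('acquirefix', fix_point, fix_radius, acquire_time);
if ~ontarget
    trialerror(4); return;             % fixation never acquired
end
idle(50);                              % give the noisy signal time to settle inside the window
held = eyejoytrack('holdfix', fix_point, fix_radius, hold_time);
if ~held
    trialerror(3); return;             % fixation broken
end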

Hope that's helpful-

Edward (Administrator) #6

That's an interesting solution, inserting an idle(10). In the past, I have solved this problem differently. What would you do if, during a 300 ms holdfix, the participant blinked? Under most circumstances you would abort the trial and score it as a premature loss of fixation. This can lead to many unnecessary aborts, so real-time blink compensation is a useful feature. Furthermore, if your task requires lengthy fixations, say 3-10 seconds, blinking is a behavioral certainty. Another issue is that the noise you describe can appear at any moment during the holdfix, not just at the start. With a large enough error window you can typically handle small noise with a spatial threshold, but a really noisy signal (e.g., from a low-voltage device) can still cause a problem. Anyway, this is an interesting topic to discuss, but I won't harp on about it.

Wael.Asaad (Administrator) #7

Typically, our subjects naturally learn not to blink after performing the task for a bit. But you could certainly build in some sort of blink-detection algorithm...

sballesta (Junior Member) #8

Hi,

I would like to implement a real-time blink compensation during relatively long fixations.

Earlier in this thread, ryklin mentioned:

"In the past, I have solved this problem differently. What would you do if during a 300 ms holdfix, the participant blinked?"

Could you share your solution for that in MonkeyLogic?

Thanks

SeanC (Junior Member) #9

Hi all,

Did anyone come up with a real-time blink compensation method in the end? I don't want to penalise my subject for blinking during a fixation period.

Thanks

Jaewon (Administrator) #10

Hi SeanC,

Currently MonkeyLogic does not provide a convenient way to detect complex behavioral patterns, like blinks, so you have to program it yourself with the given functions. (It is on my to-do list, but I need some time to work on it.)

You can run another eyejoytrack() with 'acquirefix' and your tolerance parameter (for example, 200-300 ms) when your subject breaks fixation, and let him or her finish the rest of the fixation period if he or she successfully re-fixates in time.
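A minimal sketch of that back-and-forth loop (fix_point, fix_radius, total_hold, and blink_tolerance are placeholder variables; trialtime is ML's trial clock in ms):

remaining = total_hold;                % ms of fixation still required
while remaining > 0
    t0 = trialtime;
    held = eyejoytrack('holdfix', fix_point, fix_radius, remaining);
    remaining = remaining - (trialtime - t0);
    if held, break, end                % the full period was completed
    % fixation broke; allow re-acquisition within the blink tolerance
    reacquired = eyejoytrack('acquirefix', fix_point, fix_radius, blink_tolerance);
    if ~reacquired
        trialerror(3); return;         % genuine fixation break
    end
end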

SeanC (Junior Member) #11

Hi Jaewon,

Thanks for this.

That would be a really useful addition to MonkeyLogic. In my task the monkey must maintain central fixation while objects are presented on either side, so using an idle command and then checking for re-acquisition within, say, 50 ms would affect the temporal consistency of the stimulus presentation. However, this would be a good solution if the only goal were to maintain central fixation for a set period without other stimuli being presented.


Edward (Administrator) #12

You need to make sure that your program can discriminate between blinks or artifacts and short saccades. Otherwise the monkey will be able to cheat.

Geenaianni (Junior Member) #13

Hi All, 
Regarding ML's ability to discard or otherwise deal with blinks during fixation periods: Jaewon's suggestion from May 31st is helpful, but I'm just wondering if there are any updates.
Thanks.
best,
Geena

Jaewon (Administrator) #14

For the original ML, blink detection is structurally a difficult problem to solve. I still think you just have to call eyejoytrack('holdfix') and eyejoytrack('acquirefix') back and forth until the whole fixation period passes.

If you write the script with NIMH ML's new runtime functions, you can use the LooseHold adapter, which allows fixation breaks if they are shorter than a given limit. You can take a look at the example tasks included in the latest package (see "task\runtime v2\5 LooseHold" or "task\runtime v2\7 timer demo").

Geenaianni (Junior Member) #15


Hi Jaewon, 
Thanks much for this -- LooseHold works nicely for us. I have a question regarding the "threshold" property. When using SingleTarget for eye signals, there is an option to specify a threshold, which appears to be an allowable radius around the target within which the eyes can be while the trial still proceeds. LooseHold does not appear to have this option, so I am curious: how is this radius defined during scenes containing LooseHold? I am attaching the code for clarity; lines 26 and 36 (now commented out) are the relevant portions.

thanks as always,
Geena

 
Attached: lh_test_GI.m

Jaewon (Administrator) #16

SingleTarget is the adapter that checks whether the eye is within the threshold. WaitThenHold and LooseHold receive input from SingleTarget and just analyze the duration of the fixation, so LooseHold itself doesn't have a threshold option. Change the threshold of the SingleTarget that is fed to LooseHold, or set up another SingleTarget for LooseHold as below.

% scene 2: fixation hold
fix2 = SingleTarget(eye_);             % track the eye signal
fix2.Target = fixation_point;          % the object to fixate
fix2.Threshold = hold_radius;          % allowable radius around the target (deg)
lh2 = LooseHold(fix2);                 % tolerate brief fixation breaks
lh2.HoldTime = hold_fix;               % total required hold time (ms)
lh2.BreakTime = break_time;            % maximum allowable break duration (ms)
scene2 = create_scene(lh2,fixation_point);
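Then run the scene and check the outcome, for example (run_scene and the adapter's Success property are standard runtime v2 usage):

run_scene(scene2);
if ~lh2.Success
    trialerror(3);                     % fixation was not held through the hold period
end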

Geenaianni (Junior Member) #17

yup, that's what I needed to know. thanks.

cooperb138 (Junior Member) #18

Hello,
It looks like I am trying to do something similar, so I thought I would jump on to this thread.

I have a free-view search task for a target in a field of distractors. Following Jaewon's example, I set a SingleTarget for FreeThenHold, and the task performs exactly the way I would like.
However, I was wondering if there is a way to use MultiTarget with FreeThenHold such that I could send eventmarkers for distractor fixations. For example, with a task like 'shapes', is there a way to use ML2's OOP to send eventmarkers for views of distractor shapes while only rewarding the star?

 


Jaewon (Administrator) #19

You may need your own adapter for doing this, but it is possible. How many distractors do you have? If there are too many, checking fixation for each of them may take too long.
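As a very rough sketch of what such an adapter could look like: this assumes NIMH ML's user-adapter pattern (inheriting from mladapter and overriding analyze). DistractorWatch and all of its property names are hypothetical, and the calls used for reading eye data (obj.Tracker.XYData) and stamping markers (p.eventmarker) should be checked against the current NIMH ML documentation.

% DistractorWatch: hypothetical adapter that stamps an eventmarker the first
% time gaze enters each distractor window, while deferring success to the
% adapter chain below it (e.g., your FreeThenHold).
classdef DistractorWatch < mladapter
    properties
        Distractor                     % n-by-2 matrix of distractor positions (deg)
        Threshold                      % window radius (deg)
        Marker                         % n-by-1 vector of eventmarker codes
    end
    properties (Access = protected)
        Seen                           % which distractors have been marked already
    end
    methods
        function obj = DistractorWatch(varargin)
            obj = obj@mladapter(varargin{:});
        end
        function continue_ = analyze(obj, p)
            continue_ = analyze@mladapter(obj, p);
            if isempty(obj.Seen), obj.Seen = false(size(obj.Distractor,1),1); end
            xy = obj.Tracker.XYData;   % eye samples from the current frame (assumed API)
            if isempty(xy), return, end
            pos = xy(end,:);           % most recent sample
            for m = find(~obj.Seen)'
                if norm(pos - obj.Distractor(m,:)) < obj.Threshold
                    p.eventmarker(obj.Marker(m));   % stamp the distractor-view code
                    obj.Seen(m) = true;
                end
            end
            obj.Success = obj.Adapter.Success;      % success still comes from FreeThenHold
        end
    end
end

You would then wrap it around the existing chain, e.g., dw = DistractorWatch(fth); scene = create_scene(dw, taskobjects);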

cooperb138 (Junior Member) #20

Thanks Jaewon, I will play around with this. I was just curious whether there was already a method that I had overlooked.

