T_J_Arakeri (Junior Member, Posts: 10)
#1
Hi,
I have three questions:
1) 
This is similar to a question posted earlier: how would one send an event marker at a precise time point, for example, when a movement enters a given window? I saw the following comments posted previously:
"Note that OnOffMarker sends out eventcodes at the beginning of the next frame when fixation is acquired or broken, because behavior is checked at the same rate as the screen refresh rate in the scene framework. But you can retrieve the exact time with no problem." 

"You can store the time in the property of an adapter (you will probably need to make your own adapter for this) and then save to the data file with bhv_variable." 

How would I go about retrieving the exact time?
When writing an adapter, what could I use to detect the exact time within a trial without using "p.scene_time()"?

2)
When I tried using the "bhv_variable" function in the timing file to save an output property of an adapter I wrote (for example, the first time the joystick cursor crossed a certain window), all I obtain is the first value the property takes at the start of the trial, which is usually just an empty array. Do you have any idea how I could fix this?

3) 
This is probably a silly question, since I'm new to behavioral experiments. I read something about temporal performance during data acquisition which mentions that a sampling rate around 70 Hz is less than ideal for behavioral control. My question is: as long as the data acquisition system collecting data for post-hoc analysis has a high sampling rate, why/when would we need a high sampling rate for online behavioral control? Could you give me an example of where high sampling rates are needed for online behavioral control, especially when behavior cannot change at such high rates (like arm movement direction)?

Thank you and regards,
Tapas
Jaewon (Administrator, Posts: 855)
#2
1) First of all, the post was written in the context of online behavior detection. Marking the time of stimuli is not an issue: for stimuli, you determine when to present them, but, for behavior, the subject determines when to move. So you basically don't know the time of a behavior until it occurs.

You should think through whether you actually need to know the time of window crossing online. In most cases, you need to know it when you analyze the data, not while a trial is running. Besides, online detection based on a window is quite arbitrary: you will get a different number if the size of the window changes. Typically you need to apply a velocity criterion and calculate the number offline to get it right.
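As an illustration, offline onset detection with a velocity criterion might look like the following. This is only a sketch: the traces x and y, the sampling rate, and the 20 deg/s threshold are assumptions for the example, not anything defined by MonkeyLogic.

```matlab
% Sketch of offline movement-onset detection with a velocity criterion.
% Assumes x, y are joystick position traces in degrees, sampled at fs Hz
% (e.g., read back from the recorded analog data).
fs = 1000;                               % sampling rate (Hz), assumed
vx = gradient(x) * fs;                   % velocity components (deg/s)
vy = gradient(y) * fs;
speed = hypot(vx, vy);                   % tangential speed (deg/s)
onset_idx = find(speed > 20, 1);         % first sample above threshold (arbitrary)
onset_ms = (onset_idx - 1) / fs * 1000;  % onset time in ms from trace start
```

Because this runs on the recorded samples, the result does not depend on a window size and is not limited to frame boundaries.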

p.scene_time() returns the current time, not the time of behavior detection. The behavior already occurred, perhaps 10-20 ms earlier. If you use the SingleTarget adapter, it has a Time property that indicates the last time a window crossing occurred. SingleTarget calculates this time from the acquired behavior samples, not from the elapsed time. For background information, see the ML manual.

ftp://helix.nih.gov/lsn/monkeylogic/ML2_doc/NIMH%20MonkeyLogic%20Manual.pdf#%5B%7B%22num%22%3A81%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C70%2C707%2C0%5D

2) I can't tell anything without seeing the code. Use the Time property of SingleTarget, or analyze it offline from the recorded data.
ftp://helix.nih.gov/lsn/monkeylogic/ML2_doc/runtimefunctions.html#SingleTarget

3) I think you can figure this one out yourself. If the sampling rate is, for example, 1 Hz, you will learn of a behavior's occurrence up to 1 second after it happens. People in the field typically want to detect behavior with millisecond resolution.
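To put numbers on it, the worst-case detection latency is one sample period, so the comparison is straightforward (a trivial sketch; the rates are example values):

```matlab
% Worst-case online detection latency is one sample period.
rates_hz = [70 500 1000];
latency_ms = 1000 ./ rates_hz;   % roughly 14.3, 2, and 1 ms, respectively
```

At 70 Hz, a reward or stimulus change triggered by the behavior can lag it by over 14 ms, which matters even when the behavior itself changes slowly.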
T_J_Arakeri
#3
Thanks for the reply!

I guess I wasn't clear with my questions.
1) I realize p.scene_time() does not return the time of behavioral detection. What I was unsure about was whether the time returned is tied to the frame rate (frame_no * frame_interval). If it is, how would you obtain a timestamp that is not just a multiple of the frame interval?

2) In the SingleTarget example itself, how would I go about saving the Time property to the behavioral data file (.bhv2)?


Jaewon
#4
I think you should explain with an example or show your code. I am not sure whether you misunderstood something or I am missing your point.

p.scene_time() just returns the time elapsed from the scene start. It has nothing to do with timestamps and it is not tied to the frame rate.

If the name of the SingleTarget object is 'fix', then you can call bhv_variable() like the following.

fix = SingleTarget(eye_);
bhv_variable('Time', fix.Time);
T_J_Arakeri
#5
Thanks!
The first question is clear now.

That is exactly what I tried. However, when I look at the Time variable in the behavioral data file later, it is an empty array on every trial.
Jaewon
#6

FYI,
p.trialtime(): time elapsed from trial start
p.scene_time(): time elapsed from scene start

p.eventmarker(eventcode): mark the eventcode when the next frame begins
p.DAQ.eventmarker(eventcode): mark the eventcode immediately
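For example, a custom adapter could use p.DAQ.eventmarker to send a code as soon as its child adapter succeeds, instead of waiting for the next frame. The following is only a sketch based on the adapter template in the ML manual; the class name, the event code, and the use of the child adapter's Success property are assumptions to check against your ML version.

```matlab
% Hypothetical adapter: marks an eventcode immediately (mid-frame) when the
% wrapped adapter (e.g., SingleTarget) reports success. Sketch only.
classdef ImmediateMarker < mladapter
    properties
        EventCode = 10   % arbitrary example code
    end
    methods
        function obj = ImmediateMarker(varargin)
            obj = obj@mladapter(varargin{:});
        end
        function continue_ = analyze(obj, p)
            continue_ = analyze@mladapter(obj, p);  % run the child adapter first
            if obj.Adapter.Success
                p.DAQ.eventmarker(obj.EventCode);   % sent without waiting for the next frame
                obj.Success = true;
            end
        end
    end
end
```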


Did you call the bhv_variable function after run_scene()? The function stores the value the property contains at the time it is called.

fix = SingleTarget(eye_);
scene = create_scene(fix);

run_scene(scene);
...

bhv_variable('Time', fix.Time);

T_J_Arakeri
#7
That makes sense! Thank you so much for clearing that up!