aboharbf (Member) #1
So I'm attempting to synchronize the clocks on my MonkeyLogic machine and my Blackrock NSP. I'm using the timestamps associated with the eventmarkers on both machines to find a function that maps from one time base to the other, and I'm running into some difficulty.

Initially, using all the eventmarkers corresponding to trial start, I created arrays with the timestamps from each machine and fit a linear regression of the MonkeyLogic computer timestamps to the Blackrock timestamps (logVsBlkModelTrial = fitlm(trialStartTimesBlk, trialStartTimesLog); in MATLAB). This produced a very good fit, I think.

[Image: trialFit.png]
Comparing the original Blackrock timestamps with the ones calculated from the model shows they're nearly identical.

I then did the same thing with the eventmarkers corresponding to the stimulus appearing on the screen, using essentially the same methods and run data as before.

[Image: trialFit.png]
Offsets now seem to be all over the place, with a few outliers. We also have the strobe feature on (where scene transitions are accompanied by strobe transitions), and shifting the Blackrock timestamps from when the eventmarker arrived to when the strobe transition took place doesn't seem to help.

Any ideas about what may be going on?

Should I accept the photodiode transition as the "ground truth" of stimulus presentation and disregard MonkeyLogic's eventmarker timestamps?

One of the reasons I'm working on this is to eventually synchronize the eye signals from MonkeyLogic with those recorded on my Blackrock machine, on the suspicion that there may be a slight lag on MonkeyLogic's end (ISCAN --> BNC-2090 --> DAQ card) versus (ISCAN --> directly into Blackrock analog inputs). Any input on this thinking would also be appreciated.

Thanks a lot.


Edward (Administrator) #2
If I understand correctly, the trial start times and stimulus onset times can differ a lot depending on how much "background" work there is between the start of the trial and the display of the initial stimulus. This is why you see the discrepancy.

aboharbf (Member) #3
Hey Edward,

Given the function of eventmarkers, this seems like a bug. There shouldn't be this much variability between an event being saved and timestamped as an eventmarker in the log file and that marker being sent out to the digital inputs of another machine. I realize that there is going to be some variability, but the spread here is huge for a processor, especially given that these commands should theoretically be next to each other in the code. At least, I imagine it going as follows:

Save Eventmarker to Log
Send digital out
Start stimuli

Unless the starting of the stimulus sits between the saving and the sending, this discrepancy doesn't make sense.

I think it's clear the discrepancy is a bug and could be resolved by rearranging the sequence of events in the code.

pschade (Junior Member) #4
A photodiode is critical for aligning to stimulus presentation. The delay between the event marker and the photodiode varies considerably, and timing differences can also vary quite a bit across setups.

aboharbf (Member) #5
Quote:
Originally Posted by pschade
A photodiode is critical for aligning to stimulus presentation. The delay between the event marker and the photodiode varies considerably, and timing differences can also vary quite a bit across setups.


I use the photodiode feature, presented in the lower right, and record it on an analog input on the Blackrock. What is surprising is the discrepancy between the timestamps for the 9 on each machine versus those for the 20 (which is my stimulus-on marker). There seems to be no reason the variability in the former couldn't be matched in the latter.

pschade (Junior Member) #6
If you are using runtime v1, the variability may come from the 'toggleobject' function.

aboharbf (Member) #7
I'm using runtime v2, and the stimuli are uncompressed videos preloaded into RAM. I can make the ISI as low as 200 ms without issue.

Jaewon (Administrator) #8
Hi aboharbf,

I am not sure what I am seeing here. I need more information about what the raw data points are and how you prepare them.

If I understand correctly, you have timestamps in both ML and Blackrock. Then why do you run regressions? You should compare the timestamps directly. Since the clock bases of the two systems may not run at the same frequency, all the timestamps in one trial should be relative measures from the trial start code. In other words, you need to subtract the time of the 9s from the time of the 20s and compare the resulting numbers between the systems.

aboharbf (Member) #9
Hey Jaewon,

Details:
I have the MonkeyLogic computer running the task, with eventmarkers assigned to digital outputs which are received on the digital inputs of the Blackrock NSP (which also records the electrode). These events are stored in Blackrock's .NEV file format. The MonkeyLogic data format I used is .bhv2. The digital inputs to Blackrock are sampled at 30 kS/s.

To prepare the files, I use Blackrock's openNEV function to create a NEV struct and go to NEV.Data.SerialDigitalIO.UnparsedData - this is where you'll see sequences like "9, X, 10, 20, 30, 40, 18" coming into the digital inputs. The timestamp for each of these is in NEV.Data.SerialDigitalIO.TimeStampSec, which I multiply by 1000 to get ms. For the particular run I've been showing, the first number comes out to 3.0678 sec, while the last is 1403.5371 sec (so ~23 minutes between the two). The unparsed values are saved as "packetData" and the accompanying timestamps, converted to ms, as "packetTimes".

Then I just do this:   trialStartTimesBlk = packetTimes(packetData == 9);

To find the corresponding timestamps for MonkeyLogic, I load the .bhv2 and do the following:

trialStartTimesLog = [data(:).AbsoluteTrialStartTime]'; % get all the trial start times

tmpStruct = [data(:).BehavioralCodes]'; % pull the accompanying codes and their times for each trial from the larger data struct

for ii = 1:length(trialStartTimesLog)
  % For every trial, add the within-trial time of the marker of interest (in
  % this case, the 9) to the absolute start time of its trial. This creates
  % the absolute time for each 9.
  trialStartTimesLog(ii) = trialStartTimesLog(ii) + tmpStruct(ii).CodeTimes(1);
end

Then I use the linear model function (logVsBlkModel = fitlm(trialStartTimesBlk, trialStartTimesLog)) to move the timestamps of the MonkeyLogic log file into the space of the Blackrock clock. This is basically a way to account for any potential differences in clock speed. If everything is going as it should, the slope m is typically 0.9999 or 1.0001, very nearly 1, with some offset (y0), which is the difference between when I hit record on Blackrock's recorder and play on MonkeyLogic. This offset should also account for the delay between MonkeyLogic recording/sending an eventmarker and Blackrock receiving it, and the error on that intercept (as I interpret it) represents the variability in that delay.

I then collect the variables from the model:

m = logVsBlkModel.Coefficients.Estimate(2);
y0 = logVsBlkModel.Coefficients.Estimate(1);

Use them to transform the times from the MonkeyLogic log file:

trialStartTimesFit = (1/m)*(trialStartTimesLog - y0);


and assess the quality of the fit from the error in the model, basically how far off each transformed point is from what Blackrock actually saw and recorded:

eventTimeAdjustments = trialStartTimesFit-trialStartTimesBlk;

This code produced the first image, with a slope of 1 (clocks at the same speed), an intercept of ~3 seconds (a reasonable delay between me hitting record on Blackrock and play on MonkeyLogic), and an SE of 0.002 ms (a very reliable path from a timestamp in the log file to the same timestamp received and recorded in Blackrock).

Now, when I do all of this for eventmarker 20 instead of 9, I first remove all failed trials (trials go from 314 --> 303):

1. For Blackrock, I loop through all the packetData, collecting each trial's eventmarkers temporarily and saving them only once I run into the eventmarker signifying reward (40), which denotes a successful trial:

  trueTrialcount = 1; % counter for successful trials
  for ii = 1:length(packetData)
    if packetData(ii) > 100
      stimCondTemp = packetData(ii); % condition code for the upcoming trial
    elseif packetData(ii) == stimStartMarker
      stimStartTemp = packetTimes(ii);
    elseif packetData(ii) == stimEndMarker
      stimEndTemp = packetTimes(ii);
    elseif packetData(ii) == rewardMarker % this assumes the "juice end time" is right after this marker
      taskEventIDsBlk(trueTrialcount) = stimCondTemp;
      taskEventStartTimesBlk(trueTrialcount) = stimStartTemp;
      taskEventEndTimesBlk(trueTrialcount) = stimEndTemp;
      juiceOnTimesBlk(trueTrialcount) = packetTimes(ii);
      juiceOffTimesBlk(trueTrialcount) = packetTimes(ii + 1);
      trueTrialcount = trueTrialcount + 1;
    end
  end

I realize I handle the juice end marker imprecisely, but I don't really use it down the line, so I'm not worried for now.

2. For the MonkeyLogic times, I do the same thing as I did for the trial starts, but using a line of code which doesn't hardcode "20" as my start eventmarker:

taskEventStartTimesLog(ii) = mklTrialStarts(ii) + tmpStruct(ii).CodeTimes(tmpStruct(ii).CodeTimes(strcmp(behavioralCodes.CodeNames,'Stimuli On')));

i.e., the Nth event start = the Nth trial start + the timestamp that lines up with "20" in CodeTimes for the Nth trial.

I fit the same kind of model, shift the points, subtract the Blackrock-recorded times, and create a histogram of the differences, and they're huge.

Jaewon (Administrator) #10
Thanks for the information. I am not familiar with Blackrock's file format, but I can see what you're trying to do.

As I mentioned above, regression is not the right way to compare the timestamps. You should use the raw timestamps. It doesn't make sense to compare the estimates of the model with the measures from the other system. At the very least, if you thought your first model (from code 9) had a good fit, you should have used that very same model to get the estimates for code 20, rather than fitting another model.

I need you to do the calculation one more time. Collect the timestamps of 9 & 20 from both data files, like the following table. If some trials do not have code 20, you can skip those trials.

            MonkeyLogic           Blackrock
          Code 9    Code 20    Code 9    Code 20
          (Col A)   (Col B)    (Col C)   (Col D)
Trial 1      .         .          .         .
Trial 2      .         .          .         .
Trial 3      .         .          .         .

Subtract Column A from Column B and store the result in another variable, let's say, ML. Do the same calculation for Blackrock and store it in BR (i.e., BR = (Col D) - (Col C)). Then compute the difference between ML and BR. That will show you what the errors really are.
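
In MATLAB, that comes out to just a few lines. Here is a minimal sketch, assuming you have already collected the four columns into vectors (the variable names here are placeholders, not from your code):

  % mlCode9, mlCode20: MonkeyLogic timestamps (ms), one row per trial
  % brCode9, brCode20: Blackrock timestamps (ms), same trials in the same order
  ML  = mlCode20 - mlCode9;  % within-trial latency of code 20 on the ML clock
  BR  = brCode20 - brCode9;  % within-trial latency of code 20 on the Blackrock clock
  err = ML - BR;             % per-trial disagreement between the two systems
  histogram(err)             % the spread of this distribution is the real timing error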

aboharbf (Member) #11
I can see why regression isn't optimal, but there is no reason it should not work. As noted, both models have a slope of very nearly 1, meaning the bulk of the work being done is finding the offset which shifts the data from one space into the other. That said, I did the column math you suggested and noticed a much larger than normal difference in my MonkeyLogic-derived numbers. That sent me back into the earlier code where I built these vectors, and I found that the line which looked for the index of "Stimuli On" in the code names and used the matching code number was referencing the wrong structure (I mistakenly used the MLConfig code numbers instead of the ones in each trial).

After sorting this out, the models produce identical results for the offset. I was initially worried there might be variability in the stimulus start times while MonkeyLogic is in the middle of a trial, but that seems not to be the case.
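
For reference, the corrected lookup now looks something like this (just a sketch; it assumes my behavioralCodes struct pairs CodeNames with CodeNumbers, and that tmpStruct carries each trial's own CodeNumbers/CodeTimes from the .bhv2 BehavioralCodes field):

  % Resolve the "Stimuli On" code number once, from the config-level names
  stimOnCode = behavioralCodes.CodeNumbers(strcmp(behavioralCodes.CodeNames, 'Stimuli On'));
  for ii = 1:length(mklTrialStarts)
    codes = tmpStruct(ii).CodeNumbers; % the codes actually sent during trial ii
    times = tmpStruct(ii).CodeTimes;   % their within-trial timestamps
    taskEventStartTimesLog(ii) = mklTrialStarts(ii) + times(codes == stimOnCode);
  end

Thanks a lot for the help.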

Jaewon (Administrator) #12
You don't see the problem now because your Blackrock's sampling is fast and accurate. But the regression line is constrained to pass through the mean of the variables, which is the middle of your timestamp range. Since the offset (the intercept) is at the leftmost position of your timestamp distribution, a tiny change in the slope due to deviations in the data will produce a huge difference around Time 0, which is the same issue your model of code 20 had.
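
To put rough numbers on it (made-up values, just to show the scale of the effect):

  tMid   = 700e3;          % mid-session timestamp in ms (a ~23-minute session)
  dSlope = 1e-5;           % slope estimate off by 0.001% (1.00001 instead of 1.00000)
  errAt0 = dSlope * tMid   % ~7 ms error at Time 0, even though the fit looks perfect

A slope error far too small to see in the fit still moves the intercept by several milliseconds, because the intercept is an extrapolation far from the mean of the data.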

As for the offset, you should calculate it by subtracting the very first timestamp of ML from the very first timestamp of Blackrock. That is simpler and more accurate.

aboharbf (Member) #13
When you say a "tiny change in the slope" due to a deviation in the data, I'm confused.

In a normal circumstance, where latencies are correct and consistent, the line should pass very close to all of these points, and the intercept should be rather firmly anchored by the many points whose residuals would be much higher were the line to shift.

In a circumstance where a deviation in the data did exist, it seems I have no reason to believe the deviation didn't happen on the first trial as much as on the last. If there is a deviation in a single point, the regression line is robust to it because of the 100+ other points, while if I get unlucky and the deviation is in the first point and I use the Blackrock(1) - MonkeyLogic(1) method, then I'm stuck. The regression line seems like it can do no worse than the suggested method in good circumstances (other than costing a couple of extra lines of code), and better in bad circumstances.

Am I missing something about common problems that exist in these systems/data points?

Jaewon (Administrator) #14
Your Blackrock samples at 30 kS/s. If you are using 15 lines, the error of the sampling time is at most 0.5 ms. It doesn't matter how you do the math if the error is that small. I just think that Blackrock(1) - MonkeyLogic(1) is better, because it is essentially the definition of "offset" and you can calculate it even with a single timestamp.