aboharbf

Member
Posts: 96
Reply with quote  #1 
So I'm attempting to synchronize the clocks on my MonkeyLogic machine and my Blackrock NSP. I'm using the timestamps associated with the eventmarkers on both machines to find a function that maps one time base onto the other, and I'm running into some difficulty.

Initially, I used all the eventmarkers corresponding to trial start: I created arrays of timestamps from each machine and fit a linear regression of the MonkeyLogic computer timestamps against the Blackrock timestamps (logVsBlkModelTrial = fitlm(trialStartTimesBlk, trialStartTimesLog); in MATLAB). This produced a very good fit, I think.

trialFit.png 
Comparing the original Blackrock timestamps with the ones calculated from the model shows they're nearly identical.

I then did the same thing with the eventmarkers corresponding to the stimulus appearing on the screen, using essentially the same methods and the same run's data as before.

trialFit.png 
Offsets now seem to be all over the place, with a few outliers. We also have the strobe feature on (scene transitions come with strobe transitions), and shifting the Blackrock timestamps from when the eventmarker arrived to when the strobe transition took place doesn't seem to help.

Any ideas about what may be going on?

Should I accept the photodiode transition as the "ground truth" of stimulus presentation and disregard MonkeyLogic's eventmarker time stamps?

One of the reasons I'm working on this is to synchronize the eye signals from MonkeyLogic with those recorded on my Blackrock machine, with the belief that there may be a slight lag on MonkeyLogic's end (ISCAN --> BNC-2090 --> DAQ card) vs. (ISCAN --> directly into Blackrock analog inputs). Any input on this thinking would also be appreciated.

Thanks a lot.

Edward

Administrator
Posts: 260
Reply with quote  #2
If I understand correctly, the trial start times and stimulus onset times can differ a lot depending on how much "background" work there is between the start of the trial and the display of the initial stimulus. That is why you see the discrepancy.
aboharbf

Member
Posts: 96
Reply with quote  #3
Hey Edward,

Given the function of eventmarkers, this seems like a bug. There shouldn't be this much variability between an event being saved and timestamped in the log file and that marker being sent out to the digital inputs of another machine. I realize there will be some variability, but the spread here is huge for a processor, especially since these commands should theoretically sit next to each other in the code. At least, I imagine it going as follows:

Save Eventmarker to Log
Send digital out
Start stimuli

Unless the starting of the stimulus sits between the saving and the sending, this discrepancy doesn't make sense.

I think it's clear the discrepancy is a bug and could be resolved by rearranging the sequence of events in the code.
pschade

Junior Member
Posts: 10
Reply with quote  #4
A photodiode is critical for alignment to stimulus presentation. The delay between the event marker and the photodiode varies considerably. Timing differences can be quite different across setups too. 
aboharbf

Member
Posts: 96
Reply with quote  #5
Quote:
Originally Posted by pschade
A photodiode is critical for alignment to stimulus presentation. The delay between the event marker and the photodiode varies considerably. Timing differences can be quite different across setups too. 


I use the photodiode feature, presented in the lower right, and record it on an analog input on the Blackrock. What is surprising is how well the timestamps for the 9 match across the two machines versus how poorly those for the 20 (my stimulus-on marker) do. There seems to be no reason the low variability achieved for the former couldn't be matched for the latter.
pschade

Junior Member
Posts: 10
Reply with quote  #6
If you are using runtime v1, the variability may come from the 'toggleobject' function.
aboharbf

Member
Posts: 96
Reply with quote  #7
I'm using runtime v2, and the stimuli are uncompressed videos preloaded into RAM. I can make the ISI as low as 200 ms without issue.
Jaewon

Administrator
Posts: 939
Reply with quote  #8
Hi aboharbf,

I am not sure what I am seeing here. I need more information about what the raw data points are and how you prepare them.

If I understand correctly, you have timestamps in both ML and Blackrock. Then why do you run regressions? You should directly compare the timestamps. Since the clock bases of the two systems may not run at exactly the same frequency, all the timestamps in one trial should be relative measures from the trial start code. In other words, you need to subtract the time of the 9s from the time of the 20s and compare the resulting numbers between the systems.
aboharbf

Member
Posts: 96
Reply with quote  #9
Hey Jaewon,

Details:
I have the MonkeyLogic computer running the task, with eventmarkers assigned to digital outputs that are received on the digital inputs of the Blackrock NSP (which also records the electrode). These events are stored in Blackrock's .NEV file format; the MonkeyLogic data format I used is .bhv2. The digital inputs to Blackrock are sampled at 30 kS/s.

To prepare the files, I use Blackrock's openNEV function to create a NEV struct and go to NEV.Data.SerialDigitalIO.UnparsedData. This is where you'll see sequences like "9, X, 10, 20, 30, 40, 18" coming into the digital inputs. The timestamps for these are in NEV.Data.SerialDigitalIO.TimeStampSec, which I multiply by 1000 to get ms. For the particular session I've been showing, the first number comes out to 3.0678 sec, while the last is 1403.5371 sec (so ~23 minutes between the two). The unparsed values are saved as "packetData" and the accompanying timestamps, converted to ms, as "packetTimes".

Then I just do this:   trialStartTimesBlk = packetTimes(packetData == 9);

To find the accompanying timestamps for MonkeyLogic, I load the .bhv2 and do the following:

trialStartTimesLog = [data(:).AbsoluteTrialStartTime]'; % Get all the trial start times

tmpStruct = [data(:).BehavioralCodes]'; % Pull the accompanying codes and their times for each trial from the larger data struct

for ii = 1:length(trialStartTimesLog)
  trialStartTimesLog(ii) = trialStartTimesLog(ii) + tmpStruct(ii).CodeTimes(1); % Add the within-trial time of the marker of interest (here, the 9) to the absolute start time of its trial.
end % This creates the absolute time for each 9.

Then I use the linear model function (fitlm(trialStartTimesBlk, trialStartTimesLog)) to move the timestamps of the MonkeyLogic log file into the space of the Blackrock clock. This is basically a way to account for any potential difference in clock speed. If everything is going as it should, the slope m is typically .9999 or 1.0001, very nearly 1, with some offset (y0), which is the difference between when I hit start on Blackrock's recorder and play on MonkeyLogic. This offset should also absorb the delay between MonkeyLogic recording/sending an eventmarker and Blackrock receiving it, and I interpret the error on that intercept as the variability in that delay.

I then collect the variables from the model:

m = logVsBlkModel.Coefficients.Estimate(2);
y0 = logVsBlkModel.Coefficients.Estimate(1);

Use them to transform the times from the monkeyLogic log file:

trialStartTimesFit = (1/m)*(trialStartTimesLog - y0);


and assess the quality of the fit from the model error (basically, how far off each transformed point is from what Blackrock actually saw and recorded):

eventTimeAdjustments = trialStartTimesFit-trialStartTimesBlk;

This code produced the first image, with a slope of 1 (clocks at the same speed), an intercept of ~3 seconds (a reasonable delay between me hitting record on Blackrock and play on MonkeyLogic), and an SE of 0.002 ms (very reliable recording of the timestamp in the log file and receipt of the same timestamp in Blackrock).
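The fitlm-based alignment described above can be sketched in Python with synthetic timestamps (all values and variable names here are illustrative, not from the actual session):

```python
import numpy as np

# Synthetic session: ~23 minutes of trial-start times on the Blackrock
# clock, and the same events on an ML clock that runs 1.0001x as fast
# and starts 3 s later, plus a little timestamping jitter.
rng = np.random.default_rng(0)
blk = np.sort(rng.uniform(3000, 1.4e6, 300))                  # ms, Blackrock clock
log = (blk - 3000) * 1.0001 + rng.normal(0, 0.002, blk.size)  # ms, ML clock

# Fit log ~ m*blk + y0, then map the ML times into Blackrock space.
m, y0 = np.polyfit(blk, log, 1)
fit = (log - y0) / m
residuals = fit - blk    # how far each mapped point lands from truth
```

With a clean mapping like this, the residuals stay down in the tens of microseconds, which is the kind of agreement the first histogram showed.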

Now, when I do all of this for eventmarker 20 instead of 9, I first remove all failed trials (trials go from 314 to 303):

1. For Blackrock, I loop through all the packetData, collecting each eventmarker temporarily and saving the set only once I run into the eventmarker signifying reward (40), which denotes a successful trial:

  trueTrialcount = 1; % Running count of completed (rewarded) trials
  for ii = 1:length(packetData)
    if packetData(ii) > 100
      stimCondTemp = packetData(ii);
    elseif packetData(ii) == stimStartMarker
      stimStartTemp = packetTimes(ii);
    elseif packetData(ii) == stimEndMarker
      stimEndTemp = packetTimes(ii);
    elseif packetData(ii) == rewardMarker % This assumes the "juice end time" is right after this marker.
      taskEventIDsBlk(trueTrialcount) = stimCondTemp;
      taskEventStartTimesBlk(trueTrialcount) = stimStartTemp;
      taskEventEndTimesBlk(trueTrialcount) = stimEndTemp;
      juiceOnTimesBlk(trueTrialcount) = packetTimes(ii);
      juiceOffTimesBlk(trueTrialcount) = packetTimes(ii + 1);
      trueTrialcount = trueTrialcount + 1;
    end
  end

I realize I handle the juice end marker imprecisely, but I don't really use it down the line, so I'm not worried about it for now.

For the MonkeyLogic times, I do the same thing as for the trial starts, but using a line of code that doesn't hardcode "20" as my start eventmarker:

taskEventStartTimesLog(ii) = mklTrialStarts(ii) + tmpStruct(ii).CodeTimes(strcmp(behavioralCodes.CodeNames,'Stimuli On'));

i.e., Nth event start = Nth trial start + the timestamp that lines up with "20" in CodeTimes for the Nth trial.

I fit the same kind of model, shift the points, subtract the Blackrock-recorded times, and create a histogram of the differences, and they're huge.
Jaewon

Administrator
Posts: 939
Reply with quote  #10
Thanks for the information. I am not familiar with Blackrock's file format, but I can see what you tried to do.

As I mentioned above, regression is not the right way to compare the timestamps; you should use the raw timestamps. It doesn't make sense to compare the estimates of a model with the measures from the other system. At the very least, if you thought your first model (from code 9) had a good fit, you should have used that same model to get the estimates for code 20, rather than fitting another one.

I need you to do the calculation one more time. Collect the timestamps of 9 & 20 from both data files, like the following table. If some trials do not have code 20, you can skip those trials.

            MonkeyLogic       Blackrock
          Code 9   Code 20  Code 9  Code 20
         (Col A)  (Col B)  (Col C) (Col D)
Trial 1     .       .         .      .
Trial 2     .       .         .      .
Trial 3     .       .         .      .

Subtract Column A from Column B and store it in another variable, let's say, ML. Do the same calculation for Blackrock and store it in BR (i.e., BR = (Col D) - (Col C)). Then compute the difference between ML and BR. That will let you know what the errors really are.
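A sketch of this four-column calculation (Python; the timestamps are made up for illustration, and trials without a code 20 are skipped as suggested):

```python
# Each trial carries ML and Blackrock timestamps (ms) for codes 9 and 20;
# None marks a trial where code 20 never occurred.
def paired_diffs(trials):
    ml, br = [], []
    for t in trials:
        if t["ml20"] is None or t["blk20"] is None:
            continue                        # no code 20 -> skip the trial
        ml.append(t["ml20"] - t["ml9"])     # ML = (Col B) - (Col A)
        br.append(t["blk20"] - t["blk9"])   # BR = (Col D) - (Col C)
    return [a - b for a, b in zip(ml, br)]  # the true per-trial errors

trials = [
    {"ml9": 0.0,    "ml20": 812.0,  "blk9": 3067.8,  "blk20": 3879.9},
    {"ml9": 5000.0, "ml20": None,   "blk9": 8067.9,  "blk20": None},  # failed trial
    {"ml9": 9000.0, "ml20": 9815.0, "blk9": 12068.0, "blk20": 12883.1},
]
errors = paired_diffs(trials)   # offset-free: the clock start times cancel out
```

Because both columns are relative to code 9, any fixed offset between the two recorders drops out, and only the genuine per-trial timing error remains.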
aboharbf

Member
Posts: 96
Reply with quote  #11
I can see why regression isn't optimal, but there is no reason it should not work. As noted, both models have a slope of very nearly 1, so the bulk of the work being done is finding the offset that shifts the data from one space into the other. I did the column math you suggested and noticed a much larger than normal difference in my MonkeyLogic-derived numbers. That sent me back into the earlier code where I built these vectors, and I found that the line that looked up the index of "Stimulus On" in the code names and used the matching code number was referencing the wrong structure (I mistakenly used the MLConfig code numbers instead of the ones stored in each trial).

After sorting this out, the models produce identical results for the offset. I was initially worried there might be variability in the stimulus start times when MonkeyLogic is in the middle of a trial, but that seems not to be the case. Thanks a lot for the help.
Jaewon

Administrator
Posts: 939
Reply with quote  #12
You don't see the problem now because your Blackrock's sampling is fast and accurate. But the regression line is constrained to pass through the mean of the variable, which is the middle of your timestamp range. Since the offset (the intercept) sits at the leftmost end of your timestamp distribution, a tiny change in the slope due to deviations in the data will produce a huge difference around Time 0, which is the same issue your model of code 20 had.
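This pivot effect is easy to put numbers on. A sketch with assumed values, using the ~23-minute recording span mentioned earlier in the thread:

```python
# The fitted line pivots around the mean of the data, so a slope error of
# just one part per million, over a ~23-minute (1.4e6 ms) span, moves the
# intercept (the estimate at Time 0) by a fraction of a millisecond.
span_ms = 1.4e6        # assumed total timestamp range
slope_err = 1e-6       # assumed tiny deviation in the fitted slope
mean_t = span_ms / 2   # the pivot point of the regression line
shift_at_time0 = slope_err * mean_t   # apparent offset error at Time 0 (ms)
```

Here a one-part-per-million slope change already shifts the intercept by 0.7 ms, which is why the raw first-timestamp subtraction is the safer estimate of the offset.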

As for the offset, you should calculate it by subtracting the very first timestamp of ML from the very first timestamp of Blackrock. That is simpler and more accurate.
aboharbf

Member
Posts: 96
Reply with quote  #13
When you say "tiny change in the slope" due to a deviation of the data, I'm confused.

In a normal circumstance, where latencies are correct and consistent, the line should pass very close to all of these points and the intercept should be rather firmly anchored by the many points which would have much higher residuals were the line to shift.

In a circumstance where a deviation in the data does exist, I have no reason to believe it didn't happen in the first trial as much as in the last. If the deviation is in a single point, the regression line is robust to it because of the 100+ other points, whereas if I get unlucky and the deviation is in the first point and I use the Blackrock(1) - MonkeyLogic(1) method, then I'm stuck. The regression line seems like it can do no worse than that method in good circumstances (other than costing a couple of extra lines of code) and does better in bad ones.

Am I missing something about common problems that exist in these systems/data points?
Jaewon

Administrator
Posts: 939
Reply with quote  #14
Your Blackrock samples at 30 kS/s. If you are using 15 lines, the error in the sampling time is at most 0.5 ms. It doesn't matter how you do the math if the error is that small. I just think that Blackrock(1) - MonkeyLogic(1) is better, because it is essentially the definition of "offset" and you can calculate it even with a single timestamp.
dbarack

Junior Member
Posts: 26
Reply with quote  #15
I am also using a blackrock system and performing timing tests. I am having two problems with event code timing.
1. Using Jaewon's method, for each trial I subtract the timestamp of the '9' from each subsequent timestamp as recorded in MonkeyLogic. I also subtract the Blackrock timestamp of the '9' from each subsequent Blackrock timestamp. I then compare the two sets of timestamps for each trial by subtracting the MonkeyLogic times from the Blackrock times. I anticipated small positive numbers, but instead I get a range of small negative numbers (~0 to -1.2 ms). See the image below. Thoughts on this?
EventTimingTest1.jpg 
2. In addition to the above issue, I occasionally record duplicate timestamps on my Blackrock machine. They have a highly stereotyped structure: the duplicate timestamp always occurs about 0.0001 ms before a different timestamp that corresponds to a real event. So, e.g., I may see a '9' in my Blackrock record, then some time later (say, 500 ms), another '9' followed by a '13' (or whatever) about 0.0001 ms later. This happens only rarely (~10 times per 500 event timestamps), and the MonkeyLogic event record does not contain these extra timestamps. Thoughts?
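While tracking this down, one way to screen out such glitches in analysis is to drop any event that lands implausibly close before the next one. A sketch (the 0.05 ms window and the sample data are assumptions, not recommended values):

```python
def drop_glitches(codes, times_ms, min_gap_ms=0.05):
    """Drop events that are followed by another event within min_gap_ms."""
    keep_codes, keep_times = [], []
    for i, (c, t) in enumerate(zip(codes, times_ms)):
        nxt = times_ms[i + 1] if i + 1 < len(times_ms) else None
        if nxt is not None and (nxt - t) < min_gap_ms:
            continue            # a glitch riding just ahead of a real event
        keep_codes.append(c)
        keep_times.append(t)
    return keep_codes, keep_times

# A duplicate '9' appears 0.0001 ms before the real '13' and gets dropped.
codes, times = [9, 9, 13, 20], [100.0, 600.0, 600.0001, 1000.0]
clean = drop_glitches(codes, times)
# -> ([9, 13, 20], [100.0, 600.0001, 1000.0])
```

This only hides the symptom for analysis; the strobe-timing fix discussed below this post is still the real remedy.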

Jaewon

Administrator
Posts: 939
Reply with quote  #16
By subtracting the time of 9 in each trial, you aligned them all to the same baseline. There is no reason the differences should be positive numbers. What surprises me is that they are skewed in one direction only. By the way, did you not plot event 9? Since its time is 0, there should be a line at Time 0, but I don't see one.

I am not familiar with how Blackrock reads in the digital signals, but, according to aboharbf, it samples at 30 kS/s. So the precision of the timing differs depending on how many digital lines you are using. This is not something NIMH ML can change. You should ask Blackrock what error range you should expect.

Regarding the second issue, it means that Blackrock reads eventcodes before the digital output of ML has stabilized. It could still be a hardware issue, but there is something you can try on the ML side: increase the duration of T1 in the strobe "Spec" menu. The default value is 125 us, which I have tested many times with Plexon and TDT, but your setup may need a little more time.

https://monkeylogic.nimh.nih.gov/docs_MainMenu.html#Strobe
dbarack

Junior Member
Posts: 26
Reply with quote  #17
I did not plot event 9. Good catch. I'm running a new timing test now with a longer T1 duration for the strobe. I'll be sure to plot event 9 once I get that data.

I'll get in touch w/ Blackrock about the error range.

I am confused about how there can be negative numbers. Is it because the process is:
ML generate eventcode ---> Send signal to BNC 2090 ---> Record timestamp
                                                         |
                                                         |
                                                         v
                                                   Blackrock -----> Record timestamp and eventcode
time on x-axis ------------>

and so the Blackrock timestamp can occur before ML can write its timestamp to the file?
Jaewon

Administrator
Posts: 939
Reply with quote  #18
It doesn't matter how the event signal is generated; it is the nature of sampling. (a > b in the figure below.)
Untitled.png

By the way, is the polarity of your strobe signal correct? Which one between rising edge and falling edge does your Blackrock receive?

aboharbf

Member
Posts: 96
Reply with quote  #19
If the digital ports are sampled at 30 kHz, isn't there some cap on how large the discrepancy can be, well below 5 ms?
Jaewon

Administrator
Posts: 939
Reply with quote  #20
Yes, there must be some cap.

30 kS/s is not 30 kHz, though. 30,000 samples/sec means there is one digitizer capable of 30-kHz sampling and all the digital lines share it. If two lines are in use, each line is sampled at 15 kHz; if three, each at 10 kHz. So, as the number of digital lines increases, the error cap grows.
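This shared-digitizer arithmetic can be written out as a small sketch (illustrative only; a later post in this thread finds that each Blackrock digital input may in fact sample at a full 30 kHz):

```python
def per_line_error_cap_ms(total_rate_sps, n_lines):
    """Worst-case timing error (one sample period) per line, in ms,
    assuming one digitizer whose rate is shared across n_lines."""
    per_line_hz = total_rate_sps / n_lines
    return 1000.0 / per_line_hz

cap_1 = per_line_error_cap_ms(30000, 1)    # ~0.033 ms with a dedicated line
cap_15 = per_line_error_cap_ms(30000, 15)  # 0.5 ms when 15 lines share it
```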

However, the errors shown in dbarack's figure are still too large, even considering all of that. I suspect the polarity of the strobe might not be correct. That would also explain why a redundant event is marked just before a real one.
dbarack

Junior Member
Posts: 26
Reply with quote  #21
The blackrock reads on the rising edge of the strobe.
Jaewon

Administrator
Posts: 939
Reply with quote  #22
So did you set NIMH ML to send rising-edge strobes?

I checked a manual of a Blackrock system and it seems that the sample rate of Blackrock digital input is indeed 30 kHz, not 30 kS/s. (Page 13 of https://blackrockmicro.com/wp-content/ifu/LB-0175_NeuroPort_Biopotential_Signal_Processing_System_Users_Manual.pdf) So the max error should be <33 us (= 1/30000).

Every aspect of the numbers you showed is somewhat odd: the large temporal errors (up to 1.2 ms), the fact that all the differences are negative, and the very frequent spurious duplicate eventcodes. My first guess is incorrect strobe polarity. Keep us posted if you find anything.
dbarack

Junior Member
Posts: 26
Reply with quote  #23
Ok! I ran some more tests on Friday and yesterday and will do more today. ML is set to send rising-edge strobes. I first increased T1 to 250 us and tested that setting on Friday. Here are those results:
EventTimingMonk1.jpg
This is real data now, with a subject in the setup. Note that I am now plotting the 0 times. As you can see, there are some positive timestamps, so things have improved in that sense. However, the timestamps are still almost entirely negative, and I still saw duplicate timestamps.
I reasoned that, given the improvement, perhaps I should test more T1 or T2 changes. Yesterday, I changed T2 to 250 us as well (just to see if it helped). Here are those results:
EventTimingMonk2.jpg
Pretty similar: a few positive differences in the timestamps, but overwhelmingly negative.

I'm going in later today to run my subject and will try a new T1/T2 setting. I will also double-check the polarity setting on the strobe.

dbarack

Junior Member
Posts: 26
Reply with quote  #24
The polarity is set to 'on rising edge'. In BR, my DI is set to '16-bit on word strobe', which means that BR reads 16 bits on the rising edge of the strobe pin. These are consistent.
Jaewon

Administrator
Posts: 939
Reply with quote  #25
I just realized that you did not draw the figure the way I thought you would. You are not comparing the interval between one marker and the next; you are comparing the elapsed time from the time of 9. What your figure shows is that your computer's clock is slightly faster than Blackrock's. What is the duration of one trial in your task?
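The effect described here can be illustrated with assumed numbers: a clock running just 0.01% fast yields differences that are small, uniformly negative, and growing with elapsed time from code 9, much like the figures above:

```python
# Assumed: the ML clock runs 1.0001x as fast as the Blackrock clock.
drift = 1.0001
elapsed_blk = [0.0, 1000.0, 5000.0, 10000.0]   # true elapsed ms from code 9
elapsed_ml = [t * drift for t in elapsed_blk]  # what the fast clock reports
diffs = [b - m for b, m in zip(elapsed_blk, elapsed_ml)]
# Differences grow from 0 at code 9 to about -1 ms ten seconds in.
```

This is why the trial duration matters: the longer the trial, the larger the accumulated drift by its last events.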