calebsponheim (Junior Member), post #1

I am a new postbac at NIH, and our lab uses MonkeyLogic for stimulus presentation during training and scanning. We are moving to a different stimulus set that involves dynamic videos of faces and objects. Because of the issues surrounding parallel tasks in ML, we opted to concatenate the individual videos into a small number of 30 s videos, with the stimuli presented one after another. However, after importing them into ML, the videos would not load when the task itself started, regardless of whether they were preprocessed by ML before starting the task. MATLAB then spat out the error below, which seems to suggest that it had trouble buffering the long video file in preparation for presentation:

<<<*** MonkeyLogic ***>>> Task Loop Execution Error
Error using ==> xglmex
Could not create offscreen surface

Error in ==> xglcreatebuffer at 26
[lhs1] = xglmex (21, rhs1, rhs2);

Error in ==> mlvideo at 154
result = xglcreatebuffer(devicenum, [xsize ysize pf]);

Error in ==> monkeylogic>buf_mov at 1483
vbuf = mlvideo('createbuffer', ScreenInfo.Device, xisbuf, yisbuf, ScreenInfo.BytesPerPixel);

Error in ==> monkeylogic>create_taskobjects at 1868
[firstbuffer, lastbuffer, vbuffer, vbufnum, xis, yis, xscreenpos, yscreenpos, numframes] = buf_mov(mov, xpos, ypos, vbuffer, vbufnum, ScreenInfo, usepreprocessed);

Error in ==> monkeylogic at 1073
[TaskObject ScreenInfo.ActiveVideoBuffers StimulusInfo] = create_taskobjects(C, ScreenInfo, DaqInfo, TrialRecord, MLPrefs.Directories, fidbhv, pl);

Error in ==> mlmenu at 2339
monkeylogic(condfile, datafile, testflag);

We were wondering whether any of you have run into a similar problem, and whether you might be able to help us understand MonkeyLogic's eccentricities when it comes to processing video.

Thank you so much!

Adrienne (Junior Member), post #2
I also had problems getting videos to work well, which we think were due to buffering constraints.

- When I tried to select individual frames from two (loaded and displayed) videos simultaneously, ML displayed one of the stimuli correctly but not the other; the other just remained as a static image on the screen, as I recall.

- When I tried to use a single ~720 KB movie (360 frames) in conjunction with a set of 36 images, it technically worked, but needed 15 s between trials.

I ended up not using movies at all. Instead I used the make_condition function to generate a couple of thousand conditions, each with static images in place of the movie frames I wanted. It takes a while to load initially but runs fine.
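In case it is useful, here is a minimal MATLAB sketch of the frame-extraction half of that workaround. The movie name and output folder are placeholders, and it relies on VideoReader's hasFrame/readFrame, so it needs a reasonably recent MATLAB. Building the conditions file that points at the extracted images is a separate step.

% Dump every frame of a movie to numbered image files so the frames can be
% presented as static picture stimuli instead of as a movie TaskObject.
vr     = VideoReader('face_stimulus.avi');   % placeholder movie name
outDir = 'frames';                           % placeholder output folder
if ~exist(outDir, 'dir'), mkdir(outDir); end
k = 0;
while hasFrame(vr)
    k = k + 1;
    imwrite(readFrame(vr), fullfile(outDir, sprintf('frame_%04d.bmp', k)));
end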
Jaewon (Administrator), post #3
I haven't used movies in ML, but the first thing I would check is how much memory your movie takes. What are the size and frame rate of your movie?

If your movie is 640 x 480 and runs at 30 frames/sec, then the amount of memory you need in order to load the full movie is

640 * 480 * 4 (bytes, 32-bit color) * 30 (frames/sec) * 30 (sec, movie length) = 1,105,920,000 bytes ~= 1 GB

So you will run out of video memory quickly if you load a couple of movies like that, depending on your hardware spec. I don't think ML supports video streaming, so you may need a workaround like Adrienne's.
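If it helps, the same arithmetic can be run directly on a movie file in MATLAB. This is only a rough sketch: the file name is a placeholder, and the 4 bytes/pixel figure assumes 32-bit color, as in the calculation above.

% Rough estimate of the memory needed to hold a movie as uncompressed
% 32-bit frames (4 bytes per pixel), matching the calculation above.
vr            = VideoReader('my_movie.avi');   % placeholder file name
bytesPerPixel = 4;                             % assumes 32-bit color
nFrames       = floor(vr.Duration * vr.FrameRate);
bytesNeeded   = vr.Width * vr.Height * bytesPerPixel * nFrames;
fprintf('Approximate memory required: %.2f GB\n', bytesNeeded / 1e9);

For the 640 x 480, 30 fps, 30 s example this comes out to roughly 1.1 GB per movie.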