
#1
I am starting a new thread to explain and discuss how to write a task with ML2's new trialholder functions (create_scene & run_scene). I haven't had time to document the details yet, but many of them will be covered in my SfN poster (Tuesday 11/14 AM).

Here are some examples of what you can do with the new functions. These examples are included in the "task\runtime v2" directory of ML2 (Oct 12, 2017 or later). This board does not allow me to post videos, so to watch them, please download them by clicking the links. These videos were created with mlplayer and transcoded to mp4 for web posting.

* Random dot motion

This is one of the most complex visual stimuli, but you can create it online during trials instead of using movies created in advance.

(video: rdm.mp4, 873 KB)

* Timer demo

This is an example of a dynamic, behavior-responsive stimulus. The annulus-shaped timer counts up only while fixation is held.

(video: timer.mp4, 105 KB)

* Shapes

You can use any shape that you can draw as a visual stimulus and change its position, size and color on the fly.

(video: shape.mp4, 71 KB)


#2
Another cool example task: you can control the speed and direction of the random dots with your mouse, and the position and size of the aperture can be changed as well, with left click + drag and right click + drag, respectively. The task code is included in the latest ML2 package.

* Receptive field mapper

(video: rf_mapper.mp4, 988 KB)


#3
The idea behind runtime v2 is to make stimuli dynamic and responsive to the subject's behavior. In the previous runtime, we used toggleobject() to present stimuli and eyejoytrack() to track behavior (both still work fine in ML2), but this method is at a disadvantage for dynamic stimuli, for the following reasons.

1) toggleobject() and eyejoytrack() process stimuli and behavior separately, so there is no way to change stimuli during behavior detection.
2) While tracking behavior, eyejoytrack() tries to read a new sample every millisecond or faster, which leaves too little time to perform sophisticated computation or draw complex stimuli.
3) Because toggleobject() and eyejoytrack() have many optional arguments, the cost of switching between the two functions is high.


The runtime v2 takes a different approach. In this new runtime, behavior tracking and stimulus presentation are both handled by one function, run_scene(). In addition, samples collected during one refresh interval are analyzed all together at the beginning of the next refresh interval and the screen is redrawn based on this sample analysis. Therefore, the cycle of [analyzing samples]-[drawing screen]-[presenting] is repeated each frame and, by tapping into this cycle, we can see what happened in behavior and then decide what to show on the screen.
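Schematically, the loop inside run_scene() works like this (a simplified sketch for illustration only, not the actual ML2 implementation):

```matlab
% Simplified sketch of the run_scene() frame cycle (illustrative only).
% "adapter" stands for the top of the adapter chain given to create_scene().
keep_running = true;
while keep_running
    keep_running = adapter.analyze(p);  % analyze samples collected during the last refresh interval
    adapter.draw(p);                    % redraw the screen based on that analysis
    % the new frame is presented at the next vertical blank,
    % while new behavior samples accumulate for the next analyze() call
end
```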


One disadvantage of this approach is that we don't know when the behavior occurred until the next frame begins. (See the time of the behavior occurrence (green arrow) and the time of behavior detection in the figure above.) However, this is not a big issue, for the following reasons.

1) We cannot update the screen contents until the next vertical blank time anyway, so it is not always necessary to detect behavior immediately. (If you use audio stimuli only, that is a different story and you can stay with toggleobject() and eyejoytrack() in that case.)

2) We may detect behavior a little later (by one refresh cycle at most), but we don't lose information; we can still get the exact time when the behavior occurred. What is not possible is to call eventmarker() to stamp the reaction time the instant the behavior occurs. However, the window-crossing time cannot be an accurate measure of the reaction time anyway, considering that the size of the fixation window is arbitrary. If you are serious about reaction times, you probably want to use a velocity criterion, which requires some offline analysis.
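For example, a velocity criterion applied offline might look like this (a sketch; the variable names, sampling rate, and the 20 deg/s threshold are arbitrary assumptions):

```matlab
% Offline saccade-onset detection with a velocity criterion (illustrative).
% eye_xy: n-by-2 matrix of eye positions in degrees, sampled at fs Hz.
fs = 1000;                                  % assumed sampling rate (Hz)
speed = sqrt(sum(diff(eye_xy).^2,2)) * fs;  % instantaneous speed (deg/s)
threshold = 20;                             % velocity criterion (deg/s), arbitrary
onset = find(speed > threshold, 1);         % first sample exceeding the criterion
rt_ms = 1000 * onset / fs;                  % reaction time in ms
```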

In spite of these limitations, this approach has advantages in dynamic and precise frame-by-frame control of visual stimuli. In fact, that is how most game software handles graphics.


#4
    scene = create_scene(adapter [,taskobject]);
    flip_time = run_scene(scene [,eventcode]);

create_scene() receives an "adapter" as an argument, as well as TaskObject numbers, and returns a "scene". The adapter is a MATLAB class object. You can make your own adapter or use the built-in ones. I have already made ~30 adapters that cover almost everything you can do with runtime v1 and more. To make your own, make a copy of ext\ADAPTER_TEMPLATE.m and fill in the code. I will explain the details later.

As the function name indicates, toggleobject() of runtime v1 turns the stimulus object on and off at each call.

    toggleobject(1);  % turn on Object #1
    toggleobject(1);  % turn off Object #1

So, if you don't make the second call, the object stays on the screen. In create_scene(), however, the taskobject argument means "the objects needed to compose the scene", so they stay on the screen only while the scene is being presented. If you want to show the same objects across multiple scenes, their object numbers should be provided to every create_scene() call.

The return value of create_scene(), scene, becomes the input of run_scene(). The optional argument of run_scene(), eventcode, is the marker(s) to stamp at the moment the stimuli are presented on the screen, and the return value, flip_time, is the time when that presentation occurs.
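Putting the two calls together, a minimal timing-script fragment might look like this (a sketch; the TaskObject numbers and eventcodes are arbitrary, and TimeCounter is the built-in adapter described later in this thread):

```matlab
% Show TaskObjects #1 and #2 for 1 s, stamping eventcode 10 at scene onset.
tc = TimeCounter(null_);
tc.Duration = 1000;              % ms
scene = create_scene(tc,[1 2]);  % #1 and #2 are shown only during this scene
flip_time = run_scene(scene,10); % eventcode 10 is stamped at the actual flip

% To keep #1 on the screen in the next scene as well, list it again.
scene2 = create_scene(tc,1);
run_scene(scene2,20);
```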

#5
Multiple adapters can be concatenated into a chain to detect complex behavior or draw complex stimuli. For example,

----- Beginning of green_start.m -----
% create a chain of [NullTracker]-[PolygonGraphic]-[TimeCounter]
star = PolygonGraphic(null_);
tc = TimeCounter(star);

% set the properties of the adapters
star.EdgeColor = [0 1 0];  % [r g b]
star.FaceColor = [0 1 0];
star.Size = 2;             % 2 deg by 2 deg
star.Position = [0 0];
star.Vertex = [0.5 1; 0.375 0.625; 0 0.625; 0.25 0.375; 0.125 0; 0.5 0.25; 0.875 0; 0.75 0.375; 1 0.625; 0.625 0.625];
tc.Duration = 5000;        % in milliseconds

% create and run the scene
scene = create_scene(tc);
run_scene(scene);
----- End of green_start.m -----

This example displays a green star at the center of the screen for 5 sec. To create this scene, three adapters are used.

The first adapter is NullTracker (null_). All adapter chains must start with a special adapter called a Tracker. There are 5 trackers, all pre-defined with reserved names: eye_, joy_, touch_, button_ and null_. Each tracker reads new samples from the device that its name designates; null_ does not read any data.

The second adapter is PolygonGraphic, which draws a star in green.

The third is TimeCounter, which measures elapsed time.
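Trackers other than null_ feed behavior samples into the chain. For example, a fixation scene built on eye_ might look like this (a sketch using the built-in SingleTarget and WaitThenHold adapters; check the adapters shipped with your ML2 version for their exact names and properties):

```matlab
% Acquire and hold fixation on TaskObject #1.
fix = SingleTarget(eye_);  % eye_ feeds eye samples into the chain
fix.Target = 1;            % TaskObject# to fixate
fix.Threshold = 3;         % fixation window size in degrees
wth = WaitThenHold(fix);
wth.WaitTime = 5000;       % ms allowed to acquire fixation
wth.HoldTime = 500;        % ms fixation must be held
scene = create_scene(wth,1);
run_scene(scene);
if ~wth.Success, trialerror(4); end  % 'no fixation' in the default error scheme
```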


Attached Files
green.zip (679 Bytes)


#6
This is what the TimeCounter adapter looks like.

----- Beginning of TimeCounter.m -----
 1:classdef TimeCounter < handle
 2:    properties  % user variables, readable & writable
 3:        Duration = 0;
 4:    end
 5:    properties (SetAccess = protected)  % read-only to users
 6:        Success  % status variable that indicates whether Duration has passed
 7:    end
 8:    properties (Access = protected)  % internal variables, not accessible to users
 9:        Adapter  % lower-level adapter, PolygonGraphic in this case
10:    end
11:
12:    methods
13:        function obj = TimeCounter(varargin)  % constructor
14:            if 0==nargin, return, end
15:            obj.Adapter = varargin{1};  % store the lower-level adapter, PolygonGraphic
16:        end
17:        function continue_ = analyze(obj,p)  % sample analysis
18:            obj.Adapter.analyze(p);  % call PolygonGraphic's analyze()
19:            obj.Success = obj.Duration <= p.scene_time();
20:            continue_ = ~obj.Success;
21:        end
22:        function draw(obj,p)  % draw the screen
23:            obj.Adapter.draw(p);  % call PolygonGraphic's draw()
24:        end
25:    end
26:end
----- End of TimeCounter.m -----

Each adapter has two functions, analyze() and draw(). These functions are called in turn by run_scene() during each frame. The first thing they do is call the same functions of the lower-level adapter (Lines 18 & 23). You should not modify these lines, so as not to break the chain.

In analyze() of this adapter, we check whether the time elapsed since the scene start has passed Duration (Line 19). If it has, Success is set to true (and false otherwise). The return value of analyze(), continue_, determines whether the scene keeps running in the next frame. Here continue_ becomes false when Success is true, so the scene ends when the elapsed time is equal to or longer than Duration.

This adapter does not update any graphics, so we just call the lower-level adapter's draw() and finish.
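After run_scene() returns, the adapter's status variables are still readable, so the timing script can branch on the outcome. For example (a sketch, reusing the tc object from green_start.m above):

```matlab
scene = create_scene(tc);
run_scene(scene);
if tc.Success
    % the full Duration elapsed; proceed to the next scene
end
```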


#7
The input argument, p, of analyze() and draw() is an instance of the RunSceneParam class. It contains many useful variables and provides access to some other runtime functions from within the adapter.

p.SceneStartTime: absolute time when the scene started
p.SceneStartFrame: frame number when the scene started
p.EventMarker: eventcodes assigned to this variable are stamped at the time when the next frame is presented.

p.scene_time(): time from the scene start
p.scene_frame(): number of frames presented from the scene start

p.trialtime(): the same function that you call in the timing file.
p.goodmonkey(): the new 'nonblocking' option is especially useful when you call goodmonkey() in an adapter
p.dashboard(): display user texts on the control screen
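Within an adapter, these members can be used like this (a sketch; the eventcode 30 is arbitrary, and the exact dashboard() argument list may differ from this example):

```matlab
function continue_ = analyze(obj,p)
    obj.Adapter.analyze(p);  % keep the chain intact
    if ~obj.Success && obj.Duration <= p.scene_time()
        obj.Success = true;
        p.EventMarker = 30;  % stamped when the next frame is presented
        p.dashboard(1,'Duration passed');  % message on the control screen
    end
    continue_ = ~obj.Success;
end
```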

#8
The next adapter is PolygonGraphic, which adds a green star to the screen.

----- beginning of PolygonGraphic.m -----
classdef PolygonGraphic < Graphic
    properties
        Vertex = [0 0; 0 1; 1 1; 1 0]
    end
    methods
        function obj = PolygonGraphic(varargin)
            obj = obj@Graphic(varargin{:});
        end
        function set.Vertex(obj,val)
            [m,n] = size(val);
            if m<2 || 2~=n, error('Vertex must be an m-by-2 matrix (1<m)'); end
            obj.Vertex = val;
        end
    end
    methods (Access = protected)
        function create_graphic(obj)
            obj.GraphicID = mgladdpolygon([obj.EdgeColor; obj.FaceColor],obj.ScrSize,[obj.Vertex(:,1) 1-obj.Vertex(:,2)]);
        end
    end
end
----- end of PolygonGraphic.m -----

To handle graphic objects directly, you need to know how to use MGL (MonkeyLogic Graphics Library). The following is an example MGL code that shows a circle and a rectangle on the screen.

----- beginning of example code -----
mglcreatesubjectscreen(1,[0 0 0],[0 0 800 600],0);  % create the subject screen
mglcreatecontrolscreen([800 0 1200 300]);           % create the control screen

id = mgladdcircle([0 1 0; 1 0 0],[100 100]);        % add a circle
mglsetproperty(id,'origin',[400 300]);              % move the circle to the center
id2 = mgladdbox([1 1 1; 0 0 1],[150 150]);          % add a rectangle
mglsetproperty(id2,'origin',[400 300]);             % move the rectangle to the center

mglrendergraphic();                                 % render the circle and the rectangle
mglpresent();                                       % present to the screen

mglactivategraphic([id id2],false);                 % turn off the circle and the rectangle
mgldestroygraphic([id id2]);                        % destroy the objects

mgldestroycontrolscreen();                          % destroy the control screen
mgldestroysubjectscreen();                          % destroy the subject screen
----- end of example code -----

When you write your own adapter, the screen-management lines above (mglcreatesubjectscreen, mglcreatecontrolscreen, mglrendergraphic, mglpresent and the mgldestroy*screen calls) are not necessary because MonkeyLogic takes care of them. What you need to do is 1) create objects (mgladdXXXX), 2) change their properties (mglsetproperty), 3) turn them on/off (mglactivategraphic) and 4) destroy them (mgldestroygraphic).

There are 9 functions that add graphic/sound objects. A sound object can be activated/deactivated with mglactivatesound and destroyed with mgldestroysound. To play it, use mglplaysound and mglstopsound.

id = mgladdbitmap(filename);  % or mgladdbitmap(bitmap_info);
id = mgladdbox([edgecolor; facecolor],[width height]);
id = mgladdcircle([edgecolor; facecolor],[width height]);
id = mgladdline(color,numPoints);  % and mglsetproperty(id,'addpoint',[x1 y1; x2 y2; ...]);
id = mgladdmovie(filename);  % or mgladdmovie(frame_info);
id = mgladdpie([edgecolor; facecolor],[width height],start_angle,central_angle);
id = mgladdpolygon([edgecolor; facecolor],[width height],[x1 y1; x2 y2; ...]);  % x & y: 0-1, normalized coordinates
id = mgladdtext(string);
id = mgladdsound(filename);  % or mgladdsound(y,fs);

All these functions return an object id with which you can manipulate the object's properties. The objects are active (i.e., presented on the screen) by default when they are created. If you don't want them shown, turn them off by calling mglactivategraphic(id,false).

Each object has different properties. For the modifiable properties, see mglsetproperty.m.
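Sound objects follow the same create/activate/destroy pattern. For example (a sketch; 'beep.wav' is a placeholder filename and the exact argument lists may differ):

```matlab
id = mgladdsound('beep.wav');  % or mgladdsound(y,fs) with a waveform
mglactivatesound(id,true);     % make the sound active
mglplaysound(id);              % start playback
% ... wait or do other work ...
mglstopsound(id);              % stop playback
mgldestroysound(id);           % release the object
```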


#9
To play a sound object or send a TTL pulse for a given duration, you can run a scene with TimeCounter.

tone = 1;  % TaskObject# of a SND object
tc = TimeCounter(null_);
tc.Duration = 500;  % ms
scene = create_scene(tc,tone);
run_scene(scene);

Note that the cycle of calling TimeCounter is synchronized with the screen refresh rate in this framework, so the duration of the scene can be longer than 500 ms by up to the length of one frame.