To study the visual system, I built a closed-loop behavioral setup that allowed us to present visual stimuli to a walking fruit fly and update the stimulus in real time based on the fly’s position and orientation. Stripping away the scientific context, the goal was essentially to display a pinwheel disc to the fly such that:

  • it always remains centered on the fly, and
  • it rotates in sync with the fly’s turns.

Below is a pseudocode outline of how these closed-loop experiments were conducted.

define total_trial_num
define frames_per_trial

trial=0
while trial<total_trial_num
    frame=0
    while frame<frames_per_trial
        grab image from camera
        extract location (loc) and orientation (ori) of fly
        define parameters of stimulus using loc and ori
        send stimulus parameters to projector
        projector shows updated stimulus
        save loc and ori to a csv file
        save image to a video
        frame=frame+1
    trial=trial+1

The corresponding Python code (only the important bits) would look something like this.

trials = 0
while trials < num_trials:
    frame = 0
    while True:
        _, loc, ori, _ = get_fly_position_and_orientation()
        stimulus.pos = loc # set position of stimulus
        stimulus.ori = ori # set orientation of stimulus
        stimulus.draw() # draw the stimulus onto the projector buffer
        win.flip() # flip the buffer onto the projector

        if frame == trial_length:
            break
        frame += 1
    trials += 1

There are two key aspects of this pseudocode that deserve a deeper look:

extract location (loc) and orientation (ori) of fly

and

send stimulus parameters to projector
projector shows updated stimulus

Let's look at these two steps in some more depth.

How to extract a fly

Extracting the location and orientation of the fly is surprisingly easy with OpenCV. First, we use cv2.threshold(cv_image, 60, 255, cv2.THRESH_BINARY) to binarize the image—pixels darker than 60 become black (0), and brighter pixels become white (255). The threshold value of 60 is hardcoded here, but can also be determined automatically using Otsu’s method.
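
For instance, here is a minimal, self-contained sketch of the Otsu variant (the synthetic frame below is just a stand-in for a real camera image):

import cv2
import numpy as np

# synthetic stand-in for a camera frame: a dark, fly-sized blob on a bright arena
cv_image = np.full((480, 640), 200, dtype=np.uint8)
cv2.circle(cv_image, (320, 240), 20, 40, -1)

# let Otsu's method pick the threshold instead of hardcoding 60;
# the threshold argument (0 here) is ignored when THRESH_OTSU is set
otsu_val, diff_img = cv2.threshold(cv_image, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(otsu_val) # the automatically chosen threshold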

Next, we extract contours of all closed shapes in the image and apply an area filter to discard shapes that are too large or too small to be the fly. These area bounds are also hardcoded. Ideally, this step leaves us with a single contour that matches our criteria.

Finally, we use cv2.fitEllipse to fit an ellipse to the selected contour. The contour's centroid, computed from its image moments (cx = int(M['m10'] / M['m00']), cy = int(M['m01'] / M['m00'])), gives us the fly's position, and the orientation of the ellipse's major axis gives us its heading.

import cv2

def get_fly_position_and_orientation(camera, loc, size):
    image, timestamp = image_methods.grab_image(camera, 'ptgrey') # ptgrey is the class/type of camera
    cv_image = ROI(image, loc, size) # cut out a section of the image, can be ignored
    ret, diff_img = cv2.threshold(cv_image, 60, 255, cv2.THRESH_BINARY) # binarise image: pixels darker than 60 become 0, the rest 255
    contours, hierarchy = cv2.findContours(diff_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE) # OpenCV >= 4 returns two values; 3.x returned three
    pos, ellipse, cnt = fly_court_pos(contours, size_cutoff=200, max_size=4000)
    fly_ori = ellipse[2] # fitEllipse returns ((cx, cy), (major, minor), angle)
    return image, pos, fly_ori, timestamp

def fly_court_pos(contours, size_cutoff=200, max_size=99999999):
    pos = []
    ellipse = [0, 0, 0]
    cnt = []
    for contour in contours:
        # discard contours with too few points to fit an ellipse, or far too many to be a fly
        if 15 < len(contour) < 500:
            M = cv2.moments(contour)
            area = cv2.contourArea(contour)
            if size_cutoff < area < max_size:
                cx = int(M['m10'] / M['m00']) # centroid from image moments
                cy = int(M['m01'] / M['m00'])
                pos.append([cx, cy])
                cnt.append(contour)
                ellipse = cv2.fitEllipse(contour)
    return pos, ellipse, cnt
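
Putting these together, one frame of tracking inside the main loop would look something like this (camera, arena_loc and arena_size are hypothetical names standing in for the rig's setup code):

# hypothetical per-frame usage of the tracking functions above
image, pos, fly_ori, timestamp = get_fly_position_and_orientation(
    camera, arena_loc, arena_size)
if pos: # the area filter matched at least one contour
    cx, cy = pos[0] # ideally exactly one fly-sized contour survives
    stimulus.pos = (cx, cy) # pixel coordinates; see the mapping note in the next section
    stimulus.ori = fly_ori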

How to make a fly see things

Displaying visual stimuli is fairly straightforward using the PsychoPy library. The process involves three main steps:

  • Define the display window: This is the frame inside which the stimulus appears. We can set its size and brightness. In our case, the window exactly overlaps with the arena boundaries. This alignment is crucial since the fly’s position is extracted in pixel coordinates. It must map precisely onto the display window to ensure accurate stimulus placement (see the mapping sketch after the code below). There’s some nuance here, which I’ll cover in a future blog post.
  • Define the stimulus: PsychoPy offers a wide array of stimuli. For my experiments, I used ImageStim, which allows displaying an image that can be moved and rotated as needed.
  • Update and present: On every frame, the stimulus is updated, drawn onto the projector buffer, and then the window is flipped (i.e., the buffer is pushed to the screen) to present the stimulus.
import psychopy.visual

win = psychopy.visual.Window(
    size=size,
    screen=0, # index of the physical display to use
    pos=pos,
    color=(0, 0, 0), # mid-grey: PsychoPy's rgb colour space runs from -1 to 1
    fullscr=False,
    waitBlanking=True
)
stimulus = psychopy.visual.ImageStim(
    win=win,
    image=stimulus_image, # path to image
    mask='circle', # clip a circle out of the image
    pos=(0, 0),
    ori=0, # 0 is vertical, positive values are rotated clockwise
    size=stim_size # goes from 0 to 2 by default
)
stimulus.autoDraw = False # we call draw() manually on every frame
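
As for the alignment mentioned in the first bullet: the camera reports the fly in pixel coordinates with the origin at the top-left, while the window here uses PsychoPy's 'norm' units (consistent with the "goes from 0 to 2" size comment above), spanning -1 to 1 with the origin at the centre. A minimal conversion sketch, assuming the window exactly covers the camera's view of the arena (cam_w and cam_h are the camera frame's dimensions in pixels):

def pixels_to_norm(cx, cy, cam_w, cam_h):
    # map camera pixels (origin top-left, y grows downward) into
    # PsychoPy 'norm' units (origin centre, y grows upward, range -1..1)
    x = 2.0 * cx / cam_w - 1.0
    y = 1.0 - 2.0 * cy / cam_h
    return (x, y)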

Parallelism to the rescue

While the above scheme works, it has a major limitation: all steps (image capture, extraction of position and orientation, stimulus update, and projector refresh) run within the same loop. This means the entire process is constrained by the slowest step, which in our case is the projector update, limited to 60 Hz. At 60 Hz each loop iteration takes at least ~16.7 ms, even though the camera can capture images at a much higher rate (up to 150 Hz in our case, i.e., a new frame every ~6.7 ms), and we can track the fly’s position and orientation fast enough to keep up.

Ideally, we want to decouple image acquisition from stimulus presentation by running them in parallel processes. This allows faster image capture without being bottlenecked by the projector update. This is where Python’s multiprocessing module comes to our rescue.

imaging_process()
    trial=0
    while trial<total_trial_num
        frame=0
        while frame<frames_per_trial
            grab image from camera
            extract location (loc) and orientation (ori) of fly
            put loc,ori data into queue1
            save loc and ori to a csv file
            save image to a video
            frame=frame+1
            put frame into queue2
        trial=trial+1

stimulus_presentation_process()
    trial=0
    while trial<total_trial_num
        frame=0
        while frame<frames_per_trial
            get loc,ori data from queue1
            define parameters of stimulus using loc and ori
            projector shows updated stimulus
            frame=read frame from queue2
        trial=trial+1

main()
    define multiprocessing manager
    start manager

    use manager to create LIFO (last in first out) queue1
    use manager to create LIFO (last in first out) queue2

    define stimulus process
    define imaging process

    start stimulus process
    start imaging process
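
In real Python, the skeleton might look like the sketch below. This is my own simplification, not the original code: it collapses the two queues into one queue carrying (trial, frame, loc, ori) tuples, uses placeholders where the camera and PsychoPy calls would go, approximates the LIFO behaviour (Manager queues are FIFO) by draining the queue down to its newest entry, and uses a None sentinel to tell the stimulus process when imaging is done.

import multiprocessing as mp

def latest(q):
    # drain the FIFO queue so only the newest item is kept,
    # approximating the LIFO "always use the freshest pose" behaviour
    item = q.get() # block until at least one item exists
    while not q.empty():
        item = q.get()
    return item

def imaging_process(pose_q, num_trials, frames_per_trial):
    for trial in range(num_trials):
        for frame in range(frames_per_trial):
            # placeholder for: grab image, extract loc and ori, save csv and video
            loc, ori = (0.0, 0.0), 0.0
            pose_q.put((trial, frame, loc, ori))
    pose_q.put(None) # sentinel: imaging is finished

def stimulus_process(pose_q):
    while True:
        item = latest(pose_q)
        if item is None: # imaging has finished
            break
        trial, frame, loc, ori = item
        # placeholder for: stimulus.pos = loc; stimulus.ori = ori;
        # stimulus.draw(); win.flip()

if __name__ == '__main__':
    manager = mp.Manager()
    pose_q = manager.Queue()
    imaging = mp.Process(target=imaging_process, args=(pose_q, 2, 100))
    stimulus = mp.Process(target=stimulus_process, args=(pose_q,))
    imaging.start()
    stimulus.start()
    imaging.join()
    stimulus.join()

Because the imaging process runs faster than the projector can refresh, draining to the newest entry means the stimulus process always acts on the most recent fly pose and simply skips frames it cannot keep up with, which is exactly the behaviour the LIFO queues in the pseudocode were meant to provide.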