Neural Engineering

Transformative Technologies
Work package: WP2 - Synthetic Cognition
Programme: P3
Deliverable: 3.3 - Demo of audiomotor model system

Deliverable due date: month 35

This deliverable is an output of project P3 - Eye-head gaze control to sounds in complex acoustic scenes. The tasks planned for this deliverable are in progress.
We are developing a gaze-control model that extends previous work on eye-movement control through dynamic linear ensemble coding. The dynamic gaze-control model will be used to decode the spiking activity of the midbrain superior colliculus (SC) (Deliverable 3.2) into joint eye and head movements that together produce the total gaze shift.
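The core idea of dynamic linear ensemble coding can be illustrated with a minimal sketch: each SC spike contributes a small, fixed movement vector ("mini-vector") to the ongoing trajectory, so the decoded gaze displacement is the cumulative sum of all spike contributions over time. The function name, cell parameters, and spike trains below are purely illustrative and are not taken from the project's implementation.

```python
import numpy as np

def decode_gaze_trajectory(spike_times, mini_vectors, t_grid):
    """Cumulative gaze displacement under linear ensemble coding:
    every spike of cell k adds that cell's fixed mini-vector to the
    trajectory from its spike time onward. (Illustrative sketch only.)"""
    traj = np.zeros((len(t_grid), 2))          # (time, [azimuth, elevation]) in deg
    for times, vec in zip(spike_times, mini_vectors):
        for t in times:
            traj[t_grid >= t] += vec           # spike adds its mini-vector from t on
    return traj

# Two hypothetical SC cells, each with its own fixed mini-vector per spike.
t_grid = np.linspace(0.0, 0.1, 101)            # 100 ms at 1 ms resolution
spike_times = [np.array([0.010, 0.020, 0.030]),  # cell 1: 3 spikes
               np.array([0.015, 0.025])]         # cell 2: 2 spikes
mini_vectors = [np.array([2.0, 0.5]),            # deg per spike (azimuth, elevation)
                np.array([1.5, -0.2])]
traj = decode_gaze_trajectory(spike_times, mini_vectors, t_grid)
print(traj[-1])   # total decoded shift: 3*[2.0, 0.5] + 2*[1.5, -0.2] = [9.0, 1.1]
```

In this scheme the total gaze shift is fully determined by the number of spikes each cell fires and the cells' sites on the motor map, while the instantaneous firing rates shape the trajectory's kinematics.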
Scientific Deliveries: Preliminary results from the model of eye-head gaze behavior will be presented at the meeting of the Society for the Neural Control of Movement in April 2016. We plan to report detailed results, including an extension that accounts for SC spiking activity, in a peer-reviewed scientific publication by July 2016.
We are currently extending and implementing the gaze-control model to account for the spiking activity of the superior colliculus. The current model generates separate eye-in-head and head-in-space trajectories for a given set of stimuli. However, the model depends strongly on the assumed properties of the input signal to generate realistic gaze shifts.
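The separation into eye-in-head and head-in-space traces can be sketched with a common textbook decomposition: the eye covers the desired gaze shift up to its oculomotor range, and the head carries the remainder. The function and the oculomotor-range value below are assumptions for illustration, not the project's actual decomposition rule.

```python
import numpy as np

def split_gaze_shift(delta_gaze, oculomotor_range=35.0):
    """Split a desired gaze shift (deg) into eye-in-head and head-in-space
    components: the eye is saturated at the oculomotor range, the head
    contributes whatever remains. (Illustrative decomposition only.)"""
    delta_eye = np.clip(delta_gaze, -oculomotor_range, oculomotor_range)
    delta_head = delta_gaze - delta_eye
    return delta_eye, delta_head

# Small shifts are handled by the eye alone; large shifts recruit the head.
print(split_gaze_shift(20.0))   # (20.0, 0.0)
print(split_gaze_shift(50.0))   # (35.0, 15.0)
```

Real eye-head coordination is more graded than this hard saturation (head contribution typically grows smoothly with target eccentricity), which is one reason the model's output is sensitive to the assumed input-signal properties.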
Technical Deliveries: Implementation details of the different versions of our models and the related documentation, together with an example simulation script, will be made available at https://github.com/bkasap
This work also covers Milestone 15.

Contributors: Bahadir Kasap, John van Opstal, Bert Kappen (RU)