Neural Engineering

Transformative Technologies
Work package: WP2 - Synthetic Cognition
Programme: P3
Deliverable: 3.2 - Optimal control model of eye-head orienting

Deliverable due date: month 30

This deliverable serves as an output of project P3 - Eye-head gaze control to sounds in complex acoustic scenes. The planned tasks related to this deliverable have been accomplished.
We developed a one-dimensional spiking neural network model of the midbrain superior colliculus (SC) neural population that generates ocular gaze shifts to a single target. This model serves as the generator of the optimal control signal for a given target point, and its design allows further extensions and combinations with other deliverables, such as implementations of sound-source detection (Deliverable 3.1) and eye-gaze shifts (Deliverable 3.3).
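To illustrate the model's basic operating principle, the minimal sketch below assumes a simplified one-dimensional SC motor map of Poisson-spiking units with a Gaussian population activity profile centered on the target site, and a linear spike-vector summation readout in which every spike adds a fixed, site-specific contribution to the ongoing eye displacement. All parameter values, the linear afferent mapping, and the Poisson spike generation are illustrative assumptions, not the published model's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Illustrative parameters (not the published model's values) --------------
n_cells = 200                                 # cells along the 1-D SC motor map
map_pos = np.linspace(0.0, 5.0, n_cells)      # anatomical position on the map (mm)
pref_amp = 10.0 * map_pos                     # preferred saccade amplitude per site (deg),
                                              # assuming a simple linear afferent mapping
target_amp = 25.0                             # desired saccade amplitude (deg)
center = target_amp / 10.0                    # map site encoding the target
sigma = 0.5                                   # width of the Gaussian population profile (mm)
peak_rate = 800.0                             # peak firing rate of recruited cells (spikes/s)
dt = 1e-3                                     # simulation time step (s)
n_steps = 50                                  # 50 ms burst

# Gaussian activity profile over the motor map, centered on the target site
rates = peak_rate * np.exp(-0.5 * ((map_pos - center) / sigma) ** 2)

# Each spike adds a fixed, site-specific contribution to the eye displacement
# (linear spike-vector summation). The scaling assumes that the total number
# of spikes in the burst is roughly the same for every saccade.
n_spikes_expected = rates.sum() * n_steps * dt
contrib = pref_amp / n_spikes_expected        # deg per spike, fixed per site

# Simulate the burst as Poisson spiking and accumulate the eye displacement
# spike by spike.
eye_pos = 0.0
for _ in range(n_steps):
    spikes = rng.poisson(rates * dt)          # spike counts in this time bin
    eye_pos += np.dot(spikes, contrib)        # each spike moves the eye a little

print(f"target amplitude: {target_amp:.1f} deg, model output: {eye_pos:.1f} deg")
```

Because the Gaussian profile and peak rate are held fixed in this sketch, the total spike count of the burst is approximately the same for every saccade, so the summed spike vectors approximate the target amplitude.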
Scientific Deliveries: The model's functional properties have been presented as scientific posters at two international conferences: Neural Engineering (2015) and Computational Neuroscience (2015). A journal paper has been submitted to the peer-reviewed journal Biological Cybernetics to ensure reproducibility, open access, and recognition by the scientific community: Kasap B and Van Opstal AJ: A spiking neural network model of the midbrain superior colliculus that generates saccadic motor commands.
Currently, we are working on three important extensions of our prototype model, each of which will be submitted for publication:
(i) Incorporation of the effects of local microstimulation in a two-dimensional population of SC cells. This addition will allow the SC cells' output to be largely independent of the detailed input activation profiles, as has been observed in neurophysiological studies.
(ii) Inclusion of the full eye-head gaze-motor behavior to auditory and visual sensory stimuli, and of the tight interplay with the vestibulo-ocular reflex (Deliverables 3.1 and 3.3). Extensions (i) and (ii) are expected to be ready by July 2016.
(iii) Incorporation of dynamic gaze control in response to multiple auditory and visual stimuli that can be presented at unpredictable times, even in mid-flight of ongoing gaze shifts. This extension requires dynamic spatial updating of sensory inputs into world-centered coordinates on the basis of dynamic gaze-motor feedback (Deliverable 3.3), as illustrated in the sketch below.
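As a toy illustration of the spatial-updating principle underlying extension (iii), the sketch below assumes a one-dimensional, world-fixed target and a gaze-motor feedback signal: the remaining motor error is recomputed at every time step as the world-centered target location minus the current gaze direction, so a target jump during an ongoing gaze shift is handled automatically. The controller gain, time step count, and jump time are arbitrary illustrative values, not part of the actual model.

```python
# Toy 1-D illustration of dynamic spatial updating via gaze-motor feedback.
target_world = 30.0      # world-centered target direction (deg)
gaze = 0.0               # current gaze direction (deg), fed back at every step
gain = 0.01              # fraction of the motor error corrected per step (illustrative)

for step in range(600):
    motor_error = target_world - gaze    # gaze-centered motor error, continuously
                                         # updated from the gaze-motor feedback
    gaze += gain * motor_error           # simple proportional gaze controller
    if step == 150:                      # target jumps in mid-flight of the gaze shift;
        target_world = 10.0              # the updated motor error redirects the movement

print(f"final gaze: {gaze:.1f} deg")     # approaches the updated target (10 deg)
```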
Technical Deliveries: Implementation details of the different versions of our models and related documentation, together with an example simulation script, will be made available at: https://github.com/bkasap

Contributors: Bahadir Kasap, John van Opstal, Bert Kappen (RU)