Neural Engineering

Transformative Technologies
Work package: WP2 - Synthetic Cognition
Programme: P3
Deliverable: 3.1 Demo of auditory model system with real-time acoustic tracking

Deliverable due date: Month 24

This deliverable serves as an output of project P3 - Eye-head gaze control to sounds in complex acoustic scenes. The planned tasks related to this deliverable have been accomplished.
The scope of this deliverable was extended due to the complexity of the problem. Dynamically updating the target signal requires integrating auditory and visual stimuli, so that the system can provide the input signal for the optimal control system (Deliverable 3.2) by dynamically updating the presented input. Because the eye coordinates change continuously during movement, a target presented in world coordinates must be translated into eye-centered coordinates to keep the target representation stable during gaze shifts.
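As a minimal sketch of this translation (all names, the (azimuth, elevation) angular representation, and the additive small-angle combination are assumptions; the actual model may require full 3D eye/head rotations):

import numpy as np

def world_to_eye_coords(target_world, head_in_world, eye_in_head):
    """Translate a world-fixed target into eye-centered coordinates.

    All arguments are (azimuth, elevation) angles in degrees.
    Small-angle assumption: orientations combine additively.
    """
    gaze_in_world = head_in_world + eye_in_head      # current eye-in-world direction
    return np.asarray(target_world) - gaze_in_world  # retinal (eye-centered) error

# Re-evaluating this at every control step keeps the target stable in
# eye coordinates while the eye and head are still moving:
retinal_error = world_to_eye_coords(
    target_world=np.array([30.0, 10.0]),   # hypothetical target location
    head_in_world=np.array([12.0, 2.0]),   # head orientation mid-movement
    eye_in_head=np.array([8.0, 3.0]),      # eye-in-head orientation
)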
Scientific Deliveries: This is not a stand-alone deliverable; it complements the whole project by providing the input signal for the controller (Deliverable 3.2). It will therefore be presented as a section of the prospective journal publication (expected in July 2016).
Currently, the auditory and visual signals can be presented to the controller separately. Auditory signals do not need a transformation, as they are already in head coordinates. Visual signals, however, require a transformation based on the eye and head positions at the time the signal is presented. An integration and/or selection of the auditory and visual stimuli is therefore needed. This will be incorporated together with the vestibulo-ocular reflex, which keeps the eye on target even when the head continues moving; a sketch of this modality handling is given below.
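To illustrate the intended integration, the hedged sketch below maps both modalities into a common head-centered frame and adds an idealized vestibulo-ocular reflex. All function and argument names, the angular representation, and the unit-gain VOR are assumptions for illustration, not the project's implementation.

import numpy as np

def to_head_coords(target, modality, eye_in_head_at_onset):
    """Map a sensory target into a common head-centered frame.

    Hypothetical sketch using (azimuth, elevation) angles in degrees.
    """
    target = np.asarray(target, dtype=float)
    if modality == "auditory":
        # Interaural and spectral cues already encode the target
        # relative to the head: no transformation needed.
        return target
    if modality == "visual":
        # Retinal locations are eye-centered; add the eye-in-head
        # orientation stored at stimulus onset to recover head coordinates.
        return target + np.asarray(eye_in_head_at_onset, dtype=float)
    raise ValueError(f"unknown modality: {modality!r}")

def vor_eye_velocity(head_velocity, gain=1.0):
    """Idealized vestibulo-ocular reflex: the eye counter-rotates
    against head velocity so gaze stays on target (gain ~1 for a
    perfect VOR; assumed value, not a measured one)."""
    return -gain * np.asarray(head_velocity, dtype=float)

Storing the eye-in-head orientation at stimulus onset, rather than reading it at processing time, is what makes the visual transformation robust to ongoing movement.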
Technical Deliveries: Implementation details of the different versions of our models and the related documentation will be available on the website: https://github.com/bkasap

Contributors: Bahadir Kasap, John van Opstal, Bert Kappen (RU)