Recently, three-dimensional sound rendering methods for immersive sound spaces have been developed. Such systems are called virtual auditory displays (VADs). In a VAD system, a sound associated with a specific sound object is commonly rendered by convolving it with particular head-related transfer functions (HRTFs) to convey positional information. Furthermore, sound images can be stabilized at their absolute positions by continuously switching HRTFs in response to the listener's head movements. When this is done, the positions of sound images remain stable in the world coordinate system, as in the real world, and sound localization accuracy is much better than when HRTFs are not switched. The authors have put much effort into developing high-performance VAD software for personal computers. This VAD runs on the native CPU of a personal computer under Windows (Microsoft Corp.) or Linux and outputs audio signals to headphones equipped with a three-dimensional position sensor. This chapter overviews the VAD middleware we developed. Furthermore, to improve the realism of the virtual sound space, subjective evaluations were performed to clarify the relation between the listener's head movements and the perceived reality of a virtual sound space containing ambient sound.
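The core rendering steps described in the abstract can be sketched in a few lines: a mono source is convolved with a pair of head-related impulse responses (HRIRs, the time-domain form of HRTFs), and when the listener's head turns, the renderer switches to the HRIR pair for the new relative direction, crossfading to avoid clicks. This is a minimal NumPy illustration, not the authors' actual middleware; the HRIRs, function names, and the crossfade scheme are all hypothetical.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono source at the direction of the given HRIR pair
    by convolving it separately for each ear (hypothetical HRIRs)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, len(mono) + len(hrir) - 1)

def switch_hrtf(mono, hrir_old, hrir_new, fade_len):
    """Crossfade between renderings with the old and new HRIR pairs so
    the sound image stays stable as the listener's head moves."""
    old = render_binaural(mono, *hrir_old)
    new = render_binaural(mono, *hrir_new)
    n = old.shape[1]
    # Linear 0 -> 1 ramp over fade_len samples, then hold at 1.
    fade = np.clip(np.arange(n) / float(fade_len), 0.0, 1.0)
    return (1.0 - fade) * old + fade * new
```

In a real-time system this per-block convolution would typically be done with FFT-based (partitioned) convolution, and the HRIR pair would be selected each block from a measured HRTF database indexed by the head-tracker reading.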
Title of host publication: Principles and Applications of Spatial Hearing
Publisher: World Scientific Publishing Co.
Number of pages: 18
ISBN (Print): 9814313874, 9789814313872
Publication status: Published - 1 Jan 2011

ASJC Scopus subject areas
- Computer Science (all)