Effective rendering of ambient sounds in virtual auditory display

Yukio Iwaya, Makoto Otani, Yoiti Suzuki

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Three-dimensional sound auralization systems, known as virtual auditory displays (VADs), have been developed actively over the last few decades. In conventional VADs based on head-related transfer functions (HRTFs), only the position of a sound source is rendered. However, because various sounds surround us in daily life, we usually hear not only a target direct sound but also ambient sounds in an actual sound space. A lack of ambient sound often engenders an unnatural perception of the virtual auditory space presented by an HRTF-based VAD. Therefore, we investigated an effective rendering method for ambient sounds. Furthermore, using subjective evaluations, we discussed the relation between the realism of a sound space with ambient sounds and a listener's head movement. This paper presents a review of the results.

Original language: English
Title of host publication: 41st International Congress and Exposition on Noise Control Engineering 2012, INTER-NOISE 2012
Pages: 6084-6090
Number of pages: 7
Publication status: Published - 2012 Dec 1
Event: 41st International Congress and Exposition on Noise Control Engineering 2012, INTER-NOISE 2012 - New York, NY, United States
Duration: 2012 Aug 19 - 2012 Aug 22

Publication series

Name: 41st International Congress and Exposition on Noise Control Engineering 2012, INTER-NOISE 2012
Volume: 7


ASJC Scopus subject areas

  • Acoustics and Ultrasonics

