Sound image localization can be controlled by convolving a source signal with the listener's head-related transfer functions (HRTFs) corresponding to the sound source position to be rendered. Using this technique, a virtual auditory display (VAD) can be constructed that presents sound images at arbitrary positions. A VAD based on this architecture requires a set of the listener's HRTFs to be given in advance. However, it is difficult to measure HRTFs for all directions around the listener, so such a VAD system needs a method for interpolating HRTFs measured at discrete positions. Previous studies have investigated methods based on linear interpolation in the time or frequency domain. These methods provide good accuracy when the directions corresponding to the HRTFs used in the interpolation are sufficiently close to each other; when the directions are not close, the accuracy decreases markedly. In particular, the frequencies of spectral peaks and notches are reproduced inaccurately in interpolated HRTFs because these frequencies vary with the sound source position. Notch frequencies, however, are important cues for sound localization in elevation. Therefore, an interpolation method that accurately represents the frequencies of HRTF peaks and notches is necessary to realize a high-definition VAD based on the HRTF synthesis technique. This paper proposes a novel method for HRTF interpolation. In the method, HRTFs are first modeled using the common-pole and zero model in the z-plane. The interpolated HRTF is then obtained by mapping the interpolated model from the z-plane back to the frequency domain. The accuracy of the proposed method was evaluated, demonstrating that the accuracy of reproduced spectral notches can be improved.
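The contrast between the two interpolation strategies can be sketched numerically. The toy model below is not the paper's actual implementation: it assumes a simple all-zero model, hypothetical notch frequencies of 7 kHz and 9 kHz for two "measured" directions, and a zero radius of 0.98. Linearly averaging the two magnitude spectra fills the notch in, leaving only a shallow residual dip, whereas interpolating the zero locations in the z-plane and then evaluating the model on the unit circle yields a deep notch at the intermediate 8 kHz.

```python
import numpy as np

fs = 48000.0  # sampling rate (Hz), assumed

def notch_zeros(f_notch, r=0.98):
    """Conjugate zero pair in the z-plane producing a spectral notch at f_notch."""
    w = 2.0 * np.pi * f_notch / fs
    return np.array([r * np.exp(1j * w), r * np.exp(-1j * w)])

def freq_response(zeros, freqs):
    """Evaluate the all-zero model H(z) = prod(1 - q z^-1) on the unit circle."""
    z = np.exp(1j * 2.0 * np.pi * freqs / fs)
    H = np.ones_like(z)
    for q in zeros:
        H = H * (1.0 - q / z)
    return H

freqs = np.linspace(1000.0, 12000.0, 2000)

# Two hypothetical "measured" HRTF magnitude spectra whose notch frequency
# moves with source direction (7 kHz and 9 kHz).
za = notch_zeros(7000.0)
zb = notch_zeros(9000.0)
H_a = np.abs(freq_response(za, freqs))
H_b = np.abs(freq_response(zb, freqs))

# (a) Linear interpolation of the magnitude spectra: the averaged spectrum
# retains only a shallow dip -- the deep notch is lost.
H_lin = 0.5 * (H_a + H_b)

# (b) z-plane interpolation: average radius and angle of corresponding
# zeros, then evaluate the interpolated model; the deep notch lands at 8 kHz.
z_mid = 0.5 * (np.abs(za) + np.abs(zb)) * np.exp(
    1j * 0.5 * (np.angle(za) + np.angle(zb)))
H_zp = np.abs(freq_response(z_mid, freqs))

f_notch_zp = freqs[np.argmin(H_zp)]   # near 8 kHz
depth_lin = H_lin.min()               # shallow residual dip
depth_zp = H_zp.min()                 # much deeper notch
```

The same reasoning extends to pole locations in a common-pole and zero model: interpolating the model parameters, rather than the sampled spectra, lets the notch frequency move smoothly between the measured directions.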