Changing the tempo of a music signal is one of the most basic signal processing operations applied to music. Traditional algorithms such as the phase vocoder or Time-Domain Harmonic Scaling (TDHS) stretch or shrink the input signal uniformly. As a result, they change not only the tempo but also the temporal structure of each instrumental sound, such as its attack and decay times, which alters the timbre of the instruments. To change the tempo of a music signal while preserving the Attack-Decay-Sustain-Release (ADSR) structure of the instruments, a non-linear modification of the time scale is required. To realize this, we propose a two-stage model of the music signal. The first stage represents the harmonic part of the signal with a sinusoidal model. Because the non-harmonic components of the signal cannot be represented by the sinusoidal model, the residual of the sinusoidal model is analyzed in the second stage with linear predictive coding (LPC), which expresses the reverberation of impulsive sounds. We then estimate the "stretchable parts" of the signal by observing the temporal smoothness of the spectrogram, and only those parts are modified. We conducted experiments on changing the tempo of piano sounds and compared the results with conventional time-stretching methods.
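The idea of estimating "stretchable parts" from the temporal smoothness of the spectrogram could be sketched as follows. This is only an illustrative interpretation, not the paper's actual algorithm: it uses normalized spectral flux between adjacent STFT frames as the smoothness measure, and the frame size, hop size, and threshold are assumed values chosen for the example.

```python
import numpy as np

def stft_mag(x, frame=1024, hop=256):
    """Magnitude spectrogram via a Hann-windowed STFT (frames x bins)."""
    win = np.hanning(frame)
    n = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop:i * hop + frame] * win for i in range(n)])
    return np.abs(np.fft.rfft(frames, axis=1))

def stretchable_mask(mag, thresh=0.1):
    """Mark frames whose spectrum changes little from the previous frame
    (low normalized spectral flux) as 'stretchable' sustain regions;
    attack/transient frames, where the spectrum changes abruptly,
    remain unmarked and would be left untouched by the time stretch."""
    flux = np.sqrt((np.diff(mag, axis=0) ** 2).sum(axis=1))
    flux /= mag.sum(axis=1)[1:] + 1e-12  # normalize by frame magnitude
    # The first frame has no predecessor, so it is conservatively unmarked.
    return np.concatenate([[False], flux < thresh])
```

For a steady sinusoid nearly every frame is flagged stretchable, while frames around a sudden note onset are not; a non-linear time-scale modification would then stretch or shrink only the flagged sustain regions, leaving attacks intact.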