Changing the tempo of a music signal is one of the most basic signal processing operations applied to music. Traditional algorithms such as the phase vocoder and PSOLA stretch and shrink the input signal uniformly. As a result, these methods change not only the tempo but also the temporal structure of instrumental sounds, such as attack and decay times, which alters the timbre of the instruments. To change the tempo of a music signal while preserving the attack-decay-sustain-release (ADSR) structure of the instruments, a non-linear modification of the time scale is required. To realize this, we propose a two-stage model of the music signal. The first stage models the harmonic part of the signal with a sinusoidal model, expressing it as a sum of sinusoids with time-varying amplitudes and frequencies. Because the non-harmonic components of the signal cannot be captured by the sinusoidal model, the second stage analyzes the residual of the sinusoidal model using linear predictive coding (LPC), which expresses the reverberation of the impulsive sounds. The residual of the LPC analysis is then stretched or shrunk non-linearly according to its short-term power: only the low-power segments are modified, because the high-power segments correspond to attacks. Finally, the modified residual is used to synthesize the modified signal through the LPC synthesis filter and the sinusoidal synthesizer. We conducted experiments in which the tempo of piano sounds was modified, and compared the results with conventional time-stretching methods.
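The core idea of the non-linear stretch, modifying only the low-power segments of the residual so that attacks keep their original duration, can be illustrated with a minimal sketch. The frame length, power threshold, and frame-repetition scheme below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def nonlinear_stretch(residual, frame_len=256, ratio=2.0, power_thresh=0.01):
    """Stretch only the low-power frames of an LPC residual.

    High-power frames (assumed to be attacks) are copied unchanged;
    low-power frames are repeated to lengthen the signal.
    The threshold and the repetition-based stretch are illustrative
    choices, not the method described in the paper.
    """
    frames = [residual[i:i + frame_len]
              for i in range(0, len(residual), frame_len)]
    out = []
    for f in frames:
        power = np.mean(f ** 2)  # short-term power of this frame
        if power < power_thresh:
            # low-power frame: repeat it to stretch the time scale
            out.extend([f] * int(round(ratio)))
        else:
            # high-power frame (attack): keep its duration unchanged
            out.append(f)
    return np.concatenate(out)
```

Applied to a signal with a loud attack followed by a quiet decay tail, this doubles the tail while leaving the attack frame untouched, which is the behavior the non-linear modification aims for.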