This paper proposes a technique for emotional speech recognition that extracts paralinguistic information as well as the linguistic information contained in the speech signal. The technique is based on style estimation and style adaptation using a multiple-regression HMM. The recognition process consists of two stages. In the first stage, a style vector representing the category and intensity of the emotional expression of the input speech is estimated on a sentence-by-sentence basis. In the second stage, the acoustic models are adapted using the estimated style vector, and standard HMM-based speech recognition is performed. We assess the performance of the proposed technique on the recognition of acted emotional speech uttered by both professional narrators and non-professional speakers and show its effectiveness.
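The two-stage procedure can be sketched numerically. In the multiple-regression HMM framework, each Gaussian mean is a linear function of the style vector, mu = H [1, s]; the regression matrices, dimensions, noise level, and grid-search estimator below are illustrative assumptions for the sketch, not the paper's actual models or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 Gaussian components over 2-dim acoustic features.
L = 2                                # style-vector dimension (e.g. category, intensity)
D = 2                                # acoustic feature dimension
H = rng.normal(size=(3, D, L + 1))   # regression matrices (assumed already trained)

def adapt_means(H, s):
    """Stage 2: adapt model means to a style vector s (MR-HMM form mu = H [1, s])."""
    xi = np.concatenate(([1.0], s))  # augmented style vector [1, s_1, ..., s_L]
    return H @ xi                    # one adapted mean per component, shape (3, D)

def estimate_style(H, obs, grid):
    """Stage 1 (sketch): pick the style vector on a coarse grid that maximizes
    the likelihood of the observed frames under unit-variance Gaussians."""
    best_s, best_ll = None, -np.inf
    for s in grid:
        means = adapt_means(H, np.asarray(s))
        d = obs[:, None, :] - means[None, :, :]          # (frames, components, D)
        # best-matching component per frame, summed log-likelihood over frames
        ll = np.sum(np.max(-0.5 * np.sum(d**2, axis=-1), axis=1))
        if ll > best_ll:
            best_s, best_ll = np.asarray(s), ll
    return best_s

# Synthetic "input speech": frames drawn around the means for a true style vector.
true_s = np.array([0.8, 0.3])
obs = adapt_means(H, true_s)[rng.integers(0, 3, size=50)]
obs = obs + 0.05 * rng.normal(size=(50, D))

grid = [(a, b) for a in np.linspace(0, 1, 11) for b in np.linspace(0, 1, 11)]
s_hat = estimate_style(H, obs, grid)     # stage 1: estimated style vector
adapted = adapt_means(H, s_hat)          # stage 2: adapted acoustic means
```

With low observation noise, the grid search recovers a style vector close to the true one, after which the adapted means can be plugged into an ordinary HMM decoder.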