Most machine learning models rely on hyperparameters that are fixed empirically before training, in a time-consuming, trial-and-error fashion. Hence, there is strong demand for systematically finding an appropriate hyperparameter configuration in practical time. Recent work has applied Bayesian optimization to tune hyperparameters with fewer trials, using a Gaussian process to determine the next hyperparameter configuration to sample for evaluation. Most of these works rely on criteria such as the probability of improvement (GP-PI), the expected improvement (GP-EI), and the upper confidence bound (GP-UCB), without considering the execution time of each trial. In this paper, we focus on minimizing the total execution time required to find an appropriate configuration. Specifically, we propose taking the execution time of each trial into account when selecting the next configuration. We demonstrate the feasibility of the proposed approach and show that it finds an optimal or near-optimal hyperparameter configuration faster, in terms of total execution time, than other Bayesian optimization-based approaches.
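The idea of a time-aware acquisition criterion can be sketched as follows: weight the expected improvement of each candidate by its predicted runtime, so that cheap promising trials are preferred. This is a minimal illustrative sketch, not the paper's implementation; the surrogate models, the log-runtime transform, and the candidate grid are assumptions for the example.

```python
# Hedged sketch: expected improvement per second (EI / predicted runtime).
# The GP surrogates and the toy 1-D search space below are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor


def ei_per_second(X_obs, y_loss, t_time, X_cand):
    """Score candidate configurations by EI on the loss, divided by predicted runtime."""
    gp_loss = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_loss)
    # Model log(runtime) so predicted times stay positive after exponentiation.
    gp_time = GaussianProcessRegressor(normalize_y=True).fit(X_obs, np.log(t_time))

    mu, sigma = gp_loss.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)          # avoid division by zero
    best = y_loss.min()                      # minimization: improvement below best
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    pred_time = np.exp(gp_time.predict(X_cand))
    return ei / pred_time                    # time-aware acquisition value


# Toy usage: one hyperparameter in [0, 1], with runtimes growing with its value.
rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, size=(5, 1))
y_loss = np.sin(3 * X_obs[:, 0]) + 0.1 * rng.standard_normal(5)
t_time = 1.0 + 5.0 * X_obs[:, 0]
X_cand = np.linspace(0, 1, 101).reshape(-1, 1)

scores = ei_per_second(X_obs, y_loss, t_time, X_cand)
next_cfg = X_cand[np.argmax(scores)]         # cheapest promising trial to run next
```

Dividing EI by predicted runtime trades raw improvement for wall-clock efficiency; a configuration with slightly lower EI but a much shorter expected trial can win, which is the behavior the abstract targets.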