We propose a novel rule-based explanation method for arbitrary pre-trained machine learning models. Machine learning models generally make black-box decisions whose underlying logical reasons are difficult to explain. It is therefore important to develop tools that give reasons for a model's decisions. Some studies have tackled this problem by approximating the explained model with an interpretable model. Although such methods provide logical reasons for a model's decisions, they sometimes produce wrong explanations. To resolve this issue, we define a rule model for explanation, called a mimic rule, which behaves similarly to the model within its region. We obtain a mimic rule that explains a large area of the numerical input space by maximizing this region. Through experiments, we compare our method with earlier methods and show that it often improves local fidelity.
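To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm): a mimic rule is represented as an axis-aligned box in the input space paired with a fixed label, agreement with the black-box model inside the box is estimated by random sampling, and the box is grown greedily to approximate the "maximize the region" step. The stand-in `model`, the step size, and the sampling check are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Stand-in black-box classifier: label 1 iff x0 + x1 > 0.
    # Any pre-trained model's predict function could go here.
    return (x[..., 0] + x[..., 1] > 0).astype(int)

def rule_agrees(lower, upper, label, n_samples=2000):
    """Estimate (by sampling) whether the model outputs `label`
    everywhere inside the box [lower, upper]."""
    pts = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    return np.all(model(pts) == label)

def grow_box(anchor, step=0.1, max_iter=50):
    """Greedily enlarge a box around `anchor` while the rule still
    mimics the model -- a crude proxy for region maximization."""
    label = model(anchor[None])[0]
    lower, upper = anchor.copy(), anchor.copy()
    for _ in range(max_iter):
        grew = False
        for d in range(len(anchor)):
            for bound, delta in ((lower, -step), (upper, step)):
                trial = bound.copy()
                trial[d] += delta
                lo = trial if bound is lower else lower
                hi = trial if bound is upper else upper
                if rule_agrees(lo, hi, label):
                    bound[d] += delta
                    grew = True
        if not grew:
            break
    return lower, upper, label

lo, hi, lab = grow_box(np.array([1.0, 1.0]))
print(lab, lo, hi)
```

The resulting box together with its label reads as an if-then rule ("if each feature lies in its interval, predict `lab`"); sampling can miss small disagreement regions, which is one reason a principled maximization method is needed.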