The need for sensor fusion systems has grown considerably in recent years. This paper introduces a method for fusing vision and touch using an internal model that supports both global and local deformation. We use superquadrics augmented with local deformations as the internal model. The proposed method consists of two phases. In the first phase, we recover the object shape parametrically by fitting the superquadric parameters to visual data. The internal model constructed in this phase is necessarily a rough representation of the object shape, since visual data cover only the visible portion of the object and parametric models such as superquadrics have inherent limitations in shape representation. However, the parametric model makes it easy to identify regions that are invisible or carry large fitting errors. In the second phase, we therefore direct the tactile sensor to explore these regions and deform the internal model locally by minimizing defined energy functions. The feasibility of the proposed method is confirmed through simulations.
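
As a minimal sketch of the first phase, the Python code below fits the superquadric size and shape parameters (a1, a2, a3, e1, e2) to range points by least squares on the inside-outside function. The Solina-style volume-weighted residual, the SciPy optimizer, the parameter bounds, and the assumption that the points are already expressed in an object-centred frame are illustrative choices, not details taken from the paper.

    import numpy as np
    from scipy.optimize import least_squares

    def inside_outside(params, pts):
        """Superquadric inside-outside function F(x, y, z).
        F == 1 on the surface, < 1 inside, > 1 outside."""
        a1, a2, a3, e1, e2 = params
        x, y, z = pts.T
        return ((np.abs(x / a1) ** (2 / e2) +
                 np.abs(y / a2) ** (2 / e2)) ** (e2 / e1) +
                np.abs(z / a3) ** (2 / e1))

    def residuals(params, pts):
        # Volume-weighted residual (Solina-style assumption): the
        # sqrt(a1*a2*a3) factor discourages the degenerate
        # shrink-to-zero solution.
        a1, a2, a3, e1, e2 = params
        return np.sqrt(a1 * a2 * a3) * (inside_outside(params, pts) ** e1 - 1.0)

    def fit_superquadric(pts):
        """Recover (a1, a2, a3, e1, e2) from visible-surface points."""
        x0 = np.array([1.0, 1.0, 1.0, 1.0, 1.0])      # unit ellipsoid start
        lb = [1e-3, 1e-3, 1e-3, 0.1, 0.1]             # usual superquadric range
        ub = [np.inf, np.inf, np.inf, 2.0, 2.0]
        sol = least_squares(residuals, x0, args=(pts,), bounds=(lb, ub))
        return sol.x

    # Usage: fit to noisy points sampled from an ellipsoid (e1 = e2 = 1).
    rng = np.random.default_rng(0)
    u = rng.uniform(-np.pi / 2, np.pi / 2, 500)
    v = rng.uniform(-np.pi, np.pi, 500)
    pts = np.stack([2.0 * np.cos(u) * np.cos(v),
                    1.0 * np.cos(u) * np.sin(v),
                    0.5 * np.sin(u)], axis=1)
    pts += 0.01 * rng.standard_normal(pts.shape)
    print(fit_superquadric(pts))   # should recover roughly [2.0, 1.0, 0.5, 1.0, 1.0]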
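
The second phase can be sketched in the same spirit. The function below deforms the recovered surface locally by gradient descent on a simple energy: a data term that pulls surface samples toward tactile contact points along their normals, plus a smoothness term that keeps the offsets close to their neighbourhood mean. This particular energy, the fixed nearest-sample pairing, and the parameter values (lam, steps, lr) are stand-in assumptions for the energy functions defined in the paper.

    import numpy as np

    def deform_locally(surf_pts, normals, neighbors, contacts,
                       lam=0.5, steps=200, lr=0.1):
        """Deform surface samples (N x 3) with unit normals (N x 3) toward
        tactile contacts (M x 3); neighbors is a list of index arrays giving
        each sample's surface neighbourhood."""
        d = np.zeros(len(surf_pts))                    # radial offsets along normals
        # Pair each contact with its nearest surface sample (fixed pairing).
        pair = np.array([np.argmin(np.linalg.norm(surf_pts - c, axis=1))
                         for c in contacts])
        # Signed offset that would place each paired sample at its contact.
        target = np.einsum('ij,ij->i', contacts - surf_pts[pair], normals[pair])
        for _ in range(steps):
            grad = np.zeros_like(d)
            # Data term: pull paired samples toward their contact points.
            np.add.at(grad, pair, 2.0 * (d[pair] - target))
            # Smoothness term: approximate gradient that penalizes deviation
            # from the neighbour mean (cross-neighbour coupling dropped for brevity).
            nbr_mean = np.array([d[nb].mean() for nb in neighbors])
            grad += 2.0 * lam * (d - nbr_mean)
            d -= lr * grad
        return surf_pts + d[:, None] * normals         # locally deformed surface

Because only the offsets near paired samples acquire nonzero gradients, the deformation stays local to the explored regions, which mirrors the intent of refining only the invisible or high-error parts of the model.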