Conventional model-based visual tracking assumes a mathematical state-prediction model defined in advance. Thanks to this prediction model, a tracker can locate a target amid visual clutter. However, if the target moves in a way that deviates from the pre-defined prediction model, the tracker can easily lose the target. To overcome this problem, we introduce memory-based state prediction, with which a tracker can learn an object's motion on the fly during tracking. In addition, we propose a new visual object tracking framework that integrates memory-based state prediction with conventional mathematical state prediction. Our experiments suggest that the new framework enables a visual tracker to learn and track unexpected motion in the real world.
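The abstract does not specify how the two predictors are combined, so the following is only a minimal sketch of the general idea: a fixed mathematical model (here assumed to be constant velocity) is blended with a memory-based predictor that stores observed state transitions during tracking and recalls the most similar one by nearest-neighbor lookup. The class name `HybridPredictor`, the blend weight `alpha`, and the specific state layout are all illustrative assumptions, not the authors' method.

```python
import numpy as np

class HybridPredictor:
    """Sketch: blend a pre-defined mathematical prediction model
    (constant velocity, assumed) with a memory-based predictor
    learned on the fly from observed state transitions."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha      # blend weight (illustrative parameter)
        self.memory = []        # stored (state, next_state) pairs
        self.prev_state = None

    def model_predict(self, state):
        # Constant-velocity model: state = [x, y, vx, vy] (assumed layout).
        pos, vel = state[:2], state[2:]
        return np.concatenate([pos + vel, vel])

    def memory_predict(self, state):
        # Recall the most similar past state and reuse its observed motion.
        if not self.memory:
            return None
        s_prev, s_next = min(self.memory,
                             key=lambda p: np.linalg.norm(p[0] - state))
        return state + (s_next - s_prev)

    def predict(self, state):
        # Integrate both predictions; fall back to the model when
        # the memory is still empty.
        m = self.model_predict(state)
        r = self.memory_predict(state)
        return m if r is None else self.alpha * m + (1 - self.alpha) * r

    def observe(self, state):
        # Learn the object's motion on the fly: store each observed
        # state transition for later recall.
        if self.prev_state is not None:
            self.memory.append((self.prev_state.copy(), state.copy()))
        self.prev_state = state.copy()
```

Under this sketch, motion patterns that violate the constant-velocity assumption (e.g. a repeated reversal) accumulate in the memory and gradually correct the model's prediction, which mirrors the abstract's claim that unexpected motion becomes learnable during tracking.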